Update README.md

I have the following Vicuna 1.1 repositories available:

* [GPTQ quantized 4bit 7B 1.1 for GPU - `safetensors` and `pt` formats](https://huggingface.co/TheBloke/vicuna-7B-1.1-GPTQ-4bit-128g)
* [GPTQ quantized 4bit 7B 1.1 for CPU - GGML format for `llama.cpp`](https://huggingface.co/TheBloke/vicuna-7B-1.1-GPTQ-4bit-128g-GGML)
## GIBBERISH OUTPUT

If you get gibberish output, it is because you are using the `safetensors` file without updating GPTQ-for-LLaMA.

If you use the `safetensors` file, you must have the latest version of GPTQ-for-LLaMA inside text-generation-webui.

If you don't want to update, or can't, use the `pt` file instead.

Either way, please read the instructions below carefully.
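
A minimal sketch of how such an update might look, assuming the common text-generation-webui layout where GPTQ-for-LLaMA is checked out under `repositories/` (the directory names here are assumptions, not taken from this README; adjust them to your install):

```shell
# Assumed layout: text-generation-webui/repositories/GPTQ-for-LLaMa
cd text-generation-webui/repositories/GPTQ-for-LLaMa

# Pull the latest GPTQ-for-LLaMA code so it can read newer safetensors files
git pull

# Reinstall dependencies in case they changed with the update
pip install -r requirements.txt
```

After updating, restart text-generation-webui before loading the `safetensors` model again.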
## Provided files