Tags: Text Generation · Transformers · Safetensors · English · llama · causal-lm · text-generation-inference · 4-bit precision · gptq
TheBloke committed on
Commit 4c3d3ac
1 Parent(s): 1f1a8c1

Update README.md

Files changed (1)
README.md +3 -3
README.md CHANGED
@@ -29,18 +29,18 @@ This model works best with the following prompt template:
  Load text-generation-webui as you normally do.
 
  1. Click the **Model tab**.
- 2. Under **Download custom model or LoRA**, enter the repo name to download: `TheBloke/stable-vicuna-13B-GPTQ`.
+ 2. Under **Download custom model or LoRA**, enter this repo name: `TheBloke/stable-vicuna-13B-GPTQ`.
  3. Click **Download**.
  4. Wait until it says it's finished downloading.
  5. As this is a GPTQ model, fill in the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
  6. Now click the **Refresh** icon next to **Model** in the top left.
- 7. In the **Model drop-down**: choose the model you just downloaded, eg `stable-vicuna-13B-GPTQ`.
+ 7. In the **Model drop-down**: choose this model: `stable-vicuna-13B-GPTQ`.
  8. Click **Reload the Model** in the top right.
  9. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
 
  ## GIBBERISH OUTPUT IN `text-generation-webui`?
 
- If you're installing manually, please read the Provided Files section below. You should use `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` unless you are able to use the latest GPTQ-for-LLaMa code.
+ If you're installing the model files manually, please read the Provided Files section below. You should use `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` unless you are able to use the latest GPTQ-for-LLaMa code.
 
  If you're using a text-generation-webui one click installer, you MUST use `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`.
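For readers who want to load the same 4-bit checkpoint outside text-generation-webui, the sketch below shows one possible way to do it from Python with AutoGPTQ. It is an illustration, not part of this commit or README: the repo name and the recommended `.safetensors` filename come from the text above, the `BaseQuantizeConfig` values mirror the webui settings (`Bits = 4`, `Groupsize = 128`, no-act-order), and everything else (CUDA device, prompt text, generation settings) is an assumption.

```python
# Minimal sketch (not from this commit): loading the 4-bit GPTQ checkpoint
# with AutoGPTQ instead of text-generation-webui.
# Assumptions: auto-gptq and transformers are installed and a CUDA GPU is available.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

repo = "TheBloke/stable-vicuna-13B-GPTQ"
# Recommended file from the README, without the .safetensors extension.
basename = "stable-vicuna-13B-GPTQ-4bit.compat.no-act-order"

# Mirrors the webui GPTQ parameters described above:
# Bits = 4, Groupsize = 128, and the no-act-order build (desc_act=False).
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False,
)

tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename=basename,
    use_safetensors=True,
    device="cuda:0",
    use_triton=False,   # conservative choice matching the compat file
    quantize_config=quantize_config,
)

# The prompt template itself is not shown in this diff hunk,
# so a plain prompt is used purely for illustration.
prompt = "Tell me about 4-bit quantized language models."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Pointing `model_basename` at the `compat.no-act-order` file and leaving Triton off is the cautious combination implied by the gibberish-output note above; if you are on the latest GPTQ-for-LLaMa or AutoGPTQ code, the other provided files may also work.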