TheBloke committed
Commit 56b2ff7
1 Parent(s): 2ec34de

Update README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -53,7 +53,6 @@ It was created without the `--act-order` parameter. It may have slightly lower i
  * `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with text-generation-webui one-click-installers
- * Works on Windows
  * Parameters: Groupsize = 128g. No act-order.
  * Command used to create the GPTQ:
  ```
@@ -69,7 +68,7 @@ To access this file, please switch to the `latest` branch fo this repo and downl
  * `stable-vicuna-13B-GPTQ-4bit.latest.act-order.safetensors`
  * Only works with recent GPTQ-for-LLaMa code
  * **Does not** work with text-generation-webui one-click-installers
- * Parameters: Groupsize = 128g. act-order.
+ * Parameters: Groupsize = 128g. **act-order**.
  * Offers highest quality quantisation, but requires recent GPTQ-for-LLaMa code
  * Command used to create the GPTQ:
  ```
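The diff above only changes README wording. For context, here is a minimal, hypothetical sketch of how the `compat.no-act-order` file it describes might be loaded, assuming the AutoGPTQ library (not referenced in this commit) and an assumed repo id, model basename, and prompt format:

```python
# Illustrative sketch only: not part of this commit.
# Assumptions: AutoGPTQ is installed, the repo id and basename below are correct,
# and StableVicuna's "### Human / ### Assistant" prompt format is used.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "TheBloke/stable-vicuna-13B-GPTQ"                      # assumed repo id
basename = "stable-vicuna-13B-GPTQ-4bit.compat.no-act-order"     # file named in the diff, minus extension

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)

# Load the 4-bit GPTQ safetensors checkpoint onto the first GPU.
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    model_basename=basename,
    use_safetensors=True,
    device="cuda:0",
)

# Assumed StableVicuna-style prompt.
prompt = "### Human: What does act-order change in GPTQ?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The `latest.act-order` variant on the `latest` branch would be loaded the same way with its own basename, but, as the README notes, it requires recent GPTQ-for-LLaMa code rather than working with all branches.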