TheBloke committed
Commit feeb1ee
1 Parent(s): d2aa4c2

Initial GPTQ model commit

Files changed (1): README.md +8 -8
README.md CHANGED
@@ -55,14 +55,14 @@ Each separate quant is in a different branch. See below for instructions on fet
 
 | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
 | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
-| main | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
-| gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
-| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
-| gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
-| gptq-8bit-128g-actorder_True | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
-| gptq-8bit-64g-actorder_True | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
+| [main](https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ/tree/main) | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+| [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
+| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
 
 ## How to download from branches
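
Each row in the table above lives on its own git branch of the repo, and the change in this commit links each branch name to its tree view. As a concrete illustration of the "download from branches" step the diff context ends on, here is a minimal sketch using the `huggingface_hub` Python library (an assumption: it is installed via `pip install huggingface_hub`); the `revision` value is just one example branch from the table:

```python
# Minimal sketch: fetch a single quant branch of this repo via huggingface_hub.
# "revision" selects the git branch; the name below is one row from the table.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/WizardLM-13B-V1.2-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # any branch name from the table
)
print("Model files downloaded to:", local_dir)
```

Equivalently, git can clone just one branch: `git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ`.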