Upload README.md
README.md CHANGED
@@ -86,6 +86,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- repositories-available start -->
 ## Repositories available
 
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WizardCoder-Python-7B-V1.0-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardCoder-Python-7B-V1.0-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardCoder-Python-7B-V1.0-GGUF)
 * [WizardLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardCoder-Python-7b-V1.0)
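For readers arriving at this change, here is a minimal sketch of pulling the GGUF repository listed above and running it with `llama-cpp-python`. The quantisation filename and the prompt wording are assumptions, since neither is part of this diff; check the GGUF repo's file listing and the model card's prompt template before relying on them.

```python
# Minimal sketch (assumptions noted inline): download one GGUF quantisation
# from the repo added in this README and run a single completion locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quantised file from the GGUF repository.
# The exact filename is an assumption; see the repo's "Files" tab for the real list.
model_path = hf_hub_download(
    repo_id="TheBloke/WizardCoder-Python-7B-V1.0-GGUF",
    filename="wizardcoder-python-7b-v1.0.Q4_K_M.gguf",  # assumed filename
)

# Load the model for CPU+GPU inference via llama.cpp bindings.
llm = Llama(model_path=model_path, n_ctx=4096)

# Simple instruction-style prompt; confirm the template against the model card.
prompt = (
    "### Instruction:\n"
    "Write a Python function that reverses a string.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=256, stop=["###"])
print(out["choices"][0]["text"])
```

The Q4_K_M file is named here only as a common middle-ground quantisation; any of the 2- to 8-bit files in the GGUF repo can be substituted, and the AWQ or GPTQ repos serve the same purpose for GPU-only inference.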