Upload README.md
README.md CHANGED
@@ -50,6 +50,7 @@ These files were quantised using hardware kindly provided by [Massed Compute](ht

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/deepseek-coder-6.7B-base-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/deepseek-coder-6.7B-base-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/deepseek-coder-6.7B-base-GGUF)
* [DeepSeek's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base)
<!-- repositories-available end -->

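The line added in this commit points to the GGUF repository, whose files can be run on CPU, GPU, or a split of both. A minimal sketch with llama-cpp-python is below; the file name and `n_gpu_layers` value are illustrative assumptions, not values taken from this commit.

```python
# Minimal sketch: running one of the GGUF quantisations with llama-cpp-python.
# The model_path and n_gpu_layers values are assumptions for illustration only.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-coder-6.7b-base.Q4_K_M.gguf",  # assumed local GGUF file
    n_ctx=4096,        # context window
    n_gpu_layers=20,   # layers offloaded to GPU; 0 = CPU-only inference
)

out = llm("def quicksort(lst):", max_tokens=128, stop=["\n\n"])
print(out["choices"][0]["text"])
```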