Update README.md
README.md CHANGED
@@ -29,7 +29,7 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 
 ## Other repositories available
 
-* [4-bit GPTQ models for GPU inference](https://huggingface.co/
+* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/samantha-33B-GPTQ)
 * [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/samantha-33B-GGML)
 * [Original unquantised fp16 model in HF format](https://huggingface.co/ehartford/samantha-33B)
 
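The hunk context notes that the GGML files in the second link are intended for CPU + GPU inference via llama.cpp. Below is a minimal sketch of loading one of those quantised files through the llama-cpp-python bindings; the filename, context size, and GPU-offload count are illustrative assumptions, not values taken from this repository.

```python
# Minimal sketch, assuming the llama-cpp-python bindings are installed
# (pip install llama-cpp-python) and a 4-bit GGML file has been downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./samantha-33B.ggmlv3.q4_0.bin",  # hypothetical filename
    n_ctx=2048,       # context window size
    n_gpu_layers=32,  # offload some layers to the GPU for CPU+GPU inference
)

# Run a short completion and print the generated text.
out = llm("Hello, how are you today?", max_tokens=64)
print(out["choices"][0]["text"])
```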