Update README.md
README.md CHANGED
@@ -17,7 +17,7 @@ GGML files are for CPU inference using [llama.cpp](https://github.com/ggerganov/
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/guanaco-7B-GPTQ)
 * [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/guanaco-7B-GGML)
-* [Original unquantised fp16 model in HF format](https://huggingface.co/
+* [Original unquantised fp16 model in HF format](https://huggingface.co/TheBloke/guanaco-7B-HF)
 
 ## THE FILES IN MAIN BRANCH REQUIRES LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
@@ -55,4 +55,6 @@ Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](http
 
 Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
 
-# Original model card: Tim Dettmers' Guanaco 7B
+# Original model card: Tim Dettmers' Guanaco 7B
+
+No model card provided by model creator.
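
The heading in the diff pins the GGML files in the main branch to a recent llama.cpp build (commit 2d5db48, May 19th 2023). As a minimal sketch of meeting that requirement: the repository URL is the one linked in the README, while the checkout target behaviour ("that commit or later") and the example model filename are assumptions based on the text above, not taken from the repo listing.

```sh
# Fetch llama.cpp and build it at (or after) the commit the README requires.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d5db48   # commit from May 19th 2023; a later commit should also satisfy the requirement
make                   # builds the ./main binary used for GGML CPU inference

# Example invocation (the .bin filename is an assumption based on the q4_0 quantisation mentioned above):
# ./main -m guanaco-7B.ggmlv3.q4_0.bin -n 128 -p "Write a story about llamas"
```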