New GGMLv3 format for breaking llama.cpp change May 19th commit 2d5db48
README.md CHANGED
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# Manticore 13B GGML

This repo is the result of quantising to 4-bit, 5-bit and 8-bit GGML for CPU (+CUDA) inference.
## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Manticore-13B-GPTQ).
* [4-bit, 5-bit and 8-bit GGML models for llama.cpp CPU (+CUDA) inference](https://huggingface.co/TheBloke/Manticore-13B-GGML).
* [OpenAccess AI Collective's original float16 HF format repo for GPU inference and further conversions](https://huggingface.co/openaccess-ai-collective/manticore-13b).
## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
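As a minimal sketch of meeting that requirement (platform-specific options such as CUDA builds are out of scope here; the llama.cpp README covers them), building a compatible binary from source could look like this:

```bash
# Clone and build llama.cpp from a commit that includes the GGMLv3 change
# (2d5db48, merged May 19th 2023) or anything later. A fresh clone of the
# current master branch satisfies this.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```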
For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `manticore-13B.ggmlv3.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10.5GB | 4-bit. |
| `manticore-13B.ggmlv3.q4_1.bin` | q4_1 | 4bit | 8.14GB | 10.5GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| `manticore-13B.ggmlv3.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `manticore-13B.ggmlv3.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, at the cost of higher resource usage and slower inference. |
| `manticore-13B.ggmlv3.q8_0.bin` | q8_0 | 8bit | 14.6GB | 17GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
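To try one of these files, it first needs to be downloaded locally. As one sketch, assuming the q5_0 file from the table above, it can be fetched directly from this repo over HTTP:

```bash
# Download a quantised model file from the main branch of this repo.
# To fetch a file from the previous_llama_ggmlv2 branch instead, replace
# 'main' in the URL with that branch name.
wget https://huggingface.co/TheBloke/Manticore-13B-GGML/resolve/main/manticore-13B.ggmlv3.q5_0.bin
```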
## How to run in `llama.cpp`
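The body of this section is not part of the diff shown above. As a representative invocation only (the flag values are illustrative assumptions, not settings recommended by this repo), running one of the files with a compatible llama.cpp build could look like:

```bash
# Run the 5-bit model: 8 threads, coloured output, 2048-token context,
# up to 512 generated tokens. Adjust -t to your CPU and -m to your file.
./main -t 8 -m manticore-13B.ggmlv3.q5_0.bin --color -c 2048 -n 512 -p "Write a story about llamas"
```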