Commit 07ad002 (parent: cc3b071) by zackli4ai: Update README.md

Files changed (1): README.md (+81, -81)
---
language:
- en
license: apache-2.0
model_name: Octopus-V4-GGUF
base_model: NexaAIDev/Octopus-v4
inference: false
model_creator: NexaAIDev
quantized_by: Nexa AI, Inc.
tags:
- function calling
- on-device language model
- gguf
- llama cpp
---
+
17
+ # Octopus V4-GGUF: Graph of language models
18
+
19
+
20
+ <p align="center">
21
+ - <a href="https://huggingface.co/NexaAIDev/Octopus-v4" target="_blank">Original Model</a>
22
+ - <a href="https://www.nexa4ai.com/" target="_blank">Nexa AI Website</a>
23
+ - <a href="https://github.com/NexaAI/octopus-v4" target="_blank">Octopus-v4 Github</a>
24
+ - <a href="https://arxiv.org/abs/2404.19296" target="_blank">ArXiv</a>
25
+ - <a href="https://huggingface.co/spaces/NexaAIDev/domain_llm_leaderboard" target="_blank">Domain LLM Leaderbaord</a>
26
+ </p>
27
+
28
+ <p align="center" width="100%">
29
+ <a><img src="octopus-v4-logo.png" alt="nexa-octopus" style="width: 40%; min-width: 300px; display: block; margin: auto;"></a>
30
+ </p>

**Acknowledgement**:
We sincerely thank our community members, [Mingyuan](https://huggingface.co/ThunderBeee) and [Zoey](https://huggingface.co/ZY6), for their extraordinary contributions to this quantization effort. Please see [Octopus-v4](https://huggingface.co/NexaAIDev/Octopus-v4) for the original Hugging Face model.

## Run with [Ollama](https://github.com/ollama/ollama)

```bash
ollama run NexaAIDev/octopus-v4-q4_k_m
```
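
If you prefer llama.cpp directly, here is a minimal sketch. The repo id `NexaAIDev/Octopus-v4-GGUF` is an assumption for illustration, and llama.cpp binary names vary across versions, so adjust to your build:

```bash
# Download one quantized file from the Hub (repo id assumed for illustration)
huggingface-cli download NexaAIDev/Octopus-v4-GGUF Octopus-v4-Q4_K_M.gguf --local-dir .

# Run it with llama.cpp (`llama-cli` in recent builds, `main` in older ones)
./llama-cli -m Octopus-v4-Q4_K_M.gguf \
  -p "Tell me the result of derivative of x^3 when x is 2?" \
  -n 128 --temp 0
```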

Input example:

```text
Query: Tell me the result of derivative of x^3 when x is 2?

Response: <nexa_4> ('Determine the derivative of the function f(x) = x^3 at the point where x equals 2, and interpret the result within the context of rate of change and tangent slope.')<nexa_end>
```

Note that `<nexa_4>` is the functional token corresponding to the math-specialized model.
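
Downstream, a router can parse the functional token and the reformulated query out of this response to dispatch it to the right expert model. A minimal parsing sketch in bash (the response string and token format follow the example above; this helper is illustrative, not part of the release):

```bash
response="<nexa_4> ('Determine the derivative of the function f(x) = x^3 at the point where x equals 2.')<nexa_end>"

# Pull out the functional token (e.g. nexa_4) naming the target expert model
token=$(echo "$response" | sed -n 's/.*<\(nexa_[0-9][0-9]*\)>.*/\1/p')

# Pull out the reformulated query between ('...')
query=$(echo "$response" | sed -n "s/.*('\(.*\)').*/\1/p")

echo "route to: $token"
echo "query:    $query"
```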

### Dataset and Benchmark

* Utilized questions from [MMLU](https://github.com/hendrycks/test) to evaluate performance.
* Evaluated with the Ollama [llm-benchmark](https://github.com/MinhNgyuen/llm-benchmark) method (see the sketch below).
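
To roughly reproduce the tokens-per-second numbers without the benchmark harness, Ollama's `--verbose` flag prints timing statistics, including an "eval rate" in tokens/s (model tag assumed to match the quantization you pulled):

```bash
# --verbose makes ollama print timing stats after the response
ollama run NexaAIDev/octopus-v4-q4_k_m "Tell me the result of derivative of x^3 when x is 2?" --verbose
```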

## Quantized GGUF Models

| Name                   | Quant method | Bits | Size    | Response speed (tokens/s) | Use Cases                                  |
| ---------------------- | ------------ | ---- | ------- | ------------------------- | ------------------------------------------ |
| Octopus-v4.gguf        |              |      | 7.20 GB | 27.64                     | extremely large                             |
| Octopus-v4-Q2_K.gguf   | Q2_K         | 2    | 1.32 GB | 54.20                     | strongly discouraged, high quality loss     |
| Octopus-v4-Q3_K.gguf   | Q3_K         | 3    | 1.82 GB | 51.22                     | not recommended                             |
| Octopus-v4-Q3_K_S.gguf | Q3_K_S       | 3    | 1.57 GB | 51.78                     | not generally recommended                   |
| Octopus-v4-Q3_K_M.gguf | Q3_K_M       | 3    | 1.82 GB | 50.86                     | not generally recommended                   |
| Octopus-v4-Q3_K_L.gguf | Q3_K_L       | 3    | 1.94 GB | 50.05                     | not generally recommended                   |
| Octopus-v4-Q4_0.gguf   | Q4_0         | 4    | 2.03 GB | 65.76                     | good quality, recommended                   |
| Octopus-v4-Q4_1.gguf   | Q4_1         | 4    | 2.24 GB | 69.01                     | slow, good quality, recommended             |
| Octopus-v4-Q4_K.gguf   | Q4_K         | 4    | 2.23 GB | 55.76                     | slow, good quality, recommended             |
| Octopus-v4-Q4_K_S.gguf | Q4_K_S       | 4    | 2.04 GB | 53.98                     | high quality, recommended                   |
| Octopus-v4-Q4_K_M.gguf | Q4_K_M       | 4    | 1.51 GB | 58.39                     | some function loss, not generally recommended |
| Octopus-v4-Q5_0.gguf   | Q5_0         | 5    | 2.45 GB | 61.98                     | slow, good quality                          |
| Octopus-v4-Q5_1.gguf   | Q5_1         | 5    | 2.67 GB | 63.44                     | slow, good quality                          |
| Octopus-v4-Q5_K.gguf   | Q5_K         | 5    | 2.58 GB | 58.28                     | moderate speed, recommended                 |
| Octopus-v4-Q5_K_S.gguf | Q5_K_S       | 5    | 2.45 GB | 59.95                     | moderate speed, recommended                 |
| Octopus-v4-Q5_K_M.gguf | Q5_K_M       | 5    | 2.62 GB | 53.31                     | fast, good quality, recommended             |
| Octopus-v4-Q6_K.gguf   | Q6_K         | 6    | 2.91 GB | 52.15                     | large, not generally recommended            |
| Octopus-v4-Q8_0.gguf   | Q8_0         | 8    | 3.78 GB | 50.10                     | very large, good quality                    |
| Octopus-v4-f16.gguf    | f16          | 16   | 7.20 GB | 30.61                     | extremely large                             |

  _Quantized with llama.cpp_
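
For reference, variants like these are typically produced with llama.cpp's conversion and quantization tools. A hedged sketch (script and binary names differ across llama.cpp versions, and paths here are illustrative):

```bash
# Convert the original Hugging Face checkpoint to a GGUF file in fp16
python convert-hf-to-gguf.py ./Octopus-v4 --outtype f16 --outfile Octopus-v4-f16.gguf

# Quantize the fp16 GGUF down to a smaller variant, e.g. Q4_K_M
# (the binary is `llama-quantize` in recent builds, `quantize` in older ones)
./llama-quantize Octopus-v4-f16.gguf Octopus-v4-Q4_K_M.gguf Q4_K_M
```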