Update README.md

README.md CHANGED
@@ -1,45 +1,137 @@
---
library_name: transformers
tags:
- mergekit
- merge
---

```
merge_method: slerp
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.5, 0.3, 0.7, 1.0]
    - filter: mlp
      value: [1.0, 0.5, 0.7, 0.3, 0.0]
    - value: 0.5
slices:
- sources:
  - layer_range: [0, 80]
    model: /Volumes/external/VAGOsolutions_Llama-3-SauerkrautLM-70b-Instruct
  - layer_range: [0, 80]
    model: /Volumes/external2/models/Drake-Llama-3.1-70B
```
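
For context: with `merge_method: slerp`, each `value` list is a gradient of interpolation factors `t` swept across the layer stack, with separate curves for self-attention and MLP tensors and a default of 0.5 for everything else. A minimal Python sketch of the idea (my own simplification for illustration, not mergekit's actual code):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    # Spherical interpolation between two weight tensors:
    # compute the angle between them, then blend along the arc.
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if omega < eps:  # nearly parallel: plain linear interpolation is fine
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    mixed = (np.sin((1.0 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape)

def t_for_layer(layer, n_layers, anchors):
    # Piecewise-linear sweep of the anchor values across layer depth,
    # e.g. anchors=[0.0, 0.5, 0.3, 0.7, 1.0] for self_attn tensors.
    pos = layer / max(n_layers - 1, 1) * (len(anchors) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(anchors) - 1)
    return anchors[lo] + (pos - lo) * (anchors[hi] - anchors[lo])

# Example: blend layer 40 of 80 attention weights from two models.
a, b = np.random.randn(64, 64), np.random.randn(64, 64)  # stand-ins for real tensors
merged = slerp(t_for_layer(40, 80, [0.0, 0.5, 0.3, 0.7, 1.0]), a, b)
```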

---
license: llama3.1
language:
- en
library_name: transformers
tags:
- mergekit
- merge
base_model:
- meta-llama/Meta-Llama-3.1-70B-Instruct
- NousResearch/Hermes-3-Llama-3.1-70B
- abacusai/Dracarys-Llama-3.1-70B-Instruct
- VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/649dc85249ae3a68334adcc6/yDDOz1fsWfSviCGtCh3f3.png)

**Brinebreath-Llama-3.1-70B**
=====================================

I made this merge after I started having some problems with Cathallama. This one seems to behave well.

**Notable Performance**

* 7 percentage point increase in overall MMLU-PRO success rate over Llama 3.1 70B at Q4_0 (49% vs. 42%)
* Strong performance across MMLU-PRO categories
* Great performance during manual testing
28 |
+
|
29 |
+
**Creation workflow**
|
30 |
+
=====================
|
31 |
+
**Models merged**
|
32 |
+
* meta-llama/Meta-Llama-3.1-70B-Instruct
|
33 |
+
* NousResearch/Hermes-3-Llama-3.1-70B
|
34 |
+
* abacusai/Dracarys-Llama-3.1-70B-Instruct
|
35 |
+
* VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct

```
flowchart TD
    A[Hermes 3] -->|Merge with| B[Meta-Llama-3.1]
    C[Dracarys] -->|Merge with| D[Meta-Llama-3.1]
    B --> E[Merge]
    D --> E[Merge]
    G[SauerkrautLM] -->|Merge with| E[Merge]
    E --> F[Brinebreath]
```
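
Read top to bottom, the graph reduces to three rounds of pairwise merging. A tiny Python sketch of just the order of operations (hypothetical `merge` helper; the real steps were separate mergekit runs):

```python
# Each call stands in for one pairwise model merge.
def merge(a: str, b: str) -> str:
    return f"merge({a}, {b})"

step1 = merge("Hermes-3", "Meta-Llama-3.1")   # A -> B
step2 = merge("Dracarys", "Meta-Llama-3.1")   # C -> D
step3 = merge(step1, step2)                   # B, D -> E
print(merge("SauerkrautLM", step3))           # G -> E -> F (Brinebreath)
```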

![image/png](https://cdn-uploads.huggingface.co/production/uploads/649dc85249ae3a68334adcc6/3cjOUfghMD2GvxL7a3SOh.png)

**Testing**
=====================

**Hyperparameters**
---------------

* **Temperature**: 0.0 for automated tests, 0.9 for manual tests
* **Penalize repeat sequence**: 1.05
* **Consider N tokens for penalize**: 256
* **Penalize repetition of newlines**: enabled
* **Top-K sampling**: 40
* **Top-P sampling**: 0.95
* **Min-P sampling**: 0.05

**llama.cpp Version**
------------------

* b3527-2-g2d5dd7bb
* Flags: `-fa -ngl -1 -ctk f16 --no-mmap`
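
For anyone scripting rather than using the CLI, the same settings map roughly onto llama-cpp-python. A sketch under the assumption that the Q4_0 file is available locally (path and prompt are placeholders):

```python
from llama_cpp import Llama

# Rough equivalents of the flags above: -fa -> flash_attn=True,
# -ngl -1 -> n_gpu_layers=-1, --no-mmap -> use_mmap=False
# (-ctk f16 is already the default K-cache type).
llm = Llama(
    model_path="Brinebreath-Llama-3.1-70B.Q4_0.gguf",  # placeholder path
    n_gpu_layers=-1,          # offload all layers to the GPU
    flash_attn=True,
    use_mmap=False,
    last_n_tokens_size=256,   # window considered by the repeat penalty
)

out = llm(
    "Write a short poem about the sea.",  # placeholder prompt
    max_tokens=256,
    temperature=0.9,          # 0.0 was used for the automated runs
    repeat_penalty=1.05,
    top_k=40,
    top_p=0.95,
    min_p=0.05,
)
print(out["choices"][0]["text"])
```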

**Tested Files**
------------------

* Brinebreath-Llama-3.1-70B.Q4_0.gguf
* Meta-Llama-3.1-70B-Instruct.Q4_0.gguf

**Manual testing**

| Category | Test Case | Brinebreath-Llama-3.1-70B.Q4_0.gguf | Meta-Llama-3.1-70B-Instruct.Q4_0.gguf |
| --- | --- | --- | --- |
| **Common Sense** | Ball on cup | OK | OK |
| | Big duck small horse | OK | OK |
| | Killers | OK | OK |
| | Strawberry r's | <span style="color: red;">KO</span> | <span style="color: red;">KO</span> |
| | 9.11 or 9.9 bigger | <span style="color: red;">KO</span> | <span style="color: red;">KO</span> |
| | Dragon or lens | <span style="color: red;">KO</span> | <span style="color: red;">KO</span> |
| | Shirts | OK | <span style="color: red;">KO</span> |
| | Sisters | OK | <span style="color: red;">KO</span> |
| | Jane faster | OK | OK |
| **Programming** | JSON | OK | OK |
| | Python snake game | OK | <span style="color: red;">KO</span> |
| **Math** | Door window combination | OK | <span style="color: red;">KO</span> |
| **Smoke** | Poem | OK | OK |
| | Story | OK | OK |

*Note: See [sample_generations.txt](https://huggingface.co/gbueno86/Brinebreath-Llama-3.1-70B/blob/main/sample_generations.txt) in the root folder of the repo for the raw generations.*

**MMLU-PRO**

| Model | Success % |
| --- | --- |
| Brinebreath-Llama-3.1-70B.Q4_0.gguf | **49.0%** |
| Meta-Llama-3.1-70B-Instruct.Q4_0.gguf | 42.0% |

| MMLU-PRO category | Brinebreath-Llama-3.1-70B.Q4_0.gguf | Meta-Llama-3.1-70B-Instruct.Q4_0.gguf |
| --- | --- | --- |
| Business | **45.0%** | 40.0% |
| Law | **40.0%** | 35.0% |
| Psychology | **85.0%** | 80.0% |
| Biology | **80.0%** | 75.0% |
| Chemistry | **50.0%** | 45.0% |
| History | **65.0%** | 60.0% |
| Other | **55.0%** | 50.0% |
| Health | **70.0%** | 65.0% |
| Economics | **80.0%** | 75.0% |
| Math | **35.0%** | 30.0% |
| Physics | **45.0%** | 40.0% |
| Computer Science | **60.0%** | 55.0% |
| Philosophy | **50.0%** | 45.0% |
| Engineering | **45.0%** | 40.0% |

Note: MMLU-PRO overall was tested with 100 questions; each category was tested with 20 questions.
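
The success rate here is just exact-match scoring over multiple-choice items. A hypothetical scoring loop for reference (not the author's actual harness; `llm` is a llama-cpp-python `Llama` instance as sketched earlier, and `items` holds `(question, options, answer_letter)` tuples):

```python
def score(llm, items):
    # Ask each question at temperature 0 and count exact-match answer letters.
    correct = 0
    for question, options, answer in items:
        lettered = "\n".join(f"{c}. {o}" for c, o in zip("ABCDEFGHIJ", options))
        prompt = f"{question}\n{lettered}\nAnswer with the letter only.\n"
        out = llm(prompt, max_tokens=4, temperature=0.0)
        reply = out["choices"][0]["text"].strip().upper()
        correct += reply.startswith(answer)
    return correct / len(items)
```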

**PubmedQA**

| Model | Success % |
| --- | --- |
| Brinebreath-Llama-3.1-70B.Q4_0.gguf | **71.00%** |
| Meta-Llama-3.1-70B-Instruct.Q4_0.gguf | 68.00% |

Note: PubmedQA tested with 100 questions.

**Request**
--------------
If you are hiring in the EU or can sponsor a visa, PM me :D