Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

DareBeagel-2x7B - GGUF
- Model creator: https://huggingface.co/shadowml/
- Original model: https://huggingface.co/shadowml/DareBeagel-2x7B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [DareBeagel-2x7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q2_K.gguf) | Q2_K | 4.43GB |
| [DareBeagel-2x7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.IQ3_XS.gguf) | IQ3_XS | 4.94GB |
| [DareBeagel-2x7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.IQ3_S.gguf) | IQ3_S | 5.22GB |
| [DareBeagel-2x7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q3_K_S.gguf) | Q3_K_S | 5.2GB |
| [DareBeagel-2x7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.IQ3_M.gguf) | IQ3_M | 3.28GB |
| [DareBeagel-2x7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q3_K.gguf) | Q3_K | 5.78GB |
| [DareBeagel-2x7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q3_K_M.gguf) | Q3_K_M | 5.78GB |
| [DareBeagel-2x7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q3_K_L.gguf) | Q3_K_L | 6.27GB |
| [DareBeagel-2x7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.IQ4_XS.gguf) | IQ4_XS | 2.32GB |
| [DareBeagel-2x7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q4_0.gguf) | Q4_0 | 6.78GB |
| [DareBeagel-2x7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.IQ4_NL.gguf) | IQ4_NL | 6.85GB |
| [DareBeagel-2x7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q4_K_S.gguf) | Q4_K_S | 6.84GB |
| [DareBeagel-2x7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q4_K.gguf) | Q4_K | 7.25GB |
| [DareBeagel-2x7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q4_K_M.gguf) | Q4_K_M | 7.25GB |
| [DareBeagel-2x7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q4_1.gguf) | Q4_1 | 7.52GB |
| [DareBeagel-2x7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q5_0.gguf) | Q5_0 | 8.26GB |
| [DareBeagel-2x7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q5_K_S.gguf) | Q5_K_S | 8.26GB |
| [DareBeagel-2x7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q5_K.gguf) | Q5_K | 8.51GB |
| [DareBeagel-2x7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q5_K_M.gguf) | Q5_K_M | 8.51GB |
| [DareBeagel-2x7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q5_1.gguf) | Q5_1 | 9.01GB |
| [DareBeagel-2x7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q6_K.gguf) | Q6_K | 9.84GB |
| [DareBeagel-2x7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf/blob/main/DareBeagel-2x7B.Q8_0.gguf) | Q8_0 | 12.75GB |
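
To try one of these quants locally, you can fetch a single file with `huggingface_hub` and load it with a GGUF runtime such as llama-cpp-python. This is a minimal sketch, not part of the original upload: the choice of the Q4_K_M file, the context size, and the GPU-offload setting are all illustrative assumptions.

```python
# pip install huggingface_hub llama-cpp-python  (assumed dependencies)
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (Q4_K_M picked as an example).
model_path = hf_hub_download(
    repo_id="RichardErkhov/shadowml_-_DareBeagel-2x7B-gguf",
    filename="DareBeagel-2x7B.Q4_K_M.gguf",
)

# Load the GGUF file; n_ctx and n_gpu_layers are illustrative settings
# (n_gpu_layers=-1 offloads every layer to the GPU if one is available).
llm = Llama(model_path=model_path, n_ctx=2048, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```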

Original model description:
---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralDaredevil-7B
model-index:
- name: DareBeagel-2x7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 72.01
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagel-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.12
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagel-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.51
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagel-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 69.09
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagel-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.72
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagel-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 70.51
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagel-2x7B
      name: Open LLM Leaderboard
---

# DareBeagel-2x7B

DareBeagel-2x7B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)
## 🧩 Configuration

```yaml
base_model: mlabonne/NeuralBeagle14-7B
gate_mode: random
experts:
  - source_model: mlabonne/NeuralBeagle14-7B
    positive_prompts: [""]
  - source_model: mlabonne/NeuralDaredevil-7B
    positive_prompts: [""]
```
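
With `gate_mode: random`, mergekit initializes the router weights randomly rather than deriving them from the `positive_prompts`, which is why the prompts are left empty here. For intuition only, below is a minimal sketch of the top-k routing an MoE layer performs at inference time; every name in it is made up for illustration, and it is not mergekit's or this model's actual code.

```python
import torch

def moe_layer(x, experts, router, k=2):
    # x: (tokens, dim); experts: list of modules; router: Linear(dim, n_experts)
    logits = router(x)                    # (tokens, n_experts) routing scores
    weights, idx = torch.topk(logits, k)  # keep the k highest-scoring experts
    weights = torch.softmax(weights, dim=-1)
    out = torch.zeros_like(x)
    for t in range(x.shape[0]):           # naive per-token loop, for clarity
        for j in range(k):
            e = int(idx[t, j])
            out[t] += weights[t, j] * experts[e](x[t])
    return out

# Toy usage: two experts, mirroring the 2x7B merge (dimensions shrunk).
dim = 16
experts = [torch.nn.Linear(dim, dim) for _ in range(2)]
router = torch.nn.Linear(dim, 2)
print(moe_layer(torch.randn(4, dim), experts, router).shape)  # torch.Size([4, 16])
```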

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "shadowml/DareBeagel-2x7B"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Build a chat prompt with the model's chat template, then sample a reply.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
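
Note that newer transformers releases deprecate passing `load_in_4bit` through `model_kwargs` in favor of an explicit `BitsAndBytesConfig`. A hedged sketch of the same setup in the newer style (the config values are assumptions; behavior should match the pipeline above):

```python
import torch
import transformers
from transformers import BitsAndBytesConfig

pipeline = transformers.pipeline(
    "text-generation",
    model="shadowml/DareBeagel-2x7B",
    model_kwargs={
        "quantization_config": BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.float16,  # compute in fp16, store weights in 4-bit
        ),
        "device_map": "auto",  # place layers on available devices automatically
    },
)
```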

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_shadowml__DareBeagel-2x7B).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 74.49 |
| AI2 Reasoning Challenge (25-Shot) | 72.01 |
| HellaSwag (10-Shot)               | 88.12 |
| MMLU (5-Shot)                     | 64.51 |
| TruthfulQA (0-shot)               | 69.09 |
| Winogrande (5-shot)               | 82.72 |
| GSM8k (5-shot)                    | 70.51 |