ibivibiv committed
Commit bec4f91
1 Parent(s): 082d761

Update README.md

Files changed (1)
  1. README.md +157 −194
README.md CHANGED
@@ -108,200 +108,163 @@ model-index:
  name: Open LLM Leaderboard
  ---
 
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
  # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
  Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ibivibiv__multimaster-7b-v6)
 
 
108
  name: Open LLM Leaderboard
109
  ---
110
 
111
+ # Multi Master 7B v6
+
+ ![img](./multimaster.png)
+
+ A quick multi-disciplinary MoE model, part of a series of models built to test gate tuning for Mixtral-style MoE models.
+
+ # Prompting
+
+ ## Prompt Template (Alpaca Style)
+
+ ```
+ ### Instruction:
+
+ <prompt> (without the <>)
+
+ ### Response:
+ ```
+
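The template above can be filled in programmatically before tokenization. A minimal sketch (the helper name is hypothetical, not part of the model's API):

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Wrap an instruction in the Alpaca-style template shown above."""
    return f"### Instruction:\n\n{instruction}\n\n### Response:\n"

# Example usage: the returned string is what gets passed to the tokenizer.
prompt = build_alpaca_prompt("Summarize the theory of relativity in one sentence.")
print(prompt)
```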
+ ## Sample Code
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ torch.set_default_device("cuda")
+
+ model = AutoModelForCausalLM.from_pretrained("ibivibiv/multimaster-7b", torch_dtype="auto", device_map="auto")
+ tokenizer = AutoTokenizer.from_pretrained("ibivibiv/multimaster-7b")
+
+ inputs = tokenizer("### Instruction: Who would win in an arm wrestling match between Abraham Lincoln and Chuck Norris?\nA. Abraham Lincoln\nB. Chuck Norris\n### Response:\n", return_tensors="pt", return_attention_mask=False)
+
+ outputs = model.generate(**inputs, max_length=200)
+ text = tokenizer.batch_decode(outputs)[0]
+ print(text)
+ ```
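Note that `batch_decode` returns the full sequence, prompt included. A small helper can isolate just the model's answer after the `### Response:` marker — a sketch assuming the template above (the helper name is illustrative):

```python
def extract_response(generated: str, marker: str = "### Response:") -> str:
    """Return only the text that follows the response marker."""
    # partition splits on the first occurrence of the marker;
    # if the marker is absent, fall back to the whole string
    _, _, answer = generated.partition(marker)
    return (answer or generated).strip()

sample = "### Instruction: Who wins?\n### Response:\nB. Chuck Norris"
print(extract_response(sample))  # → B. Chuck Norris
```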
+
+ # Model Details
+ * **Trained by**: [ibivibiv](https://huggingface.co/ibivibiv)
+ * **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
+ * **Model type:** **multimaster-7b** is a LoRA-tuned version of openchat/openchat-3.5-0106 with the adapter merged back into the base model
+ * **Language(s)**: English
+ * **Purpose**: This model focuses on multi-disciplinary tuning
+
+ # Benchmark Scores
+
+ coming soon
+
+ ## Citations
+
+ ```
+ @misc{open-llm-leaderboard,
+   author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf},
+   title = {Open LLM Leaderboard},
+   year = {2023},
+   publisher = {Hugging Face},
+   howpublished = "\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}"
+ }
+ ```
+ ```
+ @software{eval-harness,
+   author = {Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and Phang, Jason and Reynolds, Laria and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
+   title = {A framework for few-shot language model evaluation},
+   month = sep,
+   year = 2021,
+   publisher = {Zenodo},
+   version = {v0.0.1},
+   doi = {10.5281/zenodo.5371628},
+   url = {https://doi.org/10.5281/zenodo.5371628}
+ }
+ ```
+ ```
+ @misc{clark2018think,
+   title={Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
+   author={Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
+   year={2018},
+   eprint={1803.05457},
+   archivePrefix={arXiv},
+   primaryClass={cs.AI}
+ }
+ ```
+ ```
+ @misc{zellers2019hellaswag,
+   title={HellaSwag: Can a Machine Really Finish Your Sentence?},
+   author={Rowan Zellers and Ari Holtzman and Yonatan Bisk and Ali Farhadi and Yejin Choi},
+   year={2019},
+   eprint={1905.07830},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```
+ ```
+ @misc{hendrycks2021measuring,
+   title={Measuring Massive Multitask Language Understanding},
+   author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
+   year={2021},
+   eprint={2009.03300},
+   archivePrefix={arXiv},
+   primaryClass={cs.CY}
+ }
+ ```
+ ```
+ @misc{lin2022truthfulqa,
+   title={TruthfulQA: Measuring How Models Mimic Human Falsehoods},
+   author={Stephanie Lin and Jacob Hilton and Owain Evans},
+   year={2022},
+   eprint={2109.07958},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```
+ ```
+ @misc{DBLP:journals/corr/abs-1907-10641,
+   title={{WINOGRANDE:} An Adversarial Winograd Schema Challenge at Scale},
+   author={Keisuke Sakaguchi and Ronan Le Bras and Chandra Bhagavatula and Yejin Choi},
+   year={2019},
+   eprint={1907.10641},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```
+ ```
+ @misc{DBLP:journals/corr/abs-2110-14168,
+   title={Training Verifiers to Solve Math Word Problems},
+   author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Mark Chen and Heewoo Jun and Lukasz Kaiser and Matthias Plappert and Jerry Tworek and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},
+   year={2021},
+   eprint={2110.14168},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```