---
license: mit
---

# Pythia 12B SFT

<!-- Provide a quick summary of what the model is/does. -->

This model is a supervised fine-tune (SFT) of [EleutherAI/pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped) on Open Assistant conversation data. This model card was generated from [the Hugging Face model card template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

# Model Details

## Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Causal decoder-only transformer language model
- **Language(s) (NLP):** [More Information Needed]
- **License:** MIT
- **Finetuned from model [optional]:** [EleutherAI/pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped)

## Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

# Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

## Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

## Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

## Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

# Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

## Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.
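
A minimal sketch using Hugging Face Transformers, assuming the checkpoint loads with the standard causal-LM classes; the repo id below is a placeholder, so substitute the actual Hub id of this model:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "theblackcat102/pythia-12b-sft"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# fp16 weights of a 12B model need roughly 24 GB of GPU memory;
# device_map="auto" (requires the `accelerate` package) spreads the
# layers across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```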

# Training Details

## Training Data

Training data includes the 2023-02-10 Open Assistant unfiltered conversation tree dump.
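
The dump stores whole conversation trees rather than flat (prompt, reply) rows. Below is a hypothetical sketch of flattening such a tree into SFT pairs; the field names (`text`, `replies`) and the filename are assumptions for illustration, not the dump's documented schema:

```
import json

def iter_pairs(node, history=None):
    # Yield (context, reply) pairs from one conversation tree.
    # Assumes each node has "text" and child nodes under "replies";
    # the real 2023-02-10 dump may use different field names.
    history = (history or []) + [node["text"]]
    for child in node.get("replies", []):
        yield "\n".join(history), child["text"]
        yield from iter_pairs(child, history)

with open("oasst_trees.jsonl") as f:  # placeholder filename
    for line in f:
        for context, reply in iter_pairs(json.loads(line)):
            ...  # build one SFT training example here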

## Training Procedure

```
deepspeed trainer_sft.py --configs defaults pythia-80 --deepspeed
```

Here `defaults` and `pythia-80` refer to the config sections listed under Training Hyperparameters below; values in `pythia-80` override those in `defaults`.

### Preprocessing [optional]

[More Information Needed]

### Training Hyperparameters

Training used DeepSpeed ZeRO stage 2. The config is as follows:

```
defaults:
  learning_rate: 1e-5
  gradient_checkpointing: false
  gradient_accumulation_steps: 32
  per_device_train_batch_size: 2
  per_device_eval_batch_size: 2
  weight_decay: 0.00
  warmup_steps: 600
  eval_steps: 250
  save_steps: 250
  max_length: 512
  num_train_epochs: 2
  logging_steps: 10
  max_grad_norm: 2.0
  save_total_limit: 4
  fp16: true
  eval_accumulation_steps:
  freeze_layer:
  datasets:
    - gsm8k_hard
    - webgpt
    - squad_v2
    - adversarial_qa
    - private_tuning
    - oa_translated
    - prosocial_dialogue
    - math_qa
    - wikihow
    - joke
    - gsm8k
    - ted_trans_en-hi
    - ted_trans_de-ja
    - ted_trans_nl-en
    - ted_trans_en-ja
    - ted_trans_en-es
    - ted_trans_en-ms
    - xsum:
        fraction: 0.5
    - cnn_dailymail:
        fraction: 0.5
    - multi_news:
        fraction: 0.5
    - tldr_news:
        fraction: 0.5
    - scitldr:
        fraction: 0.5
    - samsum:
        fraction: 0.5
    - debate_sum:
        fraction: 0.5
    - billsum:
        fraction: 0.5
    - wmt2019_zh-en:
        fraction: 0.9
    - wmt2019_ru-en:
        fraction: 0.9
    - wmt2019_de-en:
        fraction: 0.9
    - wmt2019_fr-de:
        fraction: 0.9
    - essay_instruction
    - reddit_eli5
    - reddit_askh
    - reddit_asks
  loss_fn: CrossEntropyLoss
  log_dir: "base"
  quantization: false
  seq2seqmodel: false
  poly_eps: 1.0
  fuse_gelu: true
  log_wandb: true
  samples_mixing: true  # uses a collator that mixes samples in the batch into a single sequence that can contain multiple tasks
  verbose: false

pythia-80:
  learning_rate: 5e-6
  model_name: EleutherAI/pythia-12b-deduped
  weight_decay: 0.01
  max_length: 520
  warmup_steps: 1000
  gradient_checkpointing: false
  gradient_accumulation_steps: 20
  per_device_train_batch_size: 6
  per_device_eval_batch_size: 6
```
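
With the `pythia-80` overrides and the 8 × A100 node listed under Hardware, the effective global batch size works out to `per_device_train_batch_size × gradient_accumulation_steps × num_gpus` = 6 × 20 × 8 = 960 sequences per optimizer step.

`samples_mixing` packs several short examples into a single `max_length` sequence. The sketch below is a hypothetical illustration of that idea, not the project's actual collator; `mix_samples` and its arguments are invented names:

```
def mix_samples(examples, eos_token_id, max_length=520):
    # Hypothetical packing collator: concatenate tokenized examples,
    # separated by EOS, starting a new sequence whenever the next
    # example would overflow max_length.
    packed, current = [], []
    for ids in examples:  # ids: list of token ids for one example
        if current and len(current) + len(ids) + 1 > max_length:
            packed.append(current)
            current = []
        current = current + ids + [eos_token_id]
    if current:
        packed.append(current)
    return packed
```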

### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

# Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

## Testing Data, Factors & Metrics

### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

## Results

[More Information Needed]

### Summary

# Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

# Technical Specifications [optional]

## Model Architecture and Objective

Pythia 12B deduped (a GPT-NeoX-style decoder-only transformer), fine-tuned with a causal language-modeling (cross-entropy) objective.

## Compute Infrastructure

Stability AI AWS Slurm cluster

### Hardware

8 × A100 80 GB GPUs

### Software

[More Information Needed]

# Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

# Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

# More Information [optional]

[More Information Needed]

# Model Card Authors [optional]

[More Information Needed]

# Model Card Contact

[More Information Needed]