Abe13 committed on
Commit c53020b (1 parent: c50889b)

Upload model

Files changed (1)
  1. README.md +200 -129
README.md CHANGED
@@ -1,148 +1,219 @@
  ---
- license: apache-2.0
  base_model: Open-Orca/Mistral-7B-OpenOrca
- tags:
- - generated_from_trainer
- model-index:
- - name: juni-Mistral-7B-OpenOrca
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # juni-Mistral-7B-OpenOrca

- This model is a fine-tuned version of [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 3.0758

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

  ## Training procedure

- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 2
- - eval_batch_size: 2
- - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 16
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 10
-
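The `total_train_batch_size` in the removed hyperparameter list follows from the per-device batch size and gradient accumulation steps; a quick sanity check (assuming a single device, which the card does not state):

```python
# Effective batch size = per-device batch size x gradient accumulation steps x devices
per_device_train_batch_size = 2
gradient_accumulation_steps = 8
num_devices = 1  # assumption: the card does not report the device count

total_train_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_devices
)
print(total_train_batch_size)  # 16, matching the listed total_train_batch_size
```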
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 1.7594 | 0.11 | 1 | 3.4155 |
- | 1.7761 | 0.22 | 2 | 3.3643 |
- | 1.6344 | 0.32 | 3 | 3.3129 |
- | 1.8145 | 0.43 | 4 | 3.2624 |
- | 1.7308 | 0.54 | 5 | 3.2462 |
- | 1.6688 | 0.65 | 6 | 3.2282 |
- | 1.8082 | 0.76 | 7 | 3.2052 |
- | 1.5884 | 0.86 | 8 | 3.1957 |
- | 1.6247 | 0.97 | 9 | 3.1926 |
- | 1.7539 | 1.08 | 10 | 3.1759 |
- | 1.6578 | 1.19 | 11 | 3.1674 |
- | 1.661 | 1.3 | 12 | 3.1829 |
- | 1.5935 | 1.41 | 13 | 3.1785 |
- | 1.5209 | 1.51 | 14 | 3.1687 |
- | 1.6052 | 1.62 | 15 | 3.1504 |
- | 1.495 | 1.73 | 16 | 3.1539 |
- | 1.5238 | 1.84 | 17 | 3.1357 |
- | 1.5698 | 1.95 | 18 | 3.1196 |
- | 1.3628 | 2.05 | 19 | 3.1099 |
- | 1.5966 | 2.16 | 20 | 3.1170 |
- | 1.5713 | 2.27 | 21 | 3.1327 |
- | 1.5321 | 2.38 | 22 | 3.1060 |
- | 1.5511 | 2.49 | 23 | 3.1153 |
- | 1.5605 | 2.59 | 24 | 3.0925 |
- | 1.515 | 2.7 | 25 | 3.1066 |
- | 1.4646 | 2.81 | 26 | 3.1005 |
- | 1.3957 | 2.92 | 27 | 3.1305 |
- | 1.4377 | 3.03 | 28 | 3.1143 |
- | 1.4452 | 3.14 | 29 | 3.1472 |
- | 1.4925 | 3.24 | 30 | 3.1050 |
- | 1.4749 | 3.35 | 31 | 3.1264 |
- | 1.5017 | 3.46 | 32 | 3.1107 |
- | 1.5082 | 3.57 | 33 | 3.1000 |
- | 1.4657 | 3.68 | 34 | 3.1220 |
- | 1.2359 | 3.78 | 35 | 3.1199 |
- | 1.4095 | 3.89 | 36 | 3.0966 |
- | 1.5437 | 4.0 | 37 | 3.0847 |
- | 1.339 | 4.11 | 38 | 3.1319 |
- | 1.3762 | 4.22 | 39 | 3.0917 |
- | 1.3964 | 4.32 | 40 | 3.0947 |
- | 1.4472 | 4.43 | 41 | 3.1034 |
- | 1.3863 | 4.54 | 42 | 3.1100 |
- | 1.434 | 4.65 | 43 | 3.1018 |
- | 1.5171 | 4.76 | 44 | 3.0831 |
- | 1.215 | 4.86 | 45 | 3.0755 |
- | 1.4791 | 4.97 | 46 | 3.0790 |
- | 1.3341 | 5.08 | 47 | 3.0816 |
- | 1.3899 | 5.19 | 48 | 3.0909 |
- | 1.3621 | 5.3 | 49 | 3.0668 |
- | 1.4034 | 5.41 | 50 | 3.0818 |
- | 1.3541 | 5.51 | 51 | 3.0512 |
- | 1.2916 | 5.62 | 52 | 3.0861 |
- | 1.3359 | 5.73 | 53 | 3.0695 |
- | 1.3962 | 5.84 | 54 | 3.0544 |
- | 1.3537 | 5.95 | 55 | 3.0808 |
- | 1.2551 | 6.05 | 56 | 3.0733 |
- | 1.4321 | 6.16 | 57 | 3.0481 |
- | 1.3511 | 6.27 | 58 | 3.0660 |
- | 1.4584 | 6.38 | 59 | 3.0385 |
- | 1.1897 | 6.49 | 60 | 3.0632 |
- | 1.3157 | 6.59 | 61 | 3.0724 |
- | 1.2269 | 6.7 | 62 | 3.0747 |
- | 1.4017 | 6.81 | 63 | 3.0593 |
- | 1.357 | 6.92 | 64 | 3.0655 |
- | 1.4048 | 7.03 | 65 | 3.0649 |
- | 1.308 | 7.14 | 66 | 3.0707 |
- | 1.2297 | 7.24 | 67 | 3.0561 |
- | 1.2186 | 7.35 | 68 | 3.0729 |
- | 1.2583 | 7.46 | 69 | 3.0800 |
- | 1.4283 | 7.57 | 70 | 3.0698 |
- | 1.224 | 7.68 | 71 | 3.0787 |
- | 1.2403 | 7.78 | 72 | 3.0669 |
- | 1.2677 | 7.89 | 73 | 3.0615 |
- | 1.3997 | 8.0 | 74 | 3.0658 |
- | 1.2593 | 8.11 | 75 | 3.0714 |
- | 1.1997 | 8.22 | 76 | 3.0752 |
- | 1.2961 | 8.32 | 77 | 3.0662 |
- | 1.3297 | 8.43 | 78 | 3.0637 |
- | 1.2994 | 8.54 | 79 | 3.0660 |
- | 1.3623 | 8.65 | 80 | 3.0626 |
- | 1.1564 | 8.76 | 81 | 3.0658 |
- | 1.3229 | 8.86 | 82 | 3.0674 |
- | 1.1027 | 8.97 | 83 | 3.0688 |
- | 1.3022 | 9.08 | 84 | 3.0699 |
- | 1.2523 | 9.19 | 85 | 3.0684 |
- | 1.198 | 9.3 | 86 | 3.0687 |
- | 0.9721 | 9.41 | 87 | 3.0730 |
- | 1.2124 | 9.51 | 88 | 3.0756 |
- | 1.3073 | 9.62 | 89 | 3.0761 |
- | 1.2945 | 9.73 | 90 | 3.0758 |

  ### Framework versions

- - Transformers 4.34.1
- - Pytorch 2.0.1+cu118
- - Datasets 2.14.6
- - Tokenizers 0.14.1
  ---
+ library_name: peft
  base_model: Open-Orca/Mistral-7B-OpenOrca
  ---

+ # Model Card for Model ID

+ <!-- Provide a quick summary of what the model is/does. -->

+ ## Model Details

+ ### Model Description

+ <!-- Provide a longer summary of what this model is. -->

+
+ - **Developed by:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Data Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]

  ## Training procedure

+ The following `bitsandbytes` quantization config was used during training:
+ - quant_method: bitsandbytes
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float16
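The values in the list above map onto `transformers`' standard `BitsAndBytesConfig`; a sketch of reconstructing that config (the class and argument names come from the `transformers` API, not from this repo):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization matching the values listed in the card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```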

  ### Framework versions

+
+ - PEFT 0.6.0.dev0
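Because this commit uploads a PEFT adapter on top of Open-Orca/Mistral-7B-OpenOrca, inference would roughly look like the sketch below. `ADAPTER_ID` is a placeholder for this repo's Hub id, which the card does not state; the calls are the standard `transformers`/`peft` APIs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_ID = "Open-Orca/Mistral-7B-OpenOrca"
ADAPTER_ID = "<this-repo-id>"  # placeholder: the card does not name the adapter repo

# Load the base model with the same 4-bit config the adapter was trained against
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)

# Attach the PEFT adapter weights from this repo on top of the base model
model = PeftModel.from_pretrained(base, ADAPTER_ID)
```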