feulf committed on
Commit
637bdae
1 Parent(s): d3187fe

End of training

Files changed (2)
  1. README.md +244 -0
  2. adapter_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,244 @@
---
license: llama2
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: EvolCodeLlama-7b
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.3.0`
```yaml
base_model: codellama/CodeLlama-7b-hf
base_model_config: codellama/CodeLlama-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
hub_model_id: EvolCodeLlama-7b

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: mlabonne/Evol-Instruct-Python-1k
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.02
output_dir: ./qlora-out

adapter: qlora
lora_model_dir:

sequence_len: 2048
sample_packing: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
eval_steps: 0.01
save_strategy: epoch
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```

</details><br>

# EvolCodeLlama-7b

This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the [mlabonne/Evol-Instruct-Python-1k](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-1k) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3797

## Model description

EvolCodeLlama-7b is a QLoRA adapter (rank 32, alpha 16, targeting all linear layers) for [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf), trained with Axolotl on the Alpaca-formatted [mlabonne/Evol-Instruct-Python-1k](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-1k) dataset.
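
A minimal inference sketch is shown below. It is illustrative rather than official: it assumes the adapter lives at `feulf/EvolCodeLlama-7b` (inferred from this commit, not stated in the card) and uses the standard `transformers` + `peft` loading path; the prompt follows the stock Alpaca template implied by the dataset's `type: alpaca` setting.

```python
# Hedged sketch: load the base model and apply this QLoRA adapter for inference.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "codellama/CodeLlama-7b-hf"
ADAPTER_ID = "feulf/EvolCodeLlama-7b"  # assumed repo id for this adapter

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)

# Stock Alpaca prompt template (matching `type: alpaca` in the config above).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that checks whether a number is prime.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If a standalone model is preferred, `model.merge_and_unload()` folds the adapter into the base weights.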

## Intended uses & limitations

The adapter is intended for instruction-driven code generation, primarily in Python (the domain of the training data). It inherits the behaviour, limitations, and `llama2` license terms of the base model, and generated code should be reviewed before use.

## Training and evaluation data

Training used [mlabonne/Evol-Instruct-Python-1k](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-1k) in Alpaca format, with 2% of the examples held out as the evaluation set (`val_set_size: 0.02` in the config above); a rough equivalent of that split is sketched below.
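
Axolotl performs the split internally; this sketch is only a `datasets`-library approximation of `val_set_size: 0.02`, and the split seed is an assumption:

```python
# Rough equivalent (not axolotl's internal code) of the 2% eval split.
from datasets import load_dataset

dataset = load_dataset("mlabonne/Evol-Instruct-Python-1k", split="train")
splits = dataset.train_test_split(test_size=0.02, seed=42)  # seed assumed
train_ds, eval_ds = splits["train"], splits["test"]
print(len(train_ds), len(eval_ds))  # roughly 980 train / 20 eval for a 1k dataset
```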

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
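
As a reading aid, the list above maps onto a `transformers` `BitsAndBytesConfig`; the reconstruction below is illustrative (axolotl builds the actual object at train time):

```python
# Reconstruction of the quantization config listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)
```

Passing `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained` reproduces the training-time 4-bit NF4 load.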

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: paged AdamW 32-bit (`paged_adamw_32bit`) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
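
The pairing above can be sketched as follows. This is an approximation: the real run used bitsandbytes' paged 32-bit AdamW, for which `torch.optim.AdamW` stands in here, and the step count is read off the results table below (about 352 optimizer steps over 3 epochs).

```python
# Sketch of the optimizer/scheduler setup; not the exact training code.
import torch
from transformers import get_cosine_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder for the LoRA weights
optimizer = torch.optim.AdamW(
    params, lr=2e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)

# Effective batch size: micro_batch_size (2) x gradient_accumulation_steps (4) = 8.
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=352
)
```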

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3472 | 0.01 | 1 | 0.4986 |
| 0.3139 | 0.03 | 4 | 0.4985 |
| 0.2981 | 0.07 | 8 | 0.4983 |
| 0.4311 | 0.1 | 12 | 0.4979 |
| 0.3958 | 0.14 | 16 | 0.4960 |
| 0.335 | 0.17 | 20 | 0.4915 |
| 0.4286 | 0.2 | 24 | 0.4808 |
| 0.4011 | 0.24 | 28 | 0.4629 |
| 0.3269 | 0.27 | 32 | 0.4445 |
| 0.2559 | 0.31 | 36 | 0.4284 |
| 0.3786 | 0.34 | 40 | 0.4174 |
| 0.2967 | 0.37 | 44 | 0.4107 |
| 0.2677 | 0.41 | 48 | 0.4027 |
| 0.2455 | 0.44 | 52 | 0.3959 |
| 0.3267 | 0.47 | 56 | 0.3916 |
| 0.2902 | 0.51 | 60 | 0.3882 |
| 0.1845 | 0.54 | 64 | 0.3878 |
| 0.2593 | 0.58 | 68 | 0.3869 |
| 0.3104 | 0.61 | 72 | 0.3836 |
| 0.3799 | 0.64 | 76 | 0.3819 |
| 0.2059 | 0.68 | 80 | 0.3794 |
| 0.3177 | 0.71 | 84 | 0.3792 |
| 0.2307 | 0.75 | 88 | 0.3768 |
| 0.282 | 0.78 | 92 | 0.3749 |
| 0.2713 | 0.81 | 96 | 0.3738 |
| 0.2948 | 0.85 | 100 | 0.3725 |
| 0.2311 | 0.88 | 104 | 0.3713 |
| 0.2516 | 0.92 | 108 | 0.3716 |
| 0.2462 | 0.95 | 112 | 0.3715 |
| 0.2035 | 0.98 | 116 | 0.3711 |
| 0.2638 | 1.02 | 120 | 0.3712 |
| 0.2477 | 1.05 | 124 | 0.3726 |
| 0.1986 | 1.08 | 128 | 0.3682 |
| 0.2292 | 1.12 | 132 | 0.3671 |
| 0.1549 | 1.15 | 136 | 0.3680 |
| 0.1953 | 1.19 | 140 | 0.3683 |
| 0.224 | 1.22 | 144 | 0.3671 |
| 0.1941 | 1.25 | 148 | 0.3687 |
| 0.2234 | 1.29 | 152 | 0.3709 |
| 0.2659 | 1.32 | 156 | 0.3700 |
| 0.2535 | 1.36 | 160 | 0.3689 |
| 0.2115 | 1.39 | 164 | 0.3683 |
| 0.2481 | 1.42 | 168 | 0.3693 |
| 0.2101 | 1.46 | 172 | 0.3699 |
| 0.228 | 1.49 | 176 | 0.3697 |
| 0.3159 | 1.53 | 180 | 0.3680 |
| 0.2257 | 1.56 | 184 | 0.3664 |
| 0.1684 | 1.59 | 188 | 0.3670 |
| 0.2277 | 1.63 | 192 | 0.3663 |
| 0.2787 | 1.66 | 196 | 0.3668 |
| 0.2284 | 1.69 | 200 | 0.3654 |
| 0.2789 | 1.73 | 204 | 0.3640 |
| 0.2089 | 1.76 | 208 | 0.3632 |
| 0.3387 | 1.8 | 212 | 0.3633 |
| 0.2677 | 1.83 | 216 | 0.3610 |
| 0.2684 | 1.86 | 220 | 0.3609 |
| 0.2458 | 1.9 | 224 | 0.3610 |
| 0.2808 | 1.93 | 228 | 0.3602 |
| 0.2895 | 1.97 | 232 | 0.3596 |
| 0.323 | 2.0 | 236 | 0.3591 |
| 0.2105 | 2.03 | 240 | 0.3623 |
| 0.1911 | 2.07 | 244 | 0.3720 |
| 0.2888 | 2.1 | 248 | 0.3802 |
| 0.1958 | 2.13 | 252 | 0.3748 |
| 0.1785 | 2.17 | 256 | 0.3701 |
| 0.2604 | 2.2 | 260 | 0.3709 |
| 0.2212 | 2.24 | 264 | 0.3737 |
| 0.1996 | 2.27 | 268 | 0.3772 |
| 0.1567 | 2.3 | 272 | 0.3778 |
| 0.1777 | 2.34 | 276 | 0.3778 |
| 0.2642 | 2.37 | 280 | 0.3785 |
| 0.1907 | 2.4 | 284 | 0.3796 |
| 0.1637 | 2.44 | 288 | 0.3785 |
| 0.1778 | 2.47 | 292 | 0.3785 |
| 0.144 | 2.51 | 296 | 0.3789 |
| 0.1758 | 2.54 | 300 | 0.3788 |
| 0.2018 | 2.57 | 304 | 0.3784 |
| 0.3126 | 2.61 | 308 | 0.3783 |
| 0.1623 | 2.64 | 312 | 0.3790 |
| 0.223 | 2.68 | 316 | 0.3798 |
| 0.2109 | 2.71 | 320 | 0.3797 |
| 0.1606 | 2.74 | 324 | 0.3797 |
| 0.2226 | 2.78 | 328 | 0.3796 |
| 0.2068 | 2.81 | 332 | 0.3798 |
| 0.1547 | 2.85 | 336 | 0.3797 |
| 0.2513 | 2.88 | 340 | 0.3796 |
| 0.2688 | 2.91 | 344 | 0.3797 |
| 0.1481 | 2.95 | 348 | 0.3796 |
| 0.1443 | 2.98 | 352 | 0.3797 |

### Framework versions

- PEFT 0.7.0
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b992e35508b54a9ae97df5f662f8219f9b5774bdc41ab7dfe12e40fccb5c22f3
size 319977229
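
The file above is not the adapter itself but a Git LFS pointer: large binaries on the Hub are stored out-of-band, and the commit records only the object's hash and size. A throwaway sketch (not part of this repo) that parses such a pointer:

```python
# Parse the "key value" lines of a Git LFS pointer file; real tooling
# should use git-lfs itself, this is only for illustration.
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "hash_algorithm": algo,  # e.g. "sha256"
        "digest": digest,
        "size_bytes": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:b992e35508b54a9ae97df5f662f8219f9b5774bdc41ab7dfe12e40fccb5c22f3
size 319977229"""

info = parse_lfs_pointer(pointer)
print(info["size_bytes"] / 2**20)  # the adapter weight file is ~305 MiB
```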