lucasvw committed on
Commit
8a8b330
1 Parent(s): 56f7345

End of training

Files changed (2)
  1. README.md +151 -0
  2. adapter_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,151 @@
---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model-index:
- name: tinyllama-1.1B_alpaca_2k_lora
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
# Adapted from https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/examples/tiny-llama/lora.yml
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer

load_in_8bit: true
load_in_4bit: false
strict: false

datasets:
  - path: mhenrichsen/alpaca_2k_test
    type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/lora-out
hub_model_id: lucasvw/tinyllama-1.1B_alpaca_2k_lora

wandb_project: tinyllama-1.1B_alpaca_2k_lora
wandb_entity: lucasvw

sequence_len: 4096
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```

</details><br>
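
For readers more familiar with 🤗 PEFT than with axolotl, the LoRA block in the config roughly corresponds to the `LoraConfig` below. This is a sketch, not the objects axolotl builds internally; in particular, the explicit `target_modules` list is an assumption about what `lora_target_linear: true` resolves to for a Llama-style architecture.

```python
from peft import LoraConfig

# Rough PEFT equivalent of the axolotl LoRA settings above (illustrative sketch).
lora_config = LoraConfig(
    r=32,             # lora_r
    lora_alpha=16,    # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    bias="none",
    task_type="CAUSAL_LM",
    # Assumption: lora_target_linear: true targets every linear projection in a Llama block.
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```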

# tinyllama-1.1B_alpaca_2k_lora

This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the [mhenrichsen/alpaca_2k_test](https://huggingface.co/datasets/mhenrichsen/alpaca_2k_test) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2132

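As a quick illustration of how the adapter can be used for inference, here is a minimal sketch. It assumes the published adapter loads cleanly on top of the base model with the PEFT/Transformers versions listed below, and the prompt uses the standard Alpaca instruction template, which is an assumption about the exact formatting applied during training.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "lucasvw/tinyllama-1.1B_alpaca_2k_lora"

# AutoPeftModelForCausalLM reads the base model name from the adapter config,
# loads it, and applies this LoRA adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T")

# Alpaca-style prompt (assumed format, matching the dataset type in the config).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
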
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

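The card leaves this section open, but the axolotl config above points at `mhenrichsen/alpaca_2k_test` with `val_set_size: 0.05`. Below is a minimal sketch of fetching that dataset and carving out a comparable 5% validation split with 🤗 Datasets; axolotl's own split handling and sample packing are not reproduced here.

```python
from datasets import load_dataset

# Dataset named in the axolotl config: 2k Alpaca-style instruction/response rows.
dataset = load_dataset("mhenrichsen/alpaca_2k_test", split="train")

# Approximate val_set_size: 0.05 with a 95/5 split (sketch only; axolotl
# performs the actual split and sequence packing internally).
splits = dataset.train_test_split(test_size=0.05, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]

print(train_ds)
print(eval_ds)
```
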
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (the sketch after this list shows the same optimizer and scheduler settings in code):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes `adamw_bnb_8bit`) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4

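For concreteness, here is a rough sketch of what these settings correspond to in code, using bitsandbytes' 8-bit AdamW and a cosine schedule with warmup. It is an illustration of the hyperparameters, not the exact objects the Trainer constructs, and the total step count is a placeholder rather than a value reported by the run.

```python
import bitsandbytes as bnb
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
)

# adamw_bnb_8bit with the learning rate, betas, epsilon and weight decay listed above.
optimizer = bnb.optim.AdamW8bit(
    model.parameters(), lr=2e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)

# Cosine decay with 10 warmup steps. num_training_steps should be the total number
# of optimizer steps in the run; 50 is a placeholder (the exact value depends on packing).
num_training_steps = 50
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=10, num_training_steps=num_training_steps
)
```
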
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4615 | 0.08 | 1 | 1.4899 |
| 1.3851 | 0.24 | 3 | 1.4860 |
| 1.3667 | 0.48 | 6 | 1.4396 |
| 1.2684 | 0.72 | 9 | 1.3410 |
| 1.2274 | 0.96 | 12 | 1.2938 |
| 1.2519 | 1.16 | 15 | 1.2810 |
| 1.2263 | 1.40 | 18 | 1.2534 |
| 1.1355 | 1.64 | 21 | 1.2357 |
| 1.2697 | 1.88 | 24 | 1.2260 |
| 1.1492 | 2.08 | 27 | 1.2217 |
| 1.1531 | 2.32 | 30 | 1.2216 |
| 1.1951 | 2.56 | 33 | 1.2184 |
| 1.1118 | 2.80 | 36 | 1.2158 |
| 1.1514 | 3.04 | 39 | 1.2127 |
| 1.1893 | 3.24 | 42 | 1.2124 |
| 1.1014 | 3.48 | 45 | 1.2115 |
| 1.1892 | 3.72 | 48 | 1.2132 |


### Framework versions

- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8903394f84d3a965f7862d7399fc3605e5749991887ea3d57fb2fd99368e7ab6
size 101036698
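
The file above is a Git LFS pointer: the actual adapter weights (about 101 MB) are stored in LFS and identified by the sha256 digest. A small, purely illustrative sketch of checking a locally downloaded `adapter_model.bin` against that digest and size:

```python
import hashlib
import os

EXPECTED_SHA256 = "8903394f84d3a965f7862d7399fc3605e5749991887ea3d57fb2fd99368e7ab6"
EXPECTED_SIZE = 101036698  # bytes, from the LFS pointer above

path = "adapter_model.bin"  # local copy of the downloaded adapter weights

# Stream the file in 1 MiB chunks to compute its sha256.
sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert sha256.hexdigest() == EXPECTED_SHA256, "sha256 mismatch"
print("adapter_model.bin matches the LFS pointer")
```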