# Tukan-1.1B-Chat-reasoning-sft
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0196
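If the fine-tuned weights can be loaded directly with `transformers` (the card does not specify whether this repository hosts merged weights or a PEFT adapter; see the note under Framework versions), a minimal chat-inference sketch might look like the following. The dtype, prompt, and generation settings are illustrative assumptions, not part of this card:

```python
# Hedged sketch: assumes this repo is directly loadable as a causal LM and that
# it keeps the TinyLlama chat template shipped with its tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alexredna/Tukan-1.1B-Chat-reasoning-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "user", "content": "A bat and a ball cost $1.10 in total. "
                                "The bat costs $1.00 more than the ball. "
                                "How much does the ball cost?"}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```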
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 20
- total_train_batch_size: 120
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
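These values map fairly directly onto a `transformers.TrainingArguments` configuration. A minimal sketch, assuming a standard Trainer/SFT setup; the output directory and the logging/evaluation cadence are illustrative assumptions, not taken from this card:

```python
# Hedged sketch: reproduces the listed hyperparameters in TrainingArguments form.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Tukan-1.1B-Chat-reasoning-sft",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=3,
    gradient_accumulation_steps=20,   # 6 x 20 = 120 examples per optimizer step (per process)
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    seed=42,
    optim="adamw_torch",              # Adam-style optimizer, betas=(0.9, 0.999), eps=1e-8
    evaluation_strategy="steps",      # assumption; matches the 10-step eval cadence below
    eval_steps=10,                    # assumption
    logging_steps=10,                 # assumption
)
# Note: "distributed_type: multi-GPU" comes from the accelerate launch
# configuration, not from TrainingArguments itself.
```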
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3366        | 0.24  | 10   | 1.2783          |
| 1.2563        | 0.47  | 20   | 1.2321          |
| 1.2289        | 0.71  | 30   | 1.2012          |
| 1.1837        | 0.94  | 40   | 1.1688          |
| 1.1534        | 1.18  | 50   | 1.1306          |
| 1.1254        | 1.42  | 60   | 1.1037          |
| 1.1011        | 1.65  | 70   | 1.0882          |
| 1.0825        | 1.89  | 80   | 1.0748          |
| 1.0876        | 2.12  | 90   | 1.0635          |
| 1.0716        | 2.36  | 100  | 1.0540          |
| 1.0517        | 2.59  | 110  | 1.0459          |
| 1.0289        | 2.83  | 120  | 1.0389          |
| 1.0564        | 3.07  | 130  | 1.0332          |
| 1.034         | 3.3   | 140  | 1.0288          |
| 1.0337        | 3.54  | 150  | 1.0253          |
| 1.033         | 3.77  | 160  | 1.0231          |
| 1.0312        | 4.01  | 170  | 1.0213          |
| 1.0207        | 4.25  | 180  | 1.0204          |
| 1.0271        | 4.48  | 190  | 1.0198          |
| 1.0351        | 4.72  | 200  | 1.0197          |
| 1.0339        | 4.95  | 210  | 1.0196          |
### Framework versions

- PEFT 0.6.1
- Transformers 4.36.2
- Pytorch 2.2.0a0+gitd925d94
- Datasets 2.14.6
- Tokenizers 0.15.0
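The PEFT entry above suggests the fine-tuning used a parameter-efficient adapter (e.g. LoRA), although the card does not state this explicitly. A hedged loading sketch, assuming the repository hosts a PEFT adapter rather than merged weights:

```python
# Hedged sketch: assumes this repo contains a PEFT adapter on top of the
# TinyLlama base model; if the weights were merged, the plain transformers
# loading shown near the top of the card applies instead.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "alexredna/Tukan-1.1B-Chat-reasoning-sft",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Optionally merge the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```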