# Yi_fans

This model is a fine-tuned version of [Xwin-LM/Xwin-LM-13B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.0715
## Model description
More information needed
## Intended uses & limitations
More information needed
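
A minimal inference sketch follows, assuming the checkpoint is hosted on the Hugging Face Hub as `affecto/Yi_fans` and loads as a full causal LM (if the fine-tune was instead saved as a PEFT adapter, it would need to be loaded on top of the base model):

```python
# Hedged example: assumes affecto/Yi_fans contains full model weights
# (not a PEFT adapter) and ships the base model's tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("affecto/Yi_fans")
model = AutoModelForCausalLM.from_pretrained(
    "affecto/Yi_fans",
    torch_dtype=torch.float16,  # 13B parameters; half precision to fit on a single GPU
    device_map="auto",
)

prompt = "Hello! Who are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```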
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 0.0004
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 120
- mixed_precision_training: Native AMP
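
The hyperparameters above map roughly onto the following `transformers` `TrainingArguments`. This is an illustrative reconstruction, not the author's actual training script, and it says nothing about the undocumented dataset, data collator, or any possible PEFT setup:

```python
# Illustrative reconstruction of the listed hyperparameters.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 match TrainingArguments defaults,
# so they are not set explicitly.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Yi_fans",            # hypothetical output directory
    learning_rate=4e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=120,
    fp16=True,                       # "Native AMP" mixed-precision training
    evaluation_strategy="steps",
    eval_steps=10,                   # matches the 10-step eval cadence in the results table
    logging_steps=10,
)
```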
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7447        | 0.02  | 10   | 2.3455          |
| 1.9979        | 0.04  | 20   | 1.9817          |
| 1.6621        | 0.07  | 30   | 1.4316          |
| 1.3525        | 0.09  | 40   | 1.3135          |
| 1.312         | 0.11  | 50   | 1.2386          |
| 1.243         | 0.13  | 60   | 1.1874          |
| 1.084         | 0.15  | 70   | 1.1472          |
| 1.1409        | 0.17  | 80   | 1.1229          |
| 1.1093        | 0.2   | 90   | 1.1000          |
| 1.0783        | 0.22  | 100  | 1.0889          |
| 1.0253        | 0.24  | 110  | 1.0776          |
| 1.1132        | 0.26  | 120  | 1.0715          |
### Framework versions
- Transformers 4.35.2
- PyTorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
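
To reproduce this environment, the pinned versions above can be installed directly; pointing at the cu118 wheel index for PyTorch is an assumption about the install channel:

```bash
# Pin the exact framework versions from this card; the CUDA 11.8 PyTorch build
# is assumed to come from the official PyTorch wheel index.
pip install transformers==4.35.2 datasets==2.15.0 tokenizers==0.15.0
pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cu118
```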