# RoBERTa_token_classification_AraiEval24_Eng_single
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) for token classification. The training dataset is not specified in this card. It achieves the following results on the evaluation set:
- Loss: 0.9212
- Precision: 0.1545
- Recall: 0.1074
- F1: 0.1268
- Accuracy: 0.8295
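The card does not state how the metrics were computed; token-classification evaluations typically report precision/recall/F1 over non-`O` labels plus plain token accuracy (often via `seqeval`). A minimal pure-Python sketch of that style of scoring, with an illustrative tag set not taken from this model:

```python
# Token-level precision/recall/F1 (over non-"O" predictions) and accuracy.
# The "B-PROP"/"I-PROP" labels below are illustrative only; the card does
# not specify the actual tag set.

def token_metrics(gold, pred, negative_label="O"):
    """Compute precision, recall, F1, and accuracy over aligned label lists."""
    assert len(gold) == len(pred)
    # True positives: predicted a non-O label and it matches gold exactly.
    tp = sum(1 for g, p in zip(gold, pred) if p != negative_label and p == g)
    pred_pos = sum(1 for p in pred if p != negative_label)
    gold_pos = sum(1 for g in gold if g != negative_label)
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gold_pos if gold_pos else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = sum(1 for g, p in zip(gold, pred) if g == p) / len(gold)
    return precision, recall, f1, accuracy

gold = ["O", "B-PROP", "I-PROP", "O", "O"]
pred = ["O", "B-PROP", "O",      "O", "B-PROP"]
p, r, f1, acc = token_metrics(gold, pred)  # 0.5, 0.5, 0.5, 0.6
```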
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
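With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate decays linearly from 2e-05 to 0 over all optimizer steps (21,890 total, i.e. 2,189 per epoch for 10 epochs, per the results table below). A standalone sketch of that schedule's arithmetic:

```python
# Linear learning-rate decay with no warmup, as implied by the
# hyperparameters above. The step count comes from the results table.
LEARNING_RATE = 2e-5
TOTAL_STEPS = 21890

def linear_lr(step, base_lr=LEARNING_RATE, total_steps=TOTAL_STEPS):
    """Learning rate after `step` optimizer steps (linear decay to 0)."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return base_lr * remaining

lr_start = linear_lr(0)       # 2e-05 at the first step
lr_mid = linear_lr(10945)     # halfway through training: 1e-05
lr_end = linear_lr(21890)     # 0.0 at the final step
```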
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| 0.6773 | 1.0 | 2189 | 0.7786 | 0.1623 | 0.0241 | 0.0420 | 0.8512 |
| 0.5689 | 2.0 | 4378 | 0.7070 | 0.1705 | 0.0275 | 0.0474 | 0.8517 |
| 0.5387 | 3.0 | 6567 | 0.7125 | 0.1550 | 0.0632 | 0.0897 | 0.8468 |
| 0.4442 | 4.0 | 8756 | 0.7490 | 0.1639 | 0.0637 | 0.0917 | 0.8472 |
| 0.4061 | 5.0 | 10945 | 0.7792 | 0.2027 | 0.0760 | 0.1105 | 0.8508 |
| 0.3570 | 6.0 | 13134 | 0.8296 | 0.1513 | 0.0980 | 0.1190 | 0.8354 |
| 0.3024 | 7.0 | 15323 | 0.8554 | 0.1638 | 0.0912 | 0.1172 | 0.8386 |
| 0.2760 | 8.0 | 17512 | 0.8761 | 0.1602 | 0.0985 | 0.1220 | 0.8348 |
| 0.2506 | 9.0 | 19701 | 0.8938 | 0.1533 | 0.1140 | 0.1307 | 0.8268 |
| 0.2210 | 10.0 | 21890 | 0.9212 | 0.1545 | 0.1074 | 0.1268 | 0.8295 |
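Note that validation loss bottoms out at epoch 2 while F1 peaks at epoch 9, so the best checkpoint depends on which metric you optimize. A small sketch of picking the best epoch from the table's numbers:

```python
# Epoch -> (validation loss, F1), copied from the results table above.
results = {
    1: (0.7786, 0.0420), 2: (0.7070, 0.0474), 3: (0.7125, 0.0897),
    4: (0.7490, 0.0917), 5: (0.7792, 0.1105), 6: (0.8296, 0.1190),
    7: (0.8554, 0.1172), 8: (0.8761, 0.1220), 9: (0.8938, 0.1307),
    10: (0.9212, 0.1268),
}

best_by_loss = min(results, key=lambda e: results[e][0])  # epoch 2
best_by_f1 = max(results, key=lambda e: results[e][1])    # epoch 9
```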
## Framework versions
- Transformers 4.30.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.13.3