---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: results
  results: []
---

# results
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset (recorded as "None" in the auto-generated card). It achieves the following results on the evaluation set; a minimal inference sketch follows the list:
- Loss: 0.6829
- Accuracy: 0.6045
- Precision (per class): [0.7241, 0.5055, 0.6102]
- Recall (per class): [0.6829, 0.6389, 0.4444]
- Micro F1: 0.6125
- Macro F1: 0.5939
- Confusion Matrix: [[155, 100], [110, 166]]
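The checkpoint can be loaded with the standard `transformers` classes. The repository id below is a placeholder (the card does not state where the model is hosted), so substitute the actual Hub id or a local path:

```python
# Minimal inference sketch. "your-username/results" is a placeholder repo id,
# not stated anywhere in this card; point it at the real Hub repo or a local path.
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_id = "your-username/results"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Example sentence to classify."))
```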
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
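As a rough guide, the hyperparameters above correspond to a `TrainingArguments` configuration along these lines. This is a sketch for illustration, not the script that produced this card:

```python
# Hedged sketch: the reported hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="results",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    eval_strategy="epoch",  # the results table reports one evaluation per epoch
)
```

These arguments would be passed to `transformers.Trainer` together with the model, the train/eval datasets, and a metrics function such as the `compute_metrics` sketched after the results table below; none of those are included in this card.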
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Micro F1 | Macro F1 | Confusion Matrix |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|:--------:|:----------------:|
| 0.6005 | 1.0 | 138 | 0.6847 | 0.5729 | [0.8235, 0.6429, 0.5435] | [0.6154, 0.1200, 0.5000] | 0.5233 | 0.4758 | [[130, 38], [126, 90]] |
| 0.5435 | 2.0 | 276 | 0.6608 | 0.6276 | [0.7901, 0.6780, 0.5532] | [0.7033, 0.5333, 0.5200] | 0.6452 | 0.6258 | [[111, 57], [86, 130]] |
| 0.5863 | 3.0 | 414 | 0.6713 | 0.6354 | [0.8261, 0.7273, 0.5536] | [0.6264, 0.5333, 0.6200] | 0.6465 | 0.6376 | [[116, 52], [88, 128]] |
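The per-class precision/recall lists, micro/macro F1, and confusion matrix above could be produced by a `compute_metrics` function along the following lines, using `scikit-learn`. This is an assumption for illustration; the metric code actually used for this card is not included in the repository:

```python
# Hypothetical compute_metrics that yields the kinds of metrics reported above.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision_score(labels, preds, average=None).tolist(),
        "recall": recall_score(labels, preds, average=None).tolist(),
        "micro_f1": f1_score(labels, preds, average="micro"),
        "macro_f1": f1_score(labels, preds, average="macro"),
        "confusion_matrix": confusion_matrix(labels, preds).tolist(),
    }
```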
### Framework versions
- Transformers 4.42.3
- PyTorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
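A local environment can be checked against these versions with something like the following (import names are the standard ones for these packages):

```python
# Print installed versions to compare with the ones listed above.
import datasets, tokenizers, torch, transformers

for name, module in [("Transformers", transformers), ("PyTorch", torch),
                     ("Datasets", datasets), ("Tokenizers", tokenizers)]:
    print(f"{name}: {module.__version__}")
```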