---
tags:
- summarization
- generated_from_trainer
model-index:
- name: led-risalah_data_v8
  results: []
---
# led-risalah_data_v8
This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.0169
- Rouge1 Precision: 0.8329
- Rouge1 Recall: 0.135
- Rouge1 Fmeasure: 0.2293
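
For illustration, below is a minimal inference sketch for an LED-based summarization checkpoint like this one; the Hub repository ID, input length, and generation settings are placeholder assumptions, not values taken from this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "your-username/led-risalah_data_v8"  # hypothetical repo ID; replace with the actual checkpoint path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

document = "..."  # long source document to summarize
inputs = tokenizer(document, max_length=4096, truncation=True, return_tensors="pt")

# LED expects a global attention mask; putting global attention on the first token is the usual convention.
global_attention_mask = inputs.input_ids.new_zeros(inputs.input_ids.shape)
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    global_attention_mask=global_attention_mask,
    max_length=256,  # assumed generation limit
    num_beams=4,     # assumed beam size
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```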
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
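
As a rough sketch, the hyperparameters above correspond to a `Seq2SeqTrainingArguments` configuration along these lines; `output_dir`, the evaluation/logging strategies, and `predict_with_generate` are assumptions, not taken from the original run.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="led-risalah_data_v8",  # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,     # effective train batch size of 8
    num_train_epochs=20,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                         # "Native AMP" mixed precision
    evaluation_strategy="epoch",       # assumption: per-epoch evaluation, matching the results table
    logging_strategy="epoch",          # assumption
    predict_with_generate=True,        # assumption: needed to compute ROUGE during evaluation
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults, so they need no explicit arguments.
```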
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Fmeasure | Rouge1 Precision | Rouge1 Recall |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:----------------:|:-------------:|
| 1.9061        | 1.0   | 15   | 1.9704          | 0.1528          | 0.5489           | 0.0894        |
| 1.8015        | 2.0   | 30   | 1.7979          | 0.2037          | 0.6934           | 0.1204        |
| 1.6484        | 3.0   | 45   | 1.7690          | 0.2107          | 0.72             | 0.1244        |
| 1.3656        | 4.0   | 60   | 1.7353          | 0.223           | 0.7526           | 0.1321        |
| 1.1833        | 5.0   | 75   | 1.7215          | 0.2172          | 0.7498           | 0.1283        |
| 1.1678        | 6.0   | 90   | 1.7365          | 0.2094          | 0.7063           | 0.1241        |
| 1.1258        | 7.0   | 105  | 1.7643          | 0.2193          | 0.7425           | 0.1299        |
| 1.0591        | 8.0   | 120  | 1.7697          | 0.2184          | 0.7328           | 0.1295        |
| 0.8896        | 9.0   | 135  | 1.7835          | 0.2207          | 0.7391           | 0.1306        |
| 1.0655        | 10.0  | 150  | 1.7985          | 0.2241          | 0.7559           | 0.1325        |
| 0.8386        | 11.0  | 165  | 1.8309          | 0.2217          | 0.7502           | 0.1314        |
| 0.8968        | 12.0  | 180  | 1.8377          | 0.2147          | 0.7179           | 0.1276        |
| 0.7863        | 13.0  | 195  | 1.8737          | 0.2172          | 0.7293           | 0.129         |
| 0.6942        | 14.0  | 210  | 1.8858          | 0.2185          | 0.7489           | 0.1291        |
| 0.6656        | 15.0  | 225  | 1.9181          | 0.2243          | 0.7566           | 0.1328        |
| 0.6672        | 16.0  | 240  | 1.9407          | 0.2224          | 0.7513           | 0.1315        |
| 0.6405        | 17.0  | 255  | 1.9416          | 0.2151          | 0.7369           | 0.1272        |
| 0.7382        | 18.0  | 270  | 1.9533          | 0.2214          | 0.7506           | 0.1311        |
| 0.6445        | 19.0  | 285  | 1.9605          | 0.2136          | 0.7292           | 0.1262        |
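
The Rouge1 precision, recall, and F-measure columns above are the standard per-pair ROUGE-1 statistics. A minimal sketch of how they can be computed with the `rouge_score` package follows; the use of this package and of stemming is an assumption about the evaluation tooling, not something confirmed by this card.

```python
from rouge_score import rouge_scorer

# ROUGE-1 with stemming (stemming setting is an assumption).
scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
scores = scorer.score(
    target="reference summary text",      # ground-truth summary
    prediction="generated summary text",  # model output
)
r1 = scores["rouge1"]
print(r1.precision, r1.recall, r1.fmeasure)
```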
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1