---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-finetuned-FOMC
  results: []
---
# flan-t5-base-finetuned-FOMC
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unspecified dataset; the model name suggests FOMC (Federal Open Market Committee) text. It achieves the following results on the evaluation set:
- Loss: 2.2508
- Rouge1: 33.4635
- Rouge2: 20.2223
- Rougel: 30.0686
- Rougelsum: 30.5667
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
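In the absence of further details, here is a minimal loading sketch. It assumes the checkpoint is published on the Hugging Face Hub under this card's name (the namespace is a placeholder) and that the task is summarization, which the ROUGE metrics and the reported generation length suggest:

```python
from transformers import pipeline

# The Hub ID is an assumption; substitute the actual user/org namespace.
summarizer = pipeline("summarization", model="your-namespace/flan-t5-base-finetuned-FOMC")

statement = (
    "The Committee decided to maintain the target range for the "
    "federal funds rate and will continue to assess incoming information."
)
# max_length=20 lines up with the reported Gen Len of ~19 tokens.
print(summarizer(statement, max_length=20)[0]["summary_text"])
```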
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
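The sketch below maps these settings onto `Seq2SeqTrainingArguments`. The dataset is a toy placeholder, since the card does not say what the model was trained on; the Adam betas/epsilon and the linear scheduler are the Trainer defaults in Transformers 4.33, so only the explicit values are set.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy placeholder data: the actual training set is not documented.
raw = Dataset.from_dict({
    "text": ["summarize: The Committee decided to maintain the target range ..."],
    "summary": ["Rates held steady."],
})

def preprocess(batch):
    inputs = tokenizer(batch["text"], truncation=True)
    labels = tokenizer(text_target=batch["summary"], truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-finetuned-FOMC",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=30,
    evaluation_strategy="epoch",
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,  # placeholder: reuses the toy split
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```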
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 10   | 2.6211          | 30.3857 | 18.7991 | 27.6215 | 28.1883   | 19.0    |
| No log        | 2.0   | 20   | 2.5670          | 30.484  | 18.1632 | 27.4386 | 28.171    | 19.0    |
| No log        | 3.0   | 30   | 2.5087          | 29.9232 | 17.615  | 26.8812 | 27.6851   | 19.0    |
| No log        | 4.0   | 40   | 2.4691          | 29.9232 | 17.615  | 26.8812 | 27.6851   | 19.0    |
| No log        | 5.0   | 50   | 2.4427          | 30.3286 | 17.7647 | 27.3411 | 28.085    | 19.0    |
| No log        | 6.0   | 60   | 2.4165          | 30.7106 | 18.153  | 27.7262 | 28.3967   | 19.0    |
| No log        | 7.0   | 70   | 2.3916          | 31.0342 | 18.6915 | 28.0288 | 28.757    | 19.0    |
| No log        | 8.0   | 80   | 2.3753          | 31.0342 | 18.6915 | 28.0288 | 28.757    | 19.0    |
| No log        | 9.0   | 90   | 2.3556          | 31.0342 | 18.6915 | 28.0288 | 28.757    | 19.0    |
| No log        | 10.0  | 100  | 2.3410          | 32.0795 | 19.3562 | 29.0882 | 29.5601   | 19.0    |
| No log        | 11.0  | 110  | 2.3272          | 32.0795 | 19.3562 | 29.0882 | 29.5601   | 19.0    |
| No log        | 12.0  | 120  | 2.3180          | 32.5188 | 19.5364 | 29.4727 | 29.6928   | 19.0    |
| No log        | 13.0  | 130  | 2.3097          | 32.5188 | 19.5364 | 29.4727 | 29.4824   | 19.0    |
| No log        | 14.0  | 140  | 2.3008          | 32.5188 | 19.5364 | 29.4727 | 29.4824   | 19.0    |
| No log        | 15.0  | 150  | 2.2947          | 32.5188 | 19.5364 | 29.4727 | 29.4824   | 19.0    |
| No log        | 16.0  | 160  | 2.2861          | 32.5188 | 19.5364 | 29.4727 | 29.4824   | 19.0    |
| No log        | 17.0  | 170  | 2.2791          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 18.0  | 180  | 2.2755          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 19.0  | 190  | 2.2712          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 20.0  | 200  | 2.2661          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 21.0  | 210  | 2.2623          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 22.0  | 220  | 2.2602          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 23.0  | 230  | 2.2588          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 24.0  | 240  | 2.2561          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 25.0  | 250  | 2.2545          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 26.0  | 260  | 2.2532          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 27.0  | 270  | 2.2526          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 28.0  | 280  | 2.2518          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 29.0  | 290  | 2.2509          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
| No log        | 30.0  | 300  | 2.2508          | 33.4635 | 20.2223 | 30.0686 | 30.5667   | 19.0    |
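The ROUGE columns are `evaluate`-style scores scaled to percentages. Below is a sketch of a metrics hook that produces numbers in this shape, assuming the `evaluate` library and the flan-t5 tokenizer; the original run's exact postprocessing is unknown.

```python
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
rouge = evaluate.load("rouge")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Labels are padded with -100 for the loss; restore the pad token id
    # before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = rouge.compute(
        predictions=decoded_preds, references=decoded_labels, use_stemmer=True
    )
    result = {key: round(value * 100, 4) for key, value in result.items()}
    # Gen Len: mean count of non-pad tokens in the generated sequences.
    result["gen_len"] = np.mean(
        [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
    )
    return result
```

Passing `compute_metrics=compute_metrics` to the `Seq2SeqTrainer` sketched above reports these scores at each evaluation.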
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3