# flant5-ami
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.6814
- Rouge1: 9.4371
- Rouge2: 3.167
- Rougel: 8.1983
- Rougelsum: 8.9453
- Gen Len: 18.6429
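
The ROUGE metrics and the "ami" in the model name suggest a summarization fine-tune. A minimal usage sketch under that assumption; the model ID comes from this repository, and the input text is illustrative:

```python
from transformers import pipeline

# Assumes the checkpoint was fine-tuned for summarization; the short
# Gen Len (~18.6 tokens) reported above suggests brief summaries.
summarizer = pipeline("summarization", model="Yissuh/flant5-ami")

text = (
    "The project manager opened the meeting and the team discussed "
    "the industrial designer's proposal for the remote control casing."
)
print(summarizer(text, max_length=30)[0]["summary_text"])
```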
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
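
For reference, a minimal sketch of how these values map onto `Seq2SeqTrainingArguments` from Transformers; the `output_dir` is illustrative, and the Adam betas and epsilon listed above are the library defaults, so they are not repeated:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flant5-ami",          # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,    # effective train batch size: 4 * 2 = 8
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",
    predict_with_generate=True,       # needed to compute ROUGE at evaluation
)
```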
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 0.9811 | 26   | 2.8131          | 7.8042 | 2.6615 | 6.7749 | 7.61      | 18.5357 |
| No log        | 2.0    | 53   | 2.7404          | 8.6157 | 2.9338 | 7.3561 | 8.3732    | 18.6429 |
| No log        | 2.9811 | 79   | 2.7073          | 9.793  | 3.4335 | 8.3235 | 9.3152    | 18.6429 |
| No log        | 4.0    | 106  | 2.6872          | 9.4857 | 3.237  | 8.2132 | 8.9695    | 18.6429 |
| No log        | 4.9057 | 130  | 2.6814          | 9.4371 | 3.167  | 8.1983 | 8.9453    | 18.6429 |
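
The ROUGE values above are on a 0-100 scale. A minimal sketch of that scoring with the `evaluate` library (the prediction/reference pair is illustrative; `evaluate` reports scores in [0, 1], hence the scaling):

```python
# needs: pip install evaluate rouge_score
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the team agreed on a curved remote design"],
    references=["the team discussed and agreed on a curved case design"],
)
# Multiply by 100 to match the scale used in the table above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```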
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1