
flan-t5-large-finetuned-scope-summarization

This model is a fine-tuned version of google/flan-t5-large on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1195
  • ROUGE-1: 24.038
  • ROUGE-2: 21.4448
  • ROUGE-L: 23.6448
  • ROUGE-Lsum: 23.7376
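
Since the card does not yet document usage, here is a minimal inference sketch. The model id comes from this repository; the `summarize:` prompt prefix and the generation settings are assumptions (the usual T5 convention), not documented behavior of this checkpoint.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "nandavikas16/flan-t5-large-finetuned-scope-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# "summarize:" is the usual T5 prefix; whether this fine-tune expects it is an assumption.
text = "summarize: " + "Your scope document text here."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```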

Model description

More information needed. (Repository metadata: 783M parameters, stored as F32 safetensors.)

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
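
The training script itself is not included in the card; the sketch below simply mirrors the listed hyperparameters using the Transformers Trainer API, so the `output_dir` and the evaluation strategy are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; Adam betas=(0.9, 0.999) and
# epsilon=1e-08 are the Trainer defaults, so they need no explicit setting.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-finetuned-scope-summarization",  # assumption
    learning_rate=5.6e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumption, consistent with the per-epoch rows below
    predict_with_generate=True,   # needed to compute ROUGE during evaluation
)
```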

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|
| 0.3649        | 1.0   | 158  | 0.2625          | 19.5356 | 12.4535 | 16.8939 | 17.0876    |
| 0.2674        | 2.0   | 316  | 0.2422          | 19.7836 | 12.4864 | 16.9298 | 16.9928    |
| 0.2516        | 3.0   | 474  | 0.2271          | 20.4584 | 13.593  | 17.9404 | 18.0498    |
| 0.2407        | 4.0   | 632  | 0.2178          | 20.2729 | 13.6717 | 17.5    | 17.6375    |
| 0.2304        | 5.0   | 790  | 0.2087          | 20.3933 | 14.4275 | 17.9315 | 18.0607    |
| 0.2213        | 6.0   | 948  | 0.1969          | 21.4659 | 16.1078 | 19.4775 | 19.5604    |
| 0.2134        | 7.0   | 1106 | 0.1863          | 23.3097 | 19.0603 | 21.9919 | 22.1651    |
| 0.2069        | 8.0   | 1264 | 0.1803          | 22.5866 | 17.3665 | 20.4585 | 20.4009    |
| 0.2           | 9.0   | 1422 | 0.1695          | 23.7295 | 19.7783 | 22.4861 | 22.5794    |
| 0.1942        | 10.0  | 1580 | 0.1632          | 21.9543 | 16.572  | 19.539  | 19.5863    |
| 0.1883        | 11.0  | 1738 | 0.1570          | 22.5164 | 18.8651 | 21.4345 | 21.6252    |
| 0.1829        | 12.0  | 1896 | 0.1495          | 23.7871 | 20.6331 | 23.2495 | 23.4011    |
| 0.178         | 13.0  | 2054 | 0.1425          | 23.789  | 21.1006 | 23.2292 | 23.4225    |
| 0.1738        | 14.0  | 2212 | 0.1386          | 23.8972 | 21.2393 | 23.4578 | 23.5827    |
| 0.1689        | 15.0  | 2370 | 0.1331          | 23.801  | 21.2013 | 23.3414 | 23.4499    |
| 0.1654        | 16.0  | 2528 | 0.1286          | 24.1973 | 21.5666 | 23.7563 | 23.9153    |
| 0.1629        | 17.0  | 2686 | 0.1257          | 23.8243 | 21.2713 | 23.4043 | 23.4941    |
| 0.16          | 18.0  | 2844 | 0.1229          | 23.9496 | 21.3888 | 23.4687 | 23.6047    |
| 0.1578        | 19.0  | 3002 | 0.1208          | 24.009  | 21.4585 | 23.5252 | 23.646     |
| 0.156         | 20.0  | 3160 | 0.1195          | 24.038  | 21.4448 | 23.6448 | 23.7376    |
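
For reference, ROUGE scores in this range are typically computed with the `evaluate` library's ROUGE metric; a sketch follows (the exact evaluation code used for this card is an assumption).

```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["a model-generated summary"],  # placeholder strings
    references=["the reference summary"],
)
# The library returns fractions in [0, 1]; the table above reports them on a 0-100 scale.
print({k: round(v * 100, 4) for k, v in scores.items()})
# -> keys: rouge1, rouge2, rougeL, rougeLsum
```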

Framework versions

  • Transformers 4.40.1
  • PyTorch 2.3.0+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1