
t5-base-finetuned-xsum

This model is a fine-tuned version of google-t5/t5-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7758
  • Rouge1: 77.9048
  • Rouge2: 52.4603
  • RougeL: 78.6825
  • RougeLsum: 78.3333
  • Gen Len: 6.6
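
For quick experimentation, the checkpoint can be loaded directly from the Hub. The snippet below is a minimal usage sketch and is not part of the original card; the input text and generation settings are illustrative assumptions.

```python
# Minimal inference sketch (illustrative; the generation settings are assumptions,
# chosen to suit the short average generation length reported above).
from transformers import pipeline

summarizer = pipeline("summarization", model="kranasian/t5-base-finetuned-xsum")

text = "Replace this string with the document you want to summarize."
result = summarizer(text, max_length=32, min_length=2, do_sample=False)
print(result[0]["summary_text"])
```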

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
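
The original training script is not included in the card; the following is a hedged sketch of how these hyperparameters might be expressed with transformers' Seq2SeqTrainingArguments. The output directory, evaluation strategy, and predict_with_generate flag are assumptions, and dataset loading, tokenization, and the Trainer call itself are omitted.

```python
# Hedged configuration sketch; not the author's original script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-xsum",   # assumption: any local path works
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    evaluation_strategy="epoch",           # assumption: matches the per-epoch results table
    predict_with_generate=True,            # assumption: needed to compute ROUGE at eval time
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer,
# so the optimizer settings listed above need no extra arguments.
```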

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 17   | 2.4750          | 49.2456 | 26.8694 | 48.0467 | 48.0189   | 15.2    |
| No log        | 2.0   | 34   | 1.5092          | 68.1774 | 45.2201 | 67.9806 | 68.0505   | 10.2    |
| No log        | 3.0   | 51   | 1.1905          | 73.8611 | 48.5079 | 74.3016 | 74.127    | 7.5     |
| No log        | 4.0   | 68   | 1.0329          | 74.1693 | 46.4048 | 74.7143 | 74.2566   | 7.0     |
| No log        | 5.0   | 85   | 0.9331          | 73.9841 | 45.8016 | 74.5159 | 74.1905   | 6.5333  |
| No log        | 6.0   | 102  | 0.8774          | 74.9841 | 45.8016 | 75.4048 | 75.2222   | 6.5333  |
| No log        | 7.0   | 119  | 0.8377          | 78.2487 | 51.3968 | 79.0212 | 78.6825   | 6.8333  |
| No log        | 8.0   | 136  | 0.8264          | 76.5714 | 50.1349 | 77.3651 | 77.0159   | 6.4667  |
| No log        | 9.0   | 153  | 0.8160          | 76.5714 | 50.1349 | 77.3651 | 77.0159   | 6.4333  |
| No log        | 10.0  | 170  | 0.7945          | 78.709  | 53.4127 | 79.4974 | 79.0132   | 6.6667  |
| No log        | 11.0  | 187  | 0.7846          | 78.709  | 53.4127 | 79.4974 | 79.0132   | 6.6667  |
| No log        | 12.0  | 204  | 0.7794          | 77.9048 | 52.4603 | 78.6825 | 78.3333   | 6.6     |
| No log        | 13.0  | 221  | 0.7783          | 77.9048 | 52.4603 | 78.6825 | 78.3333   | 6.6     |
| No log        | 14.0  | 238  | 0.7764          | 77.9048 | 52.4603 | 78.6825 | 78.3333   | 6.6     |
| No log        | 15.0  | 255  | 0.7758          | 77.9048 | 52.4603 | 78.6825 | 78.3333   | 6.6     |
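
The ROUGE columns above are on a 0–100 scale. As an illustration (not from the original card), scores in the same format can be computed with the evaluate library (its ROUGE backend also needs the rouge_score package); the prediction and reference strings below are placeholders.

```python
# Illustrative ROUGE computation; the strings are placeholders, not model outputs.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["a short generated summary"],
    references=["a short reference summary"],
)
# evaluate returns scores in [0, 1]; scale by 100 to match the table above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```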

Framework versions

  • Transformers 4.34.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.1
