
t5-small-matthewKP

This model is a fine-tuned version of t5-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3587
  • Rouge1: 50.9483
  • Rouge2: 33.6216
  • Rougel: 50.8374
  • Rougelsum: 50.8405
  • Gen Len: 7.2358
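The Rouge1 and Rouge2 scores above are ROUGE F-measures based on unigram and bigram overlap between generated and reference text. As a rough illustration (not the official `rouge_score` implementation, which also applies stemming and its own tokenization, so values will differ), a minimal ROUGE-1 F1 can be sketched in plain Python:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap with whitespace tokenization."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Count overlapping unigrams (clipped by reference counts).
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

An identical prediction and reference scores 1.0; sharing half the tokens in a reference three times longer yields 0.5.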

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
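With a linear scheduler, the learning rate decays from 5e-05 toward 0 over the 49296 total optimizer steps shown in the results table (6162 steps per epoch × 8 epochs). Assuming zero warmup steps, which this card does not state explicitly, the decay can be sketched as:

```python
def linear_lr(step: int, base_lr: float = 5e-5, total_steps: int = 49_296) -> float:
    """Linearly decay base_lr to 0 over total_steps (no warmup assumed)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```

Halfway through training (step 24648, end of epoch 4) the learning rate would be 2.5e-05 under this assumption.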

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.1223        | 1.0   | 6162  | 1.3958          | 48.7571 | 32.9322 | 48.7201 | 48.6343   | 7.4432  |
| 1.0032        | 2.0   | 12324 | 1.3587          | 50.9483 | 33.6216 | 50.8374 | 50.8405   | 7.2358  |
| 0.9138        | 3.0   | 18486 | 1.4147          | 49.5295 | 30.0768 | 49.4184 | 49.4152   | 6.9196  |
| 0.8711        | 4.0   | 24648 | 1.3923          | 51.8423 | 33.5555 | 51.7412 | 51.7060   | 6.9401  |
| 0.8407        | 5.0   | 30810 | 1.4422          | 50.9414 | 32.3617 | 50.8776 | 50.8806   | 6.9788  |
| 0.7328        | 6.0   | 36972 | 1.4904          | 50.7542 | 31.7725 | 50.6444 | 50.6829   | 6.9547  |
| 0.7564        | 7.0   | 43134 | 1.5097          | 49.9220 | 30.9948 | 49.8255 | 49.8403   | 7.0006  |
| 0.7292        | 8.0   | 49296 | 1.5037          | 50.5598 | 31.1728 | 50.4433 | 50.4861   | 6.8773  |

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
Model details

  • Model size: 60.5M params
  • Tensor type: F32 (safetensors)

Model tree for rizvi-rahil786/t5-small-matthewKP

  • Base model: google-t5/t5-small