
t5_recommendation_jobs3

This model is a fine-tuned version of t5-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7154
  • Rouge1: 53.3020
  • Rouge2: 31.8649
  • RougeL: 52.6180
  • RougeLsum: 52.6507
  • Gen Len: 4.2934
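
Since the card does not document the training data or prompt format, any usage example is necessarily a guess. The sketch below simply loads the checkpoint with the standard transformers seq2seq classes and generates from a made-up input; the repo id and prompt are hypothetical placeholders.

```python
# A minimal usage sketch. The model id below is hypothetical (substitute
# your own local path or Hub namespace), and the prompt format is an
# assumption: the card does not document the training inputs.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "t5_recommendation_jobs3"  # hypothetical id / local checkpoint path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Gen Len of ~4 tokens in the evaluation results suggests short outputs,
# e.g. a job-title-like recommendation.
inputs = tokenizer("software engineer with python and sql experience",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```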

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
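
These settings map onto the transformers Trainer API roughly as sketched below. The output_dir is a placeholder, and the per-epoch evaluation strategy is inferred from the results table rather than stated in the card.

```python
# A sketch of how the listed hyperparameters translate into
# Seq2SeqTrainingArguments; output_dir and evaluation_strategy are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5_recommendation_jobs3",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 8 * 4 = 32
    lr_scheduler_type="linear",
    num_train_epochs=15,
    evaluation_strategy="epoch",     # assumed; matches the per-epoch rows below
    predict_with_generate=True,      # required for ROUGE / Gen Len metrics
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults,
    # so they need no explicit arguments here.
)
```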

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 0.99  | 93   | 0.7607          | 47.7139 | 25.7252 | 47.3810 | 47.4008   | 3.9914  |
| No log        | 1.99  | 187  | 0.7516          | 49.1554 | 27.2693 | 48.5465 | 48.5321   | 4.2381  |
| No log        | 3.0   | 281  | 0.7454          | 49.6795 | 27.8710 | 49.2537 | 49.2633   | 4.1665  |
| No log        | 4.0   | 375  | 0.7407          | 49.8898 | 27.7613 | 49.4210 | 49.4315   | 4.1331  |
| No log        | 4.99  | 468  | 0.7360          | 51.3330 | 29.6585 | 50.9846 | 51.0159   | 4.0724  |
| 0.6327        | 5.99  | 562  | 0.7222          | 50.9951 | 29.7573 | 50.6261 | 50.6555   | 4.1354  |
| 0.6327        | 7.0   | 656  | 0.7175          | 51.8101 | 30.5342 | 51.3743 | 51.3883   | 4.0903  |
| 0.6327        | 8.0   | 750  | 0.7122          | 51.9497 | 30.8316 | 51.4403 | 51.4551   | 4.2553  |
| 0.6327        | 8.99  | 843  | 0.7144          | 52.3842 | 30.7131 | 51.8160 | 51.8629   | 4.1883  |
| 0.6327        | 9.99  | 937  | 0.7134          | 52.4103 | 31.1474 | 51.8047 | 51.8294   | 4.2903  |
| 0.5576        | 11.0  | 1031 | 0.7125          | 52.8364 | 31.2692 | 52.1248 | 52.1554   | 4.3261  |
| 0.5576        | 12.0  | 1125 | 0.7093          | 52.7446 | 30.9128 | 52.0864 | 52.1538   | 4.4202  |
| 0.5576        | 12.99 | 1218 | 0.7104          | 52.9125 | 31.4285 | 52.2397 | 52.2962   | 4.2918  |
| 0.5576        | 13.99 | 1312 | 0.7127          | 53.4228 | 32.2228 | 52.6175 | 52.6691   | 4.2265  |
| 0.5576        | 14.88 | 1395 | 0.7154          | 53.3020 | 31.8649 | 52.6180 | 52.6507   | 4.2934  |

Framework versions

  • Transformers 4.27.0
  • PyTorch 2.1.2
  • Datasets 2.8.0
  • Tokenizers 0.13.3