
t5_recommendation_jobs

This model is a fine-tuned version of t5-small. The fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 0.5136
  • ROUGE-1: 61.4539
  • ROUGE-2: 35.8407
  • ROUGE-L: 61.0072
  • ROUGE-Lsum: 61.0251
  • Gen Len (average generated length, in tokens): 4.0796
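
The card does not document the expected input format, so the snippet below is only a minimal usage sketch: the repository ID and the example input are assumptions, not values taken from the training setup.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repository ID; replace with the actual path of this model.
model_id = "t5_recommendation_jobs"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The expected input format is not documented in this card; a plain-text
# candidate profile is assumed here for illustration.
text = "candidate skills: python, sql, machine learning"
inputs = tokenizer(text, return_tensors="pt")

# Eval Gen Len is ~4 tokens, so a small generation budget is sufficient.
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```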

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
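
As a sketch, these settings map onto Transformers' `Seq2SeqTrainingArguments` roughly as follows. This is a reconstruction from the list above, not the original training script; the Adam betas and epsilon match the `Trainer` defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="t5_recommendation_jobs",  # hypothetical output directory
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,        # 8 * 4 = effective batch size of 32
    num_train_epochs=15,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",          # assumption: the table reports one eval per epoch
    predict_with_generate=True,           # assumption: needed to compute ROUGE during eval
)
```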

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-------:|
| No log | 0.97  | 8   | 0.4263 | 59.6302 | 33.5251 | 59.0023 | 59.1277 | 4.0973 |
| No log | 1.94  | 16  | 0.4339 | 59.1603 | 34.6165 | 58.6462 | 58.6462 | 4.0796 |
| No log | 2.91  | 24  | 0.4536 | 58.6452 | 35.4130 | 58.0788 | 58.0080 | 4.1593 |
| No log | 4.0   | 33  | 0.4584 | 59.7040 | 35.4130 | 59.3226 | 59.1646 | 4.0531 |
| No log | 4.97  | 41  | 0.4627 | 61.6962 | 38.1121 | 61.3938 | 61.2684 | 4.0531 |
| No log | 5.94  | 49  | 0.4677 | 61.3496 | 36.9027 | 60.8776 | 60.8175 | 4.0    |
| No log | 6.91  | 57  | 0.4716 | 60.6511 | 35.8997 | 59.9610 | 59.9758 | 4.0885 |
| No log | 8.0   | 66  | 0.4925 | 60.4003 | 34.9558 | 60.0000 | 59.9779 | 4.0177 |
| No log | 8.97  | 74  | 0.4905 | 57.9340 | 32.9499 | 57.5432 | 57.6117 | 4.0265 |
| No log | 9.94  | 82  | 0.4951 | 60.5120 | 35.7965 | 59.7777 | 59.9842 | 4.1062 |
| No log | 10.91 | 90  | 0.5053 | 61.3885 | 37.2566 | 60.9166 | 61.0862 | 4.0973 |
| No log | 12.0  | 99  | 0.5131 | 61.1473 | 35.6637 | 60.3666 | 60.4867 | 4.1593 |
| No log | 12.97 | 107 | 0.5180 | 59.8736 | 33.5398 | 59.2225 | 59.2162 | 4.1062 |
| No log | 13.94 | 115 | 0.5224 | 61.8163 | 36.6667 | 61.3812 | 61.4138 | 4.0708 |
| No log | 14.55 | 120 | 0.5136 | 61.4539 | 35.8407 | 61.0072 | 61.0251 | 4.0796 |

Training loss is shown as "No log" because it was not recorded at these evaluation steps.
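
The ROUGE columns are reported on a 0-100 scale. Below is a sketch of computing the same metrics with the `evaluate` library, using placeholder predictions and references (the actual evaluation set is not documented in this card).

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder data for illustration only.
predictions = ["data analyst"]
references = ["data analyst"]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)

# Keys are rouge1, rouge2, rougeL, rougeLsum. Recent versions of `evaluate`
# return floats in [0, 1]; the table above reports the same scores * 100.
print(scores)
```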

Framework versions

  • Transformers 4.27.0
  • PyTorch 2.1.2
  • Datasets 2.8.0
  • Tokenizers 0.13.3