
t5_recommendation_jobs_skills

This model is a fine-tuned version of t5-small on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how metrics like these are typically computed follows the list):

  • Loss: 0.4429
  • ROUGE-1: 52.6616
  • ROUGE-2: 30.0723
  • ROUGE-L: 52.5572
  • ROUGE-Lsum: 52.6440
  • Gen Len (average generated length, in tokens): 3.8132
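
These scores follow the Hugging Face summarization-example convention of reporting ROUGE as percentages. Below is a minimal sketch of how such scores are typically computed with the evaluate library; the predictions and references are hypothetical, since the evaluation set is not documented.

```python
import evaluate

# Hypothetical model outputs and gold labels -- the real evaluation set is not documented.
predictions = ["python sql machine learning"]
references = ["python sql deep learning"]

rouge = evaluate.load("rouge")
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)

# evaluate returns fractions in [0, 1]; multiply by 100 to match the figures above.
for name, value in scores.items():
    print(f"{name}: {value * 100:.4f}")
```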

Model description

More information needed

Intended uses & limitations

More information needed
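
Until this section is filled in, here is a minimal, hedged inference sketch. The checkpoint path, the input format (a job title as the prompt), and the expected output (recommended skills) are assumptions based on the model name alone; treat them as placeholders.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder path -- replace with the actual repo id or local directory of this model.
checkpoint = "path/to/t5_recommendation_jobs_skills"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Assumed input format: a job title; the training data is not documented.
inputs = tokenizer("Data Scientist", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)  # average Gen Len above is ~3.8 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```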

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of how they might map to Seq2SeqTrainingArguments follows the list):

  • learning_rate: 0.01
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
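
A hedged sketch of how these values might map onto Seq2SeqTrainingArguments in Transformers 4.27. The output_dir, evaluation_strategy, and predict_with_generate values are assumptions; Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's default optimizer settings, so it needs no explicit configuration.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5_recommendation_jobs_skills",  # assumed output directory
    learning_rate=0.01,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 8 x 2 = total train batch size of 16
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    evaluation_strategy="epoch",    # assumption: the results table reports one eval per epoch
    predict_with_generate=True,     # assumption: required to compute ROUGE during evaluation
)
```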

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-------:|
| No log        | 1.0   | 187  | 0.5664          | 38.3484 | 15.8800 | 38.3864 | 38.3164    | 3.6202  |
| No log        | 2.0   | 375  | 0.5174          | 43.9418 | 21.8366 | 43.8970 | 43.9036    | 3.5222  |
| 0.9323        | 3.0   | 562  | 0.4944          | 46.3290 | 24.6659 | 46.2111 | 46.2578    | 3.6591  |
| 0.9323        | 4.0   | 750  | 0.4788          | 47.5516 | 24.5615 | 47.4347 | 47.4707    | 3.6833  |
| 0.9323        | 5.0   | 937  | 0.4788          | 48.2406 | 25.5062 | 48.1735 | 48.2161    | 3.6553  |
| 0.4409        | 6.0   | 1125 | 0.4614          | 49.5737 | 27.1738 | 49.4533 | 49.5766    | 3.6802  |
| 0.4409        | 7.0   | 1312 | 0.4610          | 50.6072 | 27.7939 | 50.4005 | 50.5340    | 3.7175  |
| 0.3878        | 8.0   | 1500 | 0.4523          | 51.0302 | 28.5195 | 50.9143 | 50.9516    | 3.6693  |
| 0.3878        | 9.0   | 1687 | 0.4474          | 51.6087 | 29.4035 | 51.4667 | 51.5390    | 3.7105  |
| 0.3878        | 10.0  | 1875 | 0.4488          | 52.0192 | 29.9305 | 51.8988 | 51.9678    | 3.8031  |
| 0.3437        | 11.0  | 2062 | 0.4468          | 52.1859 | 29.5148 | 52.1171 | 52.2237    | 3.7136  |
| 0.3437        | 12.0  | 2250 | 0.4438          | 51.8951 | 28.7655 | 51.8052 | 51.8384    | 3.7813  |
| 0.3437        | 13.0  | 2437 | 0.4466          | 52.0524 | 29.4990 | 51.9942 | 52.0485    | 3.7198  |
| 0.3156        | 14.0  | 2625 | 0.4443          | 52.2304 | 29.5992 | 52.1425 | 52.2578    | 3.6903  |
| 0.3156        | 14.96 | 2805 | 0.4429          | 52.6616 | 30.0723 | 52.5572 | 52.6440    | 3.8132  |

Framework versions

  • Transformers 4.27.0
  • PyTorch 2.1.2
  • Datasets 2.8.0
  • Tokenizers 0.13.3