
flan-t5-base-finetuned-length_control_token

This model is a fine-tuned version of google/flan-t5-base on the PWKP-GPT3-LENGTH-CONTROL-40BUCKETS dataset (described below). It achieves the following results on the evaluation set:

  • Loss: 1.0276
  • Sacrebleu: 16.2445

Model description

This model was trained on a dataset called PWKP-GPT3-LENGTH-CONTROL-40BUCKETS. The dataset contains 30k instances taken from PWKP and processed through GPT-3 to obtain simplifications: 10k instances intended to produce very long simplifications, 10k intended to produce very short simplifications, and 10k with no simplicity level specified. The model does not successfully control output length across these buckets. A related dataset, PWKP-GPT3-LENGTH-CONTROL-4BUCKETS, also exists, but no model has been trained on it; its buckets are also rather unbalanced.

The idea comes from Controllable Sentence Simplification (Louis Martin et al.), https://arxiv.org/pdf/1910.02677.pdf

It was fine-tuned from the FLAN-T5-base model.
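
For illustration, below is a minimal inference sketch using the standard transformers seq2seq API. The repository id and the `<LENGTH_40>` control-token prefix are placeholders: this card does not document the exact prompt format used during fine-tuning, only that length buckets are signalled through the input.

```python
# Hypothetical usage sketch. The repo id and the length-control prefix are
# assumptions; replace them with the actual values used during fine-tuning.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "flan-t5-base-finetuned-length_control_token"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# A control token prepended to the source sentence is assumed to select one of
# the 40 length buckets.
source = "<LENGTH_40> The incandescent light bulb converts electricity into light by heating a filament."
inputs = tokenizer(source, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```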

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 6
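
A minimal sketch of a matching configuration, assuming the Hugging Face Seq2SeqTrainer was used (the reported Adam betas and epsilon match the library defaults). The output directory and the evaluation/generation flags are assumptions, not documented settings.

```python
# Sketch of training arguments matching the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-finetuned-length_control_token",  # placeholder
    learning_rate=5.6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=6,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",   # assumption: metrics reported once per epoch
    predict_with_generate=True,    # assumption: needed to compute SacreBLEU
)
```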

Training results

| Training Loss | Epoch | Step  | Validation Loss | Sacrebleu |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|
| 1.3257        | 1.0   | 1782  | 1.0906          | 15.4208   |
| 1.1718        | 2.0   | 3564  | 1.0648          | 15.5358   |
| 1.0972        | 3.0   | 5346  | 1.0484          | 15.8113   |
| 1.0472        | 4.0   | 7128  | 1.0394          | 16.0159   |
| 1.0092        | 5.0   | 8910  | 1.0305          | 16.1341   |
| 0.9858        | 6.0   | 10692 | 1.0276          | 16.2445   |
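
For reference, a SacreBLEU score like those in the table above can be computed with the `evaluate` library. The sketch below uses toy predictions and references; it is not the actual evaluation script used for this model.

```python
# Toy SacreBLEU computation; predictions and references are illustrative only.
import evaluate

sacrebleu = evaluate.load("sacrebleu")
predictions = ["The light bulb turns electricity into light."]
references = [["The light bulb converts electricity into light."]]
result = sacrebleu.compute(predictions=predictions, references=references)
print(round(result["score"], 4))
```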

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu117
  • Datasets 2.10.1
  • Tokenizers 0.13.2