
expected_model_nov11

This model is a fine-tuned version of google/flan-t5-base on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the list below):

  • Loss: 0.1943
  • Rouge1: 72.751
  • Rouge2: 64.531
  • RougeL: 71.7809
  • RougeLsum: 72.5858
  • Gen Len: 16.4797
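
Because the training data and task are not documented, the snippet below is only a minimal inference sketch. It assumes a seq2seq generation task served through the Transformers API; the example input, the "summarize:" prefix, and the generation settings are illustrative guesses (the reported Gen Len of roughly 16 tokens suggests short outputs).

```python
# Minimal inference sketch; the task and prompt format are assumptions,
# since the card does not document the training data.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "tanvirsrbd1/expected_model_nov11"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical FLAN-T5-style prompt; replace with the format used in training.
text = "summarize: <your input text here>"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# max_new_tokens is sized from the reported Gen Len (~16 tokens).
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```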

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the mapping sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 200
  • num_epochs: 10
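
For reproducibility, here is a sketch of how the list above maps onto Seq2SeqTrainingArguments in Transformers 4.33. The output_dir value and the predict_with_generate flag are assumptions not stated in the card; every other value mirrors the reported hyperparameters.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="expected_model_nov11",  # assumption: not documented in the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,      # Adam settings as reported (these are the defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=10,
    predict_with_generate=True,  # assumption: needed to report ROUGE/Gen Len
)
```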

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 11.5118       | 0.68  | 200  | 0.4990          | 52.9797 | 43.7182 | 52.2591 | 52.9986   | 9.6068  |
| 0.4597        | 1.36  | 400  | 0.2770          | 71.5492 | 62.6473 | 70.6589 | 71.4471   | 16.4237 |
| 0.3259        | 2.03  | 600  | 0.2486          | 72.1475 | 63.0992 | 71.3032 | 72.0859   | 16.3983 |
| 0.273         | 2.71  | 800  | 0.2273          | 71.9258 | 63.3664 | 71.1095 | 71.7798   | 16.5339 |
| 0.2545        | 3.39  | 1000 | 0.2161          | 72.3257 | 63.5931 | 71.5259 | 72.3231   | 16.4322 |
| 0.2374        | 4.07  | 1200 | 0.2091          | 72.3551 | 63.9109 | 71.5349 | 72.2473   | 16.4746 |
| 0.2143        | 4.75  | 1400 | 0.2116          | 72.3027 | 63.8027 | 71.6227 | 72.221    | 16.439  |
| 0.2161        | 5.42  | 1600 | 0.1991          | 72.3081 | 63.7819 | 71.4337 | 72.2038   | 16.4712 |
| 0.1987        | 6.1   | 1800 | 0.2039          | 72.4605 | 64.0889 | 71.6023 | 72.3601   | 16.4864 |
| 0.1942        | 6.78  | 2000 | 0.2020          | 72.458  | 63.8879 | 71.4977 | 72.3096   | 16.4424 |
| 0.1826        | 7.46  | 2200 | 0.2000          | 72.2467 | 63.7052 | 71.3826 | 72.0909   | 16.4288 |
| 0.1867        | 8.14  | 2400 | 0.1965          | 72.417  | 64.0356 | 71.5254 | 72.3042   | 16.4983 |
| 0.1773        | 8.81  | 2600 | 0.1930          | 72.5715 | 64.1819 | 71.6728 | 72.501    | 16.4797 |
| 0.1875        | 9.49  | 2800 | 0.1943          | 72.751  | 64.531  | 71.7809 | 72.5858   | 16.4797 |
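
The headline metrics at the top of the card match the final row (step 2800). The ROUGE columns look like 100x the output of the standard Transformers summarization metrics recipe; the sketch below shows that recipe using the evaluate library, offered as an assumption about how these numbers were produced rather than a confirmed description.

```python
import numpy as np
import evaluate
from transformers import AutoTokenizer

rouge = evaluate.load("rouge")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

def compute_metrics(eval_preds):
    """Standard seq2seq ROUGE recipe; passed to Seq2SeqTrainer as compute_metrics."""
    preds, labels = eval_preds
    # Label positions set to -100 are ignored by the loss; restore pad ids
    # so they can be decoded.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    result = rouge.compute(
        predictions=decoded_preds, references=decoded_labels, use_stemmer=True
    )
    # Scale ROUGE fractions to the 0-100 range reported in the table above.
    result = {key: round(value * 100, 4) for key, value in result.items()}
    # "Gen Len": mean length of the generated sequences in tokens.
    result["gen_len"] = float(
        np.mean([np.count_nonzero(p != tokenizer.pad_token_id) for p in preds])
    )
    return result
```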

Framework versions

  • Transformers 4.33.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3
