---
license: bsd-3-clause
base_model: pszemraj/pegasus-x-large-book-summary
tags:
  - generated_from_trainer
  - synthsumm
metrics:
  - rouge
datasets:
  - pszemraj/synthsumm
pipeline_tag: summarization
language:
  - en
---

# pegasus-x-large-book_synthsumm

Fine-tuned on a synthetic dataset of curated long-context text paired with GPT-3.5-turbo-1106 summaries, spanning multiple domains and including "random" long-context examples from RedPajama, The Pile, etc.

Try it out in the Gradio demo.
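
For local inference, here is a minimal sketch using the `transformers` summarization pipeline. The checkpoint name is assumed to match this repo, and the generation settings are illustrative, not tuned defaults:

```python
# Minimal inference sketch; generation settings below are illustrative
# assumptions, not prescribed defaults.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/pegasus-x-large-book_synthsumm",
    device_map="auto",  # requires `accelerate`; omit to run on CPU
)

long_text = "..."  # replace with your long-context document
result = summarizer(
    long_text,
    max_length=256,          # upper bound on summary length (tokens)
    no_repeat_ngram_size=4,  # reduce verbatim repetition
    truncation=True,         # clip inputs past the model's context window
)
print(result[0]["summary_text"])
```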

## Model description

This model is a fine-tuned version of pszemraj/pegasus-x-large-book-summary on the pszemraj/synthsumm dataset. It achieves the following results on the evaluation set (a sketch for reproducing such metrics follows the list):

- Loss: 1.5481
- Rouge1: 48.141
- Rouge2: 19.1137
- Rougel: 33.647
- Rougelsum: 42.1211
- Gen Len: 73.9846
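
The ROUGE values above are on a 0-100 scale. A hedged sketch of recomputing such scores with the `evaluate` library (the predictions and references below are placeholders, not the actual evaluation set):

```python
# Sketch of recomputing ROUGE with the `evaluate` library; predictions and
# references here are placeholders, not the actual evaluation data.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["summary generated by the model"],
    references=["reference (gold) summary"],
    use_stemmer=True,
)
# compute() returns rouge1/rouge2/rougeL/rougeLsum in [0, 1]; scale to 0-100
print({k: round(v * 100, 4) for k, v in scores.items()})
```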

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 5309
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2.0
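
This is not the exact training script, but as a rough guide, the settings above map onto `transformers`' `Seq2SeqTrainingArguments` as follows (`output_dir` is a placeholder; the Adam betas/epsilon listed above are the library defaults):

```python
# Rough mapping of the hyperparameters above onto Seq2SeqTrainingArguments;
# output_dir is a placeholder and this is not the exact script used.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./pegasus-x-large-book_synthsumm",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,   # total train batch size: 8
    seed=5309,
    adam_beta1=0.9,                  # transformers defaults, as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="inverse_sqrt",
    warmup_ratio=0.03,
    num_train_epochs=2.0,
    predict_with_generate=True,      # needed to compute ROUGE during eval
)
```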

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.7369        | 0.38  | 125  | 1.7140          | 43.0265 | 15.8613 | 30.5774 | 38.2507   | 77.0462 |
| 1.7736        | 0.77  | 250  | 1.6361          | 43.0209 | 15.2384 | 29.7678 | 37.4955   | 67.6    |
| 1.4251        | 1.15  | 375  | 1.5931          | 46.2138 | 17.5559 | 33.0091 | 41.0385   | 74.1077 |
| 1.2706        | 1.54  | 500  | 1.5635          | 44.6382 | 16.5917 | 30.7551 | 39.8466   | 71.7231 |
| 1.4844        | 1.92  | 625  | 1.5481          | 48.141  | 19.1137 | 33.647  | 42.1211   | 73.9846 |

### Framework versions

- Transformers 4.36.0.dev0
- Pytorch 2.1.0
- Datasets 2.15.0
- Tokenizers 0.15.0