|
--- |
|
license: apache-2.0 |
|
base_model: facebook/bart-large |
|
tags: |
|
- generated_from_trainer |
|
datasets: |
|
- clupubhealth |
|
metrics: |
|
- rouge |
|
model-index: |
|
- name: bart-pubhealth-expanded-hi-grad |
|
results: |
|
- task: |
|
name: Sequence-to-sequence Language Modeling |
|
type: text2text-generation |
|
dataset: |
|
name: clupubhealth |
|
type: clupubhealth |
|
config: expanded |
|
split: test |
|
args: expanded |
|
metrics: |
|
- name: Rouge1 |
|
type: rouge |
|
value: 30.2592 |
|
--- |
|
|
|
|
|
|
# bart-pubhealth-expanded-hi-grad |
|
|
|
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the clupubhealth dataset. |
|
It achieves the following results on the evaluation set (a sketch of how such ROUGE scores can be computed follows the list):
|
- Loss: 2.0581 |
|
- Rouge1: 30.2592 |
|
- Rouge2: 11.7027 |
|
- RougeL: 24.1706

- RougeLsum: 24.3596
|
- Gen Len: 19.95 |
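
The ROUGE values above are reported on a 0-100 scale. As a reference, here is a minimal sketch of how such scores can be computed with the `evaluate` library; the prediction and reference strings are placeholders, not outputs from this model.

```python
import evaluate

# Load the ROUGE metric (requires the `rouge_score` package).
rouge = evaluate.load("rouge")

# Placeholder texts; in practice these would be model generations and
# gold summaries from the clupubhealth test split.
predictions = ["health experts say the claim is misleading"]
references = ["experts in public health consider the claim misleading"]

scores = rouge.compute(predictions=predictions, references=references)

# Recent versions of `evaluate` return fractions in [0, 1]; the card
# reports the scores scaled by 100.
print({name: round(value * 100, 4) for name, value in scores.items()})
```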
|
|
|
## Model description |
|
|
|
This checkpoint is [facebook/bart-large](https://huggingface.co/facebook/bart-large) fine-tuned for sequence-to-sequence generation on the clupubhealth dataset (expanded config). The repository name suggests the distinguishing feature of this run is its very large effective batch size ("hi-grad": 950 gradient-accumulation steps, for a total train batch size of 15,200; see the hyperparameters below).
|
|
|
## Intended uses & limitations |
|
|
|
The model is intended for text2text generation on public-health text: given a document, it produces a short summary (the average generated length on the test set is about 20 tokens). It has only been evaluated with the ROUGE scores reported above, so generated summaries should not be treated as factually reliable, particularly in a health context.
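
A minimal inference sketch with the `transformers` summarization pipeline follows; the model id and the input text are placeholders (the exact Hub repo id depends on the hosting namespace), and `max_length` mirrors the reported average generation length.

```python
from transformers import pipeline

# Placeholder model id; substitute the actual Hub repo id for this checkpoint.
summarizer = pipeline("summarization", model="bart-pubhealth-expanded-hi-grad")

document = "Full text of a public-health article or claim to be condensed ..."

# max_length mirrors the ~20-token average generation length reported above.
result = summarizer(document, max_length=20, truncation=True)
print(result[0]["summary_text"])
```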
|
|
|
## Training and evaluation data |
|
|
|
Per the metadata above, the model is trained and evaluated on the clupubhealth dataset (expanded config), with the reported metrics computed on the test split. Details of the dataset's construction and preprocessing are not recorded in this card.
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a sketch reproducing them in code follows the list):
|
- learning_rate: 2e-05 |
|
- train_batch_size: 16 |
|
- eval_batch_size: 8 |
|
- seed: 42 |
|
- gradient_accumulation_steps: 950 |
|
- total_train_batch_size: 15200 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- num_epochs: 10 |
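
For reproducibility, below is a minimal sketch of these settings expressed as `Seq2SeqTrainingArguments`; `output_dir` and `predict_with_generate` are assumptions not listed in the card, and the listed Adam betas and epsilon match the `transformers` Trainer defaults, so they need no explicit configuration here.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-pubhealth-expanded-hi-grad",  # assumed output location
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=950,  # effective batch size: 16 * 950 = 15,200
    lr_scheduler_type="linear",
    num_train_epochs=10,
    predict_with_generate=True,  # assumed, so ROUGE can be computed during eval
)
```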
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
|
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| |
|
| 3.7893 | 0.49 | 2 | 2.3943 | 20.5187 | 5.4764 | 15.9378 | 16.2797 | 20.0 | |
|
| 3.4045 | 0.98 | 4 | 2.1599 | 24.0858 | 7.8207 | 19.0412 | 19.1609 | 19.88 | |
|
| 3.2488 | 1.47 | 6 | 2.1026 | 27.3466 | 9.369 | 21.1419 | 21.3136 | 19.865 | |
|
| 3.1823 | 1.96 | 8 | 2.1324 | 28.825 | 9.6007 | 22.0963 | 22.3776 | 19.82 | |
|
| 3.1263 | 2.44 | 10 | 2.1105 | 29.2694 | 10.5001 | 23.2842 | 23.5473 | 19.85 | |
|
| 3.0834 | 2.93 | 12 | 2.0837 | 28.5975 | 10.2016 | 22.048 | 22.1341 | 19.915 | |
|
| 3.0283 | 3.42 | 14 | 2.0773 | 28.5813 | 10.447 | 22.7456 | 22.8496 | 19.91 | |
|
| 3.0301 | 3.91 | 16 | 2.0730 | 30.1049 | 11.4375 | 24.083 | 24.3045 | 19.945 | |
|
| 2.9851 | 4.4 | 18 | 2.0775 | 29.2224 | 10.2722 | 22.7019 | 23.0038 | 19.95 | |
|
| 2.9769 | 4.89 | 20 | 2.0777 | 29.6981 | 10.7044 | 23.2487 | 23.5232 | 19.96 | |
|
| 2.9623 | 5.38 | 22 | 2.0711 | 29.0438 | 10.5105 | 23.1751 | 23.415 | 19.92 | |
|
| 2.9421 | 5.87 | 24 | 2.0676 | 29.096 | 10.6599 | 23.1381 | 23.3765 | 19.985 | |
|
| 2.9234 | 6.36 | 26 | 2.0646 | 29.6561 | 10.9096 | 23.2384 | 23.4265 | 19.985 | |
|
| 2.9107 | 6.85 | 28 | 2.0616 | 29.7134 | 11.1686 | 23.272 | 23.4475 | 19.985 | |
|
| 2.9077 | 7.33 | 30 | 2.0593 | 29.5055 | 11.0256 | 23.4406 | 23.6653 | 19.955 | |
|
| 2.9072 | 7.82 | 32 | 2.0585 | 30.0504 | 11.433 | 23.9176 | 24.1728 | 19.95 | |
|
| 2.8951 | 8.31 | 34 | 2.0583 | 29.9401 | 11.602 | 23.948 | 24.1323 | 19.95 | |
|
| 2.8955 | 8.8 | 36 | 2.0584 | 30.1158 | 11.4745 | 24.0509 | 24.2465 | 19.94 | |
|
| 2.8774 | 9.29 | 38 | 2.0582 | 30.0476 | 11.4465 | 23.8956 | 24.0527 | 19.945 | |
|
| 2.8851 | 9.78 | 40 | 2.0581 | 30.2592 | 11.7027 | 24.1706 | 24.3596 | 19.95 | |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.31.0 |
|
- Pytorch 2.0.1+cu117 |
|
- Datasets 2.7.1 |
|
- Tokenizers 0.13.2 |
|
|