---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: facebook/bart-large-mnli
metrics:
- f1
- precision
- recall
- accuracy
model-index:
- name: finetuned_bart
  results: []
---

# finetuned_bart

This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0620
- F1: 0.9236
- Precision: 0.9000
- Recall: 0.9485
- Accuracy: 0.9216
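
The adapter can be loaded on top of the base checkpoint with PEFT. Below is a minimal loading sketch; the adapter path is a placeholder for wherever these weights are stored, not a verified location:

```python
# Minimal PEFT loading sketch. "path/to/lora_adapter_bart_on_eu" is a
# placeholder, not the verified location of these adapter weights.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "facebook/bart-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id)

# Wrap the base model with the trained LoRA adapter.
model = PeftModel.from_pretrained(base_model, "path/to/lora_adapter_bart_on_eu")
model.eval()
```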

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 2
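
For reproducibility, these settings map directly onto `transformers.TrainingArguments`. A hedged sketch is below; the output directory is an assumption, and the Adam betas/epsilon listed above are already the Trainer defaults:

```python
# Hedged reconstruction of the reported hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned_bart",       # assumption: not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=20,
    num_train_epochs=2,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults,
    # matching the optimizer settings listed above.
)
```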

### Training results

| Training Loss | Epoch  | Step | Validation Loss | F1     | Precision | Recall | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:------:|:---------:|:------:|:--------:|
| 0.0856        | 0.0933 | 50   | 0.0695          | 0.9122 | 0.9010    | 0.9238 | 0.9111   |
| 0.0593        | 0.1866 | 100  | 0.0685          | 0.9152 | 0.8970    | 0.9341 | 0.9135   |
| 0.0572        | 0.2799 | 150  | 0.0681          | 0.9149 | 0.8997    | 0.9306 | 0.9135   |
| 0.0549        | 0.3731 | 200  | 0.0679          | 0.9150 | 0.9054    | 0.9249 | 0.9141   |
| 0.0529        | 0.4664 | 250  | 0.0678          | 0.9174 | 0.9043    | 0.9308 | 0.9162   |
| 0.0776        | 0.5597 | 300  | 0.0673          | 0.9158 | 0.9079    | 0.9238 | 0.9151   |
| 0.0799        | 0.6530 | 350  | 0.0647          | 0.9201 | 0.8964    | 0.9450 | 0.9179   |
| 0.0806        | 0.7463 | 400  | 0.0647          | 0.9196 | 0.8968    | 0.9436 | 0.9175   |
| 0.0781        | 0.8396 | 450  | 0.0635          | 0.9193 | 0.8982    | 0.9415 | 0.9174   |
| 0.0771        | 0.9328 | 500  | 0.0633          | 0.9189 | 0.9019    | 0.9366 | 0.9174   |
| 0.0787        | 1.0261 | 550  | 0.0629          | 0.9202 | 0.8994    | 0.9420 | 0.9184   |
| 0.0737        | 1.1194 | 600  | 0.0627          | 0.9210 | 0.8989    | 0.9442 | 0.9190   |
| 0.0722        | 1.2127 | 650  | 0.0634          | 0.9212 | 0.8981    | 0.9455 | 0.9192   |
| 0.0684        | 1.3060 | 700  | 0.0630          | 0.9217 | 0.9065    | 0.9374 | 0.9204   |
| 0.0655        | 1.3993 | 750  | 0.0629          | 0.9228 | 0.8974    | 0.9496 | 0.9205   |
| 0.0739        | 1.4925 | 800  | 0.0625          | 0.9229 | 0.8993    | 0.9477 | 0.9208   |
| 0.0666        | 1.5858 | 850  | 0.0625          | 0.9233 | 0.8962    | 0.9521 | 0.9209   |
| 0.0703        | 1.6791 | 900  | 0.0621          | 0.9238 | 0.9001    | 0.9488 | 0.9218   |
| 0.0738        | 1.7724 | 950  | 0.0617          | 0.9227 | 0.9007    | 0.9458 | 0.9208   |
| 0.068         | 1.8657 | 1000 | 0.0620          | 0.9233 | 0.9002    | 0.9477 | 0.9213   |
| 0.069         | 1.9590 | 1050 | 0.0620          | 0.9236 | 0.9000    | 0.9485 | 0.9216   |
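
The metric columns above are consistent with a `compute_metrics` callback built on the `evaluate` library. A plausible sketch follows; it is not the author's verified code, and the metrics' default averaging behavior is an assumption:

```python
# Plausible compute_metrics sketch; the metric choices and averaging mode are
# assumptions inferred from the columns reported above.
import numpy as np
import evaluate

_f1 = evaluate.load("f1")
_precision = evaluate.load("precision")
_recall = evaluate.load("recall")
_accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": _f1.compute(predictions=preds, references=labels)["f1"],
        "precision": _precision.compute(predictions=preds, references=labels)["precision"],
        "recall": _recall.compute(predictions=preds, references=labels)["recall"],
        "accuracy": _accuracy.compute(predictions=preds, references=labels)["accuracy"],
    }
```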

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1