---
base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
datasets:
  - generator
library_name: peft
license: llama3.1
tags:
  - trl
  - sft
  - generated_from_trainer
model-index:
  - name: llama381binstruct_summarize_short
    results: []
---

# llama381binstruct_summarize_short

This model is a PEFT (parameter-efficient) fine-tuned version of [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct), trained with TRL's supervised fine-tuning (SFT) on the generator dataset. It achieves the following results on the evaluation set:

- Loss: 2.3076

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- training_steps: 500

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.146         | 1.4706  | 25   | 2.0147          |
| 0.1176        | 2.9412  | 50   | 1.8274          |
| 0.0611        | 4.4118  | 75   | 1.8771          |
| 0.0417        | 5.8824  | 100  | 1.9553          |
| 0.0388        | 7.3529  | 125  | 1.8213          |
| 0.0209        | 8.8235  | 150  | 2.0744          |
| 0.0198        | 10.2941 | 175  | 2.0470          |
| 0.0103        | 11.7647 | 200  | 2.1113          |
| 0.0089        | 13.2353 | 225  | 2.0668          |
| 0.0062        | 14.7059 | 250  | 2.0936          |
| 0.0082        | 16.1765 | 275  | 2.0592          |
| 0.0044        | 17.6471 | 300  | 2.1819          |
| 0.0025        | 19.1176 | 325  | 2.2406          |
| 0.0021        | 20.5882 | 350  | 2.2534          |
| 0.0021        | 22.0588 | 375  | 2.2745          |
| 0.0018        | 23.5294 | 400  | 2.2877          |
| 0.0018        | 25.0    | 425  | 2.2974          |
| 0.0016        | 26.4706 | 450  | 2.3034          |
| 0.0017        | 27.9412 | 475  | 2.3066          |
| 0.0016        | 29.4118 | 500  | 2.3076          |

Note that validation loss bottoms out early (1.8213 at step 125) and climbs steadily thereafter while training loss approaches zero, which suggests the adapter overfits the training data well before step 500.

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1