---
license: other
base_model: NowaBwagel0/llama-68m-oasst
tags:
  - generated_from_trainer
model-index:
  - name: llama-68m-oasst
    results: []
---

# llama-68m-oasst

This model is a fine-tuned version of [NowaBwagel0/llama-68m-oasst](https://huggingface.co/NowaBwagel0/llama-68m-oasst) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 3.8987
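
As an illustration only (not part of the original card), the checkpoint can be loaded with the standard Transformers auto classes; the prompt below is just a placeholder:

```python
# Minimal, illustrative usage sketch for this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NowaBwagel0/llama-68m-oasst"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder prompt for demonstration.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```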

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 18
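
The card does not include the training script; the sketch below only shows how the listed hyperparameters could map onto `transformers.TrainingArguments` (the output directory and any option not listed above are placeholders or defaults):

```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
# The actual training script is not part of this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-68m-oasst",       # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,      # total train batch size: 4 * 2 = 8
    num_train_epochs=18,
    lr_scheduler_type="linear",
    seed=42,
    # The Trainer's default optimizer already uses betas=(0.9, 0.999)
    # and epsilon=1e-08, matching the values listed above.
)
```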

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.97          | 0.9987  | 382  | 3.4996          |
| 0.9273        | 2.0     | 765  | 3.5370          |
| 0.9176        | 2.9987  | 1147 | 3.5715          |
| 0.9004        | 4.0     | 1530 | 3.6086          |
| 0.8736        | 4.9987  | 1912 | 3.6379          |
| 0.8599        | 6.0     | 2295 | 3.6761          |
| 0.7955        | 6.9987  | 2677 | 3.7044          |
| 0.7741        | 8.0     | 3060 | 3.7346          |
| 0.7364        | 8.9987  | 3442 | 3.7615          |
| 0.7605        | 10.0    | 3825 | 3.7855          |
| 0.695         | 10.9987 | 4207 | 3.8088          |
| 0.7111        | 12.0    | 4590 | 3.8332          |
| 0.6849        | 12.9987 | 4972 | 3.8490          |
| 0.6862        | 14.0    | 5355 | 3.8659          |
| 0.6834        | 14.9987 | 5737 | 3.8785          |
| 0.6541        | 16.0    | 6120 | 3.8898          |
| 0.646         | 16.9987 | 6502 | 3.8961          |
| 0.6777        | 17.9765 | 6876 | 3.8987          |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.2.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
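
As a quick, illustrative check that a local environment matches the versions listed above:

```python
# Illustrative environment check for the versions listed above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.41.2
print("PyTorch:", torch.__version__)              # expected 2.2.2+cu121
print("Datasets:", datasets.__version__)          # expected 2.20.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.19.1
```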