---
license: other
base_model: NowaBwagel0/llama-68m-oasst
tags:
- generated_from_trainer
model-index:
- name: llama-68m-oasst
  results: []
---

# llama-68m-oasst

This model is a fine-tuned version of [NowaBwagel0/llama-68m-oasst](https://huggingface.co/NowaBwagel0/llama-68m-oasst) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5908

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8024        | 1.0   | 382  | 2.7493          |
| 2.6581        | 2.0   | 765  | 2.6798          |
| 2.6276        | 3.0   | 1147 | 2.6429          |
| 2.5111        | 4.0   | 1530 | 2.6212          |
| 2.4614        | 5.0   | 1912 | 2.6069          |
| 2.4789        | 6.0   | 2295 | 2.5985          |
| 2.4288        | 7.0   | 2677 | 2.5942          |
| 2.4184        | 8.0   | 3060 | 2.5909          |
| 2.2978        | 8.99  | 3438 | 2.5908          |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
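For readers who want to reproduce this configuration, the hyperparameters above map roughly onto 🤗 Transformers `TrainingArguments` as follows. This is a minimal sketch under the assumption that the `Trainer` API was used; `output_dir` and the per-epoch evaluation strategy are assumptions, and the Adam betas/epsilon listed above match the library defaults.

```python
# Sketch only: assumes the Trainer API was used; output_dir and
# evaluation_strategy are assumptions, not taken from the training logs.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-68m-oasst",     # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,    # effective train batch size: 4 * 2 = 8
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=9,
    evaluation_strategy="epoch",      # assumed; matches the per-epoch results table
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the TrainingArguments
    # default optimizer configuration, so no extra arguments are needed for it.
)
```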
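Since no usage instructions are given above, here is a minimal inference sketch using the standard Transformers causal-LM API; the prompt and generation settings are illustrative and not taken from this model's documentation.

```python
# Minimal inference sketch; prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NowaBwagel0/llama-68m-oasst"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Hello, how can I help you today?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```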