---
license: other
base_model: NowaBwagel0/llama-68m-oasst
tags:
- generated_from_trainer
model-index:
- name: llama-68m-oasst
  results: []
---

# llama-68m-oasst

This model is a fine-tuned version of [NowaBwagel0/llama-68m-oasst](https://huggingface.co/NowaBwagel0/llama-68m-oasst) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6992

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0086        | 0.9987 | 382  | 2.6262          |
| 1.9407        | 2.0    | 765  | 2.6339          |
| 1.9151        | 2.9987 | 1147 | 2.6486          |
| 1.9198        | 4.0    | 1530 | 2.6611          |
| 1.86          | 4.9987 | 1912 | 2.6686          |
| 1.8415        | 6.0    | 2295 | 2.6788          |
| 1.7858        | 6.9987 | 2677 | 2.6880          |
| 1.7598        | 8.0    | 3060 | 2.6953          |
| 1.7996        | 8.9882 | 3438 | 2.6992          |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.2.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
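
## Training configuration (sketch)

The hyperparameters listed under "Training hyperparameters" map fairly directly onto a `transformers` `Trainer` setup. The sketch below is only an assumption about how those values would be expressed in code; the training and evaluation datasets, preprocessing, and output paths are placeholders, since the card does not document them.

```python
# Hypothetical Trainer configuration mirroring the hyperparameters listed above.
# The datasets, preprocessing, and output directory are placeholders; the card
# does not document the actual training data or script.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_id = "NowaBwagel0/llama-68m-oasst"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

training_args = TrainingArguments(
    output_dir="llama-68m-oasst",   # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=4,  # train_batch_size: 4
    per_device_eval_batch_size=4,   # eval_batch_size: 4
    gradient_accumulation_steps=2,  # total_train_batch_size: 8
    num_train_epochs=9,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",    # assumption: per-epoch eval, matching the results table
)

# train_dataset / eval_dataset are not documented on this card; supply your own.
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
#     tokenizer=tokenizer,
# )
# trainer.train()
```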
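
## Example usage

The card does not document intended usage, so the snippet below is only a minimal inference sketch using the standard `transformers` text-generation API with the checkpoint name from the header; the prompt and generation settings are illustrative placeholders. It assumes an environment matching the framework versions listed above.

```python
# Minimal inference sketch. The prompt and generation settings are
# illustrative placeholders, not values documented on this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NowaBwagel0/llama-68m-oasst"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain what a language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")

# A 68M-parameter model produces rough completions; keep expectations modest.
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```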