---
base_model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge
library_name: peft
tags:
- generated_from_trainer
model-index:
- name: outputs/newdataset-out
results: []
---
[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

The following axolotl config was used for training (axolotl version: `0.4.1`):
```yaml
base_model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
chat_template: llama3
datasets:
- path: Fischerboot/newnewdataset-sophie
type: sharegpt
conversation: llama3
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./outputs/newdataset-out
adapter: qlora
lora_model_dir:
sequence_len: 128
sample_packing: false
pad_to_sequence_len: true
lora_r: 1024
lora_alpha: 512
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 8
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
eval_sample_packing: false
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<|begin_of_text|>"
eos_token: "<|end_of_text|>"
pad_token: "<|end_of_text|>"
```
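
For reference, here is a minimal sketch of loading this adapter for inference with `transformers` and `peft`, mirroring the `load_in_4bit` setting from the config above. The adapter path is a placeholder, since this card does not state where the adapter weights are published:

```python
# Loading sketch. Assumptions: the adapter location below is a placeholder,
# and 4-bit quantization mirrors the axolotl config's load_in_4bit: true.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge"
adapter_path = "<adapter-repo-or-local-path>"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_path)
```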
# outputs/newdataset-out
This model is a fine-tuned version of [Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge](https://huggingface.co/Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge) on the [Fischerboot/newnewdataset-sophie](https://huggingface.co/datasets/Fischerboot/newnewdataset-sophie) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2792
## Model description
This is a QLoRA adapter (PEFT) for the base model, trained with rank 1024, alpha 512, and dropout 0.05 over all linear projection modules (q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj). More information needed.
## Intended uses & limitations
More information needed. Training used the llama3 chat template, so prompts at inference time should be rendered the same way; a hedged generation sketch follows.
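
This sketch reuses `model` and `tokenizer` from the loading example above; `apply_chat_template` renders the conversation with the tokenizer's built-in template:

```python
# Chat-formatted generation sketch. Assumes `model` and `tokenizer` from the
# loading example above and that the tokenizer ships a llama3 chat template.
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```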
## Training and evaluation data
The adapter was trained on [Fischerboot/newnewdataset-sophie](https://huggingface.co/datasets/Fischerboot/newnewdataset-sophie), a ShareGPT-format conversation dataset, with 10% of the data held out as the evaluation set (`val_set_size: 0.1`).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 8
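
For readers who want to mirror this setup outside axolotl, here is a minimal sketch of an equivalent optimizer and scheduler, reusing `model` from the loading example above. The steps-per-epoch value is an estimate inferred from the eval cadence in the results table below:

```python
# Optimizer/scheduler sketch. Assumptions: this mirrors (not reproduces)
# axolotl's internals; steps_per_epoch is estimated from the results table.
import bitsandbytes as bnb
from transformers import get_cosine_schedule_with_warmup

steps_per_epoch = 296            # estimate; in practice use len(train_dataloader)
optimizer = bnb.optim.AdamW8bit(
    model.parameters(),
    lr=2e-4,                     # learning_rate
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.0,
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10,         # warmup_steps
    num_training_steps=steps_per_epoch * 8,  # num_epochs
)
```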
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.3499 | 0.0034 | 1 | 6.0611 |
| 1.4549 | 0.2526 | 74 | 1.8669 |
| 0.4942 | 0.5051 | 148 | 0.5161 |
| 0.5932 | 0.7577 | 222 | 1.2850 |
| 0.8581 | 1.0102 | 296 | 0.7266 |
| 1.1222 | 1.2628 | 370 | 0.3729 |
| 0.4354 | 1.5154 | 444 | 0.4699 |
| 0.6122 | 1.7679 | 518 | 0.6806 |
| 0.7419 | 2.0205 | 592 | 0.8912 |
| 2.7271 | 2.2730 | 666 | 1.2924 |
| 0.93 | 2.5256 | 740 | 0.8516 |
| 0.7029 | 2.7782 | 814 | 0.5884 |
| 0.5606 | 3.0307 | 888 | 0.5291 |
| 0.4365 | 3.2833 | 962 | 0.8004 |
| 0.2466 | 3.5358 | 1036 | 0.3922 |
| 0.6039 | 3.7884 | 1110 | 0.3917 |
| 0.1796 | 4.0410 | 1184 | 0.3216 |
| 0.3061 | 4.2935 | 1258 | 0.4309 |
| 0.7083 | 4.5461 | 1332 | 0.4010 |
| 0.3891 | 4.7986 | 1406 | 0.3268 |
| 0.331 | 5.0512 | 1480 | 0.3360 |
| 0.3014 | 5.3038 | 1554 | 0.2963 |
| 0.125 | 5.5563 | 1628 | 0.3096 |
| 0.3207 | 5.8089 | 1702 | 0.3020 |
| 0.2809 | 6.0614 | 1776 | 0.2849 |
| 1.5804 | 6.3140 | 1850 | 0.2801 |
| 0.4681 | 6.5666 | 1924 | 0.2826 |
| 0.2527 | 6.8191 | 1998 | 0.2793 |
| 0.2207 | 7.0717 | 2072 | 0.2787 |
| 0.2498 | 7.3242 | 2146 | 0.2799 |
| 0.1927 | 7.5768 | 2220 | 0.2798 |
| 0.415 | 7.8294 | 2294 | 0.2792 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- PyTorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1