---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: gardner/TinyLlama-1.1B-Instruct-3T
model-index:
- name: TinyLlama-1.1B-SlimOrca-LoRA
  results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`

```yaml
base_model: gardner/TinyLlama-1.1B-Instruct-3T
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true

load_in_8bit: true
load_in_4bit: false
strict: false

datasets:
  - path: Open-Orca/SlimOrca-Dedup
    type: sharegpt
    split: train
dataset_prepared_path: ./dsprepare/Open-Orca/SlimOrca-Dedup
val_set_size: 0.05
output_dir: ./tinyllama-1.1b-slimorca-lora
hub_model_id: gardner/TinyLlama-1.1B-SlimOrca-LoRA

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: tinyllama
wandb_entity: gardner
wandb_name: tinyllama-slimorca

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```

</details><br>
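With the config above saved locally (e.g. as `config.yml`, a hypothetical filename), training should be reproducible with Axolotl's standard entry point, `accelerate launch -m axolotl.cli.train config.yml`. Note that the effective training batch size of 8 reported below follows from `micro_batch_size: 2` × `gradient_accumulation_steps: 4` on a single device.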

# TinyLlama-1.1B-SlimOrca-LoRA

This model is a LoRA adapter fine-tuned from [gardner/TinyLlama-1.1B-Instruct-3T](https://huggingface.co/gardner/TinyLlama-1.1B-Instruct-3T) on the [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5636

## Model description

A LoRA adapter (r=32, alpha=16, dropout 0.05, applied to all linear layers) trained on top of the TinyLlama-1.1B instruct base model with the Axolotl config shown above.

## Intended uses & limitations

More information needed

## Training and evaluation data

The adapter was trained on [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup) in ShareGPT format, with 5% of the training split held out as the evaluation set (`val_set_size: 0.05`).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2902        | 0.0   | 1     | 0.9116          |
| 1.0653        | 0.25  | 1126  | 0.6458          |
| 1.0279        | 0.5   | 2252  | 0.6187          |
| 0.8918        | 0.75  | 3378  | 0.6042          |
| 0.9362        | 1.0   | 4504  | 0.5924          |
| 0.8138        | 1.23  | 5630  | 0.5863          |
| 0.9669        | 1.48  | 6756  | 0.5814          |
| 1.019         | 1.73  | 7882  | 0.5742          |
| 0.9232        | 1.98  | 9008  | 0.5695          |
| 0.8507        | 2.22  | 10134 | 0.5700          |
| 0.7542        | 2.47  | 11260 | 0.5662          |
| 0.8325        | 2.72  | 12386 | 0.5639          |
| 0.7913        | 2.97  | 13512 | 0.5617          |
| 0.8372        | 3.2   | 14638 | 0.5648          |
| 0.8984        | 3.45  | 15764 | 0.5638          |
| 0.7898        | 3.7   | 16890 | 0.5636          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
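## How to use

The snippet below is a minimal, unofficial sketch of loading this adapter on top of the base model with PEFT and generating a reply. It assumes a CUDA-capable machine, that `accelerate` is installed (for `device_map="auto"`), and that the base tokenizer ships a chat template; if it does not, format the prompt by hand in the template the base model expects.

```python
# Unofficial sketch: load the base model, attach the LoRA adapter, and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "gardner/TinyLlama-1.1B-Instruct-3T"
adapter_id = "gardner/TinyLlama-1.1B-SlimOrca-LoRA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
model.eval()

# Assumes the tokenizer defines a chat template; otherwise build the prompt manually.
messages = [{"role": "user", "content": "Summarize what a LoRA adapter is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For merged-weight inference, `model = model.merge_and_unload()` can be called after loading to fold the adapter into the base weights.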