Built with Axolotl

See axolotl config

axolotl version: 0.4.1

```yaml
base_model: Dans-DiscountModels/Meta-Llama-3.1-8B-ChatML
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

trust_remote_code:

# wandb configuration
wandb_project: l3.1-8b-dans-instruct
wandb_watch:
wandb_run_id:
wandb_log_model: 

# where to save the finished model to
output_dir: ./l3.1-8b-dans-instruct

# dataset settings (local or huggingface repo)
datasets:
  - path: PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
    type: dan-chat
  - path: AquaV/Energetic-Materials-Sharegpt
    type: dan-chat
  - path: AquaV/Chemical-Biological-Safety-Applications-Sharegpt
    type: dan-chat
  - path: AquaV/US-Army-Survival-Sharegpt
    type: dan-chat
  - path: AquaV/Resistance-Sharegpt
    type: dan-chat
  - path: AquaV/Interrogation-Sharegpt
    type: dan-chat
  - path: AquaV/Multi-Environment-Operations-Sharegpt
    type: dan-chat
  - path: PocketDoc/Dans-Mathmaxx
    type: dan-chat
  - path: PocketDoc/Dans-Benchmaxx
    type: dan-chat
  - path: PocketDoc/Dans-Codemaxx
    type: dan-chat
  - path: PocketDoc/Dans-Taskmaxx
    type: dan-chat
  - path: PocketDoc/Dans-Toolmaxx
    type: dan-chat
  - path: PocketDoc/Dans-ASCIIMaxx-Wordart
    type: dan-chat
  - path: PocketDoc/Dans-Prosemaxx-Gutenberg
    type: dan-chat
  - path: PocketDoc/Dans-Prosemaxx-Cowriter
    type: dan-chat
  - path: PocketDoc/Dans-Prosemaxx-Cowriter-S
    type: dan-chat
  - path: PocketDoc/Dans-Prosemaxx-Adventure
    type: dan-chat
  - path: PocketDoc/DansTestYard
    type: completion

chat_template: chatml

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

load_in_8bit: false
load_in_4bit: false
strict: false

dataset_prepared_path: ./l3.1-8b-dans-instruct-data
val_set_size: 0.01

sequence_len: 8192

sample_packing: true
eval_sample_packing: true

pad_to_sequence_len: true

gradient_checkpointing: true

gradient_accumulation_steps: 32
micro_batch_size: 1

num_epochs: 3

optimizer: adamw_torch

lr_scheduler: cosine
learning_rate: 0.0000015
cosine_min_lr_ratio: 

adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 0.00000001
weight_decay: 0.05

train_on_inputs: false
group_by_length: true

bf16: true
fp16: false
tf32: false

early_stopping_patience:

resume_from_checkpoint: 
auto_resume_from_checkpoints: 

local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.1
evals_per_epoch: 10
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2

debug: false

deepspeed:
fsdp:
fsdp_config:

special_tokens:
  pad_token: <|finetune_right_pad_id|>
  eos_token: <|im_end|>
```
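
For quick testing, here is a minimal inference sketch with transformers. The ChatML template and the <|im_end|> EOS token come from the config above; the prompt, generation settings, and hardware assumptions (bf16, a CUDA device) are illustrative and not part of the original card:

```python
# Minimal sketch, assuming transformers and a GPU with bf16 support are available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dans-DiscountModels/Dans-Instruct-Mix-8b-ChatML-V0.1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The tokenizer carries the ChatML template configured above
# (chat_template: chatml, eos_token: <|im_end|>).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the rules of tic-tac-toe."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```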

l3.1-8b-dans-instruct

This model is a fine-tuned version of Dans-DiscountModels/Meta-Llama-3.1-8B-ChatML, trained on the mix of datasets listed in the Axolotl config above. It achieves the following results on the evaluation set:

  • Loss: 1.5811
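
For intuition, mean per-token cross-entropy converts to perplexity as exp(loss); a quick check (the conversion is standard, and the interpretation assumes the reported loss is mean per-token cross-entropy):

```python
import math
print(math.exp(1.5811))  # ~4.86 perplexity on the held-out split
```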

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

The training mix is the set of instruction, task, and prose datasets listed in the Axolotl config above; 1% of the combined data (val_set_size: 0.01) was held out as the evaluation set.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1.5e-06
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 32
  • total_train_batch_size: 32
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.95), epsilon=1e-08, and weight_decay=0.05
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 97
  • num_epochs: 3
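
These values hang together: the effective batch is micro_batch_size × gradient_accumulation_steps = 1 × 32 = 32 sequences per optimizer step, and warmup_ratio: 0.1 over the run's total optimizer steps yields the 97 warmup steps above. A hedged sketch of the equivalent PyTorch setup; `total_steps` is estimated from the results table below and the placeholder model is a stand-in, not a value from the card:

```python
# Sketch only: reconstructs the optimizer/scheduler implied by the
# hyperparameters above. Load the real checkpoint as shown earlier;
# a tiny module stands in for it here so the snippet runs on its own.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # placeholder module for illustration

micro_batch_size = 1
gradient_accumulation_steps = 32
effective_batch_size = micro_batch_size * gradient_accumulation_steps  # 32

total_steps = 978                      # assumed from the table (step 957 at ~epoch 2.92)
warmup_steps = int(0.1 * total_steps)  # 97, matching the card

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1.5e-6,
    betas=(0.9, 0.95),
    eps=1e-8,
    weight_decay=0.05,
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
```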

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7656        | 0.0031 | 1    | 1.7463          |
| 1.7434        | 0.1009 | 33   | 1.7351          |
| 1.6519        | 0.2018 | 66   | 1.6830          |
| 1.679         | 0.3027 | 99   | 1.6342          |
| 1.6336        | 0.4036 | 132  | 1.6168          |
| 1.5928        | 0.5044 | 165  | 1.6063          |
| 1.6581        | 0.6053 | 198  | 1.5996          |
| 1.646         | 0.7062 | 231  | 1.5957          |
| 1.6064        | 0.8071 | 264  | 1.5924          |
| 1.5328        | 0.9080 | 297  | 1.5899          |
| 1.6039        | 1.0069 | 330  | 1.5881          |
| 1.6226        | 1.1080 | 363  | 1.5867          |
| 1.4879        | 1.2090 | 396  | 1.5855          |
| 1.6646        | 1.3101 | 429  | 1.5844          |
| 1.5874        | 1.4112 | 462  | 1.5836          |
| 1.4901        | 1.5123 | 495  | 1.5830          |
| 1.6148        | 1.6133 | 528  | 1.5825          |
| 1.3064        | 1.7144 | 561  | 1.5822          |
| 1.4952        | 1.8155 | 594  | 1.5817          |
| 1.6338        | 1.9165 | 627  | 1.5816          |
| 1.7102        | 2.0156 | 660  | 1.5815          |
| 1.6408        | 2.1165 | 693  | 1.5813          |
| 1.3856        | 2.2175 | 726  | 1.5813          |
| 1.537         | 2.3184 | 759  | 1.5813          |
| 1.6205        | 2.4194 | 792  | 1.5812          |
| 1.7095        | 2.5203 | 825  | 1.5811          |
| 1.4987        | 2.6213 | 858  | 1.5811          |
| 1.6141        | 2.7222 | 891  | 1.5811          |
| 1.5662        | 2.8232 | 924  | 1.5811          |
| 1.5975        | 2.9241 | 957  | 1.5811          |

Framework versions

  • Transformers 4.45.0.dev0
  • Pytorch 2.4.0+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1