---
license: mit
base_model: wenbopan/Faro-Yi-9B
tags:
  - generated_from_trainer
model-index:
  - name: results/Faro-Yi-9B-DPO
    results: []
---

Built with Axolotl

See axolotl config

axolotl version: `0.4.0`

```yaml
base_model: wenbopan/Faro-Yi-9B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

is_llama_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

rl: dpo
datasets:
  - path: theIndividual/UltraInteractPair_axolotl
    split: train
    type: chatml

val_set_size: 0.1
output_dir: results/Faro-Yi-9B-DPO

sequence_len: 4096
sample_packing: false
pad_to_sequence_len: false

adapter: lora
lora_model_dir:

lora_r: 16
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_modules_to_save:
lora_fan_in_fan_out:
lora_target_modules:
  - k_proj
  - gate_proj
  - v_proj
  - up_proj
  - q_proj
  - o_proj
  - down_proj

wandb_project: faro-yi-dpo
wandb_entity:
wandb_name:

gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 1
optimizer: paged_adamw_8bit
adam_beta2: 0.95
adam_epsilon: 0.00001
lr_scheduler: linear
learning_rate: 1e-6

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
eval_steps:
eval_table_size:
eval_table_max_new_tokens: 128
save_steps: 45
debug:
deepspeed:
weight_decay: 0.1
special_tokens:
save_safetensors: true

dataloader_num_workers: 16
dataloader_pin_memory: true
```

# results/Faro-Yi-9B-DPO

This model is a DPO fine-tuned version of [wenbopan/Faro-Yi-9B](https://huggingface.co/wenbopan/Faro-Yi-9B) on the theIndividual/UltraInteractPair_axolotl dataset (see the Axolotl config above).

## Model description

More information needed
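What the config above does establish: training used a LoRA adapter (r=16, alpha=16, dropout 0.05) over the attention and MLP projection layers rather than a full fine-tune. For reference, a rough PEFT `LoraConfig` equivalent of those adapter settings is sketched below; `task_type` is an assumption, and whether the published weights are a merged model or a standalone adapter is not stated here.

```python
from peft import LoraConfig

# Approximate PEFT equivalent of the adapter section in the Axolotl config above.
# r / alpha / dropout / target modules are copied from that config;
# task_type is an assumption for a causal language model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```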

## Intended uses & limitations

More information needed
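No usage guidance is provided by the author. As a starting point, here is a minimal inference sketch, assuming the uploaded weights load directly with `AutoModelForCausalLM`/`AutoTokenizer` (as in the config above) and that the tokenizer ships a ChatML-style chat template, consistent with the `chatml` dataset type but not confirmed here. The repo id is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "theIndividual/Faro-Yi-9B-DPO"  # placeholder; adjust to the actual repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Explain direct preference optimization in two sentences."}
]
# apply_chat_template relies on the tokenizer providing a chat template (assumed ChatML here)
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```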

## Training and evaluation data

More information needed
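Per the config above, training data comes from `theIndividual/UltraInteractPair_axolotl` (split `train`, type `chatml`), with 10% (`val_set_size: 0.1`) held out for evaluation. A quick way to inspect the preference pairs without assuming specific column names:

```python
from datasets import load_dataset

# Dataset path and split are taken from the Axolotl config above.
ds = load_dataset("theIndividual/UltraInteractPair_axolotl", split="train")
print(ds.column_names)  # see which fields hold the prompt and chosen/rejected responses
print(ds[0])            # look at one preference example
```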

## Training procedure
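
The config sets `rl: dpo`, i.e. the model is trained with Direct Preference Optimization against the frozen base model as the reference policy. For reference, the standard DPO objective over prompts x with chosen/rejected completions (y_w, y_l) is

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
-\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
\left[\log \sigma\!\left(
\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
- \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
\right)\right].
$$

The config does not set a DPO beta, so Axolotl's default value applies (not recorded here).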

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 109761
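
For readers reproducing a similar run outside Axolotl, the hyperparameters above map roughly onto Hugging Face `TrainingArguments` as sketched below. This is an approximation only: Axolotl additionally wires up the DPO loss, the reference model, and the LoRA adapter, none of which appear in this snippet.

```python
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters listed above and in the Axolotl config.
args = TrainingArguments(
    output_dir="results/Faro-Yi-9B-DPO",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=1,
    num_train_epochs=1,
    learning_rate=1e-6,
    lr_scheduler_type="linear",
    warmup_steps=10,
    optim="paged_adamw_8bit",   # requires bitsandbytes
    adam_beta2=0.95,
    weight_decay=0.1,
    bf16=True,
    gradient_checkpointing=True,
    logging_steps=1,
    save_steps=45,
    save_safetensors=True,
    seed=42,
)
```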

### Training results

### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.17.1
- Tokenizers 0.15.0