---
license: other
base_model: stabilityai/stablelm-2-1_6b
tags:
  - generated_from_trainer
model-index:
  - name: stablelm_1-6b_ContextSplitter
    results: []
---

Built with Axolotl

See axolotl config

axolotl version: 0.4.0

```yaml
base_model: stabilityai/stablelm-2-1_6b
base_model_config: stabilityai/stablelm-2-1_6b
model_type: StableLMEpochForCausalLM
tokenizer_type: AutoTokenizer

trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: /run/media/username/Storage/datasets/repo/alpaca/context-aware-splits-english_new.json
    type: alpaca

dataset_prepared_path: stablelm_1-6b_ContextSplitter_data
val_set_size: 0.02
output_dir: ./stablelm_1-6b_ContextSplitter

sequence_len: 4096 
sample_packing: true
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: stablelm_1-6b_ContextSplitter
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.00001

train_on_inputs: false
group_by_length: false
bf16: true 
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
flash_attn_cross_entropy: false
flash_attn_rms_norm: true
flash_attn_fuse_qkv: false
flash_attn_fuse_mlp: true

warmup_steps: 100
evals_per_epoch: 30
eval_table_size:
saves_per_epoch: 4
debug:
deepspeed: #deepspeed_configs/zero2.json # multi-gpu only
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
```
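For orientation, below is a minimal inference sketch consistent with the settings above: it loads the checkpoint from the config's `output_dir`, uses `bfloat16` and `trust_remote_code=True` as in training, and formats the prompt with the standard Alpaca template implied by `type: alpaca`. The checkpoint path and the instruction wording are assumptions for illustration; the exact prompt used in the training data is not documented in this card.

```python
# Minimal inference sketch (assumed: checkpoint path, instruction wording).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./stablelm_1-6b_ContextSplitter"  # output_dir from the training config

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # training ran in bf16
    trust_remote_code=True,      # StableLM 2 ships custom modeling code
).eval()

long_text = "First topic ... Second topic ... Third topic ..."  # placeholder document

# Standard Alpaca prompt template (dataset `type: alpaca`); the instruction
# wording here is illustrative only and not taken from the training data.
prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSplit the following text into self-contained sections.\n\n"
    f"### Input:\n{long_text}\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Greedy decoding is used only to keep the example deterministic; no sampling settings are specified in this card.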

# stablelm_1-6b_ContextSplitter

This model is a fine-tuned version of stabilityai/stablelm-2-1_6b on a local Alpaca-format dataset of context-aware text splits (`context-aware-splits-english_new.json`; see the Axolotl config above). It achieves the following results on the evaluation set:

- Loss: 0.0377

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a minimal PyTorch sketch of the optimizer and scheduler setup follows the list):

- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
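As a rough, non-authoritative reference, the listed settings correspond to something like the following PyTorch/Transformers setup. The actual run used Axolotl with `paged_adamw_32bit` (see the config above), and the total step count below is only an estimate read off the results table.

```python
# Sketch of the listed optimizer/scheduler settings (not the exact training loop).
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in module; the real run fine-tuned stablelm-2-1_6b

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-5,             # learning_rate
    betas=(0.9, 0.999),  # Adam betas
    eps=1e-8,            # Adam epsilon
    weight_decay=0.1,    # weight_decay from the config
)

# Roughly 7,400 optimizer steps in one epoch, judging from the results table
# (step 7192 at epoch 0.97); treat this number as an estimate.
num_training_steps = 7_400

scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,  # warmup_steps
    num_training_steps=num_training_steps,
)

for step in range(num_training_steps):
    # ... forward pass, loss.backward() ...
    optimizer.step()       # micro batch size 1, no gradient accumulation
    scheduler.step()
    optimizer.zero_grad()
```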

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1781        | 0.0   | 1    | 0.2283          |
| 0.0709        | 0.03  | 248  | 0.0589          |
| 0.0274        | 0.07  | 496  | 0.0512          |
| 0.0614        | 0.1   | 744  | 0.0480          |
| 0.0266        | 0.13  | 992  | 0.0466          |
| 0.0471        | 0.17  | 1240 | 0.0440          |
| 0.0425        | 0.2   | 1488 | 0.0435          |
| 0.1172        | 0.23  | 1736 | 0.0423          |
| 0.0322        | 0.27  | 1984 | 0.0415          |
| 0.0529        | 0.3   | 2232 | 0.0413          |
| 0.0296        | 0.33  | 2480 | 0.0409          |
| 0.0357        | 0.37  | 2728 | 0.0398          |
| 0.0242        | 0.4   | 2976 | 0.0394          |
| 0.0266        | 0.43  | 3224 | 0.0391          |
| 0.0292        | 0.47  | 3472 | 0.0386          |
| 0.0261        | 0.5   | 3720 | 0.0386          |
| 0.0382        | 0.53  | 3968 | 0.0383          |
| 0.0378        | 0.57  | 4216 | 0.0383          |
| 0.0345        | 0.6   | 4464 | 0.0379          |
| 0.0467        | 0.64  | 4712 | 0.0379          |
| 0.0542        | 0.67  | 4960 | 0.0378          |
| 0.0317        | 0.7   | 5208 | 0.0378          |
| 0.0363        | 0.74  | 5456 | 0.0377          |
| 0.054         | 0.77  | 5704 | 0.0377          |
| 0.0207        | 0.8   | 5952 | 0.0377          |
| 0.0302        | 0.84  | 6200 | 0.0377          |
| 0.0427        | 0.87  | 6448 | 0.0377          |
| 0.0278        | 0.9   | 6696 | 0.0377          |
| 0.0648        | 0.94  | 6944 | 0.0377          |
| 0.0497        | 0.97  | 7192 | 0.0377          |

### Framework versions

- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0