---
base_model: alignment-handbook/zephyr-7b-sft-full
library_name: peft
license: apache-2.0
tags:
- trl
- dpo
- alignment-handbook
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-lora-r16-20k
  results: []
---
# zephyr-7b-dpo-lora-r16-20k
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset. It achieves the following results on the evaluation set:
- Logits/chosen: -2.5568
- Logits/rejected: -2.5135
- Logps/chosen: -362.1219
- Logps/rejected: -395.5133
- Loss: 0.5370
- Rewards/accuracies: 0.7063
- Rewards/chosen: -0.7888
- Rewards/margins: 0.6860
- Rewards/rejected: -1.4748
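
Since this is a PEFT adapter rather than full model weights, it is loaded on top of the base checkpoint. Below is a minimal loading sketch; the adapter repo id is a placeholder and not a confirmed id for this upload.

```python
# Minimal loading sketch (pip install torch transformers peft).
# The adapter id is a placeholder; substitute the actual repo id of this upload.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "alignment-handbook/zephyr-7b-sft-full"
adapter_id = "your-username/zephyr-7b-dpo-lora-r16-20k"  # placeholder, not a confirmed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the DPO-trained LoRA weights

messages = [{"role": "user", "content": "Explain LoRA in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```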
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
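
For orientation, here is a minimal sketch of how these hyperparameters map onto trl's `DPOTrainer`. This is not the exact training script: the dataset id and LoRA target modules are assumptions, β is left at trl's default since the card does not record it, and only `r=16` is implied by the model name.

```python
# Sketch mapping the hyperparameters above onto trl's DPOTrainer (trl / peft / transformers).
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "alignment-handbook/zephyr-7b-sft-full"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

dataset = load_dataset("your-org/your-preference-dataset")  # placeholder: the card's dataset is unknown

peft_config = LoraConfig(  # r=16 inferred from the model name; target modules are a common choice, not confirmed
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

args = DPOConfig(
    output_dir="zephyr-7b-dpo-lora-r16-20k",
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 4 per device x 4 accumulation steps = effective batch size 16
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT adapter, the frozen base model serves as the DPO reference
    args=args,
    train_dataset=dataset["train"],
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```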
### Training results
| Training Loss | Epoch | Step | Logits/chosen | Logits/rejected | Logps/chosen | Logps/rejected | Validation Loss | Rewards/accuracies | Rewards/chosen | Rewards/margins | Rewards/rejected |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.6895 | 0.08 | 100 | -2.8901 | -2.8481 | -282.2447 | -247.7537 | 0.6896 | 0.6627 | 0.0099 | 0.0072 | 0.0028 |
| 0.653 | 0.16 | 200 | -2.8742 | -2.8339 | -284.5635 | -257.5692 | 0.6569 | 0.6865 | -0.0133 | 0.0821 | -0.0954 |
| 0.6385 | 0.24 | 300 | -2.8399 | -2.8031 | -310.6566 | -295.5536 | 0.6190 | 0.6905 | -0.2742 | 0.2011 | -0.4752 |
| 0.5689 | 0.32 | 400 | -2.8437 | -2.8083 | -312.9573 | -305.2159 | 0.6027 | 0.6944 | -0.2972 | 0.2747 | -0.5719 |
| 0.5689 | 0.4 | 500 | -2.7560 | -2.7152 | -349.3812 | -355.0662 | 0.5750 | 0.7242 | -0.6614 | 0.4089 | -1.0704 |
| 0.5884 | 0.48 | 600 | -2.6724 | -2.6322 | -352.8877 | -375.1053 | 0.5479 | 0.7123 | -0.6965 | 0.5743 | -1.2708 |
| 0.5366 | 0.56 | 700 | -2.6541 | -2.6144 | -355.7809 | -381.5439 | 0.5462 | 0.7123 | -0.7254 | 0.6097 | -1.3351 |
| 0.542 | 0.64 | 800 | -2.6163 | -2.5757 | -352.4363 | -374.8915 | 0.5451 | 0.7262 | -0.6920 | 0.5766 | -1.2686 |
| 0.5282 | 0.72 | 900 | -2.5716 | -2.5266 | -362.9279 | -390.7825 | 0.5412 | 0.7083 | -0.7969 | 0.6306 | -1.4275 |
| 0.5873 | 0.8 | 1000 | -2.5693 | -2.5254 | -365.5720 | -399.3072 | 0.5369 | 0.7083 | -0.8233 | 0.6894 | -1.5128 |
| 0.5152 | 0.88 | 1100 | -2.5620 | -2.5188 | -357.7025 | -389.9855 | 0.5384 | 0.7143 | -0.7446 | 0.6749 | -1.4196 |
| 0.5213 | 0.96 | 1200 | -2.5568 | -2.5135 | -362.1219 | -395.5133 | 0.5370 | 0.7063 | -0.7888 | 0.6860 | -1.4748 |
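
For readers interpreting the reward columns: under DPO, the implicit reward is the β-scaled log-probability ratio between the policy and the reference model. The definitions below are the standard ones (Rafailov et al., 2023); the β value used here is not recorded in this card.

```latex
% Implicit DPO reward for a completion y given prompt x:
\[
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
\]
% DPO loss over a (chosen, rejected) preference pair:
\[
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\bigl(r_\theta(x, y_{\mathrm{chosen}}) - r_\theta(x, y_{\mathrm{rejected}})\bigr)
\]
% The margin column is simply the difference of the two reward columns,
% e.g. the final row: -0.7888 - (-1.4748) = 0.6860.
\[
\text{Rewards/margins} = \text{Rewards/chosen} - \text{Rewards/rejected}
\]
```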
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.1.2+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1