# zephyr-7b-dpo-full-ultrabin-high-margin-3-epochs

This model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set (a sketch of how these DPO reward metrics are computed follows the list):
- Loss: 0.5894
- Rewards/chosen: -2.9914
- Rewards/rejected: -4.9379
- Rewards/accuracies: 0.7578
- Rewards/margins: 1.9464
- Logps/rejected: -756.4492
- Logps/chosen: -561.7738
- Logits/rejected: 3.7803
- Logits/chosen: 2.4993
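The Rewards/* metrics above follow the standard DPO convention: each implicit reward is β times the log-probability ratio between the policy and the reference (SFT) model on a completion, and the margin is chosen minus rejected. Below is a minimal sketch of how these quantities are computed; the β value of 0.1 is an assumption (the common DPO default), as the card does not list it, and the example inputs are dummy values.

```python
import torch
import torch.nn.functional as F

def dpo_metrics(policy_chosen_logps, policy_rejected_logps,
                ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Compute the DPO loss and the reward metrics reported above.

    All inputs are tensors of per-sequence log-probabilities (summed over
    completion tokens). beta=0.1 is an assumption; the card omits the value.
    """
    # Implicit rewards: beta * log-ratio of policy to reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = chosen_rewards - rejected_rewards

    # DPO loss: -log sigmoid(margin), averaged over the batch.
    loss = -F.logsigmoid(margins).mean()

    # Rewards/accuracies: fraction of pairs where chosen outscores rejected.
    accuracy = (chosen_rewards > rejected_rewards).float().mean()
    return loss, chosen_rewards.mean(), rejected_rewards.mean(), margins.mean(), accuracy

# Example with dummy per-sequence log-probabilities for a batch of 4 pairs:
loss, r_chosen, r_rejected, margin, acc = dpo_metrics(
    torch.tensor([-120.0, -95.0, -110.0, -100.0]),   # policy, chosen
    torch.tensor([-140.0, -130.0, -125.0, -118.0]),  # policy, rejected
    torch.tensor([-118.0, -96.0, -112.0, -101.0]),   # reference, chosen
    torch.tensor([-135.0, -120.0, -119.0, -115.0]),  # reference, rejected
)
```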
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
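A hedged sketch of how these settings might be expressed with trl's DPOConfig (which subclasses transformers.TrainingArguments and, in recent trl versions, carries the DPO beta). The output_dir, beta, and bf16 values are assumptions not listed on this card; everything else is taken from the list above. Note that the per-device batch size of 8 across 8 GPUs with 2 accumulation steps reproduces the total train batch size of 128, and the Adam betas/epsilon listed above match the TrainingArguments defaults.

```python
from trl import DPOConfig

# Sketch only: maps the hyperparameters listed above onto trl's DPOConfig.
training_args = DPOConfig(
    output_dir="zephyr-7b-dpo-full-ultrabin-high-margin-3-epochs",  # assumed
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 8 GPUs x 8 per device x 2 steps = 128 total
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=55,
    beta=0.1,   # assumed; the card does not list the DPO beta
    bf16=True,  # assumed for multi-GPU Mistral-7B training
)
```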
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.5611 | 0.3484 | 50 | 0.6184 | -0.0925 | -0.3566 | 0.6875 | 0.2640 | -298.3184 | -271.8824 | -2.5030 | -2.5443 |
| 0.3114 | 0.6969 | 100 | 0.5684 | -1.2749 | -2.2355 | 0.7188 | 0.9606 | -486.2125 | -390.1244 | 1.4736 | 0.8929 |
| 0.2115 | 1.0453 | 150 | 0.5424 | -1.1893 | -2.3764 | 0.7344 | 1.1871 | -500.3030 | -381.5569 | 1.7464 | 0.8516 |
| 0.1459 | 1.3937 | 200 | 0.5506 | -1.5868 | -2.9488 | 0.7383 | 1.3620 | -557.5460 | -421.3102 | 2.1181 | 1.2033 |
| 0.155 | 1.7422 | 250 | 0.5421 | -1.7379 | -3.1364 | 0.7422 | 1.3985 | -576.3018 | -436.4162 | 0.6639 | -0.1257 |
| 0.0778 | 2.0906 | 300 | 0.5661 | -2.2459 | -3.9084 | 0.7578 | 1.6626 | -653.5056 | -487.2183 | 2.4478 | 1.3197 |
| 0.063 | 2.4390 | 350 | 0.5745 | -2.4511 | -4.2302 | 0.7461 | 1.7791 | -685.6794 | -507.7419 | 3.2009 | 2.0299 |
| 0.0546 | 2.7875 | 400 | 0.5884 | -2.9614 | -4.9020 | 0.7578 | 1.9406 | -752.8591 | -558.7693 | 3.7820 | 2.5008 |
### Framework versions
- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
## Model tree for sfulay/zephyr-7b-dpo-full-ultrabin-high-margin-3-epochs

- Base model: mistralai/Mistral-7B-v0.1
- Finetuned from: alignment-handbook/zephyr-7b-sft-full
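A minimal inference sketch, assuming the checkpoint is hosted under the repo id above and uses the standard Zephyr chat template inherited from alignment-handbook/zephyr-7b-sft-full; the prompt content is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sfulay/zephyr-7b-dpo-full-ultrabin-high-margin-3-epochs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a prompt with the chat template inherited from the SFT model.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO in one sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```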