---
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
base_model: DUAL-GPO/zephyr-7b-dpo-new-lora-v1-merged
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-7b-dpo-0k-15k-i1
  results: []
license: apache-2.0
---

# zephyr-7b-dpo-0k-15k-i1
This model is a fine-tuned version of [DUAL-GPO/zephyr-7b-dpo-new-lora-v1-merged](https://huggingface.co/DUAL-GPO/zephyr-7b-dpo-new-lora-v1-merged) on the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset.