
# zephyr-7b-dpo-0k-15k-i1

This model is a fine-tuned version of DUAL-GPO/zephyr-7b-dpo-new-lora-v1-merged on the HuggingFaceH4/ultrafeedback_binarized dataset.
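Below is a minimal loading sketch, assuming the merged repository id `DUAL-GPO/zephyr-7b-dpo-0k-15k-i1-merged` (shown in the model tree below), the standard `transformers` API, and a Zephyr-style chat template; it is not an official usage snippet from this card.

```python
# Hedged example: load the merged model and run a single chat-style generation.
# The repository id and chat template are assumptions based on the model tree above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DUAL-GPO/zephyr-7b-dpo-0k-15k-i1-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # weights are stored in F32 (see model details below)
    device_map="auto",
)

messages = [{"role": "user", "content": "What is direct preference optimization?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```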

## Model details

- Model size: 7.24B params
- Tensor type: F32
- Weight format: Safetensors

## Model tree for DUAL-GPO/zephyr-7b-dpo-0k-15k-i1-merged

This model is one of the adapters of DUAL-GPO/zephyr-7b-dpo-new-lora-v1-merged, and one further adapter model has been built on top of it.

## Dataset used to train DUAL-GPO/zephyr-7b-dpo-0k-15k-i1-merged

- HuggingFaceH4/ultrafeedback_binarized