ORPO Collection

Contains information and experiments on fine-tuning LLMs using 🤗 `trl.ORPOTrainer`.
[Banner image generated with Stable Diffusion XL: "A capybara, a killer whale, and a robot named Ultra being friends"]
This is an ORPO fine-tune of mistralai/Mistral-7B-v0.1 with alvarobartt/dpo-mix-7k-simplified.
⚠️ Note that the code is still experimental, as the `ORPOTrainer` PR has not been merged into 🤗 `trl` yet; follow its progress in the ORPOTrainer PR.
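As a rough illustration of what such a run looks like, here is a minimal sketch using the experimental `ORPOTrainer`. The hyperparameters are purely illustrative (they are not the ones used for this model), the exact `ORPOConfig` argument names may change once the PR is merged, and it assumes the dataset already exposes `prompt`/`chosen`/`rejected` string columns with a `train` split.

```python
# Minimal sketch of an ORPO fine-tune with the (experimental) trl ORPOTrainer.
# Hyperparameters are illustrative; column names and split are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral-7B has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumes "prompt", "chosen" and "rejected" columns in the "train" split
dataset = load_dataset("alvarobartt/dpo-mix-7k-simplified", split="train")

args = ORPOConfig(
    output_dir="mistral-7b-v0.1-orpo",
    beta=0.1,  # lambda in the ORPO paper: weight of the odds-ratio loss term
    max_length=1024,
    max_prompt_length=512,
    per_device_train_batch_size=4,
    learning_rate=5e-6,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```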
ORPO: Monolithic Preference Optimization without Reference Model
Base model: mistralai/Mistral-7B-v0.1