---
datasets:
- HuggingFaceH4/no_robots
language:
- en
license: cc-by-nc-4.0
---
# Good Robot 2 🤖
The model "Good Robot" had one simple goal in mind: to be a good instruction-following model that doesn't talk like ChatGPT.
Built upon the Mistral 7b 0.2 base, this model aims to provide responses that are as human-like as possible, thanks to some DPO training using the (for now, private) minerva-ai/yes-robots-dpo
dataset.
HuggingFaceH4/no-robots was used as the base for generating a custom dataset to create DPO pairs.
It should follow instructions and be generally as smart as a typical Mistral model - just not as soulless and full of GPT slop.
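
For illustration, here is the rough shape of a single preference pair in such a setup. This is a hedged sketch: the field names follow the common TRL `DPOTrainer` convention, and both answers are invented, not actual rows from the private MinervaAI/yes-robots-dpo dataset.

```python
# Illustrative shape of one DPO preference pair. Field names follow the
# common TRL DPOTrainer convention; both answers are invented examples,
# not actual rows from MinervaAI/yes-robots-dpo.
dpo_pair = {
    "prompt": "Explain in one paragraph why the sky is blue.",
    # The human-like answer the model should learn to prefer.
    "chosen": (
        "Sunlight scatters off air molecules, and shorter blue wavelengths "
        "scatter the most, so the sky looks blue to us."
    ),
    # The GPT-flavored answer the model is trained away from.
    "rejected": (
        "Certainly! The sky appears blue due to a fascinating phenomenon "
        "known as Rayleigh scattering. In conclusion, it is important to "
        "note that this delightful effect shapes our daily experience."
    ),
}
```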
## Changes from the original Good Robot model
- Mistral 7B v0.2 base (32k native context, no SWA)
- ChatML prompt format
- Trained using the GaLore method (see the sketch after this list)
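
As a rough idea of what GaLore training can look like, here is a minimal sketch using the Hugging Face transformers integration (requires `pip install galore-torch`). The base repo id, split name, and hyperparameters are assumptions for illustration, not the exact recipe used for Good Robot 2.

```python
# Hedged sketch of GaLore fine-tuning via the transformers integration.
# Repo id, split name, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "mistral-community/Mistral-7B-v0.2"  # assumed base repo id
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships no pad token
model = AutoModelForCausalLM.from_pretrained(base)

def to_text(example):
    # Flatten a no_robots "messages" list into ChatML-formatted text.
    text = ""
    for m in example["messages"]:
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    return tokenizer(text, truncation=True, max_length=2048)

data = load_dataset("HuggingFaceH4/no_robots", split="train")
data = data.map(to_text, remove_columns=data.column_names)

args = TrainingArguments(
    output_dir="good-robot-2-sft",
    per_device_train_batch_size=1,
    max_steps=100,
    # GaLore projects gradients into a low-rank subspace, shrinking
    # optimizer-state memory while still updating all weights.
    optim="galore_adamw",
    optim_target_modules=["attn", "mlp"],  # regex match on module names
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```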
## Prompt Format

ChatML:

```
<|im_start|>system
System message<|im_end|>
<|im_start|>user
User message<|im_end|>
<|im_start|>assistant
```
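
The same layout can be produced programmatically with transformers' chat templating. A minimal sketch, assuming the released tokenizer ships a ChatML chat template; "MinervaAI/good-robot-2" is a placeholder repo id, not necessarily the real one.

```python
# Minimal sketch, assuming the released tokenizer ships a ChatML chat
# template. "MinervaAI/good-robot-2" is a placeholder repo id.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MinervaAI/good-robot-2")

messages = [
    {"role": "system", "content": "System message"},
    {"role": "user", "content": "User message"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant"
# turn so the model knows it should answer next.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # should match the template shown above
```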
## Credits
Model made in collaboration with Gryphe.
## Training Data
- HuggingFaceH4/no_robots
- MinervaAI/yes-robots-dpo
- private datasets with common GPTisms
## Limitations
While I did my best to minimize GPTisms, no model is perfect, and the generated content may still contain GPT's common phrases. I suspect that's because they are ingrained in the Mistral base model itself.
## License
cc-by-nc-4.0