# QuantFactory/Mahou-1.5-mistral-nemo-12B-GGUF
This is a quantized version of flammenai/Mahou-1.5-mistral-nemo-12B, created with llama.cpp.
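As a rough usage sketch (not part of the original card), the GGUF files in this repo can be run with llama-cpp-python. The quant filename below is an assumption; substitute whichever .gguf file you actually downloaded.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a quant from this
# repo is on disk (the exact filename below is a placeholder, not guaranteed).
from llama_cpp import Llama

llm = Llama(
    model_path="Mahou-1.5-mistral-nemo-12B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,            # context window; adjust to your hardware
    chat_format="chatml",  # the model is trained on ChatML (see below)
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Aiko, a playful mage."},
        {"role": "user", "content": "hey, how was magic class today?"},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```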
## Original Model Card
### Mahou-1.5-mistral-nemo-12B
Mahou is designed to provide short messages in a conversational context. It is capable of casual conversation and character roleplay.
#### Chat Format
This model has been trained to use the ChatML format.
```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>{{char}}
{{message}}<|im_end|>
<|im_start|>{{user}}
{{message}}<|im_end|>
```
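For illustration only, here is a small Python sketch that fills the template above into a raw prompt string; the character and user names are made-up placeholders.

```python
# Builds a ChatML prompt matching the template above.
# "Aiko" and "Traveler" are placeholder example names.
def build_prompt(system: str, char: str, history: list[tuple[str, str]]) -> str:
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for speaker, message in history:       # speaker is the char or user name
        parts.append(f"<|im_start|>{speaker}\n{message}<|im_end|>")
    parts.append(f"<|im_start|>{char}\n")  # cue the model to reply as the character
    return "\n".join(parts)

prompt = build_prompt(
    system="You are Aiko, a playful mage. Reply with short messages.",
    char="Aiko",
    history=[("Traveler", "hey, how was magic class today?")],
)
print(prompt)
```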
#### Roleplay Format
- Speech without quotes.
- Actions in *asterisks*

```
*leans against wall cooly* so like, i just casted a super strong spell at magician academy today, not gonna lie, felt badass.
```
#### SillyTavern Settings
- Use ChatML for the Context Template.
- Enable Instruct Mode.
- Use the Mahou ChatML Instruct preset.
- Use the Mahou Sampler preset.
#### Method
Finetuned with ORPO on 4x H100 GPUs for 3 epochs.
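The card does not include training code; below is a minimal, hedged sketch of what an ORPO run can look like with Hugging Face TRL. The dataset name, hyperparameters, and starting checkpoint are illustrative assumptions, not the authors' actual configuration, and the multi-GPU launch (accelerate/torchrun) is omitted.

```python
# Illustrative ORPO finetuning sketch using TRL; not the authors' actual script.
# Dataset path and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "flammenai/Flammades-Mistral-Nemo-12B"  # finetuning parent listed in the model tree below
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# ORPO expects preference data with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("your-org/your-preference-dataset", split="train")  # placeholder

config = ORPOConfig(
    output_dir="mahou-orpo",
    num_train_epochs=3,              # the card reports 3 epochs
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    beta=0.1,                        # weight of the odds-ratio preference term
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,      # older TRL versions use tokenizer= instead
)
trainer.train()
```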
#### Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric              | Value |
|---------------------|------:|
| Avg.                | 26.28 |
| IFEval (0-shot)     | 67.51 |
| BBH (3-shot)        | 36.26 |
| MATH Lvl 5 (4-shot) |  5.06 |
| GPQA (0-shot)       |  3.47 |
| MuSR (0-shot)       | 16.47 |
| MMLU-PRO (5-shot)   | 28.91 |
## Model tree for QuantFactory/Mahou-1.5-mistral-nemo-12B-GGUF
- Base model: winglian/m12b-20240721-test010
- Finetuned: flammenai/Flammades-Mistral-Nemo-12B