Triangle104 committed
Commit f934ef3 · 1 parent: 37c8ed5
Update README.md
README.md
CHANGED
@@ -108,6 +108,54 @@ model-index:
This model was converted to GGUF format from [`flammenai/Mahou-1.5-mistral-nemo-12B`](https://huggingface.co/flammenai/Mahou-1.5-mistral-nemo-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/flammenai/Mahou-1.5-mistral-nemo-12B) for more details on the model.

+ ---
+ Model details:
+
+ Mahou-1.5-mistral-nemo-12B
+
+ Mahou is designed to provide short messages in a conversational context. It is capable of casual conversation and character roleplay.
+
+ Chat Format
+
+ This model has been trained to use ChatML format.
+
+ <|im_start|>system
+ {{system}}<|im_end|>
+ <|im_start|>{{char}}
+ {{message}}<|im_end|>
+ <|im_start|>{{user}}
+ {{message}}<|im_end|>
+
+ Roleplay Format
+
+ Speech without quotes.
+ Actions in *asterisks*
+
+ *leans against wall cooly* so like, i just casted a super strong spell at magician academy today, not gonna lie, felt badass.
+
+ SillyTavern Settings
+
+ Use ChatML for the Context Template.
+ Enable Instruct Mode.
+ Use the Mahou ChatML Instruct preset.
+ Use the Mahou Sampler preset.
+
+ Method
+
+ ORPO finetuned on 4x H100 GPUs for 3 epochs.
+
+ Open LLM Leaderboard Evaluation Results
+
+ Detailed results can be found here
+
+ | Metric              | Value |
+ |---------------------|------:|
+ | Avg.                | 26.28 |
+ | IFEval (0-Shot)     | 67.51 |
+ | BBH (3-Shot)        | 36.26 |
+ | MATH Lvl 5 (4-Shot) |  5.06 |
+ | GPQA (0-shot)       |  3.47 |
+ | MuSR (0-shot)       | 16.47 |
+ | MMLU-PRO (5-shot)   | 28.91 |
+
+ ---
+
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

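The added card documents the ChatML template shown in the diff above. As a minimal sketch of how a prompt in that format can be assembled (the helper name and the example persona strings are illustrative assumptions, not part of the model card):

```python
# Minimal sketch: assemble a prompt following the ChatML template documented
# in the card above. build_chatml_prompt and the example strings are
# illustrative assumptions, not part of the model card.
def build_chatml_prompt(system: str, turns: list[tuple[str, str]], next_speaker: str) -> str:
    """Return a ChatML prompt: system block, prior turns, and an open block for the reply."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for speaker, message in turns:
        parts.append(f"<|im_start|>{speaker}\n{message}<|im_end|>")
    # Leave the final block open so the model writes the character's next message.
    parts.append(f"<|im_start|>{next_speaker}\n")
    return "\n".join(parts)


print(build_chatml_prompt(
    system="You are Mahou, a casual roleplay partner.",
    turns=[("user", "hey, how was magic class today?")],
    next_speaker="Mahou",
))
```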
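The llama.cpp instructions continue in the rest of the README. As an alternative sketch, the GGUF can also be loaded through the llama-cpp-python bindings, which ship a built-in `chatml` chat format; the file name and sampling values below are placeholders rather than values taken from this repository:

```python
# Hedged sketch using llama-cpp-python (an alternative to the brew-installed
# llama.cpp CLI the card points to). The GGUF file name and sampling settings
# are placeholders, not values taken from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="mahou-1.5-mistral-nemo-12b.Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,
    chat_format="chatml",  # matches the ChatML template documented above
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Mahou, a casual roleplay partner."},
        {"role": "user", "content": "hey, how was magic class today?"},
    ],
    max_tokens=128,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```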