---
base_model:
- flammenai/Mahou-1.5-mistral-nemo-12B
datasets:
- flammenai/MahouMix-v1
library_name: transformers
license: apache-2.0
tags:
- autoquant
- gguf
---
![image/png](https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png)

# Mahou-1.5-mistral-nemo-12B

Mahou is designed to provide short messages in a conversational context. It is capable of casual conversation and character roleplay.

### Chat Format

This model has been trained to use the ChatML format.

```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>{{char}}
{{message}}<|im_end|>
<|im_start|>{{user}}
{{message}}<|im_end|>
```
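For example, the prompt above can be assembled by hand and passed to `transformers`. The sketch below is not part of the original card: the model ID matches this repository, but the character/user names, system prompt, and sampling settings are placeholder assumptions.

```python
# Minimal sketch (not from the model card): build the ChatML prompt by hand
# and generate with transformers. Names and sampling settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flammenai/Mahou-1.5-mistral-nemo-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Fill the template fields: {{system}}, {{char}}, {{user}}, {{message}}.
system = "You are Mahou, a student at the magician academy."  # example system prompt
prompt = (
    "<|im_start|>system\n"
    f"{system}<|im_end|>\n"
    "<|im_start|>User\n"          # {{user}} placeholder
    "hey, how was class today?<|im_end|>\n"
    "<|im_start|>Mahou\n"         # {{char}} placeholder; model continues from here
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```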

### Roleplay Format

- Speech without quotes.
- Actions in `*asterisks*`.

```
*leans against wall cooly* so like, i just casted a super strong spell at magician academy today, not gonna lie, felt badass.
```

### SillyTavern Settings

1. Use ChatML for the Context Template.
2. Enable Instruct Mode.
3. Use the [Mahou ChatML Instruct preset](https://huggingface.co/datasets/flammenai/Mahou-ST-ChatML-Instruct/raw/main/Mahou.json).
4. Use the [Mahou Sampler preset](https://huggingface.co/datasets/flammenai/Mahou-ST-Sampler-Preset/raw/main/Mahou.json).

### Method

[Fine-tuned with ORPO](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) on 4x H100 GPUs for 3 epochs.
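
For reference, an ORPO run of this kind can be set up with TRL's `ORPOTrainer`. The sketch below is a hedged illustration, not the authors' training script: the base checkpoint, the `prompt`/`chosen`/`rejected` column names assumed for `MahouMix-v1`, and all hyperparameters other than the 3 epochs stated above are assumptions.

```python
# Illustrative ORPO fine-tuning sketch using TRL (not the authors' actual script).
# Base checkpoint, dataset column names, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "mistralai/Mistral-Nemo-Base-2407"  # assumed pre-ORPO base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Preference dataset; assumed to provide prompt/chosen/rejected columns.
dataset = load_dataset("flammenai/MahouMix-v1", split="train")

config = ORPOConfig(
    output_dir="mahou-orpo",
    num_train_epochs=3,            # matches the 3 epochs stated above
    per_device_train_batch_size=2, # assumption
    learning_rate=5e-6,            # assumption
    beta=0.1,                      # ORPO odds-ratio weight (lambda in the paper)
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,    # named `tokenizer=` in older TRL releases
)
trainer.train()
```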