---
license: apache-2.0
datasets:
- Epiculous/SynthRP-Gens-v1-Filtered-n-Cleaned
- Epiculous/Synthstruct-Gens-v1-Filtered-n-Cleaned
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
pipeline_tag: text-generation
---


![image/png](https://cdn-uploads.huggingface.co/production/uploads/64adfd277b5ff762771e4571/ijVNJF9HePkQCjejXZLcI.png)

Back from the dead! Hoping to make something cool to share with everyone! Introducing Crimson Dawn! Built atop the impressive [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407), Crimson Dawn was made with the idea that AI should not be a bland, generic assistant, but something you can connect with on a more personal level; something that can be interesting in a roleplay, yet useful as an assistant too.

# Quants!
[exl2](https://huggingface.co/lucyknada/Epiculous_Crimson_Dawn-V0.1-exl2) / [gguf](https://huggingface.co/mradermacher/Crimson_Dawn-V0.1-GGUF)

## Prompting
Crimson Dawn was trained with the Mistral Instruct template, so it should be prompted the same way you would prompt any other Mistral-based model.
If you are using a GGUF quant, I strongly advise using ChatML instead; for some reason that quantization performs better with ChatML.
```
<s>[INST] Prompt goes here [/INST]</s>
```
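For reference, both prompt formats can be built as plain strings. The helpers below are a minimal sketch (the function names are mine, not part of any library); the ChatML variant is included since the GGUF quants reportedly behave better with it:

```python
def mistral_instruct_prompt(user_message: str) -> str:
    """Wrap a single-turn user message in the Mistral Instruct template.

    The model itself emits </s> when it finishes its reply, so the
    generation prompt typically ends at [/INST].
    """
    return f"<s>[INST] {user_message} [/INST]"


def chatml_prompt(system: str, user: str) -> str:
    """ChatML alternative, reportedly more reliable with the GGUF quants."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )


print(mistral_instruct_prompt("Hello!"))
```

Frontends like SillyTavern build these strings for you from the Context/Instruct JSON files linked below; the helpers are only useful if you are hitting an API directly.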
### Context and Instruct
[Magnum-123B-Context.json](https://files.catbox.moe/rkyqwg.json) <br/>
[Magnum-123B-Instruct.json](https://files.catbox.moe/obb5oe.json) <br/>
~~[Mistral-Custom-Context.json](https://files.catbox.moe/l9w0ry.json)~~<br/>
~~[Mistral-Custom-Instruct.json](https://files.catbox.moe/9xiiwb.json)~~ <br/>
*** NOTE *** <br/>
There have been reports of the quantized model misbehaving with the Mistral prompt. If you are seeing issues, it may be worth trying the ChatML Context and Instruct templates instead.

### Current Top Sampler Settings
[Violet_Twilight-Nitral-Special](https://files.catbox.moe/ot54u3.json)- Considered the best settings! <br/>
[Crimson_Dawn-Nitral-Special](https://files.catbox.moe/8xjxht.json) <br/>
[Crimson_Dawn-Magnum-Style](https://files.catbox.moe/lc59dn.json) 

### Tokenizer
If you are using SillyTavern, please set the tokenizer to API (WebUI/koboldcpp).

## Training
Training was done in two phases of 2 epochs each on 2x [NVIDIA A6000 GPUs](https://www.nvidia.com/en-us/design-visualization/rtx-a6000/) using LoRA. First, a LoRA was trained for 2 epochs on RP data and merged into the base model. That merged model was then trained for 2 epochs on instruct data, and the new instruct LoRA was merged in as well, resulting in what you see here.
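At the weight level, each merge step folds the low-rank adapter back into the matrix it was trained against, so the two phases stack additively. A toy numpy sketch of that arithmetic (the dimensions, rank, and scaling here are illustrative, not the actual training config):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16  # hypothetical sizes; real model layers are far larger

W = rng.normal(size=(d, d))  # a frozen base weight matrix


def merge_lora(W, A, B, alpha, r):
    """Fold a LoRA adapter (delta = (alpha/r) * B @ A) into a weight."""
    return W + (alpha / r) * (B @ A)


# Phase 1: RP adapter trained against the base, then merged.
A1, B1 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
W_rp = merge_lora(W, A1, B1, alpha, r)

# Phase 2: instruct adapter trained against the merged model, then merged.
A2, B2 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
W_final = merge_lora(W_rp, A2, B2, alpha, r)
```

Because each merge just adds a low-rank delta, the final weight equals the base plus both deltas; the ordering matters for *training* (phase 2 sees the phase-1 merge) but not for the merge arithmetic itself.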

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

## Special Thanks
Special thanks to my friends over at Anthracite! Without their help, and Kalomaze starting the synthetic data script, none of this would have been possible.
I also want to thank my friends in The Chaotic Neutrals for their friendship, support, and guidance.