Epiculous committed
Commit 5ac31ff
1 Parent(s): eabbc2e

Update README.md

Files changed (1):
  1. README.md +2 -2

README.md CHANGED
@@ -36,8 +36,8 @@ Crimson Dawn was trained with the Mistral Instruct template, therefore it should
 
 
 ### Current Top Sampler Settings
-[Crimson_Dawn-Magnum-Style](https://files.catbox.moe/lc59dn.json) <br/>
-[Crimson_Dawn-Nitral-Special](https://files.catbox.moe/8xjxht.json)
+[Crimson_Dawn-Nitral-Special](https://files.catbox.moe/8xjxht.json) - Considered the best settings! <br/>
+[Crimson_Dawn-Magnum-Style](https://files.catbox.moe/lc59dn.json)
 
 ## Training
 Training was done twice over 2 epochs each on two 2x [NVIDIA A6000 GPUs](https://www.nvidia.com/en-us/design-visualization/rtx-a6000/) using LoRA. A two-phased approach was used in which the base model was trained 2 epochs on RP data, the LoRA was then applied to base. Finally, the new modified base was trained 2 epochs on instruct, and the new instruct LoRA was applied to the modified base, resulting in what you see here.
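The two-phased approach described in the Training section can be sketched numerically: a LoRA adapter is a low-rank update ΔW = B·A that gets merged into the base weights, and the second phase repeats training and merging on the already-merged weights. This is a minimal illustration with plain numpy; the `train_lora` stub, the matrix shapes, and the rank are illustrative assumptions, not values from the actual training run.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_lora(d_out, d_in, rank, rng):
    # Stand-in for LoRA fine-tuning: real training would optimize these
    # low-rank factors; here we just draw them randomly for illustration.
    B = rng.normal(size=(d_out, rank))
    A = rng.normal(size=(rank, d_in))
    return B, A

def merge_lora(W, B, A, scale=1.0):
    # Merging a LoRA adapter adds its low-rank update to the frozen weight.
    return W + scale * (B @ A)

d_out, d_in, rank = 8, 8, 2
W_base = rng.normal(size=(d_out, d_in))

# Phase 1: train a LoRA on RP data, then merge it into the base model.
B1, A1 = train_lora(d_out, d_in, rank, rng)
W_phase1 = merge_lora(W_base, B1, A1)

# Phase 2: train a second LoRA (instruct data) against the merged model,
# then merge again, yielding the final released weights.
B2, A2 = train_lora(d_out, d_in, rank, rng)
W_final = merge_lora(W_phase1, B2, A2)

# The final weights carry both low-rank updates on top of the original base,
# so the total change has rank at most 2 * rank.
total_update = W_final - W_base
assert np.allclose(total_update, B1 @ A1 + B2 @ A2)
rank_of_update = np.linalg.matrix_rank(total_update)
```

Because merging is just matrix addition, the two phases compose: the released model equals the base plus both low-rank updates, which is why the second LoRA can be trained against the phase-1 merge without touching the phase-1 adapter again.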