|
---
license: cc-by-nc-2.0
tags:
- not-for-all-audiences
---
|
## CAUTION: This model was finetuned on a corpus that includes adult content and may produce mature content without warning.
|
![](https://files.catbox.moe/1k5ama.jpg)
|
|
|
# MN-12B-Tarsus
|
|
|
MN-12B-Tarsus is a full-weight finetune of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407), which underwent several intermediate steps.
|
|
|
This finetune was made with chatting/roleplaying via SillyTavern in mind, and all of the testing was done there. The goals were to:
|
|
|
- Reduce shiver-slop

- Make the model more conversationally proactive

- Give it more human-like output (i.e., less gratuitous purple prose)

- Reduce overall positivity bias
|
|
|
It still responds well to Mistral-Instruct formatting.
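For reference, Mistral-Instruct-style formatting can be sketched as below. This is an assumed minimal template (`<s>[INST] ... [/INST]` wrapping the user turn, with any system prompt prepended to it); in practice, prefer pulling the exact template from the model's tokenizer via `apply_chat_template` rather than hard-coding it.

```python
# Minimal sketch of Mistral-Instruct-style prompt formatting.
# Assumption: the "<s>[INST] ... [/INST]" turn markup; verify against the
# tokenizer's chat template before relying on it.
def format_mistral_instruct(user_message: str, system: str = "") -> str:
    # A system prompt, if present, is commonly prepended to the first user turn.
    content = f"{system}\n\n{user_message}" if system else user_message
    return f"<s>[INST] {content} [/INST]"

prompt = format_mistral_instruct("Write a short greeting.")
print(prompt)  # <s>[INST] Write a short greeting. [/INST]
```

The model then generates its reply after the closing `[/INST]` tag.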
|
|
|
The results are imperfect, and its assistant capabilities suffered somewhat as a result, but in quick testing it seems to have achieved all of the goals to varying degrees.
|
|
|
It sometimes fumbles tokens in odd places, so it's certainly not perfect. Possibly best used as merge-fodder.
|
|
|
Trained using [qlora-pipe](https://github.com/tdrussell/qlora-pipe).
|
|