Magnum-Instruct-12B
A simple della_linear merge of Mini-Magnum and Nemo Instruct, done at a 50/50 split with high density. Nothing fancy to it, really; it seems good on both intelligence and creativity so far.
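For reference, a della_linear merge like this is normally expressed as a mergekit YAML config. The sketch below is a hypothetical reconstruction based only on the description above; the repo IDs, density value, and dtype are assumptions, not the actual recipe used for this model.

```yaml
# Hypothetical mergekit config; repo IDs, density, and dtype are assumptions.
models:
  - model: intervitens/mini-magnum-12b-v1.1       # Mini-Magnum (assumed repo)
    parameters:
      weight: 0.5       # 50/50 split
      density: 0.9      # "high density"
  - model: mistralai/Mistral-Nemo-Instruct-2407   # Nemo Instruct (assumed repo)
    parameters:
      weight: 0.5
      density: 0.9
merge_method: della_linear
base_model: mistralai/Mistral-Nemo-Instruct-2407
dtype: bfloat16
```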
Big thanks to the MistralAI and Anthracite/SillyTilly teams for the models used!
GGUF quants made by mradermacher:
https://huggingface.co/mradermacher/Magnum-Instruct-12B-GGUF
Settings
- Temperature @ 0.7
- Min-P @ 0.02
- Smoothing Factor @ 0.3
- Smoothing Curve @ 1.5
- DRY Multiplier @ 0.8 (plus standard DRY settings)
- Skip Special Tokens @ On
- Everything else @ Off
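Of these, only temperature and min-p map directly onto most local inference APIs; smoothing and DRY are frontend-side samplers (e.g. in SillyTavern or text-generation-webui). A minimal sketch, assuming llama-cpp-python and one of the GGUF quants linked above (the quant filename is hypothetical):

```python
# Minimal sketch: apply the portable subset of the settings above.
# Assumes llama-cpp-python; the quant filename is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="Magnum-Instruct-12B.Q6_K.gguf", n_ctx=8192)

out = llm(
    "[INST] Write a one-line greeting.[/INST]",
    temperature=0.7,  # Temperature @ 0.7
    min_p=0.02,       # Min-P @ 0.02
    max_tokens=128,
)
print(out["choices"][0]["text"])
```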
Prompt Format: Nemo-Mistral
[INST] user prompt[/INST] character response</s>[INST] user prompt[/INST]
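To make the template concrete, here is a small helper (illustrative only, not part of the model) that assembles a multi-turn prompt in this format:

```python
# Builds a Nemo-Mistral prompt string from prior (user, assistant) turns
# plus the new user message, matching the template above.
def build_prompt(turns: list[tuple[str, str]], user_msg: str) -> str:
    prompt = ""
    for user, assistant in turns:
        prompt += f"[INST] {user}[/INST] {assistant}</s>"
    prompt += f"[INST] {user_msg}[/INST]"
    return prompt

print(build_prompt([("Hi there.", "Hello! How can I help?")], "Tell me a joke."))
```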
Models Merged
The following models were included in the merge:
- Mini-Magnum
- Nemo Instruct