---
base_model:
- ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
- gradientai/Llama-3-8B-Instruct-Gradient-1048k
- ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
- Sao10K/L3-8B-Niitama-v1
- Sao10K/L3-8B-Stheno-v3.3-32K
- tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
- Sao10K/L3-8B-Tamamo-v1
- vicgalle/Roleplay-Hermes-3-Llama-3.1-8B
library_name: transformers
tags:
- mergekit
- merge
---

HUZZAH, a model that's actually good! It only took seven tries. This version fixes spatial understanding and literacy, tones down the clingy instruction-following a little, and leans further into turn-based RP.

### Quants

[A few GGUFs](https://huggingface.co/kromquant/L3.1-Siithamo-v0.4-8B-GGUFs) by me.

### Details & Recommended Settings

(Still testing; details subject to change)

Sticks to instructions well, writes dynamically, keeps its generations roleplay-focused, and is more solidly intelligent than earlier attempts. Less rambly, though it still outputs a fair bit of text. Near-perfect recall up to 32K context. Be clear and explicit when instructing the model, including the intended formatting (asterisks, quotes, etc.). A hedged code sketch applying these samplers sits at the bottom of this card.

Rec. Settings:
```
Template: Llama 3
Temperature: 1.3
Min P: 0.1
Repeat Penalty: 1.05
Repeat Penalty Tokens: 256
Dynamic Temp: 0.9-1.05, exponent 0.1
Smoothing Factor: 0.18
```

### Merge Theory

This sucked.

### Config
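As referenced in the settings section above: a minimal sketch, not the author's setup, of running a quant with the recommended samplers. It assumes llama-cpp-python and a locally downloaded GGUF; the filename, system prompt, and user prompt are hypothetical. Dynamic temperature and smoothing are not exposed through this API, so they are omitted here and left to a frontend such as SillyTavern.

```python
# A minimal sketch under the assumptions stated above. A quant can be fetched
# from the GGUF repo linked in this card, e.g.:
#   huggingface-cli download kromquant/L3.1-Siithamo-v0.4-8B-GGUFs --local-dir .
from llama_cpp import Llama

llm = Llama(
    model_path="L3.1-Siithamo-v0.4-8B-Q5_K_M.gguf",  # hypothetical quant filename
    n_ctx=32768,             # the card reports near-perfect recall up to 32K
    last_n_tokens_size=256,  # "Repeat Penalty Tokens": lookback window for the penalty
    chat_format="llama-3",   # use the Llama 3 template if the GGUF doesn't embed one
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "Narrate in third person. Put actions in asterisks and dialogue in quotes."},
        {"role": "user", "content": "Open the scene."},
    ],
    temperature=1.3,      # Temperature
    min_p=0.1,            # Min P
    repeat_penalty=1.05,  # Repeat Penalty
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```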