---
license: other
library_name: transformers
tags:
- mergekit
- merge
base_model:
- unsloth/Mistral-Small-Instruct-2409
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
model-index:
- name: MS-Meadowlark-22B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 66.97
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allura-org/MS-Meadowlark-22B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 30.3
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allura-org/MS-Meadowlark-22B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 14.12
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allura-org/MS-Meadowlark-22B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 10.07
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allura-org/MS-Meadowlark-22B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.53
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allura-org/MS-Meadowlark-22B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 31.37
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allura-org/MS-Meadowlark-22B
      name: Open LLM Leaderboard
---

# MS-Meadowlark-22B
Big thanks to @inflatebot for the image.
A roleplay and storywriting model based on Mistral Small 22B.

GGUF models: https://huggingface.co/mradermacher/MS-Meadowlark-22B-GGUF/

EXL2 models: https://huggingface.co/CalamitousFelicitousness/MS-Meadowlark-22B-exl2

Datasets used in this model:
- [Dampfinchen/Creative_Writing_Multiturn](https://huggingface.co/datasets/Dampfinchen/Creative_Writing_Multiturn) at 16k
- [Fizzarolli/rosier-dataset](https://huggingface.co/datasets/Fizzarolli/rosier-dataset) + [Alfitaria/body-inflation-org](https://huggingface.co/datasets/Alfitaria/body-inflation-org) at 16k
- [ToastyPigeon/SpringDragon](https://huggingface.co/datasets/ToastyPigeon/SpringDragon) at 8k

Each dataset was trained separately onto Mistral Small Instruct, and the resulting component models were then merged along with [nbeerbower/Mistral-Small-Gutenberg-Doppel-22B](https://huggingface.co/nbeerbower/Mistral-Small-Gutenberg-Doppel-22B) to create Meadowlark.

I tried different blends of the component models, and this one seems to be the most stable while retaining the creativity and unpredictability added by the training data.

# Instruct Format

Rosier/bodyinf and SpringDragon were trained in completion format. This model should work with [Kobold Lite](https://lite.koboldai.net/) in Adventure Mode and Story Mode.

Creative_Writing_Multiturn and Gutenberg-Doppel were trained using the official instruct format of Mistral Small Instruct:

```
[INST] {User message}[/INST] {Assistant response}
```

This is the Mistral Small V2&V3 preset in SillyTavern and Kobold Lite.

For SillyTavern in particular, I've had better luck getting good output from Mistral Small using a [custom instruct template](https://huggingface.co/ToastyPigeon/ST-Presets-Mistral-Small) that formats the assembled context as a single user turn. This prevents SillyTavern from confusing the model by assembling user/assistant turns in a nonstandard way.

Note: This preset is *not* compatible with Stepped Thinking; use the Mistral V2&V3 preset for that.

Rough, unofficial sketches of prompting in this format with `transformers` and of the single-turn context assembly are included at the end of this card.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_allura-org__MS-Meadowlark-22B).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 26.39 |
| IFEval (0-Shot)     | 66.97 |
| BBH (3-Shot)        | 30.30 |
| MATH Lvl 5 (4-Shot) | 14.12 |
| GPQA (0-shot)       | 10.07 |
| MuSR (0-shot)       |  5.53 |
| MMLU-PRO (5-shot)   | 31.37 |
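
Below is a minimal sketch of prompting the model in the instruct format shown above using `transformers`. It assumes the repo ID `allura-org/MS-Meadowlark-22B` (taken from the leaderboard links) and that the merged model ships the Mistral Small tokenizer and chat template; the dtype, device, and sampling settings are illustrative, not recommendations.

```python
# Minimal sketch, not an official example: load Meadowlark with transformers and
# prompt it in the [INST] ... [/INST] format from the Instruct Format section.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-org/MS-Meadowlark-22B"  # assumed from the leaderboard links above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 22B model needs substantial VRAM; see the GGUF/EXL2 quants otherwise
    device_map="auto",
)

# If the bundled tokenizer carries the Mistral Small chat template, this produces
# the same [INST] {User message}[/INST] layout shown above.
messages = [{"role": "user", "content": "Write the opening scene of a story set in a tidal marsh."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```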
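
And a rough illustration of the "single user turn" idea behind the custom SillyTavern template mentioned above: the assembled context (card/system text plus chat history) is wrapped in one `[INST]` block instead of being split into alternating user/assistant turns. The function and field names here are invented for illustration and are not the actual template.

```python
# Illustrative only: approximate how a frontend might pack the whole context into
# a single [INST] ... [/INST] user turn, as the custom template described above does.
def build_single_turn_prompt(system_text: str, history: list[tuple[str, str]], bot_name: str) -> str:
    """Wrap system text and a name-prefixed transcript in one instruct turn."""
    transcript = "\n".join(f"{speaker}: {line}" for speaker, line in history)
    return f"[INST] {system_text}\n\n{transcript}\n{bot_name}:[/INST]"

prompt = build_single_turn_prompt(
    system_text="You are narrating an interactive story.",
    history=[("User", "The lighthouse door creaks open."), ("Narrator", "Salt wind rushes in off the water.")],
    bot_name="Narrator",
)
print(prompt)
```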