---
license: apache-2.0
tags:
- merge
- mergekit
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
- Equall/Saul-Instruct-v1
---

![image/webp](https://cdn.prod.arcee.ai/images/saul-calme.jpeg)

# Saul-Instruct-Clown-7b

Saul-Instruct-Clown-7b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare-dpo)
* [Equall/Saul-Instruct-v1](https://huggingface.co/Equall/Saul-Instruct-v1)

## 🏆 Evaluation

### OpenLLM

Saul-Instruct-Clown-7b results on the OpenLLM benchmark suite:

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---:|---:|---:|---:|---:|---:|---:|
| [arcee-ai/Saul-Instruct-Clown-7b](https://huggingface.co/arcee-ai/Saul-Instruct-Clown-7b/) | 72.79 | 68.26 | 86.28 | 63.12 | 64.68 | 83.43 | 70.96 |

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
        layer_range: [0, 32]
      - model: Equall/Saul-Instruct-v1
        layer_range: [0, 32]
merge_method: slerp
base_model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
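The slerp merge can be reproduced with mergekit's Python API. Below is a minimal sketch, assuming `pip install mergekit` and that the YAML above is saved as `config.yaml`; the config and output paths are illustrative:

```python
# Minimal sketch: run the slerp merge defined in the YAML configuration above.
# Assumes mergekit is installed; config.yaml and the output path are illustrative.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Saul-Instruct-Clown-7b",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```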
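## 💻 Usage

A minimal inference sketch with the 🤗 Transformers `text-generation` pipeline; the prompt and generation parameters are illustrative, not tuned recommendations:

```python
# Minimal inference sketch: load the merged model and generate a reply.
# Generation parameters below are illustrative defaults, not tuned values.
import torch
from transformers import AutoTokenizer, pipeline

model_id = "arcee-ai/Saul-Instruct-Clown-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Format a single-turn conversation with the model's chat template.
messages = [{"role": "user", "content": "What is a model merge?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```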