---
base_model:
- nbeerbower/Mistral-Small-Drummer-22B
- gghfez/SeminalRP-22b
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
- TheDrummer/Cydonia-22B-v1.1
- anthracite-org/magnum-v4-22b
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created with [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the della_linear merge method, with [ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1) as the base model. Roughly speaking, DELLA-style merging prunes each contributing model's delta parameters with magnitude-aware random dropping before combining them linearly: `density` sets the fraction of deltas retained per model, `epsilon` the spread of drop probabilities around that density, and `lambda` a final rescaling of the merged deltas.

### Models Merged

The following models were included in the merge:
* [nbeerbower/Mistral-Small-Drummer-22B](https://huggingface.co/nbeerbower/Mistral-Small-Drummer-22B)
* [gghfez/SeminalRP-22b](https://huggingface.co/gghfez/SeminalRP-22b)
* [TheDrummer/Cydonia-22B-v1.1](https://huggingface.co/TheDrummer/Cydonia-22B-v1.1)
* [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b)

### Configuration

The following YAML configuration was used to produce this model (list-valued `weight` and `density` entries are interpolated across layer groups):

```yaml
base_model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
parameters:
  epsilon: 0.04
  lambda: 1.05
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer_source: base
merge_method: della_linear
models:
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: [0.2, 0.3, 0.2, 0.3, 0.2]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
  - model: gghfez/SeminalRP-22b
    parameters:
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
      density: [0.7]
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
      density: [0.7]
  - model: nbeerbower/Mistral-Small-Drummer-22B
    parameters:
      weight: [0.33]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
```
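
### Reproducing the merge

The configuration above can be rerun with mergekit directly. Below is a minimal sketch using mergekit's documented Python entry point; the config and output paths (`merge-config.yaml`, `./merged-model`) are placeholders, and option names may shift between mergekit versions.

```python
# Sketch of rerunning this merge via mergekit's Python API.
# Paths are placeholders; option names follow mergekit's documented
# interface and may differ slightly between versions.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "merge-config.yaml"  # the YAML shown above, saved to disk
OUT_PATH = "./merged-model"       # placeholder output directory

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUT_PATH,
    options=MergeOptions(
        cuda=True,            # run the merge on GPU if available
        copy_tokenizer=True,  # carry the base tokenizer into the output
        lazy_unpickle=True,   # reduce peak memory while loading shards
    ),
)
```

The same merge can also be run from the shell with mergekit's `mergekit-yaml` command, pointing it at the config file and an output directory.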
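
## Usage

The merge output is a standard `transformers` checkpoint. Here is a minimal inference sketch, assuming the merged weights live in a hypothetical local directory `./merged-model` (substitute the actual output path or published repo id):

```python
# Minimal inference sketch; "./merged-model" is a placeholder for wherever
# the merged weights ended up (a local mergekit output directory or a
# Hugging Face repo id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Assumes the tokenizer ships a chat template, as Mistral-Small
# instruct-style tokenizers typically do.
messages = [{"role": "user", "content": "Write a short scene set in a thunderstorm."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because the config sets `tokenizer_source: base`, the tokenizer (and its chat template) come from ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1.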