---
base_model:
- appvoid/palmer-003
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
- matlok/tinyllama-cinder-openhermes-32k
- BEE-spoke-data/TinyLlama-1.1bee
- microsoft/rho-math-1b-interpreter-v0.1
- ShieldX/manovyadh-1.1B-v1-chat
- raidhon/coven_tiny_1.1b_32k_orpo_alpha
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method (`dare_linear`), with [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003) as the base model.

### Models Merged

The following models were included in the merge:

* [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
* [matlok/tinyllama-cinder-openhermes-32k](https://huggingface.co/matlok/tinyllama-cinder-openhermes-32k)
* [BEE-spoke-data/TinyLlama-1.1bee](https://huggingface.co/BEE-spoke-data/TinyLlama-1.1bee)
* [microsoft/rho-math-1b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1)
* [ShieldX/manovyadh-1.1B-v1-chat](https://huggingface.co/ShieldX/manovyadh-1.1B-v1-chat)
* [raidhon/coven_tiny_1.1b_32k_orpo_alpha](https://huggingface.co/raidhon/coven_tiny_1.1b_32k_orpo_alpha)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: BEE-spoke-data/TinyLlama-1.1bee
    parameters:
      density: 0.33
      weight: 0.50
  - model: raidhon/coven_tiny_1.1b_32k_orpo_alpha
    parameters:
      density: 0.36
      weight: 0.40
  - model: ShieldX/manovyadh-1.1B-v1-chat
    parameters:
      density: 0.33
      weight: 0.30
  - model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    parameters:
      density: 0.40
      weight: 0.45
  - model: matlok/tinyllama-cinder-openhermes-32k
    parameters:
      density: 0.32
      weight: 0.26
  - model: microsoft/rho-math-1b-interpreter-v0.1
    parameters:
      density: 0.38
      weight: 0.35
merge_method: dare_linear
base_model: appvoid/palmer-003
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
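
For intuition: `dare_linear` computes a task vector (delta) for each listed model against the base, randomly drops a fraction `1 - density` of each delta's entries, rescales the survivors by `1 / density`, and adds the weighted deltas to the base parameters. With `normalize: false`, the `weight` values are applied as-is rather than renormalized to sum to 1. The following is a minimal single-tensor sketch of that procedure, not mergekit's actual implementation:

```python
import torch

def dare_linear_merge(base, finetuned, densities, weights, seed=0):
    """Illustrative DARE-linear merge of one parameter tensor.

    base      -- tensor from the base model (appvoid/palmer-003 here)
    finetuned -- matching tensors from the models being merged
    densities -- fraction of each delta to keep (the `density` values above)
    weights   -- linear mixing coefficients (the `weight` values above)
    """
    torch.manual_seed(seed)
    merged = base.clone().float()
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft.float() - base.float()        # task vector vs. the base
        keep = torch.rand_like(delta) < density  # drop entries w.p. 1 - density
        delta = delta * keep / density           # rescale survivors by 1/density
        merged += weight * delta                 # weights applied as-is (normalize: false)
    return merged.to(base.dtype)
```

The `density` values above (0.32 to 0.40) mean roughly a third of each model's task vector survives the drop step before the weighted sum.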
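
To reproduce the merge, save the YAML above as `config.yml` and run mergekit's `mergekit-yaml config.yml ./output-model` entry point (see the mergekit repository for installation and options). The result loads like any other Llama-architecture checkpoint; a minimal example, where `"output-model"` is a placeholder for the merge output directory or Hub repo id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "output-model"  # placeholder: the merge output directory or Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "Briefly explain what a model merge is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```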