---
base_model:
- win10/Phi-3.5-mini-instruct-24-9-29
- FreedomIntelligence/Apollo2-3.8B
- ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
- AXCXEPT/Borea-Phi-3.5-mini-Instruct-Common
- microsoft/Phi-3.5-mini-instruct
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) as the base model.

### Models Merged

The following models were included in the merge:

* [win10/Phi-3.5-mini-instruct-24-9-29](https://huggingface.co/win10/Phi-3.5-mini-instruct-24-9-29)
* [FreedomIntelligence/Apollo2-3.8B](https://huggingface.co/FreedomIntelligence/Apollo2-3.8B)
* [ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1)
* [AXCXEPT/Borea-Phi-3.5-mini-Instruct-Common](https://huggingface.co/AXCXEPT/Borea-Phi-3.5-mini-Instruct-Common)

### Configuration

The following YAML configuration was used to produce this model. Every contributing model is kept at full density (`density: 1`, i.e. no task-vector pruning) and weighted equally, with `normalize: false` leaving the summed task vectors unscaled:

```yaml
models:
  - model: microsoft/Phi-3.5-mini-instruct
    # no parameters necessary for the base model
  - model: FreedomIntelligence/Apollo2-3.8B
    parameters:
      density: 1
      weight: 1
  - model: ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
    parameters:
      density: 1
      weight: 1
  - model: AXCXEPT/Borea-Phi-3.5-mini-Instruct-Common
    parameters:
      density: 1
      weight: 1
  - model: win10/Phi-3.5-mini-instruct-24-9-29
    parameters:
      density: 1
      weight: 1
merge_method: ties
base_model: microsoft/Phi-3.5-mini-instruct
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
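
### Reproducing the Merge

The merge can be re-run from the configuration above. Below is a minimal sketch using mergekit's Python API; `merge_config.yaml` and `./merged-model` are placeholder paths, and option names may vary between mergekit versions, so treat this as an outline rather than a drop-in script. The `mergekit-yaml` command-line entry point accepts the same configuration file.

```python
# Minimal sketch: run the TIES merge from the YAML config via mergekit's Python API.
# Assumes mergekit is installed (pip install mergekit). "merge_config.yaml" holds the
# configuration shown above; "./merged-model" is an arbitrary output directory.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge_config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./merged-model",          # where the merged weights are written
    options=MergeOptions(
        cuda=False,            # set True to perform the merge on GPU
        copy_tokenizer=True,   # copy the base model's tokenizer into the output
    ),
)
```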
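
## Usage

Since the model card declares `library_name: transformers` and the merge inherits the Phi-3.5 chat template from its base model, the merged checkpoint should load like any other causal LM. The sketch below assumes the merged weights live at the placeholder path `./merged-model` (substitute a Hub repo id if the model has been pushed):

```python
# Minimal sketch: chat-style inference on the merged model with transformers.
# "./merged-model" is a placeholder for a local directory or Hub repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what a TIES merge does in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```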