---
license: other
tags:
- merge
- mergekit
- lazymergekit
- microsoft/Orca-2-13b
- KoboldAI/LLaMA2-13B-Psyfighter2
license_name: microsoft-research-license
base_model:
- KoboldAI/LLaMA2-13B-Psyfighter2
- microsoft/Orca-2-13b
---
# Psyfighter2-Orca2-ties

Psyfighter2-Orca2-ties is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
* [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
This is the first merge I have ever attempted. The motivation behind it is to try to create a 13B version of [jebcarter/psyonic-cetacean-20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B). I don't have a good GPU (GTX 1660 6GB), so although I can merge the model, I cannot actually run it. However, the Open LLM Leaderboard gives this merge an average score of 63.48, which is higher than both KoboldAI/LLaMA2-13B-Psyfighter2 and jebcarter/psyonic-cetacean-20B, so I must have done something right. The next step is to quantize this merge into GGUF so I can actually run it with [KoboldCpp](https://github.com/LostRuins/koboldcpp).
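Once a GGUF quantization of this merge exists, a 13B model at Q4_K_M should run on modest hardware with most of the work done on the CPU. As a rough sketch of that end state, here is how the quantized file could be loaded with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) instead of KoboldCpp; the `.gguf` file name and the layer-offload count are hypothetical:

```python
# Rough sketch: run a GGUF quantization of this merge locally.
# Uses llama-cpp-python rather than KoboldCpp; the .gguf file name is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="psyfighter2-orca2-ties.Q4_K_M.gguf",  # hypothetical quant file
    n_ctx=4096,        # Llama-2 context window
    n_gpu_layers=10,   # offload a few layers to a small GPU such as a GTX 1660
)

out = llm("Tell me a short story about a whale.", max_tokens=256)
print(out["choices"][0]["text"])
```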
## 🧩 Configuration
```yaml
models:
  - model: KoboldAI/LLaMA2-13B-Psyfighter2
  - model: microsoft/Orca-2-13b
    parameters:
      density: 0.40
      weight: [0, 0.3, 0.7, 1]
merge_method: ties
base_model: KoboldAI/LLaMA2-13B-Psyfighter2
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
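In this config, TIES keeps only the densest 40% of Orca-2's delta weights relative to the base model and blends them in with a layer-wise gradient (`weight: [0, 0.3, 0.7, 1]`), so Orca-2 contributes more in the later layers. To reproduce the merge, the YAML above can be fed to mergekit's `mergekit-yaml` CLI, or driven from Python. The following is a minimal sketch assuming a recent mergekit release that exposes `MergeConfiguration` and `run_merge`; these entry points have moved between versions, so check them against your installed copy:

```python
# Minimal sketch: run the merge above through mergekit's Python API.
# Assumes `pip install mergekit` and the YAML config saved as config.yaml.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Psyfighter2-Orca2-ties",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # a TIES merge also runs on CPU, just slower
        copy_tokenizer=True,             # copy the base model's tokenizer to the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

Because mergekit processes the models tensor by tensor, a merge like this fits in ordinary system RAM and does not need a large GPU, which is what makes it feasible on the hardware described above.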