---
base_model: []
tags:
- mergekit
- merge
---
# 0x01-8x7B-hf
here we go again: a multi-step merge, with various models involved at various ratios using various methods. a purely hypothetical sketch of what one pass could look like sits below the constituent parts list.

this thing came to me in a fever dream while I was hungover, but after slightly tweaking the recipe it turned out surprisingly decent. use it with the settings included.
**Update:** the following settings have proven to work well too:
- Context: https://files.catbox.moe/q91rca.json
- Instruct: https://files.catbox.moe/2w8ja2.json
- Textgen: https://files.catbox.moe/s25rad.json
## Constituent parts
```yaml
# primordial_slop_a:
- model: mistralai/Mixtral-8x7B-v0.1+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
- model: mistralai/Mixtral-8x7B-v0.1+SeanWu25/Mixtral_8x7b_Medicine
- model: mistralai/Mixtral-8x7B-v0.1+SeanWu25/Mixtral_8x7b_WuKurtz
- model: mistralai/Mixtral-8x7B-v0.1+Epiculous/crunchy-onion-lora
- model: mistralai/Mixtral-8x7B-v0.1+maxkretchmer/gc-mixtral
# primordial_slop_b:
- model: Envoid/Mixtral-Instruct-ITR-8x7B
- model: crestf411/daybreak-mixtral-8x7b-v1.0-hf
- model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
- model: orangetin/OpenHermes-Mixtral-8x7B
- model: mistralai/Mixtral-8x7B-Instruct-v0.1+idegroup/PhyAssistant
- model: ycros/crunchy-onion-nx
- model: jondurbin/bagel-dpo-8x7b-v0.2
- model: amoldwalunj/Mixtral-8x7B-Instruct-v0.1-legal_finetune_mixtral_32k
# primordial_slop_c: a+b
# primordial_slop_d:
- model: Sao10K/Sensualize-Mixtral-bf16
- model: Envoid/Mixtral-Instruct-ITR-DADA-8x7B
```
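the actual configs aren't published here, so the following is a purely illustrative sketch of what two of the passes could look like as mergekit YAML. the merge methods (`dare_ties`, `slerp`), weights, densities, and the `t` ratio are all placeholder assumptions; only the model names come from the list above. the `base+lora` notation is mergekit's syntax for applying a LoRA adapter onto a base model before merging.

```yaml
# HYPOTHETICAL sketch of the primordial_slop_a pass.
# method and numbers are made up; the real recipe is not published.
merge_method: dare_ties
base_model: mistralai/Mixtral-8x7B-v0.1
models:
  - model: mistralai/Mixtral-8x7B-v0.1+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
    parameters: {weight: 0.2, density: 0.5}
  - model: mistralai/Mixtral-8x7B-v0.1+SeanWu25/Mixtral_8x7b_Medicine
    parameters: {weight: 0.2, density: 0.5}
  - model: mistralai/Mixtral-8x7B-v0.1+SeanWu25/Mixtral_8x7b_WuKurtz
    parameters: {weight: 0.2, density: 0.5}
  - model: mistralai/Mixtral-8x7B-v0.1+Epiculous/crunchy-onion-lora
    parameters: {weight: 0.2, density: 0.5}
  - model: mistralai/Mixtral-8x7B-v0.1+maxkretchmer/gc-mixtral
    parameters: {weight: 0.2, density: 0.5}
dtype: bfloat16
---
# HYPOTHETICAL final pass: slerp the a+b intermediate (slop_c)
# against slop_d; t: 0.5 is a placeholder ratio.
merge_method: slerp
base_model: ./primordial_slop_c
models:
  - model: ./primordial_slop_c
  - model: ./primordial_slop_d
parameters:
  t: 0.5
dtype: bfloat16
```

each document above would be its own `mergekit-yaml` run, with the output of earlier passes fed back in as local paths.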
## Quantized versions
- GGUF iMat: Quant-Cartel/0x01-8x7b-iMat-GGUF
- exl2 rpcal: Quant-Cartel/0x01-8x7b-exl2-rpcal