---
tags:
- gguf
---

# Model Card for maid-yuzu-v8-GGUF

- Model creator: [rhplus0831](https://huggingface.co/rhplus0831/)
- Original model: [maid-yuzu-v8](https://huggingface.co/rhplus0831/maid-yuzu-v8)

Quantized from fp16 with love.

Uploading Q8_0 and Q5_K_M for starters; other sizes are available upon request.
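
If you want a quick way to try one of these quants, llama-cpp-python can load GGUF files directly. This is a minimal sketch, not an official usage guide; the filename below is an assumption based on the usual GGUF naming convention, so check the actual files in this repo.

```python
from llama_cpp import Llama

# Hypothetical filename following the common <model>.<quant>.gguf convention;
# substitute the actual file shipped in this repo.
llm = Llama(
    model_path="maid-yuzu-v8.Q5_K_M.gguf",
    n_ctx=4096,        # context window; raise it if you have the memory
    n_gpu_layers=-1,   # offload all layers to GPU if a GPU build is installed
)

output = llm(
    "Write a short greeting in the voice of a cheerful maid.",
    max_tokens=128,
    temperature=0.8,
)
print(output["choices"][0]["text"])
```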

See the original model card below for details.

---

# maid-yuzu-v8

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

v7's approach worked better than I expected, so as a test I tried something even weirder. I don't expect a proper model to come out of it, but I'm curious about the results.

## Merge Details

### Merge Method

These models were merged using the SLERP method in the following order, where each number is the interpolation weight t used for that step (a minimal sketch of SLERP itself follows the list):

- maid-yuzu-v8-base: mistralai/Mixtral-8x7B-v0.1 + mistralai/Mixtral-8x7B-Instruct-v0.1 (t = 0.5)
- maid-yuzu-v8-step1: above + jondurbin/bagel-dpo-8x7b-v0.2 (t = 0.25)
- maid-yuzu-v8-step2: above + cognitivecomputations/dolphin-2.7-mixtral-8x7b (t = 0.25)
- maid-yuzu-v8-step3: above + NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss (t = 0.25)
- maid-yuzu-v8-step4: above + ycros/BagelMIsteryTour-v2-8x7B (t = 0.25)
- maid-yuzu-v8: above + smelborp/MixtralOrochi8x7B (t = 0.25)
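
For intuition: SLERP (spherical linear interpolation) walks along the arc between two weight vectors rather than the straight line that plain averaging takes, which better preserves the norm of the interpolated weights. Below is a minimal per-tensor sketch of the idea in PyTorch; it is illustrative only, not mergekit's actual implementation, which also handles dtypes, tokenizers, and per-layer t schedules.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors at fraction t."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()

    # Angle between the two tensors, treated as vectors.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    omega = torch.acos(dot)

    if torch.sin(omega) < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        mixed = (1.0 - t) * a_flat + t * b_flat
    else:
        coef_a = torch.sin((1.0 - t) * omega) / torch.sin(omega)
        coef_b = torch.sin(t * omega) / torch.sin(omega)
        mixed = coef_a * a_flat + coef_b * b_flat

    return mixed.reshape(a.shape).to(a.dtype)

# Each step above would apply this with t = 0.25 (0.5 for the base merge)
# to every matching tensor of the two models being combined.
w = slerp(0.25, torch.randn(4096, 4096), torch.randn(4096, 4096))
```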

### Models Merged

The following models were included in the merge:

* [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)
* ../maid-yuzu-v8-step4

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model:
  model:
    path: ../maid-yuzu-v8-step4
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - value: 0.25
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: ../maid-yuzu-v8-step4
  - layer_range: [0, 32]
    model:
      model:
        path: smelborp/MixtralOrochi8x7B
```
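
A config like this is normally executed with mergekit itself. As a rough sketch of that step, based on the Python entry points mergekit's README documented at the time of writing (the file path and output directory below are assumptions for illustration; check the repo for the current API):

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML shown above (the path is an assumption for this example).
with open("maid-yuzu-v8.yml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the merged model to an output directory.
run_merge(
    config,
    "./maid-yuzu-v8",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```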