---
base_model: []
tags:
- mergekit
- merge
---

```
  e88 88e                               d8
 d888 888b  8888 8888  ,"Y88b  888 8e  d88
C8888 8888D 8888 8888 "8" 888  888 88b d88888
 Y888 888P  Y888 888P ,ee 888  888 888  888
  "88 88"    "88 88"  "88 888  888 888  888
      b
      8b,

    e88'Y88                  d8           888
   d888  'Y  ,"Y88b 888,8,  d88    ,e e,  888
  C8888     "8" 888 888 "  d88888 d88 88b 888
   Y888  ,d  ,ee 888 888    888   888   , 888
    "88,d88  "88 888 888    888    "YeeP" 888

               PROUDLY PRESENTS
```

# 0x01-8x7b-exl2-rpcal

Quantized using 200 samples of 8192 tokens each from the RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset.

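To reproduce a quant along these lines, the conversion step looks roughly like the sketch below. This is a hedged example, not the exact command used here: the flag names follow exllamav2's `convert.py`, and the source-model path and calibration parquet filename are placeholders.

```python
# Rough sketch of an EXL2 conversion with a PIPPA calibration set.
# Verify flag names against your local exllamav2 checkout; paths are placeholders.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "./0x01-8x7b-hf",            # local copy of the unquantized model
        "-o", "./work",                    # scratch / working directory
        "-cf", "./0x01-8x7b-exl2-3.5b6h",  # finished quant output directory
        "-c", "./pippa_cleaned.parquet",   # calibration data (placeholder filename)
        "-r", "200",                       # 200 calibration rows
        "-l", "8192",                      # 8192 tokens per row
        "-b", "3.5",                       # target bits per weight
        "-hb", "6",                        # 6-bit lm_head
    ],
    check=True,
)
```
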
Branches (see the download snippet below):

- `main` -- `measurement.json`
- `2.25b6h` -- 2.25 bpw, 6-bit lm_head
- `3.5b6h` -- 3.5 bpw, 6-bit lm_head
- `6b6h` -- 6 bpw, 6-bit lm_head

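To grab a single quantization, download just the matching branch. A minimal sketch with `huggingface_hub` follows; the repo id is a placeholder, so substitute this repository's actual id.

```python
# Minimal sketch: download one quant branch with huggingface_hub.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="<org>/0x01-8x7b-exl2-rpcal",  # placeholder, not a confirmed repo id
    revision="3.5b6h",                     # branch name from the list above
    local_dir="./0x01-8x7b-exl2-3.5bpw",
)
print(local_path)
```
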
Requires ExLlamaV2 version 0.0.12 or newer.

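A minimal loading sketch with the ExLlamaV2 Python API is below; the model path and sampler values are illustrative only, not the recommended settings from this card.

```python
# Minimal sketch, assuming ExLlamaV2 >= 0.0.12; paths and sampler values are illustrative.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./0x01-8x7b-exl2-3.5bpw"  # directory downloaded from one branch
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)                   # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8                    # illustrative values only
settings.top_p = 0.9

print(generator.generate_simple("Hello,", settings, 64))
```
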
Original model link: [rAIfle/0x01-8x7b-hf](https://huggingface.co/rAIfle/0x01-8x7b-hf)

Original model README below.

***

# 0x01-8x7b-hf

![grinning female android, cyberpunk, robotic, biomechanical, serial number "0x01"](https://files.catbox.moe/je2zar.png)

Here we go again. A multi-step merge: various models involved, at various ratios, with various methods.

This thing came to me in a fever dream while I was hung over, but after slightly tweaking the recipe it turned out surprisingly decent. Use it with the included settings.

## Update:

The following settings have proven to work well too:

- Context: https://files.catbox.moe/q91rca.json
- Instruct: https://files.catbox.moe/2w8ja2.json
- Textgen: https://files.catbox.moe/s25rad.json

## Constituent parts

```yaml
# primordial_slop_a:
- model: mistralai/Mixtral-8x7B-v0.1+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
- model: mistralai/Mixtral-8x7B-v0.1+SeanWu25/Mixtral_8x7b_Medicine
- model: mistralai/Mixtral-8x7B-v0.1+SeanWu25/Mixtral_8x7b_WuKurtz
- model: mistralai/Mixtral-8x7B-v0.1+Epiculous/crunchy-onion-lora
- model: mistralai/Mixtral-8x7B-v0.1+maxkretchmer/gc-mixtral
# primordial_slop_b:
- model: Envoid/Mixtral-Instruct-ITR-8x7B
- model: crestf411/daybreak-mixtral-8x7b-v1.0-hf
- model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
- model: orangetin/OpenHermes-Mixtral-8x7B
- model: mistralai/Mixtral-8x7B-Instruct-v0.1+idegroup/PhyAssistant
- model: ycros/crunchy-onion-nx
- model: jondurbin/bagel-dpo-8x7b-v0.2
- model: amoldwalunj/Mixtral-8x7B-Instruct-v0.1-legal_finetune_mixtral_32k
# primordial_slop_c: a+b
# primordial_slop_d:
- model: Sao10K/Sensualize-Mixtral-bf16
- model: Envoid/Mixtral-Instruct-ITR-DADA-8x7B
```

Entries written as `base+adapter` use mergekit's LoRA syntax: the adapter after the `+` is applied to the base model before that model enters the merge.

# mergekit

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

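For intuition, a minimal sketch of spherical linear interpolation (SLERP) between two weight tensors is shown below. This illustrates the general technique, not mergekit's exact code; mergekit's implementation also handles details such as near-parallel tensors and per-layer `t` schedules.

```python
# Minimal SLERP sketch: interpolate between two flattened weight tensors.
import numpy as np

def slerp(v0: np.ndarray, v1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate from v0 (t=0) to v1 (t=1)."""
    v0f, v1f = v0.ravel(), v1.ravel()
    # Angle between the two tensors, computed on unit vectors.
    u0 = v0f / (np.linalg.norm(v0f) + eps)
    u1 = v1f / (np.linalg.norm(v1f) + eps)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if np.abs(np.sin(omega)) < eps:
        # Nearly colinear tensors: fall back to plain linear interpolation.
        return ((1.0 - t) * v0f + t * v1f).reshape(v0.shape)
    s0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return (s0 * v0f + s1 * v1f).reshape(v0.shape)
```

With `t: 0.33` (see the configuration below), the result stays closer to the base model, `./primordial_slop_c`, than to `./primordial_slop_d`.
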
### Models Merged

The following models were included in the merge:

* ./primordial_slop_d
* ./primordial_slop_c

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./primordial_slop_c
  - model: ./primordial_slop_d
merge_method: slerp
base_model: ./primordial_slop_c
parameters:
  t:
    - value: 0.33
dtype: float16
```

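For completeness, a sketch of re-running this final step through mergekit's Python API follows; the config filename and output path are placeholders, and the intermediate `primordial_slop_*` directories must already exist. The equivalent CLI call is `mergekit-yaml <config.yaml> <output_dir>`.

```python
# Minimal sketch: feed the YAML configuration above to mergekit programmatically.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("0x01.yaml", "r", encoding="utf-8") as fp:   # the YAML block above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./0x01-8x7b-hf",                                  # output directory (placeholder)
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```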
|