---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
---
Quantized using 200 samples of 8192 tokens from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset.
Branches:
- `main` -- `measurement.json`
- `2.25b6h` -- 2.25bpw, 6bit lm_head
- `3.5b6h` -- 3.5bpw, 6bit lm_head
- `3.7b6h` -- 3.7bpw, 6bit lm_head
- `6b6h` -- 6bpw, 6bit lm_head
Requires ExLlamaV2 version 0.0.11 or later.
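If you want to fetch one of the branches above programmatically, here is a minimal sketch using `huggingface_hub`. The repo ID is a placeholder for this quant repository; substitute the actual name, and point any ExLlamaV2-compatible frontend at the downloaded directory.

```python
# Minimal sketch: download a single quant branch with huggingface_hub.
# The repo_id below is a placeholder for this repository's actual name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="your-username/CybersurferNyandroidLexicat-8x7B-exl2",  # placeholder
    revision="3.5b6h",  # pick the branch that fits your VRAM budget
)
print(local_dir)  # pass this directory to an ExLlamaV2-compatible loader
```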
Original model link: [Envoid/CybersurferNyandroidLexicat-8x7B](https://huggingface.co/Envoid/CybersurferNyandroidLexicat-8x7B)

Original model README below.
*** | |
# Warning: This model is experimental and unpredictable | |
![](https://files.catbox.moe/gvp3q3.png) | |
### CybersurferNyandroidLexicat-8x7B (I was in a silly mood when I made this edition) | |
It is a linear merge of the following models:
- [Verdict-DADA-8x7B](https://huggingface.co/Envoid/Verdict-DADA-8x7B) -- 60%
- [crestf411/daybreak-mixtral-8x7b-v1.0-hf](https://huggingface.co/crestf411/daybreak-mixtral-8x7b-v1.0-hf) -- 30%
- Experimental unreleased merge -- 10%
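For illustration only: a linear merge at these ratios is just a weighted average of the component models' parameters. The sketch below assumes three already-loaded state dicts with identical keys and is not the exact recipe or tooling used for this merge.

```python
# Illustrative sketch of a 60/30/10 linear merge: a weighted average of
# parameter tensors. Variable names are placeholders, not the actual recipe.
import torch


def linear_merge(state_dicts, weights):
    """Weighted average of state dicts that share the same keys."""
    assert len(state_dicts) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-6
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for sd, w in zip(state_dicts, weights))
    return merged


# merged = linear_merge([verdict_dada_sd, daybreak_sd, experimental_sd], [0.6, 0.3, 0.1])
```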
I find its output as an assistant to be less dry, and it was stable and imaginative in brief roleplay testing. Tested with simple sampling; it requires rerolls, but when it's *good* it's **good**. I can't say how well it will hold up as the context fills, but I was pleasantly surprised.
It definitely has one of the most varied lexicons of any Mixtral-Instruct-based model I've tested so far, with excellent attention to detail with respect to context.
### It likes Libra-styled prompt formats with `[INST] context [/INST]` formatting
[Which can easily be adapted from the format specified in the Libra-32B repo by replacing the Alpaca formatting with Mixtral Instruct formatting](https://huggingface.co/Envoid/Libra-32B)
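As a rough, hypothetical example of the `[INST] ... [/INST]` wrapping described above; the context text here is a placeholder, and the actual Libra-style block lives in the Libra-32B repo.

```python
# Hypothetical prompt-building sketch: wraps a Libra-style context block and a
# user turn in Mixtral-style [INST] ... [/INST] tags. The context wording is a
# placeholder; see the Libra-32B repo for the real format.
def build_prompt(context: str, user_message: str) -> str:
    return f"[INST] {context}\n\n{user_message} [/INST]"


prompt = build_prompt(
    "You are a character in an ongoing roleplay. Stay in character.",  # placeholder context
    "Describe the neon-lit street from your character's point of view.",
)
print(prompt)
```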
### As always, tested in Q8 (not included)