---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
---
Quantized using 200 samples of 8192 tokens from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset.

Branches:
- `main` -- `measurement.json`
- `2.25b6h` -- 2.25bpw, 6-bit lm_head
- `3.5b6h` -- 3.5bpw, 6-bit lm_head
- `3.7b6h` -- 3.7bpw, 6-bit lm_head
- `6b6h` -- 6bpw, 6-bit lm_head

Requires ExLlamaV2 version 0.0.11 or later.
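Since each quant lives on its own branch, one way to fetch a specific variant is `snapshot_download` from `huggingface_hub` with the branch name as `revision`. This is a sketch, not part of the repo's instructions; the repo id below is a placeholder that must be replaced with this repository's actual path.

```python
# Sketch: download one quant branch via huggingface_hub (assumed installed).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="<user>/<repo>",   # placeholder -- replace with this repository's id
    revision="3.5b6h",         # branch holding the 3.5bpw, 6-bit lm_head quant
    local_dir="./CybersurferNyandroidLexicat-3.5b6h",
)
```

Point the ExLlamaV2 loader at the resulting local directory as usual.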

Original model link: [Envoid/CybersurferNyandroidLexicat-8x7B](https://huggingface.co/Envoid/CybersurferNyandroidLexicat-8x7B)

Original model README below.

***

# Warning: This model is experimental and unpredictable
![](https://files.catbox.moe/gvp3q3.png)
### CybersurferNyandroidLexicat-8x7B (I was in a silly mood when I made this edition)
A linear merge of the following models:

[Verdict-DADA-8x7B](https://huggingface.co/Envoid/Verdict-DADA-8x7B) 60%

[crestf411/daybreak-mixtral-8x7b-v1.0-hf](https://huggingface.co/crestf411/daybreak-mixtral-8x7b-v1.0-hf) 30%

Experimental unreleased merge 10%

I find its output as an assistant to be less dry, and in brief roleplay testing it is stable and imaginative. Tested with simple sampling; it requires rerolls, but when it's *good* it's **good**. I can't say how well it will hold up as the context fills, but I was pleasantly surprised.

It has one of the most varied lexicons of any Mixtral-Instruct-based model I've tested so far, with excellent attention to detail with respect to context.
### It likes Libra-style prompt formats with [INST] context [/INST] formatting
[This can easily be adapted from the format specified in the Libra-32B repo by replacing Alpaca formatting with Mixtral Instruct formatting](https://huggingface.co/Envoid/Libra-32B)
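As a sketch of what that `[INST] ... [/INST]` wrapping looks like in practice (the context and message strings here are placeholders, not taken from either repo):

```python
# Minimal sketch of Mixtral-Instruct-style prompt formatting:
# the context and user turn are wrapped together in [INST] tags.
def format_prompt(context: str, user_message: str) -> str:
    """Wrap context and a user turn in [INST] ... [/INST] tags."""
    return f"<s>[INST] {context}\n\n{user_message} [/INST]"

prompt = format_prompt("You are a helpful roleplay assistant.", "Introduce yourself.")
print(prompt)
# -> <s>[INST] You are a helpful roleplay assistant.
#
#    Introduce yourself. [/INST]
```

The model's reply is generated after the closing `[/INST]` tag.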

### As always, tested in Q8 (not included)