InferenceIllusionist committed on
Commit 15068e5
1 Parent(s): e5a8c95

Create README.md

---
tags:
- gguf
---

# Model Card for maid-yuzu-v8-GGUF
- Model creator: [rhplus0831](https://huggingface.co/rhplus0831/)
- Original model: [maid-yuzu-v8](https://huggingface.co/rhplus0831/maid-yuzu-v8)

Quantized from fp16 with love.

Uploading Q8_0 & Q5_K_M to start; other sizes are available upon request.

See the original model card details below.

---

# maid-yuzu-v8

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

v7's approach worked better than I expected, so I tried something even weirder as a test. I don't think a proper model will come out of it, but I'm curious about the results.

## Merge Details
### Merge Method

These models were merged using the SLERP method in the following order:

- maid-yuzu-v8-base: mistralai/Mixtral-8x7B-v0.1 + mistralai/Mixtral-8x7B-Instruct-v0.1 (t = 0.5)
- maid-yuzu-v8-step1: above + jondurbin/bagel-dpo-8x7b-v0.2 (t = 0.25)
- maid-yuzu-v8-step2: above + cognitivecomputations/dolphin-2.7-mixtral-8x7b (t = 0.25)
- maid-yuzu-v8-step3: above + NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss (t = 0.25)
- maid-yuzu-v8-step4: above + ycros/BagelMIsteryTour-v2-8x7B (t = 0.25)
- maid-yuzu-v8: above + smelborp/MixtralOrochi8x7B (t = 0.25)

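For intuition, SLERP (spherical linear interpolation) blends two weight tensors along the arc between them on a hypersphere rather than along a straight line, which tends to preserve the magnitude of the weights. A minimal NumPy sketch of the idea (an illustration only, not mergekit's actual implementation, which operates per-tensor with additional edge-case handling):

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    # Angle between the two tensors, clipped for numerical safety
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
```

With `t = 0.25`, as used in most steps above, the result stays three-quarters of the way toward the accumulated "above" model and takes a quarter-step toward the newly merged model.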
### Models Merged

The following models were included in the merge:
* [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)
* ../maid-yuzu-v8-step4

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model:
  model:
    path: ../maid-yuzu-v8-step4
dtype: bfloat16
merge_method: slerp
parameters:
  t:
    - value: 0.25
slices:
  - sources:
      - layer_range: [0, 32]
        model:
          model:
            path: ../maid-yuzu-v8-step4
      - layer_range: [0, 32]
        model:
          model:
            path: smelborp/MixtralOrochi8x7B
```