---
base_model:
- Riiid/sheep-duck-llama-2-13b
- IkariDev/Athena-v4
- TheBloke/Llama-2-13B-fp16
- KoboldAI/LLaMA2-13B-Psyfighter2
- KoboldAI/LLaMA2-13B-Erebus-v3
- Henk717/echidna-tiefigther-25
- Undi95/Unholy-v2-13B
- ddh0/EstopianOrcaMaid-13b
tags:
- mergekit
- merge
- not-for-all-audiences
- ERP
- RP
- Roleplay
- uncensored
license: llama2
language:
- en
---
# Model
This is the BF16 unquantized version of SnowyRP, and the first public release of a model in the SnowyRP series!

[FP16](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B)

[GPTQ](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ)

Any future quantizations I am made aware of will be added here.

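If you just want to load the unquantized weights, a minimal Transformers sketch along these lines should work (the repo id is taken from the FP16 link above and the prompt is only a placeholder; adjust both to your setup):

```python
# Minimal sketch for loading the BF16 weights with Hugging Face Transformers.
# Assumes recent transformers/torch/accelerate installs and enough VRAM for a 13B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Masterjp123/SnowyRP-FinalV1-L2-13B"  # repo id from the link above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # load in BF16 as published
    device_map="auto",           # requires accelerate
)

prompt = "You are a helpful roleplay assistant.\nUser: Hello!\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
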
## Merge Details
I simply merged highly ranked models to try to get a better result, and I made sure model incest (merging models that already share too many ancestors) would not be a big problem by only using models that are fairly pure.

These models CAN and WILL produce X-rated or harmful content, because they are heavily uncensored in an attempt not to limit or degrade the model.

This model has a very good knowledge base and understands anatomy decently. It is also VERY versatile: it is great for general assistant work, RP and ERP, RPG-style RPs, and much more.

## Model Use:

This model is very good... WITH THE RIGHT SETTINGS.
I personally use Mirostat mixed with dynamic temperature, plus epsilon cutoff and eta cutoff; a hedged sampler sketch follows the settings below.
```
Optimal settings (so far)

Mirostat mode: 2
tau: 2.95
eta: 0.05

Dynamic temperature
min: 0.25
max: 1.8

Cutoffs
epsilon: 3
eta: 3
```

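As a rough sketch of wiring up the Mirostat part of these settings with llama-cpp-python (the GGUF file name is a placeholder; dynamic temperature and the epsilon/eta cutoffs are configured per backend, e.g. in text-generation-webui's sampler panel, so they are left out here):

```python
# Sketch: applying the recommended Mirostat settings via llama-cpp-python.
from llama_cpp import Llama

# Placeholder GGUF path; use whatever quantization you actually downloaded.
llm = Llama(model_path="./snowyrp-finalv1-l2-13b.Q5_K_M.gguf", n_ctx=4096)

out = llm(
    "You are a helpful roleplay assistant.\nUser: Hello!\nAssistant:",
    max_tokens=200,
    mirostat_mode=2,    # Mirostat v2
    mirostat_tau=2.95,
    mirostat_eta=0.05,
)
print(out["choices"][0]["text"])
```
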
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as the base model.

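For intuition, TIES-merging works on "task vectors" (each fine-tune's delta from the base): trim each delta to its largest-magnitude entries, elect a sign per parameter, and average only the deltas that agree with the elected sign. A toy, self-contained illustration of that idea (not mergekit's actual implementation) might look like this:

```python
# Toy illustration of TIES-merging on flat parameter vectors (not mergekit's code).
import torch

def ties_merge(base, finetunes, weights, density=0.5):
    """Merge `finetunes` into `base` via trim / elect-sign / disjoint-mean."""
    deltas = []
    for ft, w in zip(finetunes, weights):
        delta = (ft - base) * w
        # Trim: keep only the top-`density` fraction of entries by magnitude.
        k = int(density * delta.numel())
        threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        deltas.append(torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta)))
    stacked = torch.stack(deltas)
    # Elect a sign per parameter from the summed trimmed deltas.
    sign = torch.sign(stacked.sum(dim=0))
    # Average only the deltas that agree with the elected sign.
    agree = (torch.sign(stacked) == sign) & (stacked != 0)
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged_delta

# Tiny example with made-up 1-D "parameters".
base = torch.zeros(6)
ft_a = torch.tensor([0.9, -0.2, 0.0,  0.4, -0.5, 0.1])
ft_b = torch.tensor([0.8,  0.3, 0.1, -0.6, -0.4, 0.0])
print(ties_merge(base, [ft_a, ft_b], weights=[1.0, 1.0], density=0.5))
```
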
### Models Merged

The following models were included in the merge:
* [Riiid/sheep-duck-llama-2-13b](https://huggingface.co/Riiid/sheep-duck-llama-2-13b)
* [IkariDev/Athena-v4](https://huggingface.co/IkariDev/Athena-v4)
* [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
* [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3)
* [Henk717/echidna-tiefigther-25](https://huggingface.co/Henk717/echidna-tiefigther-25)
* [Undi95/Unholy-v2-13B](https://huggingface.co/Undi95/Unholy-v2-13B)
* [ddh0/EstopianOrcaMaid-13b](https://huggingface.co/ddh0/EstopianOrcaMaid-13b)

### Configuration

The following YAML configurations were used to produce this model.

For P1 (the first intermediate merge):
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: Undi95/Unholy-v2-13B
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Henk717/echidna-tiefigther-25
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Erebus-v3
    parameters:
      weight: 0.33
```

For P2 (the second intermediate merge):
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Psyfighter2
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Riiid/sheep-duck-llama-2-13b
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: IkariDev/Athena-v4
    parameters:
      weight: 0.33
```

For the final merge (Masterjp123/snowyrpp1 and Masterjp123/snowyrpp2 are the P1 and P2 intermediate merges above):
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: ddh0/EstopianOrcaMaid-13b
    parameters:
      density: [1.0, 0.7, 0.1]
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/snowyrpp1
    parameters:
      density: 0.5
      weight: [0.0, 0.3, 0.7, 1.0]
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/snowyrpp2
    parameters:
      density: 0.33
      weight:
      - filter: mlp
        value: 0.5
      - value: 0.0
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
```
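
To reproduce the merge, these configs are intended for [mergekit](https://github.com/arcee-ai/mergekit). Assuming a standard `pip install mergekit`, invoking its CLI from Python might look roughly like this (the config and output paths are placeholders, and flags such as `--cuda` depend on your setup and mergekit version):

```python
# Sketch of running one merge step via mergekit's `mergekit-yaml` entry point.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "final_merge.yaml",              # one of the YAML configs above, saved to disk
        "./SnowyRP-FinalV1-L2-13B-out",  # output directory for the merged weights
        "--cuda",                        # optional: merge on GPU if available
    ],
    check=True,
)
```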