Casual-Autopsy committed
Commit 9e106ff
1 Parent(s): 8a380d5

Update README.md

Files changed (1):
  1. README.md +184 -33

README.md CHANGED
@@ -1,28 +1,204 @@
  ---
- base_model:
- - Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- - aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
- - Nitral-AI/Hathor_Stable-v0.2-L3-8B
- - Sao10K/L3-8B-Stheno-v3.1
  tags:
  - merge
  - mergekit
  - lazymergekit
  - Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
  - aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
  - Nitral-AI/Hathor_Stable-v0.2-L3-8B
  - Sao10K/L3-8B-Stheno-v3.1
  ---

- # L3-Umbral-Mind-RP-v2.0-8B

- L3-Umbral-Mind-RP-v2.0-8B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
  * [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)
  * [aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K)
  * [Nitral-AI/Hathor_Stable-v0.2-L3-8B](https://huggingface.co/Nitral-AI/Hathor_Stable-v0.2-L3-8B)
  * [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)

- ## 🧩 Configuration

  ```yaml
  models:
@@ -42,29 +218,4 @@ models:
  merge_method: task_arithmetic
  base_model: Casual-Autopsy/Umbral-Mind-3
  dtype: bfloat16
- ```
-
- ## 💻 Usage
-
- ```python
- !pip install -qU transformers accelerate
-
- from transformers import AutoTokenizer
- import transformers
- import torch
-
- model = "Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B"
- messages = [{"role": "user", "content": "What is a large language model?"}]
-
- tokenizer = AutoTokenizer.from_pretrained(model)
- prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
- pipeline = transformers.pipeline(
-     "text-generation",
-     model=model,
-     torch_dtype=torch.float16,
-     device_map="auto",
- )
-
- outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
- print(outputs[0]["generated_text"])
  ```
 
  ---
  tags:
  - merge
  - mergekit
  - lazymergekit
+ - not-for-all-audiences
+ - nsfw
+ - rp
+ - roleplay
+ - role-play
+ license: llama3
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: text-generation
+ base_model:
  - Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
+ - bluuwhale/L3-SthenoMaidBlackroot-8B-V1
+ - Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
+ - Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
+ - tannedbum/L3-Nymeria-8B
+ - migtissera/Llama-3-8B-Synthia-v3.5
+ - Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
+ - tannedbum/L3-Nymeria-Maid-8B
+ - Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
  - aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
  - Nitral-AI/Hathor_Stable-v0.2-L3-8B
  - Sao10K/L3-8B-Stheno-v3.1
  ---

+ <img src="https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3-8B/resolve/main/63073798_p0_master1200.jpg" style="display: block; margin: auto;">
+ Image by ろ47
+
+ # Merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+
+ The goal of this merge was to make an RP model better suited for role-plays with heavy themes, such as, but not limited to:
+ - Mental illness
+ - Self-harm
+ - Trauma
+ - Suicide
+
+ I hated how RP models tended to be overly positive and hopeful in role-plays involving such themes, but thanks to [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) this problem has been lessened considerably.
+
+ If you're an enjoyer of savior/reverse-savior type role-plays like myself, then this model is for you.
+
+ ### Usage Info
+
+ This model is meant to be used with the asterisks/quotes RP format; any other format is likely to cause issues.
+
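As a hypothetical illustration (this exchange is not from the model card), the asterisks/quotes format wraps actions and narration in asterisks and spoken dialogue in double quotes:

```
*She sets the cup down without meeting your eyes.* "I'm fine. Really."
```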
+ ### Quants
+
+
+
+ ### Merge Method
+
+ This model was merged using several Task Arithmetic merges and then tied together with a Model Stock merge, followed by another Task Arithmetic merge with a model containing psychology data.
+
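As a loose illustration of what a Task Arithmetic merge computes, the sketch below applies weighted task vectors (fine-tune minus base) to a base model. The toy NumPy arrays, the helper name, and the numbers are made up for the example; this is not mergekit's implementation:

```python
import numpy as np

def task_arithmetic(base, models, weights):
    """Add each model's weighted task vector (model - base) onto the base."""
    merged = base.copy()
    for model, weight in zip(models, weights):
        merged += weight * (model - base)
    return merged

base = np.array([1.0, 2.0, 3.0])    # stand-in base weights
tune_a = np.array([1.5, 2.0, 3.5])  # stand-in fine-tuned checkpoints
tune_b = np.array([0.5, 2.5, 3.0])

merged = task_arithmetic(base, [tune_a, tune_b], [0.5, 0.15])
# → [1.175, 2.075, 3.25]
```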
+ ### Models Merged

+ The following models were included in the merge:
  * [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)
+ * [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
+ * [Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B)
+ * [Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2](https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2)
+ * [tannedbum/L3-Nymeria-8B](https://huggingface.co/tannedbum/L3-Nymeria-8B)
+ * [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5)
+ * [Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B)
+ * [tannedbum/L3-Nymeria-Maid-8B](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B)
+ * [Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B](https://huggingface.co/Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B)
  * [aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K)
  * [Nitral-AI/Hathor_Stable-v0.2-L3-8B](https://huggingface.co/Nitral-AI/Hathor_Stable-v0.2-L3-8B)
  * [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)

+ ## Secret Sauce
+
+ The following YAML configurations were used to produce this model:
+
+ ### Umbral-1
+
+ ```yaml
+ models:
+   - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
+   - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
+     parameters:
+       density: 0.45
+       weight: 0.4
+   - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
+     parameters:
+       density: 0.65
+       weight: 0.1
+ merge_method: dare_ties
+ base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
+ parameters:
+   int8_mask: true
+ dtype: bfloat16
+ ```
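For context on the `dare_ties` method used above: DARE randomly drops a `1 - density` fraction of each task vector and rescales the survivors by `1 / density` so the expected update is preserved, before a TIES-style sign election combines the deltas. Below is a minimal sketch of just the drop-and-rescale step, on toy arrays with made-up values (not mergekit's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def dare_delta(model, base, density):
    """Keep a random `density` fraction of the task vector, rescaled by 1/density."""
    delta = model - base
    keep = rng.random(delta.shape) < density
    return np.where(keep, delta / density, 0.0)

base = np.zeros(8)      # stand-in base weights
tuned = np.arange(8.0)  # stand-in fine-tuned weights
# Mirror Umbral-1's first entry: density 0.45, merge weight 0.4.
merged = base + 0.4 * dare_delta(tuned, base, density=0.45)
```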
+
+ ### Umbral-2
+
+ ```yaml
+ models:
+   - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
+   - model: tannedbum/L3-Nymeria-8B
+     parameters:
+       density: 0.45
+       weight: 0.25
+   - model: migtissera/Llama-3-8B-Synthia-v3.5
+     parameters:
+       density: 0.65
+       weight: 0.25
+ merge_method: dare_ties
+ base_model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
+ parameters:
+   int8_mask: true
+ dtype: bfloat16
+ ```
+
+ ### Umbral-3
+
+ ```yaml
+ models:
+   - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
+   - model: tannedbum/L3-Nymeria-Maid-8B
+     parameters:
+       density: 0.4
+       weight: 0.3
+   - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
+     parameters:
+       density: 0.6
+       weight: 0.2
+ merge_method: dare_ties
+ base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
+ parameters:
+   int8_mask: true
+ dtype: bfloat16
+ ```
+
+ ### Mopey-Omelette
+
+ ```yaml
+ models:
+   - model: Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
+   - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
+     parameters:
+       weight: 0.15
+ merge_method: task_arithmetic
+ base_model: Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
+ dtype: bfloat16
+ ```
+
+ ### Umbral-Mind-1
+
+ ```yaml
+ models:
+   - model: Casual-Autopsy/Umbral-1
+   - model: Casual-Autopsy/Umbral-3
+ merge_method: slerp
+ base_model: Casual-Autopsy/Umbral-1
+ parameters:
+   t:
+     - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
+   embed_slerp: true
+ dtype: bfloat16
+ ```
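For context on the `slerp` stages: spherical linear interpolation moves along the great-circle arc between the two models' (flattened) weight tensors, and the `t` list above varies the interpolation factor across layer groups. A toy sketch of the formula on 2-D vectors, not mergekit's implementation:

```python
import numpy as np

def slerp(a, b, t):
    """Spherical interpolation between vectors a and b at factor t."""
    a_unit = a / np.linalg.norm(a)
    b_unit = b / np.linalg.norm(b)
    theta = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return (1 - t) * a + t * b  # nearly parallel: plain lerp is fine
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(a, b, 0.5)
# → [0.7071..., 0.7071...] (stays on the unit circle, unlike plain lerp)
```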
+
+ ### Umbral-Mind-2
+
+ ```yaml
+ models:
+   - model: Casual-Autopsy/Umbral-Mind-1
+   - model: Casual-Autopsy/Umbral-2
+ merge_method: slerp
+ base_model: Casual-Autopsy/Umbral-Mind-1
+ parameters:
+   t:
+     - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
+   embed_slerp: true
+ dtype: bfloat16
+ ```
+
+ ### Umbral-Mind-3
+
+ ```yaml
+ models:
+   - model: Casual-Autopsy/Umbral-Mind-2
+   - model: Casual-Autopsy/Mopey-Omelette
+ merge_method: slerp
+ base_model: Casual-Autopsy/Umbral-Mind-2
+ parameters:
+   t:
+     - value: [0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2]
+   embed_slerp: true
+ dtype: bfloat16
+ ```
+
+ ### L3-Umbral-Mind-RP-v2.0-8B

  ```yaml
  models:

  merge_method: task_arithmetic
  base_model: Casual-Autopsy/Umbral-Mind-3
  dtype: bfloat16
  ```