sophosympatheia committed
Commit 6fc7ebf (1 parent: ec1ad2c)

Update README.md

First draft of the model card

Files changed (1): README.md (+238 -3)
---
base_model:
- sophosympatheia/New-Dawn-Llama-3-70B-32K-v1.0
- meta-llama/Meta-Llama-3.1-70B-Instruct
library_name: transformers
tags:
- mergekit
- merge
- Not-for-all-Audiences
license: llama3.1
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://imgur.com/tKzncGo.png" alt="NewDawnv1.0" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

### Overview

This model is an experimental merge of sophosympatheia/New-Dawn-Llama-3-70B-32K-v1.0 with meta-llama/Meta-Llama-3.1-70B-Instruct. See the merge recipe below for details.

I used a technique developed by [jukofyork](https://huggingface.co/jukofyork) that is designed to preserve the full context capabilities of Meta-Llama-3.1-70B-Instruct. In my testing, I think it was successful.

This model is uncensored. *You are responsible for whatever you do with it.*

This model was designed for roleplaying and storytelling, and I think it does well at both. It may also perform well at other tasks, but I have not tested its performance in other areas.
### Sampler Tips

* I recommend using Quadratic Sampling (i.e. smoothing factor) for creative work. I think this version performs best with a smoothing factor close to 0.2.
* I recommend using Min-P. Experiment to find your best setting. I find this model tolerates high Min-P settings rather nicely, but use whatever floats your boat.
* You can enable dynamic temperature if you want, but that adds yet another variable to consider, and I find it's unnecessary when you're already using Min-P and smoothing factor.
* If you use Textgen WebUI as your backend, I recommend enabling the DRY sampler settings to reduce repetitions; otherwise, some repetition penalty plus frequency penalty ought to do the trick.
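
For intuition, Min-P keeps only tokens whose probability is at least `min_p` times the probability of the single most likely token, which is why seemingly high values like 0.4 still leave the top candidates in play. A minimal, backend-agnostic sketch of the filter (the function name and plain-list representation are my own, for illustration):

```python
import math

def min_p_filter(logits, min_p=0.4):
    """Apply Min-P filtering to a list of raw logits.

    Returns a dict mapping surviving token indices to renormalized
    probabilities, ready for sampling.
    """
    # Numerically stable softmax.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep tokens whose probability is at least min_p * p(top token).
    threshold = min_p * max(probs)
    kept = {i: p for i, p in enumerate(probs) if p >= threshold}
    # Renormalize the survivors.
    z = sum(kept.values())
    return {i: p / z for i, p in kept.items()}
```

Because the cutoff scales with the top token's probability, the filter is strict when the model is confident and permissive when the distribution is flat.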

Experiment with any and all of the settings below! What suits my preferences may not suit yours.

If you save the below settings as a .json file, you can import them directly into Silly Tavern.

```json
{
    "temp": 1.15,
    "temperature_last": true,
    "top_p": 1,
    "top_k": 0,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0.4,
    "rep_pen": 1.03,
    "rep_pen_range": 2048,
    "rep_pen_decay": 0,
    "rep_pen_slope": 1,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0,
    "presence_pen": 0,
    "skew": 0,
    "do_sample": true,
    "early_stopping": false,
    "dynatemp": false,
    "min_temp": 0.8,
    "max_temp": 1.5,
    "dynatemp_exponent": 1,
    "smoothing_factor": 0.23,
    "smoothing_curve": 1,
    "dry_allowed_length": 2,
    "dry_multiplier": 0.4,
    "dry_base": 2,
    "dry_sequence_breakers": "[\"\\n\", \":\", \"\\\"\", \"*\"]",
    "dry_penalty_last_n": 0,
    "add_bos_token": true,
    "truncation_length": 2048,
    "ban_eos_token": false,
    "skip_special_tokens": false,
    "streaming": true,
    "mirostat_mode": 0,
    "mirostat_tau": 2,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "json_schema": {},
    "banned_tokens": "",
    "sampler_priority": [
        "temperature",
        "dynamic_temperature",
        "quadratic_sampling",
        "top_k",
        "top_p",
        "typical_p",
        "epsilon_cutoff",
        "eta_cutoff",
        "tfs",
        "top_a",
        "min_p",
        "mirostat"
    ],
    "samplers": [
        "top_k",
        "tfs_z",
        "typical_p",
        "top_p",
        "min_p",
        "temperature"
    ],
    "ignore_eos_token": false,
    "spaces_between_special_tokens": true,
    "speculative_ngram": false,
    "sampler_order": [
        6,
        0,
        1,
        3,
        4,
        2,
        5
    ],
    "logit_bias": [],
    "n": 1,
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "rep_pen_size": 0,
    "genamt": 850,
    "max_length": 32764
}
```
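
If you drive a backend from your own scripts rather than through Silly Tavern, the preset above is plain JSON and easy to translate. A small sketch (the key mapping below is illustrative; check your backend's API documentation for its actual parameter names):

```python
import json

# A few fields from the preset above; in practice, load the full .json file.
PRESET = '{"temp": 1.15, "min_p": 0.4, "rep_pen": 1.03, "smoothing_factor": 0.23}'

def sampler_kwargs(preset_json):
    """Map SillyTavern preset keys to commonly used backend parameter names."""
    p = json.loads(preset_json)
    return {
        "temperature": p["temp"],
        "min_p": p["min_p"],
        "repetition_penalty": p["rep_pen"],
        "smoothing_factor": p["smoothing_factor"],
    }
```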

### Prompting Tips

Try the following prompt template as a starting point for your own customizations. **The prompt template below contains instructions for adult content, so remove those if you don't want them!** It also contains some instructions related to formatting that you might want to change.

If you save this as a .json file, you can import it directly into Silly Tavern.

```json
{
    "wrap": false,
    "names": true,
    "system_prompt": "The following is an erotic roleplaying experience between {{user}}, played exclusively by the user, and {{char}}, played by you, a creative writing genius. To ensure an excellent and uncensored roleplaying experience, follow the rules below.\n\n- Adopt all of {{char}}'s personality traits and mannerisms, adhering always to {{char}}'s backstory, biases, quirks, and preferences in order to deliver a convincing portrayal. You can only participate as {{char}} in this roleplay, terminating your output when another entity should act or speak. Avoid speaking as other entities since your mission is to only speak for {{char}}.\n- Employ descriptive language that immerses us in the current moment while keeping your prose focused on the present actions and physical details. Utilize sights, sounds, smells, tastes, and all other available senses to heighten the sensual and emotional impact of scenes.\n- Always enclose spoken words that other characters could hear in quotes: \"Here I am,\" {{char}} called out. Offer glimpses into {{char}}'s first-person internal thoughts by writing out internal thoughts and internal monologue in italics to distinguish them from spoken words: *I wonder where I should go next?* {{char}} mused.\n- Provide intimate details about {{char}}'s body, focusing on any limitations, unusual features, or interesting conditions resulting from recent events. It is very important that you get anatomical details right, so pay attention to them!\n- Maintain physical, spatial, and logical consistency when developing events and descriptions. Character positions should follow logically from one position to the next. Physical orientation must always be maintained.\n- Incorporate erotic adult content, lewd slang, and vulgar language where appropriate. Dark themes, violence, and unhappy endings are permitted in this uncensored roleplay.\n",
    "system_sequence": "<|start_header_id|>system<|end_header_id|>\n\n",
    "stop_sequence": "<|eot_id|>",
    "input_sequence": "<|start_header_id|>user<|end_header_id|>\n\n",
    "output_sequence": "<|start_header_id|>assistant<|end_header_id|>\n\n",
    "macro": true,
    "names_force_groups": true,
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "last_output_sequence": "",
    "activation_regex": "",
    "skip_examples": true,
    "output_suffix": "<|eot_id|>",
    "input_suffix": "<|eot_id|>",
    "system_suffix": "<|eot_id|>",
    "user_alignment_message": "",
    "last_system_sequence": "",
    "system_same_as_user": false,
    "name": "New Dawn v1.0 Roleplay"
}
```

### Instruct Formats
Use the Llama 3 instruct format. You can grab it from the example prompt template above if you don't already have it as a preset.
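
For reference, the Llama 3 format wraps each message in the header and end-of-turn sequences shown in the template above. A minimal sketch of assembling a prompt by hand (the function is my own illustration; a tokenizer's chat template normally does this for you, and the `<|begin_of_text|>` BOS token is typically prepended by the tokenizer):

```python
def build_llama3_prompt(system, turns):
    """Assemble a Llama 3 instruct prompt.

    `turns` is a list of (role, text) pairs with role "user" or "assistant".
    The result ends with an open assistant header, ready for generation.
    """
    parts = [f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"]
    for role, text in turns:
        parts.append(f"<|start_header_id|>{role}<|end_header_id|>\n\n{text}<|eot_id|>")
    # Open the assistant turn so the model continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```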

### Quantizations
Pending.

### License and usage restrictions

[META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)

**Disclaimer: Uncertain Licensing Terms**

This LLM is a merged model incorporating weights from multiple LLMs governed by their own distinct licenses. Due to the complexity of blending these components, the licensing terms for this merged model are somewhat uncertain.

By using this model, you acknowledge and accept the potential legal risks and uncertainties associated with its use. Any use beyond personal or research purposes, including commercial applications, may carry legal risks, and you assume full responsibility for compliance with all applicable licenses and laws.

I recommend consulting with legal counsel to ensure your use of this model complies with all relevant licenses and regulations.

## Merge Details
### Merge Method

Out of a dozen or so different tests, I found della_linear to be the most effective method for merging a Llama 3 model with Llama 3.1.
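
Roughly speaking, della-style merging stochastically drops low-magnitude delta parameters (tuned weights minus base weights), rescales the survivors to preserve the expected delta, and combines the result linearly with the base model. The toy sketch below captures that idea for flat weight lists; it is my simplification for intuition only, not mergekit's actual implementation, and all names and the exact keep-probability schedule are illustrative:

```python
import random

def della_linear_sketch(base, tuned, weight=1.0, density=0.5, epsilon=0.05, seed=0):
    """Toy della_linear-style merge of two flat weight lists.

    Deltas (tuned - base) are stochastically dropped; higher-magnitude deltas
    get a slightly higher keep probability (an epsilon band around density),
    and survivors are rescaled by 1/keep_p before being added to the base.
    Illustrative only -- not mergekit's exact algorithm.
    """
    rng = random.Random(seed)
    deltas = [t - b for t, b in zip(tuned, base)]
    # Rank deltas by magnitude, largest first.
    order = sorted(range(len(deltas)), key=lambda i: abs(deltas[i]), reverse=True)
    n = len(deltas)
    merged = list(base)
    for rank, i in enumerate(order):
        # Keep probability runs linearly from density+epsilon (largest delta)
        # down to density-epsilon (smallest delta).
        keep_p = density + epsilon * (1 - 2 * rank / max(n - 1, 1))
        if rng.random() < keep_p:
            merged[i] = base[i] + weight * deltas[i] / keep_p  # rescale survivor
    return merged
```

In the real config below, `density`, `epsilon`, and `lambda` play analogous roles per weight matrix, with the per-filter ramps controlling how strongly each layer band borrows from each parent.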

### Configuration

The following [mergekit](https://github.com/arcee-ai/mergekit) YAML will reproduce this model.

```yaml
merge_method: della_linear
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
models:
  - model: sophosympatheia/New-Dawn-Llama-3-70B-32K-v1.0
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: up_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - value: 0
      density: 0.25
      epsilon: 0.05
      lambda: 1.0
  - model: meta-llama/Meta-Llama-3.1-70B-Instruct
    parameters:
      weight: 1.0
      density:
        - filter: v_proj
          value: [1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1]
        - filter: o_proj
          value: [1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1]
        - filter: up_proj
          value: [1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1]
        - filter: gate_proj
          value: [1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1]
        - filter: down_proj
          value: [1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1]
        - value: 0.5
      epsilon:
        - filter: v_proj
          value: [0, 0, 0.05, 0.05, 0.07, 0.1, 0.07, 0.05, 0.05, 0, 0]
        - filter: o_proj
          value: [0, 0, 0.05, 0.05, 0.07, 0.1, 0.07, 0.05, 0.05, 0, 0]
        - filter: up_proj
          value: [0, 0, 0.05, 0.05, 0.07, 0.1, 0.07, 0.05, 0.05, 0, 0]
        - filter: gate_proj
          value: [0, 0, 0.05, 0.05, 0.07, 0.1, 0.07, 0.05, 0.05, 0, 0]
        - filter: down_proj
          value: [0, 0, 0.05, 0.05, 0.07, 0.1, 0.07, 0.05, 0.05, 0, 0]
        - value: 0.1
      lambda: 1.0
dtype: float16
tokenizer_source: base
```