Casual-Autopsy committed
Commit 3380b02
1 Parent(s): 8bb897c

Update README.md

Files changed (1)
  1. README.md +141 -0
README.md CHANGED
@@ -37,4 +37,145 @@ base_model:
  | <img src="https://huggingface.co/Casual-Autopsy/L3-Super-Nova-RP-8B/resolve/main/Card-Assets/NovaKid-Girl.jpeg" width="50%" height="50%" style="display: block; margin: auto;"> |
  |:---:|
  | Image generated by [mayonays_on_toast](https://civitai.com/user/mayonays_on_toast) - [Sauce](https://civitai.com/images/10153472) |
+ |---|
+ # L3-Super-Nova-RP-8B

+ ***
+ ## Presets
+ Neither I nor anyone else (that I know of) has found a good Textgen preset for this model yet, so here's the starting-point preset I use instead. It should get you by for now.
+ ```yaml
+ Top K: 50
+ Top P: 0.85
+ Repetition Penalty: 1.01
+ # Don't make this higher; DRY handles the bulk of squashing repetition.
+ # This is just to lightly nudge the bot to move the plot forward.
+ Rep Pen Range: 2048 # Don't make this higher either.
+ Presence Penalty: 0.03 # Minor encouragement to use synonyms.
+ Smoothing Factor: 0.3
+
+ DRY Repetition Penalty:
+   Multiplier: 0.8
+   Base: 1.75
+   Allowed Length: 2
+   Penalty Range: 4096
+
+ Dynamic Temperature:
+   Min Temp: 0.5
+   Max Temp: 1.25
+   Exponent: 0.85
+ ```
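+
+ If your backend loads sampler presets from YAML files (for example, text-generation-webui's `presets/` folder), the same settings might translate roughly to the sketch below. The key names here are assumptions based on text-generation-webui's preset format and can differ between backends and versions, so double-check them against your backend's parameter list.
+ ```yaml
+ # Hypothetical preset sketch; key names are assumed, not verified.
+ top_k: 50
+ top_p: 0.85
+ repetition_penalty: 1.01
+ repetition_penalty_range: 2048
+ presence_penalty: 0.03
+ smoothing_factor: 0.3
+ # DRY sampler (the 4096 penalty range may not have a dedicated key in every backend)
+ dry_multiplier: 0.8
+ dry_base: 1.75
+ dry_allowed_length: 2
+ # Dynamic temperature
+ dynamic_temperature: true
+ dynatemp_low: 0.5
+ dynatemp_high: 1.25
+ dynatemp_exponent: 0.85
+ ```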
+ ***
+ ## Usage Info
+
+ Some of the **INT** models were chosen with several of SillyTavern's features in mind, such as emotion-based sprites, dynamic music, and pretty much any feature, extension, or STscript that uses summarization. With that said, SillyTavern is the recommended front-end.
+
+ While not required, I'd recommend building the story string prompt with Lorebooks rather than using the Advanced Formatting menu. The only thing you really need in the Story String prompt within Advanced Formatting is the system prompt. Doing it this way tends to keep the character more consistent as the RP goes on, since all character card info is locked to a certain depth rather than drifting further and further back in the context.
+
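+ As a rough illustration, a stripped-down Story String along those lines can be as minimal as the snippet below, which keeps only SillyTavern's `{{system}}` macro; everything else would then come in through Lorebook entries pinned at whatever depth you choose.
+ ```
+ {{#if system}}{{system}}
+ {{/if}}
+ ```
+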
+ ***
+ ## Quants
+
+ ***
+ ## Merge Info
+
+ The merge methods used were **Ties**, **Dare Ties**, **Breadcrumbs Ties**, **SLERP**, and **Task Arithmetic**.
+
+ The model was finished off with both **Merge Densification** and **Negative Weighting** techniques to boost creativity.
+
+ All merging steps had the merge calculations done in **float32** and were output as **bfloat16**.
+
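+ As a rough illustration only (the actual recipes are listed under Secret Sauce below), a single **Dare Ties** step in mergekit combining these ideas might look like the sketch that follows. The model names, weights, and densities are placeholders rather than values from this merge, and the negative weight is just there to show the **Negative Weighting** technique mentioned above.
+ ```yaml
+ # Hypothetical mergekit config sketch; not one of the actual Super-Nova recipes.
+ merge_method: dare_ties
+ base_model: meta-llama/Meta-Llama-3-8B-Instruct
+ models:
+   - model: Sao10K/L3-8B-Stheno-v3.2
+     parameters:
+       weight: 0.35
+       density: 0.5
+   - model: Sao10K/L3-8B-Lunaris-v1
+     parameters:
+       weight: -0.05  # negative weight: an example of the Negative Weighting technique
+       density: 0.5
+ dtype: float32       # merge math done in float32...
+ out_dtype: bfloat16  # ...and the result saved as bfloat16
+ ```
+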
+ ### Models Merged
+
+ The following models were used to make this merge:
+ * [nothingiisreal/L3-8B-Celeste-v1](https://huggingface.co/nothingiisreal/L3-8B-Celeste-v1)
+ * [Nitral-AI/Hathor_Tahsin-L3-8B-v0.85](https://huggingface.co/Nitral-AI/Hathor_Tahsin-L3-8B-v0.85)
+ * [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
+ * [ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B)
+ * [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)
+ * [turboderp/llama3-turbcat-instruct-8b](https://huggingface.co/turboderp/llama3-turbcat-instruct-8b)
+ * [ChaoticNeutrals/Domain-Fusion-L3-8B](https://huggingface.co/ChaoticNeutrals/Domain-Fusion-L3-8B)
+ * [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5)
+ * [TheDrummer/Llama-3SOME-8B-v2](https://huggingface.co/TheDrummer/Llama-3SOME-8B-v2)
+ * [ChaoticNeutrals/Hathor_RP-v.01-L3-8B](https://huggingface.co/ChaoticNeutrals/Hathor_RP-v.01-L3-8B)
+ * [TheSkullery/llama-3-cat-8b-instruct-v1](https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1)
+ * [FPHam/L3-8B-Everything-COT](https://huggingface.co/FPHam/L3-8B-Everything-COT)
+ * [Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged](https://huggingface.co/Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged)
+ * [OEvortex/Emotional-llama-8B](https://huggingface.co/OEvortex/Emotional-llama-8B)
+ * [lighteternal/Llama3-merge-biomed-8b](https://huggingface.co/lighteternal/Llama3-merge-biomed-8b)
+ * [Casual-Autopsy/Llama3-merge-psychotherapy-8b](https://huggingface.co/Casual-Autopsy/Llama3-merge-psychotherapy-8b)
+
+ ***
+ ## Secret Sauce
+
+ The following YAML configs were used to make this model.
+
+ ### Super-Nova-CRE_pt.1
+
+ ```yaml
+
+ ```
+
+ ### Super-Nova-CRE_pt.2
+
+ ```yaml
+
+ ```
+
+ ### Super-Nova-UNC_pt.1
+
+ ```yaml
+
+ ```
+
+ ### Super-Nova-UNC_pt.2
+
+ ```yaml
+
+ ```
+
+ ### Super-Nova-INT_pt.1
+
+ ```yaml
+
+ ```
+
+ ### Super-Nova-INT_pt.2
+
+ ```yaml
+
+ ```
+
+ ### Super-Nova-CRE
+
+ ```yaml
+
+ ```
+
+ ### Super-Nova-UNC
+
+ ```yaml
+
+ ```
+
+ ### Super-Nova-INT
+
+ ```yaml
+
+ ```
+
+ ### Super-Nova-RP_pt.1
+
+ ```yaml
+
+ ```
+
+ ### Super-Nova-RP_pt.2
+
+ ```yaml
+
+ ```
+
+ ### L3-Super-Nova-RP-8B
+
+ ```yaml
+
+ ```