Casual-Autopsy committed f0c7f36 (parent: e092060): Update README.md

README.md (changed):
| <img src="https://huggingface.co/Casual-Autopsy/L3-Super-Nova-RP-8B/resolve/main/Card-Assets/NovaKid-Girl.jpeg" width="50%" height="50%" style="display: block; margin: auto;"> |
|:---:|
| Image generated by [mayonays_on_toast](https://civitai.com/user/mayonays_on_toast) - [Sauce](https://civitai.com/images/10153472) |

***

# L3-Super-Nova-RP-8B

***

## Presets

Neither I nor anyone else has found a good Textgen preset yet, so here's the starting-point preset I use instead. It should get you by for now.
```
Dynamic Temperature:
Max Temp: 1.25
Exponent: 0.85
```
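For intuition, a Dynamic Temperature sampler scales the temperature with how uncertain the model is: a flat next-token distribution gets a hotter temperature, a peaked one gets a colder one. Here is a minimal sketch, assuming the common DynaTemp formula `T = min + (max - min) * (H / H_max) ** exponent` and a hypothetical `min_temp=0.75` (only Max Temp and Exponent are pinned in the preset above):

```python
import math

def dynamic_temperature(probs, min_temp=0.75, max_temp=1.25, exponent=0.85):
    """DynaTemp-style scaling: map the normalized Shannon entropy of the
    next-token distribution to a temperature between min_temp and max_temp."""
    # Shannon entropy of the distribution (0 for a one-hot distribution)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    # Maximum possible entropy: a uniform distribution over len(probs) tokens
    max_entropy = math.log(len(probs))
    normalized = entropy / max_entropy if max_entropy > 0 else 0.0
    return min_temp + (max_temp - min_temp) * normalized ** exponent

# A flat (uncertain) distribution gets the max temperature...
print(dynamic_temperature([0.25, 0.25, 0.25, 0.25]))  # 1.25
# ...while a peaked (confident) one stays closer to the minimum.
print(dynamic_temperature([0.97, 0.01, 0.01, 0.01]))
```

The exponent shapes the curve between the two extremes; values below 1 (like the 0.85 here) push mid-entropy distributions toward the hotter end.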
***

## Usage Info

Some of the **INT** models were chosen with some of SillyTavern's features in mind.

While not required, I'd recommend building the story string prompt with Lorebooks rather than using the Advanced Formatting menu; the only thing you really need in the Story String prompt within Advanced Formatting is the system prompt. Done this way, the character tends to stay more consistent as the RP goes on, because all character card info is locked to a fixed depth rather than drifting further and further away within the context.
***

## Quants

***

## Merge Info

The model was finished off with both **Merge Densification** and **Negative Weighting**.

All merging steps had the merge calculations done in **float32** and were output as **bfloat16**.
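For reference, this float32-math/bfloat16-output split maps onto two separate keys in a mergekit config. The sketch below is illustrative only, with placeholder model names and an arbitrary merge method; it is not one of the actual configs listed under Secret Sauce:

```yaml
# Illustrative sketch, not one of this card's configs. Assumes mergekit,
# where `dtype` sets the precision of the merge math and `out_dtype`
# the precision of the saved weights.
models:
  - model: some-org/model-a   # placeholder
  - model: some-org/model-b   # placeholder
merge_method: ties
base_model: some-org/model-a
parameters:
  density: 0.5
  weight: 1.0
dtype: float32       # merge calculations done in float32
out_dtype: bfloat16  # merged weights written out as bfloat16
```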
### Models Merged

The following models were used to make this merge:

* [lighteternal/Llama3-merge-biomed-8b](https://huggingface.co/lighteternal/Llama3-merge-biomed-8b)
* [Casual-Autopsy/Llama3-merge-psychotherapy-8b](https://huggingface.co/Casual-Autopsy/Llama3-merge-psychotherapy-8b)
***

## Evaluation Results

### [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Casual-Autopsy__L3-Umbral-Mind-RP-v2.0-8B)
The rest don't matter. At least not nearly as much as IFEval.

|MuSR (0-shot) |N/A|
|MMLU-PRO (5-shot) |N/A|

### [UGI Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard)

Information about the metrics can be found at the bottom of the [UGI Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard) in the respective tabs.
|Writing |N/A|N/A|Yule |
|PolContro |N/A| | |

***

## Secret Sauce

The following YAML configs were used to make this merge.
***

### Super-Nova-CRE_pt.1

```yaml
```

***

### Super-Nova-CRE_pt.2

```yaml
```

***

### Super-Nova-UNC_pt.1

```yaml
```

***

### Super-Nova-UNC_pt.2

```yaml
```

***

### Super-Nova-INT_pt.1

```yaml
```

***

### Super-Nova-INT_pt.2

```yaml
```

***

### Super-Nova-CRE

```yaml
```

***

### Super-Nova-UNC

```yaml
```

***

### Super-Nova-INT

```yaml
```

***

### Super-Nova-RP_pt.1

```yaml
```

***

### Super-Nova-RP_pt.2

```yaml
```

***

### L3-Super-Nova-RP-8B

```yaml
```