Casual-Autopsy committed
Commit 9ab1f04
1 parent: b10c943

Update README.md

Files changed (1)
  1. README.md +13 -5
README.md CHANGED
```diff
@@ -123,13 +123,16 @@ model-index:
   name: Open LLM Leaderboard
 ---
 
-<img src="https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3-8B/resolve/main/63073798_p0_master1200.jpg" style="display: block; margin: auto;">
-Image by ろ47
+| <img src="https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3-8B/resolve/main/63073798_p0_master1200.jpg" style="display: block; margin: auto;"> |
+|:---:|
+| Image by ろ47 |
+| |
 
 # Merge
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
+***
 ## Merge Details
 
 The goal of this merge was to make an RP model better suited for role-plays with heavy themes such as but not limited to:
@@ -143,11 +146,13 @@ but thanks to [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/fai
 
 If you're an enjoyer of savior/reverse savior type role-plays like myself, then this model is for you.
 
-### Usage Info
+***
+## Usage Info
 
 This model is meant to be used with asterisks/quotes RPing formats, any other format that isn't asterisks/quotes is likely to cause issues
 
-### Quants
+***
+## Quants
 
 * [imatrix quants](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2.0-8B-i1-GGUF) by mradermacher
 * [Static quants](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2.0-8B-GGUF) by mradermacher
@@ -157,7 +162,8 @@ This model is meant to be used with asterisks/quotes RPing formats, any other fo
 - [L3-Umbral-Mind-RP-v2.0-8B-6.3bpw-h8-exl2](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B-6.3bpw-h8-exl2) by yours truly
 - [L3-Umbral-Mind-RP-v2.0-8B-5.3bpw-h6-exl2](https://huggingface.co/riveRiPH/L3-Umbral-Mind-RP-v2.0-8B-5.3bpw-h6-exl2) by riveRiPH
 
-### Merge Method
+***
+## Merge Method
 
 This model was merged using several Task Arithmetic merges and then tied together with a Model Stock merge, followed by another Task Arithmetic merge with a model containing psychology data.
 
@@ -177,6 +183,7 @@ The following models were included in the merge:
 * [Nitral-AI/Hathor_Stable-v0.2-L3-8B](https://huggingface.co/Nitral-AI/Hathor_Stable-v0.2-L3-8B)
 * [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
 
+***
 ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
 
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Casual-Autopsy__L3-Umbral-Mind-RP-v2.0-8B)
@@ -194,6 +201,7 @@ The rest don't matter. At least not nearly as much as IFEval.
 |MuSR (0-shot)     | 5.55|
 |MMLU-PRO (5-shot) |30.26|
 
+***
 ## Secret Sauce
 
 The following YAML configurations were used to produce this model:
```
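For context on the "Merge Method" section touched by this commit, a mergekit Task Arithmetic merge is declared in YAML roughly as sketched below. This is an illustration only, not the model's actual recipe (the real configurations are in the README's "Secret Sauce" section, which is truncated here): the weights and the specific pairing of models are hypothetical, with model names taken from the merge list above.

```yaml
# Illustrative mergekit config only — NOT the actual recipe from this README.
# task_arithmetic scales each model's delta from base_model by `weight`
# and adds the summed deltas back onto the base.
models:
  - model: Sao10K/L3-8B-Stheno-v3.1
    parameters:
      weight: 0.5   # hypothetical weight
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      weight: 0.5   # hypothetical weight
base_model: failspy/Llama-3-8B-Instruct-MopeyMule
merge_method: task_arithmetic
dtype: bfloat16
```

The Model Stock step the README describes would be a separate config with `merge_method: model_stock`, which takes no per-model weights; the intermediate Task Arithmetic outputs would be listed under `models:` there instead.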