---
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- migtissera/Llama-3-8B-Synthia-v3.5
- tannedbum/L3-Nymeria-Maid-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- tannedbum/L3-Nymeria-8B
- ChaoticNeutrals/Hathor_RP-v.01-L3-8B
- ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
- Casual-Autopsy/Omelette-2
- cgato/L3-TheSpice-8b-v0.8.3
- Sao10K/L3-8B-Stheno-v3.1
- aifeifei798/llama3-8B-DarkIdol-1.0
- ResplendentAI/Nymph_8B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
- nsfw
- rp
- roleplay
- role-play
license: llama3
language:
- en
---
# L3-Uncen-Merger-Omelette-RP-v0.1-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
A merge recipe inspired by [invisietch/EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B), combined with a merge technique known as merge densification ([grimjim/kunoichi-lemon-royale-v3-32K-7B](https://huggingface.co/grimjim/kunoichi-lemon-royale-v3-32K-7B)).

The recipe ended up being something I can only describe as making an omelette, hence the model name.

The models are scrambled with Dare Ties to induce a bit of randomness, then the Dare Ties merges are merged into each other with SLERP to repair any holes caused by Dare Ties, and finally a handful of high-creativity models are folded into the merge through merge densification (Task Arithmetic).

This model uses several of the top models from the UGI Leaderboard; I picked out a few of the top 8B models in each category. Most of the high-creativity models in the last step were found through Lewdiculous' account uploads.
**Downgraded to Stheno v3.2 due to issues with the model**
### Merge Method
Dare Ties, SLERP, and Task Arithmetic
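
For intuition, here is a rough PyTorch sketch of the drop-and-rescale step at the heart of Dare Ties; this is not mergekit's actual code, and the function name is made up for illustration. The `density` values in the configs below control how much of each task vector survives the scramble.

```python
import torch

def dare_delta(base: torch.Tensor, tuned: torch.Tensor, density: float) -> torch.Tensor:
    """Drop-And-REscale (the 'DARE' half of Dare Ties), simplified.

    Keep a random `density` fraction of the task vector (tuned - base)
    and rescale the survivors by 1/density so the expected magnitude of
    the delta is preserved. This random dropping is the 'scrambling'
    mentioned above.
    """
    delta = tuned - base
    mask = (torch.rand_like(delta) < density).to(delta.dtype)
    return delta * mask / density

torch.manual_seed(0)
base = torch.zeros(6)
tuned = torch.tensor([0.2, -0.1, 0.4, 0.0, -0.3, 0.5])
print(dare_delta(base, tuned, density=0.45))  # roughly 45% of entries survive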
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B)
* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
* [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5)
* [tannedbum/L3-Nymeria-Maid-8B](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)
* [tannedbum/L3-Nymeria-8B](https://huggingface.co/tannedbum/L3-Nymeria-8B)
* [ChaoticNeutrals/Hathor_RP-v.01-L3-8B](https://huggingface.co/ChaoticNeutrals/Hathor_RP-v.01-L3-8B)
* [ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B)
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
* [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
* [aifeifei798/llama3-8B-DarkIdol-1.0](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.0)
* [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B)
### Quants
[Static quants](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF) by mradermacher
## Secret Sauce
The following YAML configurations were used to produce this model:
### Scrambled-Egg-1
```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
    parameters:
      density: 0.45
      weight: 0.33
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      density: 0.75
      weight: 0.33
merge_method: dare_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  int8_mask: true
dtype: bfloat16
```
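
The Ties half then resolves conflicts between the surviving task vectors before they are added to the base model. A simplified sketch of the sign-election step follows; the real implementation also trims by magnitude and renormalizes, and this helper name is again hypothetical.

```python
import torch

def ties_combine(deltas: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    """Weighted sign election over task vectors, simplified.

    For each parameter, elect the sign of the weighted sum, zero out
    entries that disagree with it, and sum what remains. This reduces
    interference between the scrambled deltas.
    """
    stacked = torch.stack([w * d for d, w in zip(deltas, weights)])
    elected = torch.sign(stacked.sum(dim=0))
    agree = (torch.sign(stacked) == elected).to(stacked.dtype)
    return (stacked * agree).sum(dim=0)
```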
### Scrambled-Egg-2
```yaml
models:
  - model: [Unreleased psychology model]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      density: 0.35
      weight: 0.25
  - model: tannedbum/L3-Nymeria-Maid-8B
    parameters:
      density: 0.65
      weight: 0.25
merge_method: dare_ties
base_model: [Unreleased psychology model]
parameters:
  int8_mask: true
dtype: bfloat16
```
### Scrambled-Egg-3
```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
  - model: tannedbum/L3-Nymeria-8B
    parameters:
      density: 0.5
      weight: 0.35
  - model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B
    parameters:
      density: 0.4
      weight: 0.2
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
### Omelette-1
```yaml
models:
  - model: Casual-Autopsy/Scrambled-Egg-1
  - model: Casual-Autopsy/Scrambled-Egg-3
merge_method: slerp
base_model: Casual-Autopsy/Scrambled-Egg-1
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
  embed_slerp: true
dtype: bfloat16
```
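
The `t` list is a layer-depth gradient: `t` is the interpolation factor toward the second model (Scrambled-Egg-3 here), so it stays low at both ends of the stack and peaks at 0.6 in the middle layers. For reference, SLERP itself looks roughly like this sketch (not mergekit's code):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Walks along the arc between the tensors (treated as flat vectors)
    rather than the straight line between them, which better preserves
    their norms.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n, b_n = a_flat / a_flat.norm(), b_flat / b_flat.norm()
    omega = torch.arccos(torch.dot(a_n, b_n).clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < 1e-6:  # near-parallel: fall back to plain lerp
        return (1 - t) * a + t * b
    out = (torch.sin((1 - t) * omega) / so) * a_flat \
        + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```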
### Omelette-2
```yaml
models:
  - model: Casual-Autopsy/Omelette-1
  - model: Casual-Autopsy/Scrambled-Egg-2
merge_method: slerp
base_model: Casual-Autopsy/Omelette-1
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
  embed_slerp: true
dtype: bfloat16
```
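
This second curve is inverted relative to Omelette-1's: the middle layers keep more of Omelette-1 (`t` dips to 0.2) while the outer layers take more from Scrambled-Egg-2. As I understand it, mergekit spreads the anchor list linearly across the layer stack, roughly like so (an assumption about its internals, not a quote of them):

```python
import numpy as np

# Hypothetical sketch: interpolate the nine t anchors across the
# 32 hidden layers of a Llama-3-8B model.
anchors = [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
positions = np.linspace(0.0, 1.0, 32)
t_per_layer = np.interp(positions, np.linspace(0.0, 1.0, len(anchors)), anchors)
print(t_per_layer.round(3))
```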
### L3-Uncen-Merger-Omelette-RP-v0.1-8B
```yaml
models:
  - model: Casual-Autopsy/Omelette-2
  - model: ResplendentAI/Nymph_8B
    parameters:
      weight: 0.01
  - model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B
    parameters:
      weight: 0.01
  - model: Sao10K/L3-8B-Stheno-v3.1
    parameters:
      weight: 0.015
  - model: aifeifei798/llama3-8B-DarkIdol-1.0
    parameters:
      weight: 0.015
  - model: cgato/L3-TheSpice-8b-v0.8.3
    parameters:
      weight: 0.02
  - model: ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
    parameters:
      weight: 0.02
merge_method: task_arithmetic
base_model: Casual-Autopsy/Omelette-2
dtype: bfloat16
```
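
Task Arithmetic is the simplest method of the three: add each donor's task vector to the base, scaled by its weight. With weights of 0.01-0.02, each donor only nudges Omelette-2 slightly, which is what makes this usable as merge densification. A minimal sketch, treating state dicts as plain tensor dicts (again not mergekit's implementation):

```python
import torch

def task_arithmetic(base: dict[str, torch.Tensor],
                    donors: list[tuple[dict[str, torch.Tensor], float]]) -> dict[str, torch.Tensor]:
    """merged = base + sum_i w_i * (donor_i - base)."""
    merged = {name: p.clone() for name, p in base.items()}
    for state_dict, w in donors:
        for name in merged:
            merged[name] += w * (state_dict[name] - base[name])
    return merged
```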