---
tags:
- merge
- mergekit
- lazymergekit
- not-for-all-audiences
- nsfw
- rp
- roleplay
- role-play
license: llama3
language:
- en
library_name: transformers
pipeline_tag: text-generation
base_model:
- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
- Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
- tannedbum/L3-Nymeria-8B
- migtissera/Llama-3-8B-Synthia-v3.5
- Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
- tannedbum/L3-Nymeria-Maid-8B
- Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
- aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
- Nitral-AI/Hathor_Stable-v0.2-L3-8B
- Sao10K/L3-8B-Stheno-v3.1
---
# Merge

This is a merge of pre-trained language models created using mergekit.
## Merge Details

The goal of this merge was to make an RP model better suited for role-plays with heavy themes, such as (but not limited to):
- Mental illness
- Self-harm
- Trauma
- Suicide
I hated how RP models tended to be overly positive and hopeful in role-plays involving such themes, but thanks to failspy/Llama-3-8B-Instruct-MopeyMule, this problem has been lessened considerably.

If you're an enjoyer of savior/reverse-savior type role-plays like myself, then this model is for you.
## Usage Info

This model is meant to be used with the asterisks/quotes RP format (e.g. *She looks away.* "I don't want to talk about it."); any other format is likely to cause issues.
## Quants
## Merge Method

This model was merged using several DARE TIES merges (one of which incorporates a model containing psychology data), which were then tied together with SLERP merges and finished with a Task Arithmetic merge.
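Conceptually, a Task Arithmetic merge adds the weighted difference ("task vector") between each model and the base back onto the base. A toy numpy sketch of the idea, not mergekit's actual implementation:

```python
import numpy as np

def task_arithmetic(base, models, weights):
    """Toy Task Arithmetic merge: add each model's weighted
    delta from the base back onto the base."""
    merged = base.copy()
    for m, w in zip(models, weights):
        merged += w * (m - base)  # (m - base) is the task vector
    return merged

base = np.array([1.0, 1.0, 1.0])
model_a = np.array([2.0, 1.0, 1.0])  # differs in the first weight
model_b = np.array([1.0, 0.0, 1.0])  # differs in the second weight
merged = task_arithmetic(base, [model_a, model_b], [0.5, 0.25])
print(merged.tolist())  # [1.5, 0.75, 1.0]
```

With a weight of 1.0 and a single model, the merge simply reproduces that model; small weights (like the 0.01–0.04 used in the final step below) nudge the base only slightly toward each donor.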
## Models Merged

The following models were included in the merge:
- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
- Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
- tannedbum/L3-Nymeria-8B
- migtissera/Llama-3-8B-Synthia-v3.5
- Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
- tannedbum/L3-Nymeria-Maid-8B
- Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
- aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
- Nitral-AI/Hathor_Stable-v0.2-L3-8B
- Sao10K/L3-8B-Stheno-v3.1
## Secret Sauce

The following YAML configurations were used to produce this model:
### Umbral-1

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      density: 0.45
      weight: 0.4
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
    parameters:
      density: 0.65
      weight: 0.1
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
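For intuition about the `density` and `weight` parameters: in DARE, each entry of a model's delta from the base survives with probability `density` and is rescaled by `1/density` to keep the expected contribution unchanged, and the result is then scaled by `weight`. A toy numpy sketch of just the DARE step (real `dare_ties` also resolves sign conflicts between models, TIES-style):

```python
import numpy as np

rng = np.random.default_rng(0)

def dare_delta(base, model, density, weight):
    """Toy DARE step: randomly drop delta entries with
    probability (1 - density), rescale survivors by 1/density,
    then apply the merge weight."""
    delta = model - base
    mask = rng.random(delta.shape) < density
    return weight * np.where(mask, delta / density, 0.0)

base = np.zeros(8)
model = np.ones(8)
merged = base + dare_delta(base, model, density=0.45, weight=0.4)
# each entry is either 0 or 0.4/0.45; on average the merge
# contributes ~0.4 (the merge weight) per parameter
```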
### Umbral-2

```yaml
models:
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
  - model: tannedbum/L3-Nymeria-8B
    parameters:
      density: 0.45
      weight: 0.25
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      density: 0.65
      weight: 0.25
merge_method: dare_ties
base_model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
parameters:
  int8_mask: true
dtype: bfloat16
```
### Umbral-3

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
  - model: tannedbum/L3-Nymeria-Maid-8B
    parameters:
      density: 0.4
      weight: 0.3
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
    parameters:
      density: 0.6
      weight: 0.2
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
### Mopey-Omelette

```yaml
models:
  - model: Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
    parameters:
      weight: 0.15
merge_method: task_arithmetic
base_model: Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
dtype: bfloat16
```
### Umbral-Mind-1

```yaml
models:
  - model: Casual-Autopsy/Umbral-1
  - model: Casual-Autopsy/Umbral-3
merge_method: slerp
base_model: Casual-Autopsy/Umbral-1
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
embed_slerp: true
dtype: bfloat16
```
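SLERP interpolates along the arc between two weight vectors rather than along a straight line, and the list of `t` values above defines a per-layer gradient of the interpolation factor (e.g. 0.7 at the ends pulls the outermost layers further toward the second model than the middle layers). A minimal numpy sketch of the interpolation itself, not mergekit's implementation:

```python
import numpy as np

def slerp(t, a, b):
    """Spherical linear interpolation between two vectors."""
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):  # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(slerp(0.0, a, b).tolist())  # [1.0, 0.0] -> purely the first model
print(slerp(0.5, a, b))           # halfway along the arc between them
```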
### Umbral-Mind-2

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-1
  - model: Casual-Autopsy/Umbral-2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-1
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
embed_slerp: true
dtype: bfloat16
```
### Umbral-Mind-3

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-2
  - model: Casual-Autopsy/Mopey-Omelette
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-2
parameters:
  t:
    - value: [0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2]
embed_slerp: true
dtype: bfloat16
```
### L3-Umbral-Mind-RP-v2.0-8B

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-3
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
    parameters:
      weight: 0.04
  - model: aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
    parameters:
      weight: 0.02
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      weight: 0.02
  - model: Sao10K/L3-8B-Stheno-v3.1
    parameters:
      weight: 0.01
merge_method: task_arithmetic
base_model: Casual-Autopsy/Umbral-Mind-3
dtype: bfloat16
```
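To reproduce any stage, save its YAML to a file and run it through mergekit's `mergekit-yaml` CLI. A sketch (the config here is a truncated excerpt of the final step shown above; check the mergekit documentation for the flags available in your version):

```shell
# Assumes mergekit is installed: pip install mergekit
cat > umbral-mind-final.yml <<'EOF'
models:
  - model: Casual-Autopsy/Umbral-Mind-3
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
    parameters:
      weight: 0.04
merge_method: task_arithmetic
base_model: Casual-Autopsy/Umbral-Mind-3
dtype: bfloat16
EOF
# Then run the merge (needs the models downloadable and ~16 GB+ of disk):
# mergekit-yaml umbral-mind-final.yml ./merged-model --cuda
```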