---
base_model:
- aifeifei798/llama3-8B-DarkIdol-1.0
- ChaoticNeutrals/Hathor_RP-v.01-L3-8B
- Casual-Autopsy/Omelette-2
- Sao10K/L3-8B-Stheno-v3.1
- ResplendentAI/Nymph_8B
- cgato/L3-TheSpice-8b-v0.8.3
- ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
library_name: transformers
tags:
- mergekit
- merge
---

# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the task arithmetic merge method, with Casual-Autopsy/Omelette-2 as the base.
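Task arithmetic builds the merged weights as the base model plus a weighted sum of "task vectors" (each fine-tuned model's parameters minus the base's). The sketch below illustrates that idea with plain PyTorch tensors standing in for model parameters; it is a minimal illustration of the method, not mergekit's actual implementation.

```python
import torch

def task_arithmetic_merge(base_sd, finetuned_sds, weights):
    """Illustrative task-arithmetic merge over parameter dicts:
    merged = base + sum_i( weight_i * (finetuned_i - base) )."""
    merged = {}
    for name, base_param in base_sd.items():
        delta = torch.zeros_like(base_param)
        for sd, w in zip(finetuned_sds, weights):
            # Each task vector is the fine-tune's parameters minus the base's.
            delta += w * (sd[name] - base_param)
        merged[name] = base_param + delta
    return merged

# Toy usage with random tensors standing in for real model weights.
base = {"layer.weight": torch.randn(4, 4)}
tunes = [{"layer.weight": base["layer.weight"] + torch.randn(4, 4)} for _ in range(2)]
merged = task_arithmetic_merge(base, tunes, weights=[0.01, 0.02])
print(merged["layer.weight"].shape)
```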
### Models Merged
The following models were included in the merge:
- aifeifei798/llama3-8B-DarkIdol-1.0
- ChaoticNeutrals/Hathor_RP-v.01-L3-8B
- Sao10K/L3-8B-Stheno-v3.1
- ResplendentAI/Nymph_8B
- cgato/L3-TheSpice-8b-v0.8.3
- ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Casual-Autopsy/Omelette-2
  - model: ResplendentAI/Nymph_8B
    parameters:
      weight: 0.01
  - model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B
    parameters:
      weight: 0.01
  - model: Sao10K/L3-8B-Stheno-v3.1
    parameters:
      weight: 0.015
  - model: aifeifei798/llama3-8B-DarkIdol-1.0
    parameters:
      weight: 0.015
  - model: cgato/L3-TheSpice-8b-v0.8.3
    parameters:
      weight: 0.02
  - model: ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
    parameters:
      weight: 0.02
merge_method: task_arithmetic
base_model: Casual-Autopsy/Omelette-2
dtype: bfloat16
```
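The resulting checkpoint can be loaded like any other Llama-3-8B model with transformers. A minimal sketch, assuming a hypothetical repository id (substitute wherever this merge is actually hosted) and that accelerate is installed so `device_map="auto"` works:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; replace with the actual location of this merge.
repo_id = "your-username/llama3-8b-merge"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

prompt = "Write a short greeting."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```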