---
base_model:
- djuna/L3.1-Romes-Ninomos
- DreadPoor/Aurora_faustus-8B-LINEAR
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- ValiantLabs/Llama3.1-8B-ShiningValiant2
- v000000/L3.1-Storniitova-8B
- vicgalle/Configurable-Hermes-3-Llama-3.1-8B
library_name: transformers
tags:
- mergekit
- merge
---
# L3.1-Noraian
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
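
A minimal usage sketch with 🤗 Transformers is shown below. The repo id is assumed from this card's location and is not confirmed by the card itself; adjust it to wherever the model is hosted.

```python
# Minimal inference sketch; "djuna/L3.1-Noraian" is an assumed repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "djuna/L3.1-Noraian"  # assumption: inferred from this card's folder name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```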
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method, with [vicgalle/Configurable-Hermes-3-Llama-3.1-8B](https://huggingface.co/vicgalle/Configurable-Hermes-3-Llama-3.1-8B) as the base model.
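
DELLA merges models by pruning each model's delta from the base (its task vector) with magnitude-biased random sampling, rescaling the survivors, and adding the weighted combination back to the base. The sketch below is a rough illustration of that idea under stated assumptions, not mergekit's implementation: it omits DELLA's sign-election step, and the function names are hypothetical. The `lambda`, `epsilon`, `weight`, and `density` arguments mirror the parameters in the configuration further down.

```python
# Illustrative DELLA-style merge sketch (not mergekit's code; sign election omitted).
import torch

def magprune(delta: torch.Tensor, density: float, epsilon: float) -> torch.Tensor:
    """Stochastically keep entries of a task vector, biased toward large magnitudes."""
    flat = delta.flatten()
    n = flat.numel()
    # Rank entries by magnitude: rank 0 = smallest |delta|, rank n-1 = largest.
    ranks = flat.abs().argsort().argsort().float() / max(n - 1, 1)
    # Keep-probabilities vary linearly over [density - epsilon, density + epsilon],
    # so larger deltas survive pruning more often.
    p_keep = (density - epsilon + 2.0 * epsilon * ranks).clamp(1e-3, 1.0)
    mask = torch.bernoulli(p_keep)
    # Rescale survivors by 1 / p_keep so the delta is preserved in expectation.
    return (mask * flat / p_keep).view_as(delta)

def della_merge(base_sd, model_sds, weights, densities, lam=0.78, epsilon=0.1):
    """Merge fine-tuned state dicts into the base, DELLA-style."""
    merged = {}
    for name, base_w in base_sd.items():
        deltas = [w * magprune(sd[name] - base_w, d, epsilon)
                  for sd, w, d in zip(model_sds, weights, densities)]
        # normalize: false in the config below means the weighted deltas are
        # summed as-is rather than divided by the sum of weights.
        merged[name] = base_w + lam * torch.stack(deltas).sum(dim=0)
    return merged
```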
### Models Merged
The following models were included in the merge:
* [djuna/L3.1-Romes-Ninomos](https://huggingface.co/djuna/L3.1-Romes-Ninomos)
* [DreadPoor/Aurora_faustus-8B-LINEAR](https://huggingface.co/DreadPoor/Aurora_faustus-8B-LINEAR) + [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)
* [ValiantLabs/Llama3.1-8B-ShiningValiant2](https://huggingface.co/ValiantLabs/Llama3.1-8B-ShiningValiant2)
* [v000000/L3.1-Storniitova-8B](https://huggingface.co/v000000/L3.1-Storniitova-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: della
dtype: bfloat16
tokenizer_source: union
parameters:
  lambda: 0.78
  epsilon: 0.1
  normalize: false
base_model: vicgalle/Configurable-Hermes-3-Llama-3.1-8B
models:
  - model: ValiantLabs/Llama3.1-8B-ShiningValiant2
    parameters:
      weight: 0.2
      density: 0.5
  - model: djuna/L3.1-Romes-Ninomos
    parameters:
      weight: 0.15
      density: 0.55
  - model: DreadPoor/Aurora_faustus-8B-LINEAR+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
    parameters:
      weight: 0.14
      density: 0.56
  - model: v000000/L3.1-Storniitova-8B
    parameters:
      weight: 0.2
      density: 0.5
```
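
To reproduce the merge, save the configuration above to a file (e.g. `config.yaml`) and run mergekit's CLI: `mergekit-yaml config.yaml ./output-model`. The output path is arbitrary, and options such as `--cuda` depend on your hardware and mergekit version. Note the `+` in the third entry, which applies the listed LoRA to the model before merging.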