---
base_model:
- HiroseKoichi/Llama-3-8B-Stroganoff-4.0
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
library_name: transformers
tags:
- mergekit
- merge
---
# L3.1-RPganoff-8B-B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della_linear merge method, with [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1) as the base model.
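
della_linear works on task vectors: each donor model's delta from the base is stochastically pruned based on parameter magnitude, the survivors are rescaled to stay unbiased, and the pruned deltas are combined as a weighted linear sum on top of the base weights. `density` controls the fraction of delta parameters kept, `epsilon` how much drop probabilities vary with magnitude, and `lambda` scales the final merged delta. As a rough single-tensor sketch of that idea only (not mergekit's actual implementation, and with the exact probability windowing simplified):

```python
import torch

def magprune(delta: torch.Tensor, density: float, epsilon: float) -> torch.Tensor:
    """Magnitude-based stochastic pruning of a task vector (DELLA-style sketch).

    Larger-magnitude entries get lower drop probabilities. Probabilities vary
    around (1 - density) by up to epsilon, and surviving entries are rescaled
    by 1 / (1 - p) so the delta is unbiased in expectation.
    """
    flat = delta.flatten()
    n = flat.numel()
    # Rank entries by magnitude: rank 0 = smallest |value|.
    ranks = flat.abs().argsort().argsort().float()
    # Smallest-magnitude entries receive the highest drop probability.
    p_drop = (1.0 - density) + epsilon - 2.0 * epsilon * ranks / max(n - 1, 1)
    keep = torch.rand_like(flat) >= p_drop
    kept = torch.where(keep, flat / (1.0 - p_drop), torch.zeros_like(flat))
    return kept.reshape(delta.shape)

def della_linear(base_w, donor_ws, weights, density=0.5, epsilon=0.05, lam=1.0):
    """Weighted linear sum of pruned deltas, added back onto the base tensor."""
    merged_delta = torch.zeros_like(base_w)
    for w, coeff in zip(donor_ws, weights):
        merged_delta = merged_delta + coeff * magprune(w - base_w, density, epsilon)
    return base_w + lam * merged_delta
```

mergekit applies this per tensor, with the per-model `weight` coefficient drawn from the layer-wise schedule defined in the configuration below.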
### Models Merged
The following models were included in the merge:
* [HiroseKoichi/Llama-3-8B-Stroganoff-4.0](https://huggingface.co/HiroseKoichi/Llama-3-8B-Stroganoff-4.0)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
tokenizer_source: base
merge_method: della_linear
base_model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
models:
  - model: HiroseKoichi/Llama-3-8B-Stroganoff-4.0
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: up_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - value: 0
      density: 0.5
      epsilon: 0.05
      lambda: 1.0
```
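
To reproduce the merge, save this configuration as `config.yaml` and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-model-directory --cuda`. Note that mergekit interpolates the 11-entry `weight` lists across the layer stack, so the Stroganoff deltas are blended into roughly the middle layers only, and the trailing `- value: 0` leaves every unfiltered tensor (`q_proj`/`k_proj`, embeddings, norms, and so on) at the RPMax base values.

For inference, here is a minimal sketch using 🤗 Transformers; the repo id `djuna/L3.1-RPganoff-8B-B` is an assumption inferred from this card's location:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id, inferred from this model card.
model_id = "djuna/L3.1-RPganoff-8B-B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

# Llama-3.1-style chat prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```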