---
base_model:
- SicariusSicariiStuff/Dusk_Rainbow
- ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
- Sao10K/L3-8B-Stheno-v3.2
- Nitral-AI/Hathor_Sofit-L3-8B-v1
- TheDrummer/Llama-3SOME-8B-v2
- hf-100/Llama-3-Spellbound-Instruct-8B-0.3
license: llama3
library_name: transformers
tags:
- nsfw
- not-for-all-audiences
- llama-3
- text-generation-inference
- mergekit
- merge
---

# Llama-3-8B-Stroganoff-4.0-Version-B

# Details
- **License**: [llama3](https://llama.meta.com/llama3/license/)
- **Instruct Format**: [llama-3](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/) or ChatML
- **Context Size**: 8K

## Models Used
- [Dusk_Rainbow](https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow)
- [ArliAI-Llama-3-8B-Formax-v1.0](https://huggingface.co/ArliAI/ArliAI-Llama-3-8B-Formax-v1.0)
- [L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
- [Hathor_Sofit-L3-8B-v1](https://huggingface.co/Nitral-AI/Hathor_Sofit-L3-8B-v1)
- [Llama-3SOME-8B-v2](https://huggingface.co/TheDrummer/Llama-3SOME-8B-v2)
- [Llama-3-Spellbound-Instruct-8B-0.3](https://huggingface.co/hf-100/Llama-3-Spellbound-Instruct-8B-0.3)

## Merge Config
```yaml
merge_method: della_linear
dtype: bfloat16
parameters:
  normalize: true
  int8_mask: true
tokenizer_source: union
base_model: SicariusSicariiStuff/Dusk_Rainbow
models:
  - model: ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
    parameters:
      density: 0.55
      weight: 1
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.55
      weight: 1
  - model: Nitral-AI/Hathor_Sofit-L3-8B-v1
    parameters:
      density: 0.55
      weight: 1
  - model: TheDrummer/Llama-3SOME-8B-v2
    parameters:
      density: 0.55
      weight: 1
  - model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
    parameters:
      density: 0.55
      weight: 1
```
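
## Reproducing the Merge
A minimal sketch of running the config above through [mergekit](https://github.com/arcee-ai/mergekit)'s Python API. The file name `stroganoff.yml` and the output path are placeholders, not part of the original card; saving the YAML block above to a file and running the `mergekit-yaml` CLI on it is the simpler alternative.

```python
# Sketch only: file names and output path are assumptions.
# Equivalent CLI: mergekit-yaml stroganoff.yml ./Llama-3-8B-Stroganoff-4.0-Version-B
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the della_linear config shown in the "Merge Config" section.
with open("stroganoff.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Llama-3-8B-Stroganoff-4.0-Version-B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # write a tokenizer alongside the merged weights
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```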
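
## Usage
A minimal loading sketch with `transformers`; the model path below is a placeholder, so substitute the actual local directory or Hub repo id where this merge is hosted. The tokenizer's chat template applies the Llama-3 instruct format noted under Details.

```python
# Sketch only: model_id is a placeholder for wherever this merge is stored.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./Llama-3-8B-Stroganoff-4.0-Version-B"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",  # matches the dtype used for the merge
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful creative-writing assistant."},
    {"role": "user", "content": "Write the opening paragraph of a short story set on a night train."},
]

# apply_chat_template formats the conversation with the model's Llama-3 instruct template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```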