---
base_model:
- saltlux/Ko-Llama3-Luxia-8B
- allganize/Llama-3-Alpha-Ko-8B-Instruct
- nayohan/llama3-instrucTrans-enko-8b
- NousResearch/Meta-Llama-3-8B
- asiansoul/U-GO-GIRL-Llama-3-KoEn-8B
- rombodawg/Llama-3-8B-Instruct-Coder
- NousResearch/Hermes-2-Theta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# U-GO-GIRL-Remix-Llama-3-KoEn-8B

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with NousResearch/Meta-Llama-3-8B as the base.
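To illustrate what DARE contributes to the merge, here is a minimal, self-contained sketch of its drop-and-rescale step on a single task vector (the difference between a fine-tuned model's weights and the base weights). The function name, the toy delta values, and the use of plain Python lists instead of real model tensors are all illustrative assumptions, not mergekit's actual implementation; the `density` value matches the one used for Meta-Llama-3-8B-Instruct in the config below.

```python
import random

def dare_sparsify(delta, density, rng):
    """Drop-And-REscale (DARE): keep each delta parameter with
    probability `density`, zero out the rest, and rescale the
    survivors by 1/density so the expected delta is preserved."""
    return [d / density if rng.random() < density else 0.0 for d in delta]

rng = random.Random(0)
# Hypothetical task vector: fine-tuned weights minus base weights.
delta = [0.2, -0.5, 0.1, 0.7, -0.3, 0.05, 0.4, -0.1]
sparse = dare_sparsify(delta, density=0.65, rng=rng)
```

After sparsification, TIES resolves sign conflicts between the surviving deltas of the different models before they are summed back onto the base weights.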
### Models Merged
The following models were included in the merge:
- saltlux/Ko-Llama3-Luxia-8B
- allganize/Llama-3-Alpha-Ko-8B-Instruct
- nayohan/llama3-instrucTrans-enko-8b
- asiansoul/U-GO-GIRL-Llama-3-KoEn-8B
- rombodawg/Llama-3-8B-Instruct-Coder
- NousResearch/Hermes-2-Theta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # Base model providing a general foundation without specific parameters
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.65
      weight: 0.4
  - model: asiansoul/U-GO-GIRL-Llama-3-KoEn-8B
    parameters:
      density: 0.6
      weight: 0.3
  - model: allganize/Llama-3-Alpha-Ko-8B-Instruct
    parameters:
      density: 0.55
      weight: 0.1
  - model: saltlux/Ko-Llama3-Luxia-8B
    parameters:
      density: 0.55
      weight: 0.1
  - model: nayohan/llama3-instrucTrans-enko-8b
    parameters:
      density: 0.55
      weight: 0.1
  - model: rombodawg/Llama-3-8B-Instruct-Coder
    parameters:
      density: 0.55
      weight: 0.05
  - model: NousResearch/Hermes-2-Theta-Llama-3-8B
    parameters:
      density: 0.55
      weight: 0.05
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
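Note that the relative `weight` values in the config sum to 1.1 rather than 1.0. Assuming mergekit's default behavior for ties-based methods (`normalize: true`), the weights are rescaled to sum to 1 before the sparsified task vectors are combined. The sketch below computes the resulting effective mixing proportions; the short model names are abbreviations of the repository IDs above.

```python
# Relative weights copied from the YAML config above.
weights = {
    "Meta-Llama-3-8B-Instruct": 0.4,
    "U-GO-GIRL-Llama-3-KoEn-8B": 0.3,
    "Llama-3-Alpha-Ko-8B-Instruct": 0.1,
    "Ko-Llama3-Luxia-8B": 0.1,
    "llama3-instrucTrans-enko-8b": 0.1,
    "Llama-3-8B-Instruct-Coder": 0.05,
    "Hermes-2-Theta-Llama-3-8B": 0.05,
}
total = sum(weights.values())  # 1.1, not 1.0
# Effective proportions after mergekit's default normalization.
normalized = {name: w / total for name, w in weights.items()}
```

To reproduce the merge, save the configuration as `config.yaml` and run it through mergekit's `mergekit-yaml` command-line tool.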