---
base_model:
- beomi/Llama-3-Open-Ko-8B
- aaditya/Llama3-OpenBioLLM-8B
- MLP-KTLim/llama-3-Korean-Bllossom-8B
- maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
- Locutusque/llama-3-neural-chat-v2.2-8B
- asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B
library_name: transformers
tags:
- mergekit
- merge

---
# U-GO-GIRL-Llama-3-KoEn-8B

<a href="https://ibb.co/cr8X8zd"><img src="https://i.ibb.co/Tg0q0z5/ugoo.png" alt="ugoo" border="0"></a>

This model was merged using the DARE TIES merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base model.

### Models Merged

The following models were included in the merge:
* [asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B](https://huggingface.co/asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B)
* [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)
* [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B)
* [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B)
* [maum-ai/Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [Locutusque/llama-3-neural-chat-v2.2-8B](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8B)
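
### Usage

Since the card is tagged `transformers`, the merged checkpoint loads like any other Llama-3 8B model. The snippet below is a minimal sketch, not a verified recipe: the repository id is assumed from the model name, and it presumes the uploaded tokenizer carries the Llama-3 instruct chat template inherited from the instruct models above.

```python
# Minimal inference sketch. The repo id and chat-template availability are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asiansoul/U-GO-GIRL-Llama-3-KoEn-8B"  # assumed repo id; adjust to the actual upload path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bfloat16 dtype used for the merge
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant. Answer in Korean when the user writes in Korean."},
    {"role": "user", "content": "Hello, please introduce yourself briefly."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```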

### Configuration

The following YAML configuration was used to produce this model. Under `dare_ties`, `density` is the fraction of each model's delta from the base that survives DARE's random sparsification, and `weight` scales that model's contribution to the merged parameters:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # Base model providing a general foundation without specific parameters
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.65
      weight: 0.4

  - model: asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B
    parameters:
      density: 0.6
      weight: 0.3

  - model: maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
    parameters:
      density: 0.55
      weight: 0.1

  - model: beomi/Llama-3-Open-Ko-8B
    parameters:
      density: 0.55
      weight: 0.1

  - model: MLP-KTLim/llama-3-Korean-Bllossom-8B
    parameters:
      density: 0.55
      weight: 0.1

  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.55
      weight: 0.05

  - model: Locutusque/llama-3-neural-chat-v2.2-8B
    parameters:
      density: 0.55
      weight: 0.05

merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16

```
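
To reproduce the merge from this configuration, mergekit can also be driven from Python. The sketch below is illustrative only: it assumes mergekit's documented Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`) and a hypothetical local copy of the YAML saved as `u-go-girl.yaml`; running the `mergekit-yaml` CLI on the same file is the more common route.

```python
# Sketch of re-running the merge via mergekit's Python API (entry points assumed from
# the mergekit README); the mergekit-yaml CLI on the same config file is equivalent.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Hypothetical filename holding the YAML configuration shown above.
with open("u-go-girl.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./U-GO-GIRL-Llama-3-KoEn-8B",  # output directory for the merged weights
    options=MergeOptions(
        cuda=torch.cuda.is_available(),      # merge on GPU when available
        copy_tokenizer=True,                 # copy the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

Note that the model list in the configuration includes the base (`NousResearch/Meta-Llama-3-8B`) without parameters; `dare_ties` measures every other model as a delta against it before sparsifying and recombining.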