asiansoul committed
Commit 643911c
1 Parent(s): 5876fdc

Update README.md

Files changed (1)
  1. README.md +0 -58
README.md CHANGED
@@ -16,13 +16,6 @@ tags:
 ---
 # U-GO-GIRL-Remix-Llama-3-KoEn-8B
 
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
-## Merge Details
-### Merge Method
-
-This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as a base.
-
 ### Models Merged
 
 The following models were included in the merge:
@@ -33,54 +26,3 @@ The following models were included in the merge:
 * [rombodawg/Llama-3-8B-Instruct-Coder](https://huggingface.co/rombodawg/Llama-3-8B-Instruct-Coder)
 * [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)
 * [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
-
-### Configuration
-
-The following YAML configuration was used to produce this model:
-
-```yaml
-models:
-  - model: NousResearch/Meta-Llama-3-8B
-    # Base model providing a general foundation without specific parameters
-  - model: NousResearch/Meta-Llama-3-8B-Instruct
-    parameters:
-      density: 0.65
-      weight: 0.4
-
-  - model: asiansoul/U-GO-GIRL-Llama-3-KoEn-8B
-    parameters:
-      density: 0.6
-      weight: 0.3
-
-  - model: allganize/Llama-3-Alpha-Ko-8B-Instruct
-    parameters:
-      density: 0.55
-      weight: 0.1
-
-  - model: saltlux/Ko-Llama3-Luxia-8B
-    parameters:
-      density: 0.55
-      weight: 0.1
-
-  - model: nayohan/llama3-instrucTrans-enko-8b
-    parameters:
-      density: 0.55
-      weight: 0.1
-
-  - model: rombodawg/Llama-3-8B-Instruct-Coder
-    parameters:
-      density: 0.55
-      weight: 0.05
-
-  - model: NousResearch/Hermes-2-Theta-Llama-3-8B
-    parameters:
-      density: 0.55
-      weight: 0.05
-
-merge_method: dare_ties
-base_model: NousResearch/Meta-Llama-3-8B
-parameters:
-  int8_mask: true
-dtype: bfloat16
-
-```
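Note: the YAML removed above is still the recipe that produced this model. If you want to reproduce the DARE-TIES merge, the sketch below shows one way to do it, assuming mergekit's Python API (`MergeConfiguration`, `MergeOptions`, `run_merge`) and assuming the removed YAML has been saved locally; the config filename and output directory are placeholders, not part of the original card.

```python
# Hypothetical reproduction sketch; assumes `pip install mergekit` and that the
# removed YAML above was saved as ./dare_ties_config.yaml (placeholder name).
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./dare_ties_config.yaml"             # the removed merge configuration
OUTPUT_PATH = "./U-GO-GIRL-Remix-Llama-3-KoEn-8B"  # directory for the merged weights

# Parse and validate the DARE-TIES configuration.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; the source models are pulled from the Hugging Face Hub.
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is present
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```

The equivalent command-line invocation would be roughly `mergekit-yaml dare_ties_config.yaml ./U-GO-GIRL-Remix-Llama-3-KoEn-8B`.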
 