---
base_model:
- grimjim/llama-3-merge-virt-req-8B
library_name: transformers
pipeline_tag: text-generation
tags:
- mergekit
- merge
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE

---

> [!IMPORTANT]
> Quants:<br>
> [mradermacher/Llama-3-8B-Irene-v0.2-GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Irene-v0.2-GGUF)<br>
> [mradermacher/Llama-3-8B-Irene-v0.2-i1-GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Irene-v0.2-i1-GGUF)<br>
> [Meggido/Llama-3-8B-Irene-v0.2-6.5bpw-h8-exl2](https://huggingface.co/Meggido/Llama-3-8B-Irene-v0.2-6.5bpw-h8-exl2)<br>


<img src="https://huggingface.co/Virt-io/Llama-3-8B-Irene-v0.2/resolve/main/Gnome.png">


# Llama-3-8B-Irene-v0.2

Mergin' o' models, ye say? Well, that be a task fit fer a clever gnome like meself! When combinin' similar models, I like to use model stock tae bring 'em together. And when I'm slerpin', I makes sure tae use a gradient that tapers off at both ends. That way, the model stays mostly uncensored, ye see.

Now, if I'm mergin' two uncensored models with Slerp, I just favors the one I want more o'! But when it comes tae makin' the gradient, I likes tae get wild and fluctuate between low and high values, ye know what I mean? It's like addin' a bit o' magic tae the mix, helps keep the results from gettin' too boring.

Course, this be just one gnome's way o' doin' things. I'm sure there be other clever methods out there!
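
Translated out of gnome-speak: SLERP interpolates between two models' weights along the arc of a hypersphere rather than a straight line, with a blend factor `t` (0 keeps the base model, 1 keeps the other) that can vary per layer. A minimal sketch of the operation for one flattened weight tensor (illustrative only, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns `a`, t=1 returns `b`; values in between follow the arc
    on the hypersphere instead of the straight chord, which tends to
    preserve more of the weights' directional structure than plain lerp.
    """
    a_unit = a / np.linalg.norm(a)
    b_unit = b / np.linalg.norm(b)
    dot = float(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    theta = np.arccos(dot)            # angle between the two weight vectors
    if theta < 1e-6:                  # nearly parallel: plain lerp is fine
        return (1.0 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```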


## Merge Details
### Merge Method

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* Mergekit/llama3-SOVL-v1
* [grimjim/llama-3-merge-virt-req-8B](https://huggingface.co/grimjim/llama-3-merge-virt-req-8B)
* NousResearch/Meta-Llama-3-8B-Instruct
* Locutusque/llama-3-neural-chat-v2.2-8B
* NousResearch/Hermes-2-Pro-Llama-3-8B
* rombodawg/Llama-3-8B-Instruct-Coder-v2
* aaditya/Llama3-OpenBioLLM-8B
* ResplendentAI/SOVL_Llama3_8B
* openlynn/Llama-3-Soliloquy-8B-v2
* grimjim/llama-3-merge-pp-instruct-8B
* ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: grimjim/llama-3-merge-virt-req-8B
        layer_range: [0, 32]
      - model: Mergekit/llama3-SOVL-v1
        layer_range: [0, 32]
merge_method: slerp
base_model: grimjim/llama-3-merge-virt-req-8B
parameters:
  t:
    - value: [0.5, 0.35, 0.55, 0.35, 0.75, 0.35, 0.90, 0.35, 0.75, 0.35, 0.55, 0.35, 0.5]
dtype: bfloat16
```
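
The 13 values under `t` form the gradient the gnome mentioned: mergekit spreads them across the 32 layers so each layer gets its own blend factor (lower values stay closer to the base model, higher values lean toward `Mergekit/llama3-SOVL-v1`). Expanding the anchors amounts to a linear interpolation, roughly like this (a sketch of the idea, not mergekit's exact code):

```python
import numpy as np

# Anchor values copied from the config above; a gradient list like this
# is stretched evenly across the layer range (here, 32 layers).
anchors = [0.5, 0.35, 0.55, 0.35, 0.75, 0.35, 0.90,
           0.35, 0.75, 0.35, 0.55, 0.35, 0.5]
num_layers = 32

positions = np.linspace(0.0, 1.0, num=len(anchors))
per_layer_t = np.interp(np.linspace(0.0, 1.0, num=num_layers),
                        positions, anchors)
print(np.round(per_layer_t, 2))  # one SLERP factor per layer
```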

### llama3-SOVL-v1
```yaml
slices:
  - sources:
      - model: Mergekit/SMART-CODER
        layer_range: [0, 32]
      - model: ResplendentAI/SOVL_Llama3_8B
        layer_range: [0, 32]
merge_method: slerp
base_model: Mergekit/SMART-CODER
parameters:
  t:
    - value: [0.90, 0.55, 0.75, 0.35, 0.45, 0.90, 0.25, 0.90, 0.45, 0.35, 0.75, 0.55, 0.90]
dtype: bfloat16
```

### SMART-CODER
```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: Locutusque/llama-3-neural-chat-v2.2-8B
  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
  - model: rombodawg/Llama-3-8B-Instruct-Coder-v2
  - model: aaditya/Llama3-OpenBioLLM-8B
merge_method: model_stock
base_model: NousResearch/Meta-Llama-3-8B-Instruct
dtype: bfloat16
```
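
`model_stock` is the Model Stock method (Jang et al., 2024): it averages the fine-tuned checkpoints, then interpolates that average back toward the base model, with the interpolation strength derived from the angles between the fine-tunes' weight deltas. A rough per-tensor sketch under that reading of the paper (not mergekit's actual code):

```python
import numpy as np

def model_stock(base: np.ndarray, finetunes: list[np.ndarray]) -> np.ndarray:
    """Core of the Model Stock merge for a single weight tensor.

    Averages the fine-tunes, then blends toward the base using
    t = k*cos(theta) / (1 + (k-1)*cos(theta)), where cos(theta) is the
    mean pairwise cosine similarity of the fine-tunes' weight deltas.
    """
    k = len(finetunes)
    deltas = [(ft - base).ravel() for ft in finetunes]
    cos = np.mean([
        np.dot(deltas[i], deltas[j])
        / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]))
        for i in range(k) for j in range(i + 1, k)
    ])
    t = k * cos / (1 + (k - 1) * cos)
    avg = sum(finetunes) / k
    return t * avg + (1 - t) * base
```

In this merge the base is NousResearch/Meta-Llama-3-8B-Instruct and the other four checkpoints listed above are the fine-tunes being averaged.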