---
base_model:
- Riiid/sheep-duck-llama-2-13b
- IkariDev/Athena-v4
- TheBloke/Llama-2-13B-fp16
- KoboldAI/LLaMA2-13B-Psyfighter2
- KoboldAI/LLaMA2-13B-Erebus-v3
- Henk717/echidna-tiefigther-25
- Undi95/Unholy-v2-13B
tags:
- mergekit
- merge
- not-for-all-audiences
- ERP
- RP
- Roleplay
- uncensored
license: llama2
language:
- en
---
# merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
I used highly ranked models to try to get a better result. I also made sure that model incest (merging models that already share too much ancestry) would not be a BIG problem, by merging models with relatively pure lineages.

This model CAN and WILL produce X-rated or harmful content, as it has been heavily uncensored in an attempt not to limit it.
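A minimal usage sketch with Hugging Face Transformers is below. The repo id `your-username/merged-13b` is a placeholder (this card does not state the final repo id), and the prompt and sampling settings are illustrative only:

```python
# Minimal sketch: loading the merged model with Transformers.
# "your-username/merged-13b" is a placeholder, not the actual repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/merged-13b"  # placeholder -- substitute the real repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge configs below
    device_map="auto",
)

prompt = "User: Write the opening scene of a fantasy story.\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```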



### Merge Method

This model was merged using the [ties](https://arxiv.org/abs/2306.01708) merge method, which reduces interference between models by trimming low-magnitude parameter deltas and electing a consensus sign before merging, using [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as a base.
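For intuition, the core TIES procedure can be sketched in a few lines of NumPy. This is a toy illustration of the paper's trim/elect/merge steps, not mergekit's actual implementation:

```python
# Toy sketch of TIES-merging (https://arxiv.org/abs/2306.01708).
import numpy as np

def ties_merge(base: np.ndarray, tuned: list[np.ndarray], density: float = 0.2) -> np.ndarray:
    deltas = np.stack([t - base for t in tuned])       # task vectors
    # 1) Trim: keep only the top-`density` fraction of each delta by magnitude.
    k = max(1, int(deltas[0].size * density))
    trimmed = np.zeros_like(deltas)
    for i, d in enumerate(deltas):
        thresh = np.sort(np.abs(d), axis=None)[-k]
        trimmed[i] = np.where(np.abs(d) >= thresh, d, 0.0)
    # 2) Elect: choose a consensus sign per parameter (sign of the summed deltas).
    sign = np.sign(trimmed.sum(axis=0))
    # 3) Disjoint merge: average only the deltas that agree with the elected sign.
    agree = np.where(np.sign(trimmed) == sign, trimmed, 0.0)
    counts = np.maximum((agree != 0.0).sum(axis=0), 1)  # avoid divide-by-zero
    return base + agree.sum(axis=0) / counts
```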

### Models Merged

The following models were included in the merge:
* [Riiid/sheep-duck-llama-2-13b](https://huggingface.co/Riiid/sheep-duck-llama-2-13b)
* [IkariDev/Athena-v4](https://huggingface.co/IkariDev/Athena-v4)
* [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
* [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3)
* [Henk717/echidna-tiefigther-25](https://huggingface.co/Henk717/echidna-tiefigther-25)
* [Undi95/Unholy-v2-13B](https://huggingface.co/Undi95/Unholy-v2-13B)

### Configuration

The following YAML configurations were used to produce this model:

For P1:
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: Undi95/Unholy-v2-13B
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Henk717/echidna-tiefigther-25
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Erebus-v3
    parameters:
      weight: 0.33
```
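P1 (and P2 below) use mergekit's `task_arithmetic` method rather than ties. For intuition about what the `weight` parameters mean, here is a toy NumPy sketch of [task arithmetic](https://arxiv.org/abs/2212.04089), not mergekit's actual implementation:

```python
# Toy sketch: merged = base + sum_i w_i * (model_i - base), applied per weight tensor.
import numpy as np

def task_arithmetic(base: np.ndarray, models: list[np.ndarray], weights: list[float]) -> np.ndarray:
    merged = base.copy()
    for m, w in zip(models, weights):
        merged += w * (m - base)  # add each weighted task vector
    return merged
```

So in P1, Unholy-v2 contributes its full task vector (weight 1.0), while echidna-tiefigther and Erebus are blended in at 0.45 and 0.33.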

For P2:
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Psyfighter2
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Riiid/sheep-duck-llama-2-13b
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: IkariDev/Athena-v4
    parameters:
      weight: 0.33
```
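Each of these intermediate configs can be run with the `mergekit-yaml` CLI, or programmatically via mergekit's documented Python entry points. The sketch below assumes the P1 config has been saved as `p1.yaml` (file and output paths are placeholders, not from this card):

```python
# Sketch: running the P1 merge with mergekit's Python API.
# Equivalent CLI: mergekit-yaml p1.yaml ./P1 --cuda
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("p1.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./P1",                     # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy the tokenizer into the output
        lazy_unpickle=True,              # reduce peak RAM while loading shards
    ),
)
```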

For the final merge: