Masterjp123 committed on
Commit
1ee77f5
1 Parent(s): b241e04

Update README.md

Files changed (1):
  README.md +57 -2
README.md CHANGED
@@ -4,19 +4,35 @@ base_model:
 - IkariDev/Athena-v4
 - TheBloke/Llama-2-13B-fp16
 - KoboldAI/LLaMA2-13B-Psyfighter2
+- KoboldAI/LLaMA2-13B-Erebus-v3
+- Henk717/echidna-tiefigther-25
+- Undi95/Unholy-v2-13B
 tags:
 - mergekit
 - merge
-
+- not-for-all-audiences
+- ERP
+- RP
+- Roleplay
+- uncensored
+license: llama2
+language:
+- en
 ---
 # merged
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
+I just used highly ranked models to try to get a better result. I also made sure that model incest would not be a BIG problem, by merging models that are fairly pure.
+
+These models CAN and WILL produce X-rated or harmful content, due to being heavily uncensored in an attempt to not limit them.
+
+
+
 ### Merge Method
 
-This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as a base.
+This model was merged using the [ties](https://arxiv.org/abs/2306.01708) merge method using [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as a base.
 
 ### Models Merged
 
@@ -24,11 +40,48 @@ The following models were included in the merge:
 * [Riiid/sheep-duck-llama-2-13b](https://huggingface.co/Riiid/sheep-duck-llama-2-13b)
 * [IkariDev/Athena-v4](https://huggingface.co/IkariDev/Athena-v4)
 * [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
+* [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3)
+* [Henk717/echidna-tiefigther-25](https://huggingface.co/Henk717/echidna-tiefigther-25)
+* [Undi95/Unholy-v2-13B](https://huggingface.co/Undi95/Unholy-v2-13B)
 
 ### Configuration
 
 The following YAML configuration was used to produce this model:
 
+For P1:
+```yaml
+base_model:
+  model:
+    path: TheBloke/Llama-2-13B-fp16
+dtype: bfloat16
+merge_method: task_arithmetic
+slices:
+- sources:
+  - layer_range: [0, 40]
+    model:
+      model:
+        path: TheBloke/Llama-2-13B-fp16
+  - layer_range: [0, 40]
+    model:
+      model:
+        path: Undi95/Unholy-v2-13B
+    parameters:
+      weight: 1.0
+  - layer_range: [0, 40]
+    model:
+      model:
+        path: Henk717/echidna-tiefigther-25
+    parameters:
+      weight: 0.45
+  - layer_range: [0, 40]
+    model:
+      model:
+        path: KoboldAI/LLaMA2-13B-Erebus-v3
+    parameters:
+      weight: 0.33
+```
+
+For P2:
 ```yaml
 base_model:
   model:
@@ -60,3 +113,5 @@ slices:
     parameters:
       weight: 0.33
 ```
+
+For the final merge
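
The P1 and P2 configs in this diff both use mergekit's `task_arithmetic` method, which adds each fine-tune's weighted delta from the base model back onto the base weights. A minimal numpy sketch of that arithmetic on toy tensors (model names and weights are taken from the P1 config above; the tensor values are made up, and the real merge runs this per parameter tensor across all 40 layers):

```python
import numpy as np

# Toy stand-ins for one flattened weight tensor per model (hypothetical values).
base = np.array([1.0, 2.0, 3.0])
finetunes = {
    # name: (fine-tuned tensor, merge weight from the P1 config)
    "Undi95/Unholy-v2-13B":          (np.array([1.5, 2.0, 2.0]), 1.0),
    "Henk717/echidna-tiefigther-25": (np.array([1.0, 3.0, 3.0]), 0.45),
    "KoboldAI/LLaMA2-13B-Erebus-v3": (np.array([0.0, 2.0, 4.0]), 0.33),
}

# Task arithmetic (https://arxiv.org/abs/2212.04089): each fine-tune
# contributes a "task vector" (its delta from the base), scaled by its weight.
merged = base.copy()
for tensor, weight in finetunes.values():
    merged += weight * (tensor - base)

print(merged)  # base plus the weighted sum of deltas
```

The ties method mentioned for the final merge builds on the same delta idea, but additionally trims low-magnitude deltas and resolves sign conflicts between models before summing.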