---
base_model: v000000/SwallowMaid-8B-L3-SPPO-abliterated
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- llama
- japanese
- english
---

This model was converted to GGUF format from [`v000000/SwallowMaid-8B-L3-SPPO-abliterated`](https://huggingface.co/v000000/SwallowMaid-8B-L3-SPPO-abliterated) using llama.cpp.
Refer to the [original model card](https://huggingface.co/v000000/SwallowMaid-8B-L3-SPPO-abliterated) for more details on the model.

<!DOCTYPE html>
<html lang="en">
<head>
<style>

h1 {
  color: #900C3F;  /* Dark crimson */
  font-size: 1.25em; /* Larger font size */
  text-align: left; /* Left alignment */
  text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5); /* Shadow effect */
  background: linear-gradient(90deg, #900C3F, #fba8a8); /* Gradient background */
  -webkit-background-clip: text; /* Clip the background to the text */
  -webkit-text-fill-color: transparent; /* Make the text transparent */
}

</style>
</head>
<body>
General Instruct, RP, Q&A and Storywriting.

-------------------------------------------
<h1>SwallowMaid-8B-Llama-3-SPPO-abliterated</h1>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/lPwRHeL2qVjLgjnTH-Cvl.png)

"Llama-3-Instruct-8B-SPPO-Iter3" fully uncensored with 35% RP-Mix infused vector direction to gain some roleplay capabilities and prose while attempting to preserve the qualities of Meta's Llama-3-Instruct finetune.

# <h1>Merge</h1>

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

# <h1>Merge Details</h1>
# <h1>Merge Method</h1>

This model was merged using a multi-step merge method: a linear merge of the RP-Mix models (Part 1), a task-arithmetic infusion of that mix into SPPO-Iter3 (Part 2), and a final linear merge applying the abliteration LoRA (Part 3).

# <h1>Models Merged</h1>

The following models were included in the merge:
* [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)
* [UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3)
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [maldv/llama-3-fantasy-writer-8b](https://huggingface.co/maldv/llama-3-fantasy-writer-8b)
* [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1)
* [Nitral-AI/Hathor_Respawn-L3-8B-v0.8](https://huggingface.co/Nitral-AI/Hathor_Respawn-L3-8B-v0.8)

# <h1>Configuration</h1>

The following YAML configuration was used to produce this model:

```yaml
# Part 3, Apply abliteration (SwallowMaid-8B)
models:
  - model: sppo-rpmix-part2+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
    parameters:
      weight: 1.0
merge_method: linear
dtype: float32

# Part 2, infuse 35% swallow+rpmix to SPPO-Iter3 (sppo-rpmix-part2)
models:
  - model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
    parameters:
      weight: 1.0
  - model: rpmix-part1
    parameters:
      weight: 0.35
merge_method: task_arithmetic
base_model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
parameters:
    normalize: false
dtype: float32

# Part 1, linear merge rpmix (rpmix-part1)
models:
  - model: Nitral-AI/Hathor_Respawn-L3-8B-v0.8
    parameters:
      weight: 0.6
  - model: maldv/llama-3-fantasy-writer-8b
    parameters:
      weight: 0.1
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      weight: 0.4
  - model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
    parameters:
      weight: 0.15
merge_method: linear
dtype: float32
```
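At the tensor level, the merge methods above compose in a straightforward way. As a rough illustration (a toy sketch, not mergekit's actual implementation), a linear merge is a weighted sum of per-model parameters, and task arithmetic adds weighted task vectors (finetune minus base) onto the base model; with `normalize: false` the weights are applied as-is:

```python
import numpy as np

def linear_merge(tensors, weights, normalize=True):
    """Weighted sum of per-model parameter tensors.

    mergekit's linear method normalizes weights by default,
    so the effective weights sum to 1.
    """
    w = np.asarray(weights, dtype=np.float64)
    if normalize:
        w = w / w.sum()
    return sum(wi * t for wi, t in zip(w, tensors))

def task_arithmetic(base, tensors, weights):
    """base + sum_i w_i * (model_i - base); with normalize: false
    the weights are used unnormalized, as in Part 2 above."""
    return base + sum(w * (t - base) for w, t in zip(weights, tensors))

# Toy stand-ins for a single parameter tensor from each model
# (hypothetical values, chosen only to show the arithmetic)
hathor   = np.array([1.0, 2.0])
fantasy  = np.array([0.0, 1.0])
lumimaid = np.array([2.0, 0.0])
swallow  = np.array([1.0, 1.0])

# Part 1: linear rpmix with weights 0.6 / 0.1 / 0.4 / 0.15
rpmix = linear_merge([hathor, fantasy, lumimaid, swallow],
                     [0.6, 0.1, 0.4, 0.15])

# Part 2: infuse 35% of rpmix into SPPO-Iter3 via task arithmetic;
# SPPO's own task vector is zero since it is also the base model
sppo = np.array([1.5, 1.5])
merged = task_arithmetic(sppo, [sppo, rpmix], [1.0, 0.35])
```

Part 3 then applies the abliteration LoRA on top of the result; that step is a LoRA application rather than tensor arithmetic, so it is omitted here.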

# <h1>Prompt Template:</h1>
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>

```
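The template above can be assembled programmatically with plain string formatting. A minimal sketch (the helper name is ours, not part of any library):

```python
def build_prompt(system_prompt: str, user_input: str) -> str:
    """Assemble a Llama-3-style chat prompt, ending at the point
    where the assistant's reply should begin."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("You are a helpful assistant.", "Hello!")
```

When using a chat-aware runtime, prefer the chat template shipped with the model's tokenizer over hand-built strings; this sketch only mirrors the raw template shown above.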

</body>
</html>