---
base_model:
- ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
- gradientai/Llama-3-8B-Instruct-Gradient-1048k
- kromcomp/L3-Ceto-Epith-Humanity.A-v0.1-8B
- ghost-x/ghost-8b-beta-1608
- kromcomp/L3-Ceto-Epith-Humanity-v0.1-8B
- tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
- crestf411/L3.1-8B-sunfall-v0.6.1-dpo
- ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
library_name: transformers
tags:
- mergekit
- merge

---
More experiments that actually work, LMAO. Started straying away from Siithamo, at least model-list-wise. Stheno is just so chatty; idk how to tame it yet. Used components from my upcoming fatboy model as parts of this merge, and imo this is a hidden gem.

09.27.2024: Well, this model needed way more help than I thought it did. So, a month later, I come with a new iteration: [Aglow Vulca](https://huggingface.co/kromeurus/L3.1-Aglow-Vulca-v0.1-8B). Use that one, since this one has bugs.

### Quants

[OG Q8 GGUF](https://huggingface.co/kromquant/L3.1-Blazed-Vulca-v0.1c-8B-GGUFs) by me.
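
If you'd rather pull the quant programmatically, something like the sketch below works with `huggingface_hub`; the exact `.gguf` filename inside the repo is an assumption, so check the repo's file list first.

```python
# Minimal sketch: download the Q8 GGUF via huggingface_hub.
# The filename below is hypothetical; verify against the repo's files.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="kromquant/L3.1-Blazed-Vulca-v0.1c-8B-GGUFs",
    filename="L3.1-Blazed-Vulca-v0.1c-8B-Q8_0.gguf",  # hypothetical
)
print(gguf_path)
```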

### Model Details & Recommended Settings

(Still testing; details subject to change)

Follows instructs fairly well and doesn't stray much unless the temp is too high. Same as every other model I make with Formax (ty ArliAI), this merge will reflect the character card's quality; a shit card will give shit output and vice versa.

Generates slightly flowery, thought-process-style writing with human-ish dialogue. Chatty but not too chatty; will mimic previous text examples. Coherent up to 16k context (as tested).

Rec. Settings:
```
Template: L3
Temperature: 1.3
Min P: 0.1
Repeat Penalty: 1.05
Repeat Penalty Tokens: 256-512  # stick closer to 256
```
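
If you're running the GGUF through llama-cpp-python, the settings map roughly like this; the filename is the same assumption as above, and the repeat-penalty window is set on the constructor (`last_n_tokens_size`) rather than per call.

```python
# Minimal sketch: the recommended sampler settings in llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="L3.1-Blazed-Vulca-v0.1c-8B-Q8_0.gguf",  # hypothetical filename
    n_ctx=16384,             # coherent up to 16k as tested
    last_n_tokens_size=256,  # repeat penalty tokens; stick closer to 256
)

# The GGUF metadata should carry the L3 chat template, so
# create_chat_completion can format the prompt for us.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    temperature=1.3,
    min_p=0.1,
    repeat_penalty=1.05,
)
print(out["choices"][0]["message"]["content"])
```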

### Merge Theory

Will update later, too tired rn.

### Config

```yaml
# Stage 1 (formaxext.3.1): graft Formax's instruct-following early layers onto
# Gradient-1048k's long-context later layers, over the L3.1 Formax base.
models:
  - model: ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
    parameters:
      weight: [1, 1, 1, 1, 0, 0, 0, 0]
  - model: gradientai/Llama-3-8B-Instruct-Gradient-1048k
    parameters:
      weight: [0, 0, 0, 0, 1, 1, 1, 1]
base_model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
parameters:
  normalize: false
  int8_mask: true
merge_method: dare_linear
dtype: float32
out_dtype: bfloat16
tokenizer_source: base
name: formaxext.3.1
---
# Stage 2 (humplus): model_stock the Humanity variants and ghost onto the
# natsumura storytelling base.
models:
    - model: kromcomp/L3-Ceto-Epith-Humanity.A-v0.1-8B
    - model: ghost-x/ghost-8b-beta-1608
    - model: kromcomp/L3-Ceto-Epith-Humanity-v0.1-8B
base_model: tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
parameters:
  normalize: false
  int8_mask: true
merge_method: model_stock
dtype: float32
out_dtype: bfloat16
tokenizer_source: base
name: humplus
---
# Stage 3 (tusl3.1): dare_linear RPMax and sunfall into humplus, shifting
# weight toward humplus in the later layers.
models:
  - model: humplus
    parameters:
      weight: [0.01, 0.53, 0.9]
  - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
    parameters:
      weight: [0.55, 0.29, 0.1]
  - model: crestf411/L3.1-8B-sunfall-v0.6.1-dpo
    parameters:
      weight: [0.54, 0.28, 0.1]
base_model: humplus
parameters:
  normalize: false
  int8_mask: true
merge_method: dare_linear
dtype: float32
out_dtype: bfloat16
tokenizer_source: base
name: tusl3.1
---
# Stage 4 (mantusl3.1, final): breadcrumbs_ties formaxext.3.1 back into
# tusl3.1, tapering its influence toward the output layers.
models:
  - model: tusl3.1
    parameters:
      weight: [0.5, 0.75, 0.8, 0.9, 0.95]
      density: 0.9
      gamma: 0.01
  - model: formaxext.3.1
    parameters:
      weight: [0.5, 0.25, 0.2, 0.1, 0.05]
      density: 0.9
      gamma: 0.01
base_model: tusl3.1
tokenizer_source: union
parameters:
  normalize: false
  int8_mask: true
merge_method: breadcrumbs_ties
dtype: float32
out_dtype: bfloat16
name: mantusl3.1
```
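
Each `---`-separated document above is its own mergekit stage, with the `name:` output feeding the later stages. To reproduce a stage with mergekit's Python API, a rough sketch (per mergekit's README) looks like the block below; depending on your mergekit version, you may need to split the multi-document file into one file per stage and point the later stages at the local output paths.

```python
# Minimal sketch: run one stage of the merge with mergekit's Python API.
# Assumes the first document above is saved to formaxext.yaml (hypothetical).
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("formaxext.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./formaxext.3.1",  # later stages reference this output
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,
    ),
)
```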