---
base_model:
- kromeurus/L3-Blackened-Sunfall-15B
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- TheDrummer/Llama-3SOME-8B-v2
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/667eea5cdebd46a5ec4dcc3d/HzAhXawzvRnvlmatPrwld.jpeg)

Well, this merge didn't go as expected, at all. I went in trying to make an 8B downscale of [Blackfall Summanus](https://huggingface.co/kromeurus/L3-Blackfall-Summanus-v0.1-15B) and, a comical number of dumb mistakes later, ended up with this surprisingly solid merge.
I don't know either; I'm still processing how this model exists because I fat-fingered my keyboard. Anyways, here is Summanus Ara. Please see the original model card for more details.

### Quants

[OG Q8_0 GGUF](https://huggingface.co/kromeurus/L3-8.9B-Blackfall-SummanusAra-v0.1-Q8-GGUF) by me.

[GGUFs](https://huggingface.co/backyardai/L3-8.9B-Blackfall-SummanusAra-v0.1-GGUF) by [BackyardAI](https://huggingface.co/backyardai).

[GGUFs](https://huggingface.co/mradermacher/L3-8.9B-Blackfall-SummanusAra-v0.1-GGUF) by [mradermacher](https://huggingface.co/mradermacher).

[imatrix GGUFs](https://huggingface.co/mradermacher/L3-8.9B-Blackfall-SummanusAra-v0.1-i1-GGUF) by [mradermacher](https://huggingface.co/mradermacher).

### Details & Recommended Settings

Compared to the OG 15B version, BF Summanus Ara is surprisingly capable for its size while keeping most of the original attributes. Obviously it won't be as verbose or nuanced, given the natural limitations of a smaller model, though it's no less eloquent. It's a little more precise and coherent, and somehow sticks to the example text to a T, exactly like Aethora v2, despite that model not being added to the merge. It's not as chatty as expected given the additional models, and paces itself quite well unless prompted otherwise.

Overall, it's very close to the OG in all the important aspects. It does amazingly in RP and eRP, leaning narrative-driven and story-heavy for best results.

Rec. Settings:
```
Template: Model Default
Temperature: 1.3
Min P: 0.08
Repeat Penalty: 1.05
Repeat Penalty Tokens: 256
```
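If you run one of the GGUF quants above under llama.cpp, the recommended settings map onto its sampler flags roughly as sketched below. This is not from the original card: the `llama-cli` path and GGUF filename are placeholders, and "Repeat Penalty Tokens" is assumed to correspond to llama.cpp's `--repeat-last-n` penalty window.

```python
# Sketch: translate the recommended sampler settings into llama.cpp CLI flags.
# Binary path and model filename below are placeholders, not real files.
settings = {
    "--temp": "1.3",             # Temperature
    "--min-p": "0.08",           # Min P
    "--repeat-penalty": "1.05",  # Repeat Penalty
    "--repeat-last-n": "256",    # Repeat Penalty Tokens (penalty window)
}

cmd = ["./llama-cli", "-m", "L3-8.9B-Blackfall-SummanusAra-v0.1-Q8_0.gguf"]
for flag, value in settings.items():
    cmd += [flag, value]

print(" ".join(cmd))
```

Template stays at the model default (Llama 3 Instruct), which is set per frontend rather than on the command line.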

### Models Merged

* [kromeurus/L3-Blackened-Sunfall-15B](https://huggingface.co/kromeurus/L3-Blackened-Sunfall-15B)
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
* [TheDrummer/Llama-3SOME-8B-v2](https://huggingface.co/TheDrummer/Llama-3SOME-8B-v2)
* [crestf411/L3-8B-sunfall-v0.4-stheno-v3.2](https://huggingface.co/crestf411/L3-8B-sunfall-v0.4-stheno-v3.2)

I first made passthrough merges of the models listed above into separate parts, each carrying aspects of what I wanted in the final model, then combined those parts with a breadcrumbs merge as seen below.

### Configs

summanus.ds.9b:
```yaml
slices:
- sources:
  - layer_range: [0, 28]
    model: kromeurus/L3-Blackfall-Summanus-v0.1-15B
- sources:
  - layer_range: [56, 64]
    model: kromeurus/L3-Blackfall-Summanus-v0.1-15B
parameters:
  int8_mask: true
merge_method: passthrough
dtype: bfloat16
```
summanusara.atp1:
```yaml
slices:
- sources:
  - layer_range: [0, 8]
    model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
- sources:
  - layer_range: [8, 16]
    model: TheDrummer/Llama-3SOME-8B-v2
- sources:
  - layer_range: [16, 24]
    model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- sources:
  - layer_range: [22, 26]
    model: TheDrummer/Llama-3SOME-8B-v2
- sources:
  - layer_range: [24, 32]
    model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
parameters:
  int8_mask: true
merge_method: passthrough
dtype: bfloat16
```

final: 
```yaml
models:
  - model: parts/summanus.ds.9b
    # No parameters necessary for base model
  - model: parts/summanusara.atp1
    parameters:
      density: [0.33, 0.01, 0.33]
      weight: 0.8
      gamma: 0.001
merge_method: breadcrumbs
base_model: parts/summanus.ds.9b
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
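For reference, with mergekit installed, configs like these are run through its `mergekit-yaml` CLI: the two passthrough parts are built first, then the final breadcrumbs merge consumes them from the `parts/` directory. The config filenames and output paths here are illustrative, not the author's actual paths.

```shell
# Build the two passthrough parts, then the final breadcrumbs merge.
# Filenames and output directories are placeholders.
mergekit-yaml summanus.ds.9b.yml parts/summanus.ds.9b --cuda
mergekit-yaml summanusara.atp1.yml parts/summanusara.atp1 --cuda
mergekit-yaml final.yml ./L3-8.9B-Blackfall-SummanusAra-v0.1 --cuda
```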