---
base_model:
- ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
- gradientai/Llama-3-8B-Instruct-Gradient-1048k
- ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
- Sao10K/L3-8B-Niitama-v1
- Sao10K/L3-8B-Stheno-v3.3-32K
- tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
- Sao10K/L3-8B-Tamamo-v1
- vicgalle/Roleplay-Hermes-3-Llama-3.1-8B
library_name: transformers
tags:
- mergekit
- merge
---
Attempt number three (five) at fixing the overly chatty, flowery language of v0.2.
### Quants
[OG Q8 GGUF](https://huggingface.co/kromquant/L3.1-Siithamo-v0.3-8B-GGUFs) by me.
### Details & Recommended Settings
(Still testing; details subject to change)
Rec. Settings:
```
Template: L3
Temperature: 1.3
Min P: 0.1
Repeat Penalty: 1.05
Repeat Penalty Tokens: 256
Dynamic Temperature: 0.9-1.05 at 0.1
Smooth Sampling: 0.18
```
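For local testing, the temperature / Min P / repeat penalty settings map directly onto llama-cpp-python. A minimal sketch, assuming the Q8 GGUF from the quants link has been downloaded (the filename is illustrative); dynamic temperature and smooth sampling are left to the frontend, e.g. SillyTavern:

```python
from llama_cpp import Llama

# Sketch only: the GGUF filename is an assumption; use the actual file
# from the quants link above.
llm = Llama(
    model_path="./L3.1-Siithamo-v0.3-8B-Q8_0.gguf",
    n_ctx=8192,              # context window, adjust to taste
    last_n_tokens_size=256,  # Repeat Penalty Tokens: 256
)

# create_chat_completion falls back to the chat template embedded in
# the GGUF, which matches "Template: L3" here.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short scene in a rainy city."}],
    max_tokens=256,
    temperature=1.3,      # Temperature: 1.3
    min_p=0.1,            # Min P: 0.1
    repeat_penalty=1.05,  # Repeat Penalty: 1.05
)
print(out["choices"][0]["message"]["content"])
```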
### Merge Theory
Can't be arsed to write this up properly rn; the comments in the config below cover the gist.
```yaml
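# Stage 1: passthrough frankenmerge. Layers 0-16 come from Formax,
# layers 16-32 from the 1048k-context Gradient Instruct model.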
slices:
- sources:
  - layer_range: [0, 16]
    model: ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
- sources:
  - layer_range: [16, 32]
    model: gradientai/Llama-3-8B-Instruct-Gradient-1048k
parameters:
  int8_mask: true
merge_method: passthrough
dtype: float32
out_dtype: bfloat16
name: formax.ext
---
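# Stage 2: DARE linear merge of the extended-context stack onto the
# Llama 3.1 Formax base, keeping the base tokenizer.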
models:
- model: formax.ext
  parameters:
    weight: 1
base_model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
parameters:
  normalize: false
  int8_mask: true
merge_method: dare_linear
dtype: float32
out_dtype: bfloat16
tokenizer_source: base
name: formaxext.3.1
---
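# Stage 3: DARE linear merge of Niitama (0.6) and Stheno v3.3 32K (0.5)
# onto the natsumura storytelling base.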
models:
- model: Sao10K/L3-8B-Niitama-v1
  parameters:
    weight: 0.6
- model: Sao10K/L3-8B-Stheno-v3.3-32K
  parameters:
    weight: 0.5
base_model: tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
parameters:
  normalize: false
  int8_mask: true
merge_method: dare_linear
dtype: float32
out_dtype: bfloat16
tokenizer_source: base
name: siith.3.1
---
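# Stage 4: model_stock blend of Tamamo and the siith.3.1 stack on a
# Roleplay-Hermes-3 base.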
models:
- model: Sao10K/L3-8B-Tamamo-v1
- model: siith.3.1
base_model: vicgalle/Roleplay-Hermes-3-Llama-3.1-8B
parameters:
  normalize: false
  int8_mask: true
merge_method: model_stock
dtype: float32
out_dtype: bfloat16
name: siithamol3.1
---
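# Stage 5: breadcrumbs merge. The per-layer weight ramps hand off from
# an even 0.5/0.5 split in the early layers to pure siithamol3.1 at the
# top, so formaxext.3.1 mostly shapes the lower layers.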
models:
- model: siithamol3.1
  parameters:
    weight: [0.5, 0.8, 0.9, 1]
    density: 0.9
    gamma: 0.01
- model: formaxext.3.1
  parameters:
    weight: [0.5, 0.2, 0.1, 0]
    density: 0.9
    gamma: 0.01
base_model: siithamol3.1
parameters:
  normalize: false
  int8_mask: true
merge_method: breadcrumbs
dtype: float32
out_dtype: bfloat16
name: siithamov3
```
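The block above is a multi-document mergekit config: each stage is a separate YAML document whose `name:` is referenced by later stages. A minimal sketch of running the stages in order with mergekit's Python API, assuming each intermediate is written to a local directory matching its `name:` so later references resolve as local paths (the filename and option values are assumptions):

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Sketch only: config filename is an assumption. Each stage is written
# to a local directory named after its `name:` so that later stages can
# resolve references like `formax.ext` as local paths.
with open("siithamo-v0.3.yaml") as f:
    stages = list(yaml.safe_load_all(f))

for stage in stages:
    out_path = stage.pop("name")  # e.g. formax.ext, formaxext.3.1, ...
    config = MergeConfiguration.model_validate(stage)
    run_merge(
        config,
        out_path,
        options=MergeOptions(cuda=True, copy_tokenizer=True),
    )
```

If your mergekit build ships the `mergekit-multi` command, it runs multi-document configs like this one directly.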