---
base_model:
- ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
- gradientai/Llama-3-8B-Instruct-Gradient-1048k
- ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
- Sao10K/L3-8B-Niitama-v1
- Sao10K/L3-8B-Stheno-v3.3-32K
- tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
- Sao10K/L3-8B-Tamamo-v1
- vicgalle/Roleplay-Hermes-3-Llama-3.1-8B
library_name: transformers
tags:
- mergekit
- merge
---
Attempt number three (technically five) at fixing the overly chatty, flowery language of v0.2.

Updated version [here](https://huggingface.co/kromeurus/L3.1-Siithamo-v0.4-8B).
### Quants
[A few GGUFs](https://huggingface.co/kromquant/L3.1-Siithamo-v0.3-8B-GGUFs) by me.
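
To grab a single quant programmatically, here's a minimal sketch using huggingface_hub; the exact GGUF filename is an assumption, so check the repo's file list first.

```python
from huggingface_hub import hf_hub_download

# Assumed filename; see the repo's Files tab for the actual quant names.
path = hf_hub_download(
    repo_id="kromquant/L3.1-Siithamo-v0.3-8B-GGUFs",
    filename="L3.1-Siithamo-v0.3-8B.Q6_K.gguf",
)
print(path)
```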
### Details & Recommended Settings
Mid; same properties as v0.1, but suffers at long context.

Recommended settings:
```
Template: Llama 3
Temperature: 1.3
Min P: 0.1
Repeat Penalty: 1.05
Repeat Penalty Tokens: 256
Dynamic Temperature: 0.9-1.05 at 0.1
Smoothing Factor: 0.18
```
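
For llama.cpp users, here's a minimal sketch applying the portable subset of these settings with llama-cpp-python; the quant filename is an assumption, and Dynamic Temperature / Smoothing Factor are frontend samplers (e.g. SillyTavern) that this high-level API doesn't expose.

```python
from llama_cpp import Llama

# Assumed quant filename; check the GGUF repo for the real names.
llm = Llama(
    model_path="L3.1-Siithamo-v0.3-8B.Q6_K.gguf",
    n_ctx=8192,
    last_n_tokens_size=256,  # Repeat Penalty Tokens: 256
)

# The L3 chat template should be picked up from the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    temperature=1.3,
    min_p=0.1,
    repeat_penalty=1.05,
)
print(out["choices"][0]["message"]["content"])
```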
### Merge Theory
Can't be arsed to write this up properly rn; short version: stage one passthrough-merges the front half of Formax onto the back half of Gradient 1048k for context length, stage two grafts that stack onto Llama 3.1 Formax with dare_linear, stages three and four build the RP side (Niitama + Stheno over Natsumura via dare_linear, then model_stock with Tamamo on Roleplay-Hermes), and stage five folds the Formax branch into the RP branch with breadcrumbs, fading Formax out toward the later layers.
```yaml
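# Stage 1: passthrough frankenmerge; layers 0-16 from Formax, layers 16-32
# from Gradient 1048k for its long-context training.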
slices:
- sources:
- layer_range: [0, 16]
model: ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
- sources:
- layer_range: [16, 32]
model: gradientai/Llama-3-8B-Instruct-Gradient-1048k
parameters:
int8_mask: true
merge_method: passthrough
dtype: float32
out_dtype: bfloat16
name: formax.ext
---
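# Stage 2: dare_linear the stage-1 stack onto Llama 3.1 Formax as the base.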
models:
- model: formax.ext
parameters:
weight: 1
base_model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
parameters:
normalize: false
int8_mask: true
merge_method: dare_linear
dtype: float32
out_dtype: bfloat16
tokenizer_source: base
name: formaxext.3.1
---
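# Stage 3: dare_linear RP merge; Niitama and Stheno over the Natsumura
# storytelling base.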
models:
- model: Sao10K/L3-8B-Niitama-v1
parameters:
weight: 0.6
- model: Sao10K/L3-8B-Stheno-v3.3-32K
parameters:
weight: 0.5
base_model: tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
parameters:
normalize: false
int8_mask: true
merge_method: dare_linear
dtype: float32
out_dtype: bfloat16
tokenizer_source: base
name: siith.3.1
---
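# Stage 4: model_stock of Tamamo and the stage-3 RP merge on Roleplay-Hermes.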
models:
- model: Sao10K/L3-8B-Tamamo-v1
- model: siith.3.1
base_model: vicgalle/Roleplay-Hermes-3-Llama-3.1-8B
parameters:
normalize: false
int8_mask: true
merge_method: model_stock
dtype: float32
out_dtype: bfloat16
name: siithamol3.1
---
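# Stage 5: final breadcrumbs merge; the Formax branch fades out layer-by-layer
# while the RP branch ramps up to full weight.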
models:
- model: siithamol3.1
parameters:
weight: [0.5, 0.8, 0.9, 1]
density: 0.9
gamma: 0.01
- model: formaxext.3.1
parameters:
weight: [0.5, 0.2, 0.1, 0]
density: 0.9
gamma: 0.01
base_model: siithamol3.1
parameters:
normalize: false
int8_mask: true
merge_method: breadcrumbs
dtype: float32
out_dtype: bfloat16
name: siithamov3
```
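
This is a multi-document mergekit config: each YAML document is one merge stage, and the `name:` field lets later stages reference earlier outputs. Below is a hedged sketch of driving it from Python with mergekit's documented library API; it assumes that writing each stage's output to a directory matching its `name:` lets later stages resolve that name as a local path, which (if your install has it) the `mergekit-mega` entry point automates. Verify against your mergekit version.

```python
# Hedged sketch, not the author's actual invocation. Assumes mergekit's
# documented Python API (MergeConfiguration, MergeOptions, run_merge).
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("siithamo-v0.3.yml", encoding="utf-8") as fp:  # hypothetical filename
    stages = list(yaml.safe_load_all(fp))

for doc in stages:
    out_path = doc.pop("name")  # e.g. "formax.ext"; doubles as the output dir
    run_merge(
        MergeConfiguration.model_validate(doc),
        out_path=out_path,
        options=MergeOptions(
            cuda=torch.cuda.is_available(),
            copy_tokenizer=True,
        ),
    )
```

The last document emits `siithamov3`, the merge this card describes.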