---
license: cc-by-nc-4.0
tags:
- merge
---

Just an experimental 11B RP merge. I had wanted to try something like this for a while.

## Models used
- [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
- [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
- [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
- [KoboldAI/Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)
- [KoboldAI/Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3)

## Prompt template

Just use Alpaca. You can also try ChatML, but I only tested with Alpaca and it works fine.
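
For reference, the standard Alpaca format looks like this (adjust the system line to taste):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```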

## The secret sauce

lemolake-11B:
```
slices:
  - sources:
    - model: KatyTheCutie/LemonadeRP-4.5.3
      layer_range: [0, 24]
  - sources:
    - model: senseable/WestLake-7B-v2
      layer_range: [8, 32]

merge_method: passthrough
dtype: bfloat16
```

holobus-11B:
```
slices:
  - sources:
    - model: KoboldAI/Mistral-7B-Holodeck-1
      layer_range: [0, 24]
  - sources:
    - model: KoboldAI/Mistral-7B-Erebus-v3
      layer_range: [8, 32]

merge_method: passthrough
dtype: bfloat16
```
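
Both passthrough merges use the same frankenmerge layout: the first 24 layers of one 7B model stacked on layers 8–32 of the other, giving 48 layers total (roughly 11B parameters), the same layer split as Undi95's original 11B Mistral recipe.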

Vertilake-11B:
```
base_model: "Mistral-11B-v0.1"
models:
  - model: "Mistral-11B-v0.1"
    # no parameters necessary for base model
  - model: "Fimbulvetr-11B-v2"
    parameters:
      weight: 0.43
      density: 0.8
  - model: "lemolake-11b"
    parameters:
      weight: 0.6
      density: 0.8
  - model: "Holobus-11B"
    parameters:
      weight: 0.17
      density: 0.5
merge_method: dare_ties
parameters:
  int8_mask: true
dtype: bfloat16

```
I used [mergekit](https://github.com/cg123/mergekit) for all the merges described here.
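
For reference, a run might look like this with the `mergekit-yaml` CLI (config file names here are hypothetical). The two passthrough merges have to be built first, since the Vertilake config references their outputs:

```
# hypothetical file names for the configs above
mergekit-yaml lemolake-11b.yml ./lemolake-11B --cuda
mergekit-yaml holobus-11b.yml ./holobus-11B --cuda
mergekit-yaml vertilake-11b.yml ./Vertilake-11B --cuda
```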

Thanks to [Undi95](https://huggingface.co/Undi95) for the original [11B Mistral merge](https://huggingface.co/Undi95/Mistral-11B-OmniMix) recipe.