---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama 3
- Model stock
---
# Merge_XL_model_Stock
Of course, the model is still fully focused on uncensored, long-context roleplay and story writing.
By far the best iteration so far.

This model switches to Smaug-Llama-3-70B-Instruct-32K as the base model.
It is expanded with Giraffe and Gradient to keep a robust long-context window.
Higgs and Cat cover most of the story and RP aspects.
Hermes and Chinese Chat provide overall intelligence and understanding.

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using \Smaug-Llama-3-70B-Instruct-32K as a base.
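
For intuition, the sketch below illustrates the per-tensor interpolation behind Model Stock. It is a simplified, illustrative version (not mergekit's actual implementation), the helper name is made up, and it assumes at least two fine-tuned models:

```python
# Illustrative sketch of the Model Stock idea (arXiv:2403.19522), not
# mergekit's implementation: average the fine-tuned tensors, then pull the
# average back toward the base tensor with a ratio derived from the angle
# between the fine-tuned deltas ("task vectors"). Assumes k >= 2 models.
import torch
import torch.nn.functional as F

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    k = len(finetuned)
    deltas = [(w - base).flatten() for w in finetuned]

    # Average pairwise cosine similarity between the deltas (cos theta).
    cos_vals = [
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k)
        for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cos_vals).mean().clamp(-1.0, 1.0)

    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = (k * cos_theta) / (1 + (k - 1) * cos_theta)

    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```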

### Models Merged

The following models were included in the merge:
* \Llama-3-Giraffe-70B-Instruct
* \Llama-3-70B-Instruct-Gradient-262k
* \Hermes-2-Theta-Llama-3-70B
* \Higgs-Llama-3-70B
* \Llama3-70B-Chinese-Chat
* \Meta-LLama-3-Cat-A-LLama-70b

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
    - model: \Smaug-Llama-3-70B-Instruct-32K
    - model: \Llama-3-70B-Instruct-Gradient-262k
    - model: \Llama-3-Giraffe-70B-Instruct
    - model: \Higgs-Llama-3-70B
    - model: \Llama3-70B-Chinese-Chat
    - model: \Meta-LLama-3-Cat-A-LLama-70b
    - model: \Hermes-2-Theta-Llama-3-70B
merge_method: model_stock
base_model: \Smaug-Llama-3-70B-Instruct-32K
dtype: bfloat16
```
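
To reproduce the merge, the config above can be handed to mergekit. A rough sketch using mergekit's Python API follows; the file paths are placeholders and the exact option names may vary between mergekit versions:

```python
# Rough sketch of running the merge through mergekit's Python API; paths are
# placeholders and option names may differ between mergekit versions.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge-config.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Merge_XL_model_Stock",   # output directory
    options=MergeOptions(
        cuda=True,              # merge on GPU if available
        copy_tokenizer=True,    # carry over the base model's tokenizer
        lazy_unpickle=True,     # lower peak memory for 70B checkpoints
    ),
)
```

The `mergekit-yaml merge-config.yaml ./Merge_XL_model_Stock --cuda` CLI should accomplish the same thing in one command.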

Any suggestions are very welcome.
My personal sampling settings are:

* temp: 1
* temperature_last: true
* top_p: 1
* top_k: 0
* top_a: 0
* tfs: 1
* typical_p: 1
* min_p: 0.05
* rep_pen: 1.05
* rep_pen_range: 4096
* rep_pen_decay: 0
* rep_pen_slope: 1