---
base_model:
- v000000/L3-8B-Poppy-Sunspice-experiment-c
- Blackroot/Llama-3-8B-Abomination-LORA
- v000000/L3-8B-Poppy-Sunspice-experiment-c
- ResplendentAI/BlueMoon_Llama3
library_name: transformers
tags:
- mergekit
- merge
- llama
- not-for-all-audiences
---

### Llama-3-8B-Poppy-Moonfall-C

A roleplay (RP) model.

Poppy Sunspice rebuilt with updated component models and LoRAs. A new generation of the line.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/gnXg-KPqUIuf8vqAOoVgh.png)

# Thanks to mradermacher for the quants!
* [GGUF Q2-Q8](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF)

# Quants
* [GGUF Q8_0](https://huggingface.co/v000000/L3-8B-Poppy-Moonfall-C-Q8_0-GGUF)
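
For local inference, the Q8_0 quant can be loaded straight from the Hub with `llama-cpp-python`; a hedged sketch (the GGUF filename glob is an assumption, so check the repo's file list for the exact name):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Filename glob is an assumption; adjust to the actual file in the repo.
llm = Llama.from_pretrained(
    repo_id="v000000/L3-8B-Poppy-Moonfall-C-Q8_0-GGUF",
    filename="*q8_0.gguf",
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```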

# Merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
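
Intuitively, SLERP blends each pair of weight tensors along the arc between them rather than along a straight line, which preserves their norm geometry better than plain averaging. A minimal sketch of the idea (not mergekit's exact implementation; the parallel-fallback threshold is illustrative):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    v0_f, v1_f = v0.flatten().float(), v1.flatten().float()
    # Angle between the tensors, from their normalized dot product.
    dot = torch.dot(v0_f / (v0_f.norm() + eps), v1_f / (v1_f.norm() + eps))
    theta = torch.acos(torch.clamp(dot, -1.0, 1.0))
    if theta.abs() < 1e-4:
        # Nearly parallel tensors: fall back to ordinary linear interpolation.
        return (1 - t) * v0 + t * v1
    sin_theta = torch.sin(theta)
    w0 = torch.sin((1 - t) * theta) / sin_theta
    w1 = torch.sin(t * theta) / sin_theta
    return (w0 * v0_f + w1 * v1_f).reshape(v0.shape).to(v0.dtype)
```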

### Models Merged

The following models were included in the merge:
* [v000000/L3-8B-Poppy-Sunspice-experiment-c](https://huggingface.co/v000000/L3-8B-Poppy-Sunspice-experiment-c) + [Blackroot/Llama-3-8B-Abomination-LORA](https://huggingface.co/Blackroot/Llama-3-8B-Abomination-LORA)
* [v000000/L3-8B-Poppy-Sunspice-experiment-c](https://huggingface.co/v000000/L3-8B-Poppy-Sunspice-experiment-c) + [ResplendentAI/BlueMoon_Llama3](https://huggingface.co/ResplendentAI/BlueMoon_Llama3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: v000000/L3-8B-Poppy-Sunspice-experiment-c+Blackroot/Llama-3-8B-Abomination-LORA
        layer_range: [0, 32]
      - model: v000000/L3-8B-Poppy-Sunspice-experiment-c+ResplendentAI/BlueMoon_Llama3
        layer_range: [0, 32]
merge_method: slerp
base_model: v000000/L3-8B-Poppy-Sunspice-experiment-c+Blackroot/Llama-3-8B-Abomination-LORA
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
random_seed: 0
```
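
The `t` lists are per-layer gradients: each list of anchor values is expanded across the 32 layer blocks (piecewise-linearly, which is the assumed mergekit behavior), with t = 0 keeping the base-model tensor and t = 1 taking the other source. The `self_attn` and `mlp` curves mirror each other, so attention and MLP weights lean toward opposite sides at any given depth; all remaining tensors use the flat `t: 0.5`. A small sketch of the expansion:

```python
import numpy as np

def expand_gradient(values, num_layers=32):
    """Expand a mergekit-style gradient list to one t value per layer
    (piecewise-linear interpolation between evenly spaced anchors)."""
    anchors = np.linspace(0, num_layers - 1, num=len(values))
    return np.interp(np.arange(num_layers), anchors, values)

print(expand_gradient([0, 0.5, 0.3, 0.7, 1]))  # per-layer t for self_attn
print(expand_gradient([1, 0.5, 0.7, 0.3, 0]))  # per-layer t for mlp
```

Given this file, the merge should be reproducible with mergekit's `mergekit-yaml` entry point pointed at the configuration.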

# Prompt Template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>

```
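
This is the standard Llama-3 Instruct layout, so the tokenizer's bundled chat template should reproduce it; a hedged sketch (the repo id is inferred from the quant links above, and the template is assumed to ship with the tokenizer):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("v000000/L3-8B-Poppy-Moonfall-C")

messages = [
    {"role": "system", "content": "You are a vivid roleplay partner."},
    {"role": "user", "content": "Describe the moonlit garden."},
]
# add_generation_prompt appends the assistant header so the model continues as {output}.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```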