|
--- |
|
base_model: |
|
- v000000/L3-8B-Poppy-Sunspice-experiment-c |
|
- Blackroot/Llama-3-8B-Abomination-LORA |
|
- ResplendentAI/BlueMoon_Llama3 |
|
library_name: transformers |
|
tags: |
|
- mergekit |
|
- merge |
|
- llama |
|
- not-for-all-audiences |
|
- nsfw |
|
--- |
|
|
|
### 🌙Llama-3-8B-Poppy-Moonfall-C |
|
|
|
A Llama-3-8B roleplay (RP) model.
|
|
|
Poppy Sunspice rebuilt with updated component models, combining SLERP and TIES merges with LoRAs. This is a new generation test.
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/gnXg-KPqUIuf8vqAOoVgh.png) |
|
|
|
# Thanks to mradermacher for the quants!
|
* [GGUF Q2-Q8](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF) |
|
|
|
# Quants |
|
* [GGUF Q8_0](https://huggingface.co/v000000/L3-8B-Poppy-Moonfall-C-Q8_0-GGUF) |
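
If you run the GGUF quants, a minimal loading sketch with llama-cpp-python is shown below. The repo id matches the Q8_0 link above; the filename glob and sampling values are assumptions to adjust for the file you actually download.

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python.
# Requires: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="v000000/L3-8B-Poppy-Moonfall-C-Q8_0-GGUF",  # Q8_0 repo linked above
    filename="*q8_0.gguf",  # glob for the quant file; adjust to the actual name
    n_ctx=8192,             # context window; Llama-3 supports 8k
)

out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```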
|
|
|
Update/Notice: Unlike the original, this model is somewhat prone to endless generations (it can fail to stop); the sketch below shows one way to guard against them.
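
One common guard, sketched here with transformers, is to register `<|eot_id|>` as an end-of-sequence token and cap `max_new_tokens`. The model id is inferred from this card's title.

```python
# Sketch: curb runaway generations by stopping on <|eot_id|> and capping new tokens.
# Model id inferred from the card title; adjust if the repo name differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v000000/L3-8B-Poppy-Moonfall-C"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
eot_id = tokenizer.convert_tokens_to_ids("<|eot_id|>")
output = model.generate(
    **inputs,
    max_new_tokens=512,                             # hard cap as a safety net
    eos_token_id=[tokenizer.eos_token_id, eot_id],  # stop on either terminator
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```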
|
|
|
# Merge
|
|
|
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). |
|
|
|
## Merge Details |
|
### Merge Method |
|
|
|
This model was merged using the SLERP merge method, with [v000000/L3-8B-Poppy-Sunspice-experiment-c](https://huggingface.co/v000000/L3-8B-Poppy-Sunspice-experiment-c) + [Blackroot/Llama-3-8B-Abomination-LORA](https://huggingface.co/Blackroot/Llama-3-8B-Abomination-LORA) as the base.
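
For intuition, SLERP interpolates weights along the great-circle arc between the two models rather than along the straight line, which preserves the geometry of the weight vectors better than plain averaging. A minimal numpy sketch of the idea (not mergekit's actual implementation):

```python
# Minimal sketch of spherical linear interpolation (SLERP) on flattened
# weight vectors; mergekit's real implementation handles more edge cases.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # Angle between the two weight vectors, via their normalized dot product.
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1 - t) * a + t * b
    # Interpolate along the arc: t=0 returns a, t=1 returns b.
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
```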
|
|
|
### Models Merged |
|
|
|
The following models were included in the merge (the `+` denotes a LoRA applied on top of the base model before merging):
|
* [v000000/L3-8B-Poppy-Sunspice-experiment-c](https://huggingface.co/v000000/L3-8B-Poppy-Sunspice-experiment-c) + [Blackroot/Llama-3-8B-Abomination-LORA](https://huggingface.co/Blackroot/Llama-3-8B-Abomination-LORA) |
|
* [v000000/L3-8B-Poppy-Sunspice-experiment-c](https://huggingface.co/v000000/L3-8B-Poppy-Sunspice-experiment-c) + [ResplendentAI/BlueMoon_Llama3](https://huggingface.co/ResplendentAI/BlueMoon_Llama3) |
|
|
|
### Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
```yaml |
|
slices:
  - sources:
      - model: v000000/L3-8B-Poppy-Sunspice-experiment-c+Blackroot/Llama-3-8B-Abomination-LORA
        layer_range: [0, 32]
      - model: v000000/L3-8B-Poppy-Sunspice-experiment-c+ResplendentAI/BlueMoon_Llama3
        layer_range: [0, 32]
merge_method: slerp
base_model: v000000/L3-8B-Poppy-Sunspice-experiment-c+Blackroot/Llama-3-8B-Abomination-LORA
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
random_seed: 0
|
``` |
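
To reproduce the merge, this config can be saved as `config.yaml` and passed to mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-model-directory`. The `t` lists are interpolated across the 32 layers: `t = 0` keeps the base model's weights and `t = 1` takes the other model's, so the self-attention and MLP blocks are blended in opposite directions through the depth, with a flat 0.5 for all other tensors.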
|
|
|
# Prompt Template

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
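
If the merge inherits Llama-3's chat template from its tokenizer (an assumption worth verifying), `apply_chat_template` should reproduce this format:

```python
# Sketch: build the Llama-3 style prompt via the tokenizer's chat template,
# assuming the merged model ships with the standard Llama-3 template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("v000000/L3-8B-Poppy-Moonfall-C")
messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Set the scene in a moonlit garden."},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return the formatted string, not token ids
    add_generation_prompt=True,  # append the assistant header for generation
)
print(prompt)
```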