---
license: llama3
---
### Compute for this merge was provided by KoboldAI.
### Important: Because this model is based on Cat-8B-Instruct-V1, it inherits that model's stop-sequence issue. Make sure to add `</s>` as a stop sequence in whatever backend or UI you are using. ###
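The backend normally handles this truncation once the stop sequence is registered. As a minimal illustration of what that setting does (the function name here is hypothetical, not any backend's actual API), cutting generated text at `</s>` amounts to:

```python
def apply_stop_sequences(text: str, stops=("</s>",)) -> str:
    """Truncate generated text at the earliest stop sequence.

    Illustrative sketch of what a backend does when `</s>` is
    registered as a stop sequence; not a specific library's API.
    """
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# Without the stop sequence registered, the model may run past </s>:
print(apply_stop_sequences("Sure, here you go.</s>assistant"))  # → Sure, here you go.
```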
The following models were used in this recipe:
- https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed-ft
- https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed
- https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1
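If you want to reproduce the merge, mergekit supports multi-stage recipes like the one below in a single YAML file (stages separated by `---`, with the intermediate model referenced by its `name:`). A sketch, assuming mergekit is installed and the recipe is saved as `recipe.yml` (file and directory names here are hypothetical):

```shell
# Install mergekit, then run the two-stage recipe end to end
pip install mergekit
mergekit-yaml recipe.yml ./Llama-3-Cat-15B --cuda
```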
Recipe used:
```yaml
merge_method: passthrough
dtype: bfloat16
vocab_type: bpe
slices:
  - sources:
      - layer_range: [0, 24]
        model: TheSkullery/llama-3-cat-8b-instruct-v1
  - sources:
      - layer_range: [8, 24]
        model: TheSkullery/llama-3-cat-8b-instruct-v1
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - layer_range: [8, 24]
        model: TheSkullery/llama-3-cat-8b-instruct-v1
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - layer_range: [24, 32]
        model: TheSkullery/llama-3-cat-8b-instruct-v1
name: LLaMa-3-Cat-Instruct-Unhealed-15B
---
merge_method: task_arithmetic
dtype: bfloat16
vocab_type: bpe
base_model: elinas/Llama-3-15B-Instruct-zeroed
models:
  - model: elinas/Llama-3-15B-Instruct-zeroed-ft
    parameters:
      weight: 1.0
  - model: LLaMa-3-Cat-Instruct-Unhealed-15B
    parameters:
      weight: 1.0
```
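The passthrough stage stacks the donor's 32 layers into 64 by repeating the [8, 24] range twice, which is where the extra depth of the 15B frankenmerge comes from. A quick sanity check of the layer bookkeeping:

```python
# Each slice contributes (end - start) layers; duplicated ranges are stacked.
slices = [(0, 24), (8, 24), (8, 24), (24, 32)]
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # → 64, up from the donor model's 32
```

Zero-scaling `o_proj` and `down_proj` in the duplicated slices makes the repeated layers initially act as near-identity passes, which the `task_arithmetic` stage then "heals" by adding the fine-tuned deltas on top of the zeroed base.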