---

base_model:
- mlabonne/NeuralDaredevil-8B-abliterated
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Halu-OAS-8B-Llama3
library_name: transformers
tags:
- mergekit
- merge
license: llama3
license_link: LICENSE
pipeline_tag: text-generation
---

# Llama-3-Oasis-v1-OAS-8B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

Each merge component was already subjected to Orthogonal Activation Steering (OAS) to mitigate refusals. The resulting text completion model should be versatile for both positive and negative roleplay scenarios and storytelling. Care should be taken when using this model.

- mlabonne/NeuralDaredevil-8B-abliterated: high MMLU, for reasoning
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS: focus on roleplay
- Hastagaras/Halu-OAS-8B-Llama3: focus on storytelling

Built with Meta Llama 3.

## Merge Details
### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) as the base model.
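
In task arithmetic, each donor model contributes a weighted "task vector" (its parameter-wise difference from the base), which is added on top of the base weights. Below is a minimal, hypothetical sketch of that formula in plain Python; toy scalar parameters stand in for real tensors, and the function name is illustrative, not mergekit's API:

```python
def task_arithmetic_merge(base, donors, weights):
    """Return merged = base + sum_i weights[i] * (donors[i] - base), per parameter."""
    merged = {}
    for name, base_value in base.items():
        delta = sum(w * (donor[name] - base_value)
                    for donor, w in zip(donors, weights))
        merged[name] = base_value + delta
    return merged

# Toy scalar "parameters" standing in for the three 8B checkpoints:
base = {"layers.0.weight": 1.0}      # NeuralDaredevil (base, kept at full strength)
lumimaid = {"layers.0.weight": 3.0}  # merged at weight 0.3 in the config below
halu = {"layers.0.weight": 2.0}      # merged at weight 0.3 in the config below

merged = task_arithmetic_merge(base, [lumimaid, halu], [0.3, 0.3])
# 1.0 + 0.3*(3.0 - 1.0) + 0.3*(2.0 - 1.0) = 1.9
```

Note that the base model's weights enter at full strength; only the donors' differences from the base are scaled by the 0.3 factors.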

### Models Merged

The following models were also included in the merge:
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [Hastagaras/Halu-OAS-8B-Llama3](https://huggingface.co/Hastagaras/Halu-OAS-8B-Llama3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: mlabonne/NeuralDaredevil-8B-abliterated
  - layer_range: [0, 32]
    model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      weight: 0.3
  - layer_range: [0, 32]
    model: Hastagaras/Halu-OAS-8B-Llama3
    parameters:
      weight: 0.3
```