---
base_model: []
tags:
- mergekit
- merge
- mistral
- german
- deutsch
- english
- roleplay
- chatml
language:
- de
- en
---
# merge
This is an experimental merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
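For illustration, SLERP (spherical linear interpolation) interpolates between two weight tensors along the arc of a hypersphere rather than along a straight line, which tends to preserve the magnitude of the weights better than plain averaging. The following is a minimal NumPy sketch of the idea (mergekit's actual implementation differs in details such as handling of near-parallel tensors and per-layer application):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values follow the
    great-circle arc between the two (normalized) directions.
    """
    v0_unit = v0 / (np.linalg.norm(v0) + eps)
    v1_unit = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0)
    theta = np.arccos(dot)  # angle between the two weight directions
    if theta < eps:
        # Nearly parallel tensors: fall back to linear interpolation.
        return (1 - t) * v0 + t * v1
    sin_theta = np.sin(theta)
    return (np.sin((1 - t) * theta) / sin_theta) * v0 \
         + (np.sin(t * theta) / sin_theta) * v1
```

In a full merge, this interpolation is applied per weight tensor across all layers of the two models.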
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
#### Why these two models?
DiscoLM German 7B is, as of this writing (2024-01-21), by far the best German model: it makes far fewer grammatical errors, and its German generally sounds natural. However, it is finetuned on Mistral v0.2 or even v0.1.
Kunoichi DPO v2 7B is already solid at German, though it makes somewhat more grammar mistakes. It is trained especially for roleplay.
The idea was therefore to combine these two models into an even better German model, especially for German roleplay. Short testing already showed good results.
![Example 1](example.jpg)
The last two AI responses above were 100% correct.
![Example 2](example2.jpg)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
      - model: DiscoResearch/DiscoLM_German_7b_v1
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
  t:
    - value: [0.5, 0.9]
dtype: bfloat16
```
These settings are taken from the model [oshizo/japanese-e5-mistral-7b_slerp](https://huggingface.co/oshizo/japanese-e5-mistral-7b_slerp).
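As an aside on the `t: value: [0.5, 0.9]` setting: to my understanding, mergekit treats a list of `t` values as a gradient that is interpolated across the layer indices, so the early layers here lean more toward the base model (t ≈ 0.5, an even blend) while the later layers lean toward DiscoLM German (t approaching 0.9). A rough sketch of that interpolation, assuming linear spacing of the anchors (an assumption about mergekit's behavior, not taken from its source):

```python
import numpy as np

def layer_t(gradient, num_layers):
    """Interpolate a gradient list of t anchors across all layer indices.

    Assumes the anchors are spaced evenly from the first to the last layer,
    which is this sketch's reading of mergekit's gradient handling.
    """
    layer_positions = np.linspace(0.0, 1.0, num_layers)
    anchor_positions = np.linspace(0.0, 1.0, len(gradient))
    return np.interp(layer_positions, anchor_positions, gradient)

# Per-layer t for the 32-layer config above: 0.5 at layer 0 up to 0.9 at layer 31.
ts = layer_t([0.5, 0.9], 32)
```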