Update README.md
README.md
@@ -54,7 +54,7 @@ If you want anything that's not here or another model, feel free to request.
 kuno-kunoichi-v1-DPO-v2-SLERP-7B is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 I'm hoping that the result is more robust against errors when merging, due to "denseness", as the two models likely implement comparable reasoning at least somewhat differently.
 
-I've performed some testing with ChatML format prompting using temperature=1.1 and minP=0.03. The model also supports Alpaca format
+I've performed some testing with ChatML format prompting using temperature=1.1 and minP=0.03. The model also supports Alpaca format prompts.
 ## Merge Details
 ### Merge Method
 
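The testing note in the diff mentions ChatML-format prompting with temperature=1.1 and minP=0.03. As a minimal sketch of what that setup looks like in practice, the snippet below renders a ChatML prompt and collects those sampling settings into a config dict. The helper name, message roles, and example messages are illustrative assumptions, not part of the model card; the resulting prompt and settings would be passed to whatever inference backend you use.

```python
# Sketch: build a ChatML-format prompt and the sampling settings noted in the
# README (temperature=1.1, min_p=0.03). Helper name and messages are
# illustrative assumptions, not from the model card.

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts in ChatML format."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# Sampling settings from the README's testing notes.
sampling_config = {"temperature": 1.1, "min_p": 0.03}

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

The same settings map directly onto most local-inference backends; the key names shown (`temperature`, `min_p`) follow common sampler conventions and may differ slightly per tool.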