MiquSuperdark-70B-v1
This model is outperformed by MiquSuperdark-70B-v2; prefer that model in all cases.
MiquSuperdark-70B-v1 is a merge of three of the most popular Miqu-derived models, along with Miqu itself. The goal of the merge is to create a strong, well-rounded chat model that picks up desirable traits from its constituent models without sacrificing intelligence.
This is a DARE Linear merge with the following composition:
- sophosympatheia/Midnight-Miqu-70B-v1.5 at weight 0.4
- NeverSleep/MiquMaid-v3-70B at weight 0.2
- maywell/miqu-evil-dpo at weight 0.2
- 152334H/miqu-1-70b-sf at weight 0.2 (used as base model)
DARE Linear was chosen as the merge method based on this HF discussion, in which the creator of Midnight-Miqu says "in my own testing I consistently got the best results from using a dare_linear merge when working with miqu models".
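For readers unfamiliar with the method, the sketch below (Python/PyTorch, illustrative only) shows the core idea of a dare_linear merge for a single parameter tensor: each model's delta against the base is randomly dropped and rescaled (DARE), and the surviving deltas are combined linearly with the weights listed above. The function name and the density value are placeholders for illustration; density is not specified in the configuration further down.

```python
import torch

def dare_linear_merge(base, finetuned, weights, density=0.9):
    """Conceptual sketch of a dare_linear merge for one parameter tensor.

    base:      the tensor from the base model (miqu-1-70b-sf here)
    finetuned: the same tensor taken from each merged model
    weights:   per-model merge weights (0.4 / 0.2 / 0.2 in this card)
    density:   fraction of delta parameters kept (1 - drop rate); the value
               here is illustrative, not taken from this card's config
    """
    merged = base.clone()
    for tensor, weight in zip(finetuned, weights):
        delta = tensor - base                                  # task vector vs. the base model
        mask = torch.bernoulli(torch.full_like(delta, density))
        delta = delta * mask / density                         # drop and rescale (DARE)
        merged += weight * delta                               # linear combination of deltas
    return merged
```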
Prompt format
The model responds well to general-purpose prompt formats such as Alpaca. Alternatively, I suggest trying the following format, replacing {the placeholder text} with your actual messages (without the curly brackets).
<message from="system">{your system prompt here}</message><message from="user">{user prompt here}</message><message from="bot">{bot response here}</message><message from="user">{user prompt here}</message><message from="bot">{bot response here}</message> [... and so on ...]
This format is readily understood by the model and leads to high-quality responses. Note the lack of newlines (\n); they are not necessary and may actually make it harder for the model to follow along.
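As a convenience, the helper below (illustrative, not part of the model's tooling) builds a prompt string in this format from a system prompt and a list of turns. The messages are joined without newlines, per the note above; ending with an open bot tag to cue the next reply is an assumption, not something stated in this card.

```python
def build_prompt(system: str, turns: list[tuple[str, str | None]]) -> str:
    """Build a prompt in the <message from="..."> format described above.

    turns is a list of (user_message, bot_reply) pairs; pass None as the
    reply of the final turn to leave an open bot tag for the model to
    complete (an assumption, not stated in the card).
    """
    parts = [f'<message from="system">{system}</message>']
    for user, bot in turns:
        parts.append(f'<message from="user">{user}</message>')
        parts.append(f'<message from="bot">{bot}</message>' if bot is not None
                     else '<message from="bot">')
    return "".join(parts)  # no newlines between messages

prompt = build_prompt(
    "You are a helpful assistant.",
    [("Summarize the plot of Hamlet in two sentences.", None)],
)
```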
Merge Configuration
The following YAML configuration was used to produce this model:
merge_method: dare_linear
base_model: /home/dylan/Documents/AI/merge/miqu-1-70b-sf
models:
  - model: /media/dylan/SanDisk/LLMs/Midnight-Miqu-70B-v1.5
    parameters:
      weight: 0.4
  - model: /home/dylan/Documents/AI/merge/miqu-1-70b-sf
    parameters:
      weight: 0.2
  - model: /media/dylan/SanDisk/LLMs/miqu-evil-dpo/
    parameters:
      weight: 0.2
  - model: /home/dylan/Documents/AI/merge/MiquMaid-v3-70B
    parameters:
      weight: 0.2
dtype: float16
tokenizer_source: model:/home/dylan/Documents/AI/merge/miqu-1-70b-sf
The tokenizer is copied from the base model 152334H/miqu-1-70b-sf.
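For completeness, a minimal loading sketch with transformers is shown below. The repository id is a placeholder to substitute with the actual repo or local path; float16 matches the dtype used in the merge.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/MiquSuperdark-70B-v1"  # placeholder: substitute the real repo id or a local path

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the dtype in the merge config
    device_map="auto",          # requires accelerate; spreads the 70B weights across available devices
)
```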