
MiquMaid v2 2x70B

Check out our blog post about this model series Here! - Join our Discord server Here!

[V2-70B - V2-70B-DPO - V2-2x70B - V2-2x70B-DPO]

This model uses the Alpaca prompting format.

This model was trained for RP conversation on Miqu-70B with our magic sauce.

We then built a MoE from MiquMaid-v2-70B and the Miqu-70B base, so the model uses both the finetune AND the base model for each token, working together.

We saw a significant improvement, so we decided to share it, even though the model is very big.

Credits:

  • Undi
  • IkariDev

Description

This repo contains FP16 files of MiquMaid-v2-2x70B.

Switch: FP16 - GGUF

Training data used:

Custom format:

### Instruction:
{system prompt}

### Input:
{input}

### Response:
{reply}
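The template above can be assembled programmatically before sending it to the model. Below is a minimal sketch; the helper name `build_prompt` and the example strings are our own, not part of the model card — only the `### Instruction:` / `### Input:` / `### Response:` section headers come from the template.

```python
def build_prompt(system_prompt: str, user_input: str) -> str:
    """Assemble an Alpaca-style prompt; the model generates text after '### Response:'."""
    return (
        f"### Instruction:\n{system_prompt}\n\n"
        f"### Input:\n{user_input}\n\n"
        "### Response:\n"
    )

# Hypothetical usage example:
prompt = build_prompt(
    "You are a roleplay character in a fantasy tavern.",
    "Describe the room as I walk in.",
)
print(prompt)
```

The completed reply can then be appended after `### Response:` and the exchange repeated to build a multi-turn conversation.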

Others

Undi: If you want to support us, you can here.

IkariDev: Visit my retro/neocities style website please kek

Safetensors · 125B params · FP16
