
Vezora/Mistral-22B-v0.2 AWQ

Model Summary

  • Just two days after the release of Mistral-22b-v0.1, we are excited to introduce our handcrafted experimental model, Mistral-22b-v0.2. This model is the culmination of distilling knowledge equally from all experts into a single, dense 22b model. It is not a single trained expert; rather, it is a compressed MOE model, turned into a dense 22b model. This is the first working MOE-to-dense model conversion.
  • v0.2 was trained on 8x more data than v0.1!
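The actual distillation procedure behind Mistral-22b-v0.2 is not described in this card; as a purely hypothetical sketch of the general MOE-to-dense idea, one naive approach would be to collapse each MoE layer by averaging the corresponding weight matrices of all experts into one dense matrix. All names and shapes below are illustrative only:

```python
import numpy as np

# Hypothetical illustration only: collapse an MoE layer into a dense layer
# by averaging the corresponding weight matrices of all experts.
# This is NOT the actual Mistral-22b-v0.2 procedure, which is not public.
rng = np.random.default_rng(0)
num_experts, d_in, d_out = 8, 4, 4  # toy sizes for demonstration

# One weight matrix per expert (e.g. an MLP projection in each expert).
experts = [rng.standard_normal((d_in, d_out)) for _ in range(num_experts)]

# A single dense matrix obtained by uniform averaging over experts.
dense_weight = np.mean(experts, axis=0)
print(dense_weight.shape)
```

Real MoE-to-dense conversions would need to account for router statistics and per-expert usage, so uniform averaging is only the simplest possible baseline.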

How to use

GUANACO PROMPT FORMAT: YOU MUST USE THE GUANACO PROMPT FORMAT SHOWN BELOW. Not using this prompt format will lead to suboptimal results.

  • This model requires a specific chat template; the training format was Guanaco, and it looks like this:
  • "### System: You are a helpful assistant. ### Human###: Give me the best chili recipe you can ###Assistant: Here is the best chili recipe..."
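As a minimal sketch of assembling that prompt string in Python before passing it to the model, the helper below is hypothetical (not part of any library); it reproduces the exact spacing of the example above:

```python
def build_guanaco_prompt(system: str, human: str) -> str:
    # Hypothetical helper: assembles the Guanaco-style prompt shown in
    # this card. The spacing and "###" markers follow the card's example.
    return (
        f"### System: {system} "
        f"### Human###: {human} "
        f"###Assistant:"
    )

prompt = build_guanaco_prompt(
    "You are a helpful assistant.",
    "Give me the best chili recipe you can",
)
print(prompt)
```

The resulting string would then be tokenized and passed to the model's generate call; the model is expected to continue the text after `###Assistant:`.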
Model details

  • Format: Safetensors
  • Model size: 3.32B params
  • Tensor types: I32, FP16

The serverless Inference API has been turned off for this model.

Model tree for solidrust/Mistral-22B-v0.2-AWQ