Quant Cartel
PROUDLY PRESENTS
WizardLM-2-8x22B-exl2-rpcal
Quantized using 200 samples of 8192 tokens from the RP-oriented PIPPA dataset.
Branches (each is a separate repo revision; see the download sketch below):
- main -- measurement.json
- 4.5b6h -- 4.5bpw, 6bit lm_head
- 4b6h -- 4bpw, 6bit lm_head
- 3.5b6h -- 3.5bpw, 6bit lm_head
- 2.5b6h -- 2.5bpw, 6bit lm_head
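To pull a single quant without downloading every branch, you can target the matching revision with huggingface_hub. A minimal sketch, assuming you want the 4.5b6h branch; the repo id below is a placeholder, substitute the id of this model page:

```python
# Hedged sketch: download one quant branch (revision) of this repo.
# The repo_id is a placeholder -- replace it with this page's actual id.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="YourOrg/WizardLM-2-8x22B-exl2-rpcal",  # placeholder repo id
    revision="4.5b6h",                              # branch name from the list above
    local_dir="WizardLM-2-8x22B-exl2-4.5bpw",       # where to put the files
)
```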
Original model link: alpindale/WizardLM-2-8x22B (a re-upload; the original source was taken down)
Quanter's notes
I like this. On the main branch, I added a few of the various settings I use in ST (SillyTavern). I tend to mix and match these, so try them all to see which works best for you and your cards.
Original model README below.
🤗 HF Repo • 🐱 Github Repo • 🐦 Twitter • 📃 [WizardLM] • 📃 [WizardCoder] • 📃 [WizardMath]
👋 Join our Discord
See here for the WizardLM-2-7B re-upload.
News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning, and agent tasks. The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is a first choice among models of its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing leading open-source models that are 10x larger.
For more details on WizardLM-2, please read our release blog post and the upcoming paper.
Model Details
- Model name: WizardLM-2 8x22B
- Developed by: WizardLM@Microsoft AI
- Model type: Mixture of Experts (MoE)
- Base model: mistral-community/Mixtral-8x22B-v0.1
- Parameters: 141B
- Language(s): Multilingual
- Blog: Introducing WizardLM-2
- Repository: https://github.com/nlpxucan/WizardLM
- Paper: WizardLM-2 (Upcoming)
- License: Apache 2.0
Model Capabilities
MT-Bench
We also adopt the automatic MT-Bench evaluation framework based on GPT-4, proposed by lmsys, to assess model performance. WizardLM-2 8x22B demonstrates highly competitive performance even compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B and 70B scales, respectively.
Human Preferences Evaluation
We carefully collected a complex and challenging evaluation set of real-world instructions covering the main categories of human requirements: writing, coding, math, reasoning, agent tasks, and multilingual tasks. We report the win:loss rates without ties:
- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT-4-0314.
- WizardLM-2 70B is better than GPT-4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
Method Overview
We built a fully AI-powered synthetic training system to train the WizardLM-2 models; please refer to our blog for more details on this system.
Usage
❗ Note on system prompt usage:
WizardLM-2 adopts the prompt format from Vicuna and supports multi-turn conversation. The prompt should be as follows:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
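For clarity, here is a small sketch (not from the original repo) of how a multi-turn history can be assembled into this Vicuna-style format; the function name and message structure are my own illustration:

```python
# Illustrative helper (not part of the official repo): build the Vicuna-style
# prompt WizardLM-2 expects from a list of completed (user, assistant) turns.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns, next_user_msg):
    """turns: list of (user_msg, assistant_msg) pairs already completed."""
    parts = [SYSTEM]
    for user_msg, assistant_msg in turns:
        # Each completed assistant reply is terminated with </s>, as shown above.
        parts.append(f"USER: {user_msg} ASSISTANT: {assistant_msg}</s>")
    # End with an open "ASSISTANT:" tag so the model generates the next reply.
    parts.append(f"USER: {next_user_msg} ASSISTANT:")
    return " ".join(parts)

print(build_prompt([("Hi", "Hello.")], "Who are you?"))
```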
Inference WizardLM-2 Demo Script
We provide WizardLM-2 inference demo code on our GitHub.
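Since this particular upload is an exl2 quant, a local inference sketch with exllamav2 may be more directly useful than the upstream demo. This is an assumption-laden example, not the official demo script: the model directory is a placeholder, sampler values are arbitrary, and the API follows the exllamav2 examples at the time of writing (even at 2.5bpw, an 8x22B quant needs substantial multi-GPU VRAM):

```python
# Hedged sketch of loading this exl2 quant with exllamav2 -- adapted from the
# library's own example pattern, not the WizardLM demo script.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "WizardLM-2-8x22B-exl2-4.5bpw"  # placeholder: downloaded branch dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache lets autosplit place layers
model.load_autosplit(cache)               # spread weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8                # arbitrary illustrative values
settings.top_p = 0.9

# Vicuna-style prompt, per the format note above.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Hi ASSISTANT:"
)
print(generator.generate_simple(prompt, settings, num_tokens=200))
```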