---
license: mit
---
This is the full-weight WizardLM-13B V1.2 model, trained from Llama-2 13B.
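As a minimal usage sketch, the snippet below builds the Vicuna-style conversation prompt that the WizardLM V1.x chat models expect; the `build_prompt` helper name is ours, not part of the release. The resulting string can then be tokenized and fed to the model with the standard 🤗 Transformers `AutoTokenizer` / `AutoModelForCausalLM` APIs.

```python
def build_prompt(user_message: str) -> str:
    """Format a single-turn request in the Vicuna-style template
    used by the WizardLM V1.x chat models."""
    system = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the "
        "user's questions."
    )
    # The model continues generating after the trailing "ASSISTANT:" tag.
    return f"{system} USER: {user_message} ASSISTANT:"

prompt = build_prompt("Hello, who are you?")
```

This only prepares the prompt text; loading the 13B checkpoint itself requires substantial GPU memory and is done separately via `from_pretrained`.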
WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions
🤗 HF Repo • 🐦 Twitter • 📃 [WizardLM] • 📃 [WizardCoder]
👋 Join our Discord
| Model | Checkpoint | Paper | MT-Bench | AlpacaEval | WizardEval | HumanEval | License |
|---|---|---|---|---|---|---|---|
| WizardLM-13B-V1.2 | 🤗 HF Link | | 7.06 | 89.17% | 101.4% | 36.6 pass@1 | Llama 2 License |
| WizardLM-13B-V1.1 | 🤗 HF Link | | 6.76 | 86.32% | 99.3% | 25.0 pass@1 | Non-commercial |
| WizardLM-30B-V1.0 | 🤗 HF Link | | 7.01 | | 97.8% | 37.8 pass@1 | Non-commercial |
| WizardLM-13B-V1.0 | 🤗 HF Link | | 6.35 | 75.31% | 89.1% | 24.0 pass@1 | Non-commercial |
| WizardLM-7B-V1.0 | 🤗 HF Link | 📃 [WizardLM] | | | 78.0% | 19.1 pass@1 | Non-commercial |
| WizardCoder-15B-V1.0 | 🤗 HF Link | 📃 [WizardCoder] | | | | 57.3 pass@1 | OpenRAIL-M |
Repository: https://github.com/nlpxucan/WizardLM
- 🔥🔥🔥 [7/25/2023] We released the WizardLM V1.2 models. WizardLM-13B-V1.2 is here (Demo_13B-V1.2, Demo_13B-V1.2_bak-1, Full Model Weight). Please check out the paper.
- 🔥🔥🔥 [7/25/2023] WizardLM-13B-V1.2 achieves 7.06 on the MT-Bench Leaderboard, 89.17% on the AlpacaEval Leaderboard, and 101.4% on WizardLM Eval. (Note: the MT-Bench and AlpacaEval scores are self-tested; we will push updates and request official review. All tests were completed under the official settings.)