---
license: llama2
---
|
|
|
|
|
## WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions |
|
|
|
|
|
|
|
<p align="center"> |
|
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
|
</p> |
|
<p align="center"> |
|
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
|
</p> |
|
|
|
## Unofficial Video Introductions |
|
Thanks to these enthusiastic friends for their lively and engaging video introductions.
|
1. [NEW WizardLM 70b 🔥 Giant Model...Insane Performance](https://www.youtube.com/watch?v=WdpiIXrO4_o)
|
2. [GET WizardLM NOW! 7B LLM KING That Can Beat ChatGPT! I'm IMPRESSED!](https://www.youtube.com/watch?v=SaJ8wyKMBds) |
|
3. [WizardLM: Enhancing Large Language Models to Follow Complex Instructions](https://www.youtube.com/watch?v=I6sER-qivYk) |
|
4. [WizardCoder AI Is The NEW ChatGPT's Coding TWIN!](https://www.youtube.com/watch?v=XjsyHrmd3Xo) |
|
|
|
|
|
|
|
## News |
|
|
|
- 🔥 🔥 🔥 [08/11/2023] We released the **WizardMath** models.
|
- 🔥 Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on GSM8K, including **ChatGPT 3.5**, **Claude Instant 1**, and **PaLM 2 540B**.
|
- 🔥 Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8K benchmark](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM.
|
- 🔥 Our **WizardMath-70B-V1.0** model achieves **22.7 pass@1** on the [MATH benchmark](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM.
|
|
|
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License| |
|
| ----- |------| ---- |------|-------| ----- | ----- | |
|
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃Coming Soon| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a> |
|
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃Coming Soon| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a> |
|
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃Coming Soon| **54.9** | **10.7** |[Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a> |
|
|
|
|
|
<font size=4> |
|
|
|
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>| |
|
| ----- |------| ---- |------|-------| ----- | ----- | ----- | |
|
| <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6 pass@1**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a></sup> |
|
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a></sup> |
|
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
|
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
|
| <sup>WizardCoder-15B-V1.0</sup> | <sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a></sup> | | ||<sup> 57.3 pass@1 </sup> | <sup> <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a></sup> |
|
</font> |
|
|
|
- 🔥🔥🔥 [08/09/2023] We released the **WizardLM-70B-V1.0** model.
|
|
|
**Github Repo**: https://github.com/nlpxucan/WizardLM |
|
|
|
**Twitter**: https://twitter.com/WizardLM_AI/status/1689270108747976704 |
|
|
|
**Discord**: https://discord.gg/bpmeZD7V |
|
|
|
|
|
|
|
❗<b>Note on model system prompt usage:</b>
|
|
|
|
|
<b>WizardLM</b> adopts the prompt format of <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
|
|
|
``` |
|
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am WizardLM.</s>...... |
|
``` |
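
For convenience, below is a minimal Python sketch of assembling this multi-turn prompt programmatically. The `build_prompt` helper and its `(user, assistant)` pair structure are illustrative, not part of the official repo:

```python
# System prompt taken verbatim from the format above.
SYSTEM_PROMPT = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """Assemble a Vicuna-style multi-turn prompt for WizardLM.

    `turns` is a list of (user_message, assistant_reply) pairs; pass None
    as the reply in the final pair to leave the prompt open for generation.
    """
    prompt = SYSTEM_PROMPT + " "
    for user_msg, assistant_msg in turns:
        prompt += f"USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            # Completed assistant turns are closed with the </s> EOS token.
            prompt += f" {assistant_msg}</s>"
    return prompt

# Reproduces the example above, leaving the second turn open:
# "... USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT:"
print(build_prompt([("Hi", "Hello."), ("Who are you?", None)]))
```

The resulting string can be tokenized and passed to the model as usual; each completed assistant turn ends with the `</s>` end-of-sequence token, while the final `ASSISTANT:` is left open for the model to complete.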
|
|
|
❗<b>To address a common concern about the dataset:</b>
|
|
|
Recently, there have been clear changes in our organization's overall open-source policies and regulations regarding code, data, and models.
|
|
|
|
|
Despite this, we have worked hard to obtain approval to release the model weights first; the data requires stricter auditing and is still under review by our legal team.
|
|
|
Our researchers have no authority to release the data publicly without authorization.
|
|
|
Thank you for your understanding. |
|
|