Commit 0b1db8d
Parent: f11215e

Update README.md (#1)

Update README.md (354254d6082f11e6140add3c3b796efc3d8e0b94)

Co-authored-by: Zhibin Gou <[email protected]>
README.md
CHANGED
```diff
@@ -18,9 +18,7 @@ Rho-1: Not All Tokens Are What You Need
   <a href="https://arxiv.org/abs/2404.07965"><b>[📝 Arxiv]</b></a> •
   <a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
   <a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
-  <a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
-  <a href="https://twitter.com/zebgou/status/1778676535404396697"><b>[🐦 Twitter]</b></a> •
-  <a href="https://huggingface.co/spaces/zubingou/rho-1"><b>[🤖 Gradio Demo]</b></a>
+  <a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
 </p>
 
 <p align="center">
@@ -32,7 +30,6 @@ Rho-1: Not All Tokens Are What You Need
 
 ## 🔥 News
 
-- [2024/04/14] 🚀🚀🚀 We release [Gradio demo of Rho-1 Code Interpreter](https://huggingface.co/spaces/zubingou/rho-1), try it out!
 - [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released at 🤗 HuggingFace!
   - [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively, matching DeepSeekMath with only 3% of the pretraining tokens.
   - [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM that achieves over 40% accuracy on MATH.
```