koalazf99 committed
Commit b246286
1 parent: 0d9d2d5

Update README.md

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -23,7 +23,7 @@ tags:
   <img src="prox-teaser.png">
   </p>
 
- [ArXiv](http://arxiv.org/abs/xxxx) | [Models](https://huggingface.co/collections/gair-prox/prox-math-models-66e92c3e5d54b27612286eb9) | [Code](https://github.com/GAIR-NLP/ProX)
+ [ArXiv](https://arxiv.org/abs/2409.17115) | [Models](https://huggingface.co/collections/gair-prox/prox-math-models-66e92c3e5d54b27612286eb9) | [Code](https://github.com/GAIR-NLP/ProX)
 
   Open-Web-Math-Pro is refined from [open-web-math](https://huggingface.co/datasets/open-web-math/open-web-math) using the **ProX** refining framework.
   It contains about 5B high quality math related tokens, ready for pre-training.
@@ -35,6 +35,10 @@ Open-Web-Math-Pro is based on open-web-math, which is made available under an OD
 
   ### Citation
   ```
- @misc{TBD
+ @article{zhou2024programming,
+   title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
+   author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
+   journal={arXiv preprint arXiv:2409.17115},
+   year={2024}
   }
   ```
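
For readers who want to try the refined corpus described above, here is a minimal loading sketch using the Hugging Face `datasets` library. The repo id `gair-prox/open-web-math-pro` and the `text` field name are assumptions inferred from the links in this README, not something this commit specifies.

```python
# Minimal sketch (assumption): stream the refined corpus with the `datasets` library,
# so the ~5B-token dataset is not downloaded in full up front.
# The repo id "gair-prox/open-web-math-pro" is inferred from the links above.
from datasets import load_dataset

ds = load_dataset("gair-prox/open-web-math-pro", split="train", streaming=True)

# Peek at one refined document; the "text" column name is an assumption.
example = next(iter(ds))
print(example.keys())
print(example.get("text", "")[:500])
```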