Dataset metadata — Modalities: Text · Formats: parquet · Languages: English · Libraries: Datasets, Dask
koalazf99 committed
Commit f67ac53
1 Parent(s): b4663e3

Update README.md

Files changed (1): README.md (+6 -2)
README.md CHANGED
@@ -15,7 +15,7 @@ size_categories:
 <img src="prox-teaser.png">
 </p>
 
-[ArXiv](http://arxiv.org/abs/xxxx) | [Models](https://huggingface.co/collections/gair-prox/prox-general-models-65f1674f0607712c4d6eec76) | [Code](https://github.com/GAIR-NLP/ProX)
+[ArXiv](https://arxiv.org/abs/2409.17115) | [Models](https://huggingface.co/collections/gair-prox/prox-general-models-65f1674f0607712c4d6eec76) | [Code](https://github.com/GAIR-NLP/ProX)
 
 c4 is refined from [c4](https://huggingface.co/datasets/allenai/c4) using the **ProX** refining framework.
 It contains about 40B high quality tokens, ready for general language model pre-training.
@@ -27,6 +27,10 @@ c4 is based on c4, which is made available under an ODC-By 1.0 license; users sh
 
 ### Citation
 ```
-@misc{TBD
+@article{zhou2024programming,
+  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
+  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
+  journal={arXiv preprint arXiv:2409.17115},
+  year={2024}
 }
 ```