---
language:
- en
license: cc-by-4.0
---
## DCLM-baseline 
***Note: this is an identical copy of https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0, where all the files have been mapped to a parquet format.***


DCLM-baseline is a 4T token / 3B document pretraining dataset that achieves strong performance on language model benchmarks.


Below are comparisons of models trained on DCLM-baseline with other models in the 7B regime.

| Model         | Params | Tokens | Open dataset? | CORE     | MMLU     | EXTENDED |
|---------------|--------|--------|---------------|----------|----------|----------|
| **Open weights, closed datasets** |        |        |               |          |          |          |
| Llama2        | 7B     | 2T     | βœ—             | 49.2     | 45.8     | 34.1     |
| DeepSeek      | 7B     | 2T     | βœ—             | 50.7     | 48.5     | 35.3     |
| Mistral-0.3   | 7B     | ?      | βœ—             | 57.0     | 62.7     | 45.1     |
| QWEN-2        | 7B     | ?      | βœ—             | 57.5     | **71.9** | 50.5     |
| Llama3        | 8B     | 15T    | βœ—             | 57.6     | 66.2     | 46.3     |
| Gemma         | 8B     | 6T     | βœ—             | 57.8     | 64.3     | 44.6     |
| Phi-3         | 7B     | ?      | βœ—             | **61.0** | 69.9     | **57.9** |
| **Open weights, open datasets** |        |        |               |          |          |          |
| Falcon        | 7B     | 1T     | βœ“             | 44.1     | 27.4     | 25.1     |
| Amber         | 7B     | 1.2T   | βœ“             | 39.8     | 27.9     | 22.3     |
| Crystal       | 7B     | 1.2T   | βœ“             | 48.0     | 48.2     | 33.2     |
| OLMo-1.7      | 7B     | 2.1T   | βœ“             | 47.0     | 54.0     | 34.2     |
| MAP-Neo       | 7B     | 4.5T   | βœ“             | **50.2** | **57.1** | **40.4** |
| **Models we trained** |        |        |               |          |          |          |
| FineWeb edu   | 7B     | 0.14T  | βœ“             | 38.7     | 26.3     | 22.1     |
| FineWeb edu   | 7B     | 0.28T  | βœ“             | 41.9     | 37.3     | 24.5     |
| **DCLM-BASELINE** | 7B     | 0.14T  | βœ“             | 44.1     | 38.3     | 25.0     |
| **DCLM-BASELINE** | 7B     | 0.28T  | βœ“             | 48.9     | 50.8     | 31.8     |
| **DCLM-BASELINE** | 7B     | 2.6T   | βœ“             | **57.1** | **63.7** | **45.4** |


## Dataset Details
### Dataset Description
- **Curated by:** The DCLM Team
- **Language(s) (NLP):** English
- **License:**  CC-by-4.0
### Dataset Sources
- **Repository:** https://datacomp.ai/dclm
- **Paper:** https://arxiv.org/abs/2406.11794
- **Construction Code:** https://github.com/mlfoundations/dclm



## Uses
### Direct Use
DCLM-Baseline is intended to be used as a research baseline for the DCLM benchmark. It demonstrates the importance of data curation in training performant language models. 
### Out-of-Scope Use
DCLM-Baseline is not intended for training production-ready models or for specific domains such as code and math. It may not perform as well as domain-specific datasets for these tasks. Due to these limitations, the dataset is intended for research use only.
DCLM-Baseline is a subset of the DCLM-Pool, which is a corpus of 240 trillion tokens derived from Common Crawl. The dataset is in plain text format.
## Dataset Creation
### Curation Rationale
DCLM-Baseline was created to demonstrate the effectiveness of the DCLM testbed in developing high-quality training sets for language models. It serves as a proof of concept for the data curation strategies enabled by DCLM and is designed to be a research baseline for the benchmark.
### Source Data
#### Data Collection and Processing
DCLM-Baseline was created by applying a series of cleaning, filtering, and deduplication steps to the raw Common Crawl data (DCLM-Pool). The key steps include:
1. Heuristic cleaning and filtering (reproduction of RefinedWeb)
2. Deduplication using a Bloom filter
3. Model-based filtering using a fastText classifier trained on instruction-formatted data (OpenHermes 2.5 and r/ExplainLikeImFive)
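The deduplication step above can be illustrated with a minimal Bloom filter. This is only a sketch of the general technique — the actual DCLM pipeline uses its own implementation and parameters, and a probabilistic filter of this size would be far too small for web-scale data:

```python
from hashlib import blake2b

class BloomFilter:
    """Toy Bloom filter for document dedup (illustrative only;
    the real DCLM pipeline uses its own implementation)."""

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 5):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k independent bit positions by salting one keyed hash.
        for i in range(self.k):
            h = blake2b(item.encode(), digest_size=8, salt=i.to_bytes(2, "little"))
            yield int.from_bytes(h.digest(), "little") % self.size

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        # May return a false positive, but never a false negative.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def dedup(docs):
    """Keep only the first occurrence of each document."""
    seen = BloomFilter()
    kept = []
    for doc in docs:
        if doc not in seen:
            seen.add(doc)
            kept.append(doc)
    return kept
```

The trade-off is that a Bloom filter uses a fixed, small amount of memory per document at the cost of a tunable false-positive rate, which is why it suits dedup over corpora too large for an exact hash set.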
#### Who are the source data producers?
The source data is from Common Crawl, which is a repository of web crawl data.
### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
The dataset may contain biases present in the Common Crawl data. The dataset's performance on code and math tasks is limited compared to its performance on language understanding tasks. DCLM-Baseline is designed for research purposes only.
### Recommendations
Users should be aware of the potential biases and limitations of the dataset, especially when using it for specific domains like code and math. The dataset should only be used for research purposes in the context of the DCLM benchmark.
## Citation

```bibtex
@misc{li2024datacomplm,
      title={DataComp-LM: In search of the next generation of training sets for language models}, 
      author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and Saurabh Garg and Rui Xin and Niklas Muennighoff and Reinhard Heckel and Jean Mercat and Mayee Chen and Suchin Gururangan and Mitchell Wortsman and Alon Albalak and Yonatan Bitton and Marianna Nezhurina and Amro Abbas and Cheng-Yu Hsieh and Dhruba Ghosh and Josh Gardner and Maciej Kilian and Hanlin Zhang and Rulin Shao and Sarah Pratt and Sunny Sanyal and Gabriel Ilharco and Giannis Daras and Kalyani Marathe and Aaron Gokaslan and Jieyu Zhang and Khyathi Chandu and Thao Nguyen and Igor Vasiljevic and Sham Kakade and Shuran Song and Sujay Sanghavi and Fartash Faghri and Sewoong Oh and Luke Zettlemoyer and Kyle Lo and Alaaeldin El-Nouby and Hadi Pouransari and Alexander Toshev and Stephanie Wang and Dirk Groeneveld and Luca Soldaini and Pang Wei Koh and Jenia Jitsev and Thomas Kollar and Alexandros G. Dimakis and Yair Carmon and Achal Dave and Ludwig Schmidt and Vaishaal Shankar},
      year={2024},
      eprint={2406.11794},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```