---
pretty_name: C4
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- odc-by
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
- 10M<n<100M
- 100M<n<1B
- 1B<n<10B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: c4
---
# C4, T5 tokenized, in ragged array format
Processed distribution of Google's C4 dataset: a colossal, cleaned version of Common Crawl's web crawl corpus.

Uses the text data from [allenai/c4](https://huggingface.co/datasets/allenai/c4). Includes the `en` subset only.

The T5 tokenizer was applied to the text. The result is distributed as a ragged array, converted via `json_to_ragged.py`.
Download size of all shards:

Split | Data+Lengths Size | Divided across *n* Shards | Typical shard size: `data.npy` | Typical shard size: `len.npy`
---|---|---|---|---
Train | 293G | 1024 | 344M | 1.4M
Test | 299M | 8 | 44M | 179K
Total | 296G | N/A | N/A | N/A
The data is uncompressed, in order to preserve support for random seeking. `data.npy` would probably benefit from compression, because token sequences exhibit patterns: tokenization achieves a ~44% compression ratio, whereas Allen AI's original gzipped JSONL text data achieved a ~61% compression ratio. So the tokenized distribution is about 13% bigger.
Download everything via:

```bash
pip install hf_transfer huggingface_hub
HF_HUB_ENABLE_HF_TRANSFER=True huggingface-cli download --repo-type dataset --local-dir . --local-dir-use-symlinks False Birchlabs/c4-t5-ragged .
```
Download a single ragged array to try it out:

```bash
huggingface-cli download --repo-type dataset --local-dir . --local-dir-use-symlinks False Birchlabs/c4-t5-ragged en/validation/c4-validation.00000-of-00008.{data,len}.npy
```
Read ragged arrays like so: [read_ragged.py](https://github.com/Birch-san/pre-tokenize/blob/main/script/read_ragged.py)
The basic idea is:

- `data.npy` is a very long 1D numpy array of tokens.
- `len.npy` is a 1D numpy array describing the length of each sample in `data.npy`.

To read sample 0 from `data.npy`, you would:

- start at index 0 in `data.npy`
- check sample 0's length (position 0 in `len.npy`)
- read from index 0 to index 0 + length-of-sample-0

To read sample 1 from `data.npy`, you would:

- start at the end of sample 0
- check sample 1's length (position 1 in `len.npy`)
- read from end-of-sample-0 to end-of-sample-0 + length-of-sample-1

We can obtain an index of sample ending positions by accumulating the sample lengths as we go along (`lengths.cumsum()`).
We can obtain an index of sample starting positions by prepending a 0 to that endings index. `read_ragged.py` demonstrates how to create this index, and use it to achieve random access.
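The indexing scheme above can be sketched in a few lines of numpy. This is a toy illustration using small in-memory arrays in place of the real shards; with the actual files you would `np.load` the `.data.npy` and `.len.npy` shard pair (ideally with `mmap_mode="r"` on the data, to avoid reading the whole shard into RAM):

```python
import numpy as np

# Toy ragged array: three samples of lengths 3, 1, 2 packed end-to-end.
data = np.array([10, 11, 12, 20, 30, 31])  # stand-in for data.npy
lengths = np.array([3, 1, 2])              # stand-in for len.npy

# Exclusive end position of each sample, then start positions by prepending a 0.
ends = lengths.cumsum()                    # [3, 4, 6]
starts = np.concatenate(([0], ends[:-1]))  # [0, 3, 4]

def get_sample(ix: int) -> np.ndarray:
    """Random access to sample ix via the start/end index."""
    return data[starts[ix]:ends[ix]]

print(get_sample(1))  # [20]
```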
This isn't ready for use in a torch `DataLoader`. This dataset format is intended as a precursor, from which you could create a dataset in a different format.
For example, you might want to iterate over every sample here, chunking by a fixed context length, and output the samples as .parquet chunks for use with a torch `DataLoader`.
That's an easy way out, but your disk won't thank you if you do fully-random access.
An approach that hits the disk less / requires less RAM would be to implement an `IterableDataset`, where you iterate sequentially over shards but shuffle within-shard (or shuffle within a smaller-than-shard buffer).
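The within-buffer shuffle idea can be sketched as a small generator (the names here are hypothetical; in practice this logic would live inside an `IterableDataset.__iter__` that walks shards sequentially):

```python
import random
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")

def buffered_shuffle(samples: Iterable[T], buffer_size: int = 1024, seed: int = 42) -> Iterator[T]:
    """Approximate shuffle: fill a fixed-size buffer, then yield one random
    buffered element each time a new one arrives. RAM use is bounded by
    buffer_size, and the underlying samples are still read sequentially."""
    rng = random.Random(seed)
    buffer: list[T] = []
    for sample in samples:
        buffer.append(sample)
        if len(buffer) >= buffer_size:
            # swap a random element to the end, then pop it
            ix = rng.randrange(len(buffer))
            buffer[ix], buffer[-1] = buffer[-1], buffer[ix]
            yield buffer.pop()
    # drain the remainder in random order
    rng.shuffle(buffer)
    yield from buffer
```

A larger buffer gives a better shuffle at the cost of RAM; a buffer as big as a whole shard degenerates to full within-shard shuffling.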
You might also want to perform analyses over the `len.npy` files to decide how to pack these sequences (e.g. packing a 128-token and a 384-token sequence into a 512 context length).
You can do such an analysis via GraphCore's packedBERT. Then you could process the data into a "packed" dataset.
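To illustrate the packing idea, here is a naive greedy first-fit-decreasing sketch (not GraphCore's histogram-based algorithm), operating purely on the lengths:

```python
def pack_lengths(lengths: list[int], context_len: int = 512) -> list[list[int]]:
    """Greedy first-fit-decreasing: place each sequence into the first
    bin with enough remaining room, opening a new bin when none fits."""
    bins: list[list[int]] = []   # sequence lengths assigned to each bin
    remaining: list[int] = []    # spare room left in each bin
    for length in sorted(lengths, reverse=True):
        for i, room in enumerate(remaining):
            if room >= length:
                bins[i].append(length)
                remaining[i] -= length
                break
        else:
            bins.append([length])
            remaining.append(context_len - length)
    return bins

print(pack_lengths([128, 384, 512, 100, 400]))  # [[512], [400, 100], [384, 128]]
```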
## Source Data

### Initial Data Collection and Normalization
The C4 and mC4 datasets are collections of text sourced from the public Common Crawl web scrape. The pipeline includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish), in addition to extensive deduplication. You can find the code that was used to build this dataset in c4.py by TensorFlow Datasets.

The C4 dataset was explicitly designed to be English-only: any page that was not given a probability of at least 99% of being English by langdetect was discarded.

To build mC4, the authors used CLD3 to identify over 100 languages.
## Licensing Information
We are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
## Acknowledgements
Big ups to the good folks at Common Crawl, whose data made this possible (consider donating!), to Google for creating the code that curates and filters the data, and to Hugging Face, who had no issue with hosting these 3TB of data for public download!

Thanks to Allen AI for sharing the text that was processed to make this dataset.