---
license: odc-by
---
# TL;DR
[Fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) is a popular, high-quality open dataset. This dataset is a deduplicated version of Fineweb: rows with duplicate text are removed, and their occurrence counts are collected.
## Motivation
Fineweb is an open text dataset intended for training language models. It's one of the highest-quality and most popular open datasets available. It was produced by a reputable AI lab, Hugging Face, and has been downloaded tens of thousands of times.
The Fineweb dataset is 93.4 TB and contains 15T tokens, making it one of the ten largest open text datasets available. That scale is also the main challenge of working with it: downloading and processing a dataset of this volume is hard and expensive.
About 70% of Fineweb is duplicated. Running exact deduplication across all Common Crawl (CC) dumps reduces the dataset from 15T to 5T tokens, and a dataset of that reduced size is much cheaper and easier to work with.
This dataset also provides an opportunity for research on the effects of deduplication on massive datasets.
## Existing deduplication
Fineweb was deduplicated within each CC dump, but not across dumps.
Hugging Face's reasoning for publishing the dataset without exact deduplication across the whole dataset is that the duplicates provide a potentially valuable upsampling of high-quality rows. The hypothesis is that if a text persists across multiple CC dumps, it has lived longer on the web and is therefore more valuable. This is a reasonable hypothesis; however, the upsampling triples the size of the dataset.
## Deduplication mechanism
The text column was tokenized with the GPT-4o tokenizer, and the tokenized version was used as the key for exact deduplication. There is no deeper meaning behind this approach: we already had the GPT-4o tokenized version, it makes sense to deduplicate on tokens, and there is no reason why deduplication on the tokenized text should differ drastically from deduplication on plain text.
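As an illustration, here is a minimal sketch of this kind of exact deduplication, assuming Fineweb's `text` column and tiktoken's `o200k_base` encoding (the GPT-4o tokenizer). It is not the exact pipeline used to produce this dataset; that has to run distributed over 93.4 TB, but the per-row logic is the same:

```python
import hashlib

import tiktoken
from datasets import load_dataset

# GPT-4o's tokenizer is exposed in tiktoken as the "o200k_base" encoding.
enc = tiktoken.get_encoding("o200k_base")

def dedup_key(text: str) -> str:
    # Tokenize, then hash the token IDs into a compact exact-match key.
    tokens = enc.encode(text, disallowed_special=())
    return hashlib.sha256(repr(tokens).encode("utf-8")).hexdigest()

counts: dict[str, int] = {}   # dedup key -> number of occurrences
unique_rows = []              # first occurrence of each document

ds = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)
for row in ds:
    key = dedup_key(row["text"])
    if key not in counts:
        counts[key] = 0
        unique_rows.append(row)
    counts[key] += 1
```

An in-memory dict obviously cannot hold keys for all 15T tokens' worth of rows; a real run would shard rows by hash prefix or use a distributed group-by, keeping one row per key and summing the counts.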
[Here is](https://huggingface.co/datasets/Salesforce/fineweb_deduplicated/blob/main/top_100_documents_by_accurances.csv) the CSV with the 100 most common documents in Fineweb and their row counts.
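For a quick look, the CSV can be loaded straight from the Hub via the `resolve` endpoint, which serves the raw file (a sketch; the column layout is whatever the CSV defines):

```python
import pandas as pd

# Fetch the top-100 CSV directly from the Hub.
url = (
    "https://huggingface.co/datasets/Salesforce/fineweb_deduplicated/"
    "resolve/main/top_100_documents_by_accurances.csv"
)
top_docs = pd.read_csv(url)
print(top_docs.head())
```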
Here is the most repeated document in Fineweb (17,049 occurrences):
> Skip to main content Genealogy and Family History Records for Newspaper Archives (1690 – 2016) Newspaper Articles: Includes additional obituaries, births, marriages, and more > Historical Obituaries > Birth Records > Marriage Records > Passenger Lists > More Results – Other Newspaper Archives Records > Recent Newspaper Obituaries (1977 – Today) Government Publications (1789 – 1994) Find military records, widow's claims, orphan petitions, land grants and much more! Historical Books (1749 – 1900) Printed items including: family genealogies, local histories, funeral sermons, biographies, and much more. Social Security Death Index (1937 – 2014) GET UNLIMITED ACCESS: Sign up for a 30-day trial to get unlimited access to our archives. Start a 30-Day Trial As seen on: The Wall Street Journal The Huffington Post Terms of Service Share this page: