Scoring documents with LLM and making scores available as a quality filter (Ask-LLM)

#3
by Lauler - opened

First of all: great work with this dataset and thanks for releasing it openly!

There have been a few papers published recently suggesting that LLMs can function as an effective quality filter for pretraining data; see, for example, How to Train Data-Efficient LLMs. The blog post accompanying Llama 3's release also suggests Meta has been employing a similar strategy:

> To ensure Llama 3 is trained on data of the highest quality, we developed a series of data-filtering pipelines. These pipelines include using heuristic filters, NSFW filters, semantic deduplication approaches, and text classifiers to predict data quality. We found that previous generations of Llama are surprisingly good at identifying high-quality data, hence we used Llama 2 to generate the training data for the text-quality classifiers that are powering Llama 3.

A model trained on data filtered this way seems able to reach performance similar to a fully trained model after seeing only 30-50% of the tokens.

Do you have any plans or thoughts regarding potentially adding this sort of document score to FineWeb?

I think it would be realistic, but the cost would likely be around 10,000-20,000 A100 GPU hours to score half a billion documents, assuming a 7B model optimized for inference and only 400-500 words of context per document (truncating where necessary).
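As a sanity check on that estimate, here is a quick back-of-envelope computation of the per-GPU throughput it implies. All numbers are the assumptions stated above, not measured figures:

```python
# Back-of-envelope check of the cost estimate above.
# All inputs are the poster's assumptions, not benchmarked throughput.
docs = 500_000_000                      # half a billion documents
gpu_hours_low, gpu_hours_high = 10_000, 20_000

# Implied sustained per-GPU throughput at each end of the range:
docs_per_sec_high = docs / (gpu_hours_low * 3600)    # optimistic end
docs_per_sec_low = docs / (gpu_hours_high * 3600)    # conservative end
print(f"{docs_per_sec_low:.1f}-{docs_per_sec_high:.1f} docs/sec per A100")
```

So the estimate implies roughly 7-14 documents scored per second per A100, which is the kind of rate that requires batched, inference-optimized serving of the 7B model.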

HuggingFaceFW org

Hi, my interpretation of the Llama quote is slightly different: I believe they trained small text classifiers (like https://fasttext.cc/), and that they used Llama 2 to generate the documents on which these classifiers were trained (you typically do not need many of them).
These classifiers were probably not transformers, but rather small n-gram-based models that run cheaply on CPUs; otherwise I really do not see how this approach could scale.
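The pipeline described above can be sketched as follows: an LLM labels a small set of documents as high or low quality, and a cheap n-gram classifier generalizes those labels to billions of documents on CPU. This toy version uses a pure-Python naive Bayes over word unigrams as a stand-in for fastText; the function names and labels (`train`, `classify`, `"hq"`, `"lq"`) are hypothetical, and in practice you would use fastText's `train_supervised` on far more labelled data:

```python
# Toy stand-in for a fastText-style quality classifier: naive Bayes over
# word unigrams, trained on a handful of LLM-labelled documents.
import math
from collections import Counter

def train(labelled_docs):
    """labelled_docs: list of (text, label) pairs, e.g. from LLM annotation."""
    counts = {}          # label -> Counter of word frequencies
    totals = Counter()   # label -> number of training documents
    for text, label in labelled_docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(model, text):
    counts, totals = model
    n_docs = sum(totals.values())
    vocab = {w for wc in counts.values() for w in wc}
    best_label, best_score = None, float("-inf")
    for label, wc in counts.items():
        # log prior + log likelihood with add-one smoothing
        score = math.log(totals[label] / n_docs)
        denom = sum(wc.values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((wc[w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Once trained, such a model classifies a document with a single pass over its tokens, which is what makes scoring web-scale corpora on CPUs feasible.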

HuggingFaceFW org

We actually ended up doing something similar: https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1

Great work and thanks for sharing the knowledge! The blog post is extremely well written.

Lots of details worth trying to replicate (I'm mainly interested in pretraining for languages other than English).

I hope to see you guys expand to other languages soon as well.

Lauler changed discussion status to closed

What did your prompt template look like for scoring documents with Llama3-70B?

Also, did you truncate documents at some maximum token length when scoring? (This question applies both to annotating with Llama3-70B and to then scoring the corpus with the Snowflake BERT model.)
