---
language:
  - en
---

# HuggingFaceFW/fineweb filter Malaysian context

## What is it?

We filtered the original 🍷 FineWeb dataset, which consists of more than 15T tokens, using simple Malaysian keywords.

The filtered dataset contains 174,102,784,199 tokens (~174B tokens).

## How did we do it?

  1. We filtered rows using the keywords {'malay', 'malaysia', 'melayu', 'bursa', 'ringgit'} on an r5.16xlarge EC2 instance; the job took 7 days (a filtering sketch follows this list).
  2. We counted the total tokens using tiktoken.encoding_for_model("gpt2") on a c7a.24xlarge EC2 instance; the job took 1 hour (see the second sketch below).
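The filtering step can be approximated as below. This is a minimal sketch, not the actual implementation (see the linked source code); it assumes a simple case-insensitive substring match over the streamed FineWeb `text` field, and the output filename `fineweb-malaysian.jsonl` is hypothetical.

```python
import json

from datasets import load_dataset

# Keywords used to identify Malaysian-context rows.
KEYWORDS = {'malay', 'malaysia', 'melayu', 'bursa', 'ringgit'}

def has_malaysian_keyword(text: str) -> bool:
    # Assumption: a simple case-insensitive substring match; the real
    # pipeline may match differently (e.g. on word boundaries).
    lowered = text.lower()
    return any(keyword in lowered for keyword in KEYWORDS)

# Stream FineWeb so the full 15T-token dataset is never materialized on disk.
fineweb = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

with open("fineweb-malaysian.jsonl", "w") as fout:
    for row in fineweb:
        if has_malaysian_keyword(row["text"]):
            # default=str keeps non-JSON-native fields (e.g. dates) serializable.
            fout.write(json.dumps(row, default=str) + "\n")
```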
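The token count in step 2 can be reproduced along these lines; a sketch assuming the filtered rows were written to the hypothetical `fineweb-malaysian.jsonl` file from the previous example.

```python
import json

import tiktoken

# GPT-2 BPE encoding, as used in step 2.
enc = tiktoken.encoding_for_model("gpt2")

total_tokens = 0
with open("fineweb-malaysian.jsonl") as fin:
    for line in fin:
        row = json.loads(line)
        # disallowed_special=() prevents errors when web text happens to
        # contain special-token strings such as "<|endoftext|>".
        total_tokens += len(enc.encode(row["text"], disallowed_special=()))

print(f"Total tokens: {total_tokens:,}")
```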

The source code is available at https://github.com/mesolitica/malaysian-dataset/tree/master/corpus/fineweb.

## Why did we do it?

So that anybody can use this filtered corpus for pretraining, continued pretraining, or generating synthetic datasets for their own use cases.