Commit 398b688 (parent 85ea095) by huseinzol05: Update README.md
# HuggingFaceFW/fineweb filtered for Malaysian context
## What is it?
We filtered the original 🍷 FineWeb dataset, which consists of more than **15T tokens**, using simple Malaysian keywords.
The filtered dataset contains 174,102,784,199 tokens (**~174B tokens**).
## How did we do it?
1. We filtered rows using the keyword set `{'malay', 'malaysia', 'melayu', 'bursa', 'ringgit'}` on an r5.16xlarge EC2 instance, which took 7 days.
2. We counted total tokens using `tiktoken.encoding_for_model("gpt2")` on a c7a.24xlarge EC2 instance, which took 1 hour.
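
The keyword filter in step 1 can be sketched as follows. This is a minimal illustration under our own assumptions, not the actual mesolitica pipeline; the `text` field name follows FineWeb's schema, and we assume simple whole-word matching on lowercased text:

```python
# Minimal sketch of step 1: keep rows whose text contains any Malaysian keyword.
# Assumption: whole-word matching on lowercased, whitespace-split text;
# the real pipeline may match differently.
KEYWORDS = {'malay', 'malaysia', 'melayu', 'bursa', 'ringgit'}

def is_malaysian(text: str) -> bool:
    """Return True if any keyword appears as a standalone word in the text."""
    return not KEYWORDS.isdisjoint(text.lower().split())

# Hypothetical rows shaped like FineWeb records.
rows = [
    {"text": "Bursa Malaysia closed higher as the ringgit strengthened."},
    {"text": "The weather in Paris is lovely this time of year."},
]
kept = [row for row in rows if is_malaysian(row["text"])]
print(len(kept))  # → 1: only the first row matches ('bursa', 'malaysia', 'ringgit')
```

At FineWeb scale this per-row predicate would be applied in parallel (e.g. one worker per shard), which is where the 64 vCPUs of the r5.16xlarge instance help.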
Source code: https://github.com/mesolitica/malaysian-dataset/tree/master/corpus/fineweb
## Why did we do it?
So that anybody can use this filtered corpus for pretraining, continued pretraining, or generating synthetic datasets for their own use cases.