mariosasko committed
Commit: 1f2c48f
Parent(s): e85100d

Update README.md
README.md CHANGED
@@ -725,7 +725,7 @@ viewer: false
 # Dataset Card for Wikipedia
 
 > [!WARNING]
-> Dataset
+> Dataset `wikipedia` is deprecated and will soon redirect to [`wikimedia/wikipedia`](https://huggingface.co/wikimedia/wikipedia).
 
 ## Table of Contents
 - [Dataset Description](#dataset-description)
@@ -777,7 +777,10 @@ Then, you can load any subset of Wikipedia per language and per date this way:
 from datasets import load_dataset
 
 load_dataset("wikipedia", language="sw", date="20220120")
-```
+```
+
+> [!TIP]
+> You can specify `num_proc=` in `load_dataset` to generate the dataset in parallel.
 
 You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).
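For reference, the README snippet combined with the tip added in this commit looks roughly like the sketch below. It is not part of the diff: the `num_proc=4` worker count is an arbitrary choice, and the `"train"` split and `"title"` field are assumed from the legacy `wikipedia` loader's schema.

```python
from datasets import load_dataset

# Build the Swahili Wikipedia snapshot from the 2022-01-20 dump.
# num_proc splits dataset generation across worker processes, as the
# tip added in this commit suggests; 4 is an arbitrary choice.
ds = load_dataset("wikipedia", language="sw", date="20220120", num_proc=4)

# The legacy loader exposes a "train" split; "title" is assumed
# from the dataset card's schema.
print(ds["train"][0]["title"])
```

Once the redirect mentioned in the warning lands, the same data is meant to be loaded from [`wikimedia/wikipedia`](https://huggingface.co/wikimedia/wikipedia) instead.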