---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- monolingual
pretty_name: Team-PIXEL/rendered-wikipedia-english
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- masked-auto-encoding
- rendered-language-modelling
task_ids:
- masked-auto-encoding
- rendered-language-modeling
paperswithcode_id: null
---

# Dataset Card for Team-PIXEL/rendered-wikipedia-english

## Dataset Description

- **Homepage:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Repository:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Paper:** [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991)
- **Point of Contact:** [Phillip Rust](mailto:[email protected])
- **Size of downloaded dataset files:** 125.66 GB
- **Size of the generated dataset:** 125.56 GB
- **Total amount of disk used:** 251.22 GB

### Dataset Summary

This dataset contains the full English Wikipedia from February 1, 2018, rendered into images of 16x8464 resolution.

The original text dataset was built from a [Wikipedia dump](https://dumps.wikimedia.org/). Each example in the original *text* dataset contained the content of one full Wikipedia article, cleaned to strip markup and unwanted sections (references, etc.). Each *rendered* example contains a subset of one full article. This rendered English Wikipedia was used to train the [PIXEL](https://huggingface.co/Team-PIXEL/pixel-base) model introduced in the paper [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991) by Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, and Desmond Elliott.

The original Wikipedia text dataset was rendered article-by-article into 11.4M examples containing approximately 2B words in total. The dataset is stored as a collection of 338 parquet files.

It was rendered using the script available at [https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_wikipedia.py](https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_wikipedia.py). The text renderer uses a PyGame backend and a collection of merged Google Noto Sans fonts. The PyGame backend does not support complex text layouts (e.g. ligatures or right-to-left scripts) or emoji, so occurrences of such text in the Wikipedia data have not been rendered accurately.
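
As a rough illustration of what the renderer produces, here is a minimal sketch that draws a line of text onto a fixed 16x8464 grayscale strip. It uses Pillow instead of the repository's PyGame backend, and the font, offsets, and padding are illustrative assumptions rather than the settings used to build this dataset:

```python
from PIL import Image, ImageDraw, ImageFont

def render_text_strip(text: str, width: int = 8464, height: int = 16) -> Image.Image:
    """Draw text in black on a white, 16px-high grayscale strip (rough approximation)."""
    img = Image.new("L", (width, height), color=255)  # "L" = 8-bit grayscale, white background
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # stand-in for the merged Google Noto Sans fonts
    draw.text((2, 2), text, fill=0, font=font)  # offsets are illustrative, not the dataset's
    return img

strip = render_text_strip("Wikipedia is a free online encyclopedia.")
print(strip.size)  # (8464, 16)
```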

Each example consists of a "pixel_values" field, which stores a 16x8464 (height, width) grayscale image containing the rendered text, and an integer "num_patches" field, which stores how many of the image patches (obtained by splitting the image into 529 non-overlapping patches of 16x16 pixels) contain actual text, i.e. are neither blank (fully white) nor the fully black end-of-sequence patch.
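
The patch count can be reconstructed from "pixel_values" directly. The sketch below follows the definition above (a patch counts unless it is fully white or fully black); it is a reconstruction for illustration, not the original counting code:

```python
import numpy as np
from PIL import Image

def count_text_patches(image: Image.Image, patch_size: int = 16) -> int:
    """Count the 16x16 patches that are neither fully white nor fully black."""
    arr = np.array(image)                     # shape (16, 8464) for this dataset
    n_patches = arr.shape[1] // patch_size    # 8464 / 16 = 529
    patches = arr.reshape(patch_size, n_patches, patch_size).transpose(1, 0, 2)
    is_blank = (patches == 255).all(axis=(1, 2))  # fully white patch
    is_eos = (patches == 0).all(axis=(1, 2))      # fully black end-of-sequence patch
    return int((~is_blank & ~is_eos).sum())
```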

You can load the dataset as follows:

```python
from datasets import load_dataset

# Download the full dataset to disk
load_dataset("Team-PIXEL/rendered-wikipedia-english", split="train")

# Stream the dataset directly from the hub
load_dataset("Team-PIXEL/rendered-wikipedia-english", split="train", streaming=True)
```
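
With streaming enabled, a first example can be inspected without downloading the full dataset. A short sketch using the fields documented below:

```python
from datasets import load_dataset

wiki = load_dataset("Team-PIXEL/rendered-wikipedia-english", split="train", streaming=True)
example = next(iter(wiki))

print(example["num_patches"])        # number of non-blank, non-EOS patches
print(example["pixel_values"].size)  # (8464, 16) as PIL (width, height)
```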

## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 125.66 GB
- **Size of the generated dataset:** 125.56 GB
- **Total amount of disk used:** 251.22 GB

An example of 'train' looks as follows.
```
{
    "pixel_values": <PIL.PngImagePlugin.PngImageFile image mode=L size=8464x16>,
    "num_patches": 469
}
```

### Data Fields

The data fields are the same among all splits.

- `pixel_values`: an `Image` feature.
- `num_patches`: a `Value(dtype="int64")` feature.
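
Since `num_patches` is a plain integer column, it can be used to drop nearly empty renders before training. A hedged sketch (the threshold of 10 patches is an arbitrary illustration, not a value from the paper):

```python
from datasets import load_dataset

wiki = load_dataset("Team-PIXEL/rendered-wikipedia-english", split="train", streaming=True)

# Keep only examples whose rendered text covers more than 10 of the 529 patches.
non_trivial = wiki.filter(lambda ex: ex["num_patches"] > 10)
```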

### Data Splits

|split|examples|
|:----|-------:|
|train|11446535|
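
The split size can also be read programmatically from the dataset metadata, assuming the split info is populated on the Hub:

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("Team-PIXEL/rendered-wikipedia-english")
splits = builder.info.splits
if splits is not None:
    print(splits["train"].num_examples)  # 11446535
```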

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

Most of Wikipedia's text and many of its images are co-licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA) and the GNU Free Documentation License (GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).

Some text has been imported only under CC BY-SA and CC BY-SA-compatible licenses and cannot be reused under the GFDL; such text is identified on the page footer, in the page history, or on the discussion page of the article that uses it.

### Citation Information

```bibtex
@article{rust-etal-2022-pixel,
  title={Language Modelling with Pixels},
  author={Phillip Rust and Jonas F. Lotz and Emanuele Bugliarello and Elizabeth Salesky and Miryam de Lhoneux and Desmond Elliott},
  journal={arXiv preprint},
  year={2022},
  url={https://arxiv.org/abs/2207.06991}
}
```

### Contact Person

This dataset was added by Phillip Rust.

GitHub: [@xplip](https://github.com/xplip)

Twitter: [@rust_phillip](https://twitter.com/rust_phillip)