Molbap (HF staff) committed dd7c28b (1 parent: 017e2d2)

Update README.md

Files changed (1): README.md (+92 -11)
README.md CHANGED
@@ -90,25 +90,106 @@ def get_columnar_separators(page, min_prominence=0.3, num_bins=10, kernel_width=
  For each pdf document, we store statistics on the file size, number of words (as characters separated by spaces), number of pages, as well as the rendering times of each page for a given dpi.
  #### Filtering process

- File size and page rendering time are used to set thresholds in the final dataset: the goal is to remove files that are larger than 100 MB, or that take more than 500ms to render on a modern machine, to optimize dataloading at scale. Having "too large" or "too slow" files would add a burden to large-scale training pipelines and we choose to alleviate this in the current release. Finally, a full pass over the dataset is done, trying to open a bytestream

  We get to 48 million pages kept as valid samples.

  As a last step, we use XLM-RoBERTa to restrict the dataset to an English subset, specifically `papluca/xlm-roberta-base-language-detection`, on the first 512 words of the first page of each document.
  Be aware that some documents may have several languages embedded in them, or that some predictions might be inaccurate.

-
  At the end, each document exists as a pairing of a pdf and a json file containing extensive OCR annotation as well as metadata information about rendering times. The filtering and packaging in
  webdataset format are tailored towards multimodal machine learning at scale, specifically image-to-text tasks.

- ### Dataset statistics

- In this dataset, an additional filtering has been done to restrict documents to the english language to 18.6 million pages over 2.16 million documents. This filtering has been done using XLM

- Further, the metadata for each document has been formatted in this way:

- TODO add formatting

  Such a formatting follows the multimodal dataset from the Industry Document Library, `https://huggingface.co/datasets/pixparse/IDL-wds`.

@@ -125,17 +206,17 @@ Such a formatting follows the multimodal dataset from the Industry Document Libr

  Pablo Montalvo, Ross Wightman

- ### Disclaimer

- This dataset, as a corpus, does not represent the intent and purpose from CC-MAIN-2021-31-PDF-UNTRUNCATED.
- TODO add disclaimer on biases of using that dataset as a faithful representation of existing documents on the web

  ### Licensing Information

  Data has been filtered from the original corpus. As a consequence, users should note [Common Crawl's license and terms of use](https://commoncrawl.org/terms-of-use) and the [Digital Corpora project's Terms of Use](https://digitalcorpora.org/about-digitalcorpora/terms-of-use/).

- ### Citation Information
- ??
 
  For each pdf document, we store statistics on the file size, number of words (as characters separated by spaces), number of pages, as well as the rendering times of each page for a given dpi.
  #### Filtering process

+ File size and page rendering time are used to set thresholds in the final dataset: the goal is to remove files that are larger than 100 MB, or that take more than 500 ms to render on a modern machine, to optimize dataloading at scale. Having "too large" or "too slow" files would add a burden to large-scale training pipelines, and we choose to alleviate this in the current release. Finally, a full pass over the dataset is done, trying to open and decode a bytestream from each raw object and discarding any object (pdf/json pair) that fails to be opened, to remove corrupted data.
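
The exact filtering code is not included in the card; the following is a minimal sketch of such a size / render-time / corruption check, assuming pypdf for decoding and pdf2image for rendering (the `keep_sample` helper, the 150 dpi probe, and timing only the first page are illustrative simplifications):

```python
import io
import time

from pdf2image import convert_from_bytes
from pypdf import PdfReader

MAX_FILE_SIZE = 100 * 1024 * 1024  # 100 MB
MAX_RENDER_TIME = 0.5              # 500 ms per page

def keep_sample(pdf_bytes: bytes) -> bool:
    """Illustrative filter: drop files that are too large, too slow to render, or corrupted."""
    if len(pdf_bytes) > MAX_FILE_SIZE:
        return False
    try:
        # Opening and decoding the bytestream catches corrupted objects.
        reader = PdfReader(io.BytesIO(pdf_bytes))
        if len(reader.pages) == 0:
            return False
        # Time the rendering of the first page only; the release pipeline stores per-page times.
        start = time.perf_counter()
        convert_from_bytes(pdf_bytes, dpi=150, first_page=1, last_page=1)
        return (time.perf_counter() - start) <= MAX_RENDER_TIME
    except Exception:
        return False
```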
 
  We get to 48 million pages kept as valid samples.

  As a last step, we use XLM-RoBERTa to restrict the dataset to an English subset, specifically `papluca/xlm-roberta-base-language-detection`, on the first 512 words of the first page of each document.
  Be aware that some documents may have several languages embedded in them, or that some predictions might be inaccurate.
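
The language-filtering script itself is not part of the card; a minimal sketch of such a check with the `transformers` pipeline (the `is_english` helper and the 0.5 score threshold are illustrative):

```python
from transformers import pipeline

# Language-identification model named in the card.
lang_id = pipeline("text-classification", model="papluca/xlm-roberta-base-language-detection")

def is_english(first_page_text: str, threshold: float = 0.5) -> bool:
    """Illustrative check on the first 512 space-separated words of a document's first page."""
    snippet = " ".join(first_page_text.split()[:512])
    pred = lang_id(snippet, truncation=True)[0]  # e.g. {'label': 'en', 'score': 0.99}
    return pred["label"] == "en" and pred["score"] >= threshold
```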
 
 
  At the end, each document exists as a pairing of a pdf and a json file containing extensive OCR annotation as well as metadata information about rendering times. The filtering and packaging in
  webdataset format are tailored towards multimodal machine learning at scale, specifically image-to-text tasks.
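
As an illustration, the shards can be streamed with the `webdataset` library; a minimal sketch, where the shard pattern is a placeholder and the `pdf`/`json` keys follow the pairing described above:

```python
import json

import webdataset as wds

# Placeholder shard pattern; substitute the actual shard files of this dataset.
shards = "path/to/shards/train-{000000..000099}.tar"

dataset = wds.WebDataset(shards)
for sample in dataset:
    pdf_bytes = sample["pdf"]                # raw pdf bytestream
    annotation = json.loads(sample["json"])  # OCR words/lines/bboxes and rendering metadata
    print(sample["__key__"], "->", len(annotation["pages"]), "pages")
    break
```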
 
+ ### Data, metadata and statistics
+
+ Pdf files come from various sources and can contain multiple pages. They can be rendered into RGB images using the engine of your choice, e.g. [pypdf](https://github.com/py-pdf/pypdf); below, [pdf2image](https://github.com/Belval/pdf2image) is used to render the first page:
+
+ ```python
+ from pdf2image import convert_from_bytes
+
+ # sample['pdf'] holds the raw pdf bytestream; render only the first page at 300 dpi
+ pdf_first_page = convert_from_bytes(sample['pdf'], dpi=300, first_page=1, last_page=1)[0]
+ ```
+

+ The metadata for each document has been formatted in this way. Each `pdf` is paired with a `json` file with the following structure. Entries have been shortened for readability.
+ ```json
+ {
+   "pages": [
+     {
+       "words": [
+         {
+           "text": [
+             "Health", "Smart", "Virginia", "Sample", "Lesson", "Plan", "Grade", "8", "-", "HP-7"
+           ],
+           "bbox": [
+             [0.117647, 0.045563, 0.051981, 0.015573],
+             [0.174694, 0.045563, 0.047954, 0.015573],
+             [0.227643, 0.045563, 0.05983, 0.015573],
+             [0.292539, 0.045563, 0.061002, 0.015573],
+             [0.357839, 0.045563, 0.058053, 0.015573],
+             [0.420399, 0.045563, 0.035908, 0.015573],
+             [0.716544, 0.04577, 0.054624, 0.016927],
+             [0.776681, 0.04577, 0.010905, 0.016927],
+             [0.793087, 0.04577, 0.00653, 0.016927],
+             [0.805078, 0.04577, 0.044768, 0.016927]
+           ],
+           "score": [
+             1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
+           ],
+           "line_pos": [
+             [0, 0], [0, 8], [0, 16], [0, 24], [0, 32], [0, 40], [0, 48], [1, 0], [2, 0], [3, 0]
+           ]
+         }
+       ],
+       "lines": [
+         {
+           "text": [
+             "Health Smart Virginia Sample Lesson Plan Grade", "Physical", "Disease", "Health", "2020", "Grade 8 Sample Lesson Plan:"
+           ],
+           "bbox": [
+             [0.117647, 0.045563, 0.653521, 0.016927],
+             [0.716546, 0.063952, 0.07323199999999996, 0.016927],
+             [0.716546, 0.082134, 0.07102200000000003, 0.016927],
+             [0.716546, 0.100315, 0.05683300000000002, 0.016927],
+             [0.716546, 0.118497, 0.043709, 0.016927],
+             [0.27, 0.201185, 0.459554, 0.028268]
+           ],
+           "score": [
+             1.0, 1.0, 1.0, 1.0, 1.0, 1.0
+           ],
+           "word_slice": [
+             [0, 7], [7, 8], [8, 9], [9, 10], [10, 11], [11, 16]
+           ]
+         }
+       ],
+       "images_bbox": [
+         [0.37353, 0.090907, 0.253736, 0.100189]
+       ],
+       "images_bbox_no_text_overlap": [
+         [0.37353, 0.090907, 0.253736, 0.100189]
+       ]
+     }
+   ]
+ }
+ ```
+
+ The top-level key, `pages`, is a list of every page in the document. The above example shows only one page.
+ `words` is a list of words without spaces, with their individual associated bounding boxes in the next entry.
+ `bbox` contains the bounding box coordinates in `left, top, width, height` format, with coordinates relative to the page size.
+ `line_pos`, for words, is a list of tuples indicating the index of the line the word belongs to, then the character-wise starting position of the word within that line.
+
+ `lines` are lines (parts of sequences, strings separated by spaces) grouped together using the heuristic detailed above.
+ `bbox` contains the bounding box coordinates in `left, top, width, height` format, with coordinates relative to the page size.
+ `word_slice`, for lines, gives the `[start, end)` indices of the words that make up each line.
+
+ For each page, `images_bbox` gives the bounding boxes of the images embedded in the page.
+ `images_bbox_no_text_overlap` gives a reduced list of bounding boxes that have no overlap with the text found in the pdf; that does not mean the images contain no text, only that their bounding boxes do not overlap with any text extracted from the pdf.
+
+ `score` is a placeholder of value 1.0 for the entire dataset.
+
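
As an illustration, the relative `left, top, width, height` boxes can be projected onto a rendered page in pixel space; a minimal sketch, reusing a `sample` pdf/json pairing as above (names are illustrative):

```python
import json

from pdf2image import convert_from_bytes

# Render the first page and load the matching annotation.
page_image = convert_from_bytes(sample["pdf"], dpi=150, first_page=1, last_page=1)[0]
page = json.loads(sample["json"])["pages"][0]

img_w, img_h = page_image.size
words = page["words"][0]
for text, (left, top, w, h) in zip(words["text"], words["bbox"]):
    # Relative [left, top, width, height] -> absolute pixel box (left, top, right, bottom).
    box = (int(left * img_w), int(top * img_h), int((left + w) * img_w), int((top + h) * img_h))
    print(text, box)  # e.g. crop the word image with page_image.crop(box)
```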

  Such a formatting follows the multimodal dataset from the Industry Document Library, `https://huggingface.co/datasets/pixparse/IDL-wds`.


  Pablo Montalvo, Ross Wightman

+ ### Disclaimer and note to researchers
+
+ This dataset, as a corpus, does not represent the intent and purpose of CC-MAIN-2021-31-PDF-UNTRUNCATED. The original is made to represent extant pdf data in its diversity and complexity. In particular, common issues related to the misuse of pdfs, such as mojibake (garbled text due to decoding errors), are yet to be addressed systematically, and this dataset presents simplifications that can hide such issues found in the wild. In order to address these biases, we recommend carefully examining both the simplified annotation and the original `pdf` data, beyond a simple rendering.
+
+ Further, the annotation is limited to what can be extracted and is readily available: text drawn in images and only present as a bitmap rendition might be missed entirely by said annotation.

  ### Licensing Information

  Data has been filtered from the original corpus. As a consequence, users should note [Common Crawl's license and terms of use](https://commoncrawl.org/terms-of-use) and the [Digital Corpora project's Terms of Use](https://digitalcorpora.org/about-digitalcorpora/terms-of-use/).