Thomas Wang committed
Commit c5555b0
1 Parent(s): 6f642ec

Add Conceptual 12M (#4162)


* Add Conceptual 12M

Co-authored-by: Mario Šaško <[email protected]>

Commit from https://github.com/huggingface/datasets/commit/9c8c8d6cb41d57a79113d7d1f252e0d6160c9edc

Files changed (4)
  1. README.md +237 -0
  2. conceptual_12m.py +77 -0
  3. dataset_infos.json +1 -0
  4. dummy/0.0.0/dummy_data.zip +3 -0
README.md ADDED
@@ -0,0 +1,237 @@
---
annotations_creators:
- found
language_creators:
- found
languages:
- en
licenses:
- other
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: cc12m
pretty_name: Conceptual 12M
---

# Dataset Card for Conceptual 12M

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Dataset Preprocessing](#dataset-preprocessing)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** [Conceptual 12M repository](https://github.com/google-research-datasets/conceptual-12m)
- **Paper:** [Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts](https://arxiv.org/abs/2102.08981)
- **Point of Contact:** [Conceptual Captions e-mail](mailto:[email protected])

### Dataset Summary

Conceptual 12M (CC12M) is a dataset with 12 million image-text pairs specifically meant to be used for vision-and-language pre-training.
Its data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M (CC3M).

### Dataset Preprocessing

This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib.request

import PIL.Image

from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent


def fetch_single_image(image_url, timeout=None, retries=0):
    # Try the download up to `retries + 1` times; return None if every attempt fails.
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": get_datasets_user_agent()},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    # Download the images of a batch in parallel and store them in a new "image" column.
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch


num_threads = 20
dset = load_dataset("conceptual_12m")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```

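Some of the URLs point to images that are no longer available, so the resulting `image` column can contain `None` entries. As a small follow-up (not part of the original card), one way to drop those rows before further use, reusing `dset` from the snippet above:

```python
# Keep only the examples whose image was fetched successfully.
dset = dset.filter(lambda example: example["image"] is not None)
```
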
### Supported Tasks and Leaderboards

- `image-captioning`: This dataset can be used to train a model for image captioning.

### Languages

All captions are in English.

## Dataset Structure

### Data Instances

Each instance represents a single image with a caption:

```
{
  'image_url': 'http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/_11MuAAKalQ/IMG_3422.JPG?imgmax=800',
  'caption': 'a very typical bus station'
}
```

### Data Fields

- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.

### Data Splits

There is only a training split, with a total of 12,423,374 rows.

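Since the metadata TSV behind this split is about 2.7 GB, it can be convenient to iterate over the examples without downloading and caching everything up front. A minimal sketch (not part of the original card), assuming streaming mode works for this loading script:

```python
from datasets import load_dataset

# Stream the single train split instead of materializing it on disk.
dset = load_dataset("conceptual_12m", split="train", streaming=True)
for example in dset.take(3):
    print(example["image_url"], "->", example["caption"])
```
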
## Dataset Creation

### Curation Rationale

Conceptual 12M shares the same pipeline with Conceptual Captions (CC3M), but relaxes some processing steps.

### Source Data

#### Initial Data Collection and Normalization

From the paper:
> To arrive at CC12M, we keep the image-text filtering intact, and relax the unimodal filters only. First, for image-based filtering, we set the maximum ratio of larger to smaller dimension to 2.5 instead of 2. We still keep only JPEG images with size greater than 400 pixels, and still exclude images that trigger pornography detectors. Second, in text-based filtering, we allow text between 3 and 256 words in the alt-text. We still discard candidates with no noun or no determiner, but permit ones without prepositions. We discard the heuristics regarding high unique-word ratio covering various POS tags and word capitalization. We set the maximum fraction of word repetition allowed to 0.2. Given a larger pool of text due to the above relaxations, the threshold for counting a word type as rare is increased from 5 to 20.

> The main motivation for CC3M to perform text transformation is that a majority of candidate captions contain ultrafine-grained entities such as proper names (people, venues, locations, etc.), making it extremely difficult to learn as part of the image captioning task. In contrast, we are not restricted by the end task of image caption generation. Our intuition is that relatively more difficult pre-training data would lead to better transferability. We thus do not perform hypernymization or digit substitution. [...] The only exception to the “keep alt-texts as raw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy of the individuals in these images. For this step, we use the Google Cloud Natural Language APIs to detect all named entities of type Person, and substitute them by a special token `<PERSON>`. Around 25% of all the alt-texts in CC12M are transformed in this fashion.

#### Who are the source language producers?

Not specified.

### Annotations

#### Annotation process

Annotations are extracted jointly with the images using the automatic pipeline.

#### Who are the annotators?

Not specified.

### Personal and Sensitive Information

From the paper:

> The only exception to the “keep alt-texts as raw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy of the individuals in these images. For this step, we use the Google Cloud Natural Language APIs to detect all named entities of type Person, and substitute them by a special token `<PERSON>`. Around 25% of all the alt-texts in CC12M are transformed in this fashion.

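To get a rough sense of how often this substitution appears in the released captions, one can count occurrences of the placeholder token on a sample of the data. A minimal sketch (not part of the original card; the sample size is an arbitrary choice):

```python
from datasets import load_dataset

# Estimate the fraction of captions containing the <PERSON> placeholder
# on a small sample rather than scanning all ~12.4M rows.
dset = load_dataset("conceptual_12m", split="train")
sample = dset.select(range(100_000))
num_person = sum("<PERSON>" in caption for caption in sample["caption"])
print(f"{num_person / len(sample):.1%} of sampled captions contain <PERSON>")
```
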
## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Soravit Changpinyo, Piyush Sharma, Nan Ding and Radu Soricut.

### Licensing Information

The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.

### Citation Information

```bibtex
@inproceedings{changpinyo2021cc12m,
  title = {{Conceptual 12M}: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts},
  author = {Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},
  booktitle = {CVPR},
  year = {2021},
}
```

### Contributions

Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset.
conceptual_12m.py ADDED
@@ -0,0 +1,77 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Conceptual 12M dataset."""

import datasets


_CITATION = """\
@inproceedings{changpinyo2021cc12m,
  title = {{Conceptual 12M}: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts},
  author = {Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},
  booktitle = {CVPR},
  year = {2021},
}
"""

_DESCRIPTION = """\
Conceptual 12M is a large-scale dataset of 12 million
image-text pairs specifically meant to be used for vision-and-language pre-training.
Its data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M.
"""

_HOMEPAGE = "https://github.com/google-research-datasets/conceptual-12m"

_LICENSE = """\
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
"""

_URL = "https://storage.googleapis.com/conceptual_12m/cc12m.tsv"


class Conceptual12M(datasets.GeneratorBasedBuilder):
    """Conceptual 12M dataset."""

    def _info(self):
        features = datasets.Features({"image_url": datasets.Value("string"), "caption": datasets.Value("string")})

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        # Single TSV file with one "<image_url>\t<caption>" pair per line.
        file = dl_manager.download(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "file": file,
                },
            ),
        ]

    def _generate_examples(self, file):
        with open(file, "r", encoding="utf-8") as fi:
            for idx, line in enumerate(fi):
                image_url, caption = line.split("\t", maxsplit=1)
                # Strip the trailing newline left on the caption after splitting.
                yield idx, {"image_url": image_url, "caption": caption.rstrip("\n")}
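As a quick local sanity check (not part of the commit), the script can be exercised directly before pushing. A minimal sketch, assuming the file above is saved as `conceptual_12m.py` in the current directory (note that this downloads the full ~2.7 GB `cc12m.tsv`):

```python
from datasets import load_dataset

# Build the dataset from the local loading script and inspect the first example.
dset = load_dataset("./conceptual_12m.py", split="train")
print(dset)     # Dataset({features: ['image_url', 'caption'], num_rows: 12423374})
print(dset[0])  # {'image_url': '...', 'caption': '...'}
```
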
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"default": {"description": "Conceptual 12M is a large-scale dataset of 12 million\nimage-text pairs specifically meant to be used for vision-and-language pre-training.\nIts data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M.\n", "citation": "@inproceedings{changpinyo2021cc12m,\n title = {{Conceptual 12M}: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts},\n author = {Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},\n booktitle = {CVPR},\n year = {2021},\n}\n", "homepage": "https://github.com/google-research-datasets/conceptual-12m", "license": "The dataset may be freely used for any purpose, although acknowledgement of\nGoogle LLC (\"Google\") as the data source would be appreciated. The dataset is\nprovided \"AS IS\" without any warranty, express or implied. Google disclaims all\nliability for any damages, direct or indirect, resulting from the use of the\ndataset.\n", "features": {"image_url": {"dtype": "string", "id": null, "_type": "Value"}, "caption": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "conceptual12_m", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2794168030, "num_examples": 12423374, "dataset_name": "conceptual12_m"}}, "download_checksums": {"https://storage.googleapis.com/conceptual_12m/cc12m.tsv": {"num_bytes": 2707204412, "checksum": "892b549d493c7e75ade10d46c88c9ddabb097790d912b74cfc0ea4ff035ec2c3"}}, "download_size": 2707204412, "post_processing_size": null, "dataset_size": 2794168030, "size_in_bytes": 5501372442}}
dummy/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ce71ccc1aa22d708bb0c9764695afafe5dbf9cfd78bcb22159abd14262137c45
size 801