96abhishekarora committed
Commit 01f714c
1 Parent(s): 3484aca

Updated Dataset Card

Files changed (1): README.md (+172 -1)
---
license: cc-by-4.0
task_categories:
- text-classification
- text-generation
- text-retrieval
- summarization
- question-answering
- information-retrieval
- multimodal
- layout-analysis
language:
- en
tags:
- social science
- economics
- news
- newspaper
- large language modeling
- nlp
pretty_name: AmericanStories
size_categories:
- 100M<n<1B
---
# Dataset Card for AmericanStories

## Dataset Description

- **Homepage:** Coming Soon
- **Repository:** https://github.com/dell-research-harvard/AmericanStories
- **Paper:** Coming Soon
- **Point of Contact:** [email protected]
### Dataset Summary

The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public-domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets.

The dataset was created using a novel deep learning pipeline that incorporates layout detection, legibility classification, custom OCR, and association of article texts spanning multiple bounding boxes. The pipeline employs efficient architectures originally designed for mobile phones, which keeps it highly scalable.

The dataset offers high-quality data that can be used for a variety of purposes. It can be used to pre-train large language models and improve their understanding of historical English and world knowledge. It can also be integrated into retrieval-augmented language models, making historical information more accessible, including interpretations of political events and details about people's ancestors.

Additionally, the structured article texts enable transformer-based methods for applications such as detecting reproduced content, significantly improving accuracy compared to relying solely on existing OCR.

Finally, the American Stories dataset serves as an invaluable resource for developing multimodal layout analysis models and other multimodal applications. Its vast size and silver quality make it well suited for innovation and research in this domain.
### Languages

English (en)
## Dataset Structure

The raw data in this repository contains compressed chunks of newspaper scans for each year. Each scan has its own JSON file, named {scan_id}.json.
The data loading script takes care of downloading, extraction, and parsing into two kinds of output (see the loading sketch after the list):

+ Article-Level Output: the unit of the Dataset Dict is an associated article
+ Scan-Level Output: the unit of the Dataset Dict is an entire scan, with all the raw unparsed data
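A minimal loading sketch follows. The `"subset_years"` configuration name and the `year_list` argument are assumptions about this repository's loading script, not an interface confirmed by this card.

```python
from datasets import load_dataset

# Minimal sketch: load article-level output for selected years.
# The "subset_years" config name and the year_list argument are
# assumed here and may differ from the actual loading script.
dataset = load_dataset(
    "dell-research-harvard/AmericanStories",
    "subset_years",
    year_list=["1809", "1810"],
    trust_remote_code=True,  # the repo ships a custom loading script
)

print(dataset)  # one split-like entry per requested year
```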
### Data Instances

Here are some examples of what the output looks like.

#### Article level

{
 'article_id': '1_1870-01-01_p1_sn82014899_00211105483_1870010101_0773',
 'newspaper_name': 'The weekly Arizona miner.',
 'edition': '01',
 'date': '1870-01-01',
 'page': 'p1',
 'headline': '',
 'byline': '',
 'article': 'PREyors 10 leaving San Francisco for Wash ington City, our Governor, A. r. K. Saford. called upon Generals Thomas and Ord and nt the carrying out of what (truncated)'
}

#### Scan level

{'raw_data_string': '{"lccn": {"title": "The Massachusetts spy, or, Thomas\'s Boston journal.", "geonames_ids": ["4930956"],....other_keys:values}
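Since the scan-level output stores each record as a single JSON string, it can be parsed back into a structured object. A minimal sketch, assuming a scan-level configuration named `"subset_years_content_regions"` (the config name is an assumption; the `lccn.title` path matches the example above):

```python
import json

from datasets import load_dataset

# Assumed scan-level config name; the actual loading script may differ.
scans = load_dataset(
    "dell-research-harvard/AmericanStories",
    "subset_years_content_regions",
    year_list=["1870"],
    trust_remote_code=True,
)

# Parse the raw JSON string of the first 1870 scan back into a dict.
record = json.loads(scans["1870"][0]["raw_data_string"])
print(record["lccn"]["title"])  # newspaper title, as in the example above
```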
### Data Fields

#### Article Level

+ "article_id": Unique ID for an associated article
+ "newspaper_name": Newspaper name
+ "edition": Edition number
+ "date": Date of publication
+ "page": Page number
+ "headline": Headline text
+ "byline": Byline text
+ "article": Article text

#### Scan Level

+ "raw_data_string": Unparsed scan-level data that contains scan metadata from the Library of Congress, all content regions with their bounding boxes, OCR text, and legibility classification
### Data Splits

There are no train, test, or validation splits. Since the dataset has a massive number of units (articles or newspaper scans), the data are instead split by year. Once the dataset is loaded, rather than accessing a split as dataset["train"], specific years can be accessed using the syntax dataset["year"], where year can be any year between 1774 and 1963 for which there is at least one scan.
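For example, with the dataset loaded as in the earlier sketch:

```python
# Years act as split names; only years with at least one scan are present.
articles_1809 = dataset["1809"]

print(len(articles_1809))            # number of associated articles
print(articles_1809[0]["headline"])  # fields listed under "Data Fields"
```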
## Dataset Creation

### Curation Rationale

The dataset was created to provide researchers with a large, high-quality corpus of structured and transcribed newspaper article texts from historical local American newspapers.
These texts provide a massive repository of information about topics ranging from political polarization to the construction of national and cultural identities to the minutiae of the daily lives of people's ancestors.
The dataset will be useful to a wide variety of researchers, including historians, other social scientists, and NLP practitioners.

### Source Data

#### Initial Data Collection and Normalization

The dataset is drawn entirely from image scans in the public domain that are freely available for download from the Library of Congress's website.
We processed all images as described in the associated paper.

#### Who are the source language producers?

The source language was produced by people: newspaper editors, columnists, and other contributors.
### Annotations

#### Annotation process

Not applicable.

#### Who are the annotators?

Not applicable.

### Personal and Sensitive Information

Not applicable.
## Considerations for Using the Data

### Social Impact of Dataset

This dataset provides high-quality data that could be used for pre-training a large language model to achieve better understanding of historical English and historical world knowledge.
The dataset could also be added to the external database of a retrieval-augmented language model to make historical information - ranging from interpretations of political events to minutiae about the lives of people's ancestors - more widely accessible.
Furthermore, structured article texts facilitate using transformer-based methods for popular applications like detection of reproduced content, significantly improving accuracy relative to using the existing OCR.
Finally, American Stories provides a massive silver-quality dataset for innovating multimodal layout analysis models and other multimodal applications.

### Discussion of Biases

This dataset contains unfiltered content composed by newspaper editors, columnists, and other sources.
In addition to other potentially harmful content, the corpus may contain factual errors and intentional misrepresentations of news events.
All content should be viewed as individuals' opinions and not as a purely factual account of events of the day.

### Other Known Limitations

As a large corpus of news articles, this dataset could hypothetically be used to train a model to generate realistic news articles.
While this would be an acceptable use, outputs from such a model would need to be clearly labeled as AI-generated to avoid confusion and to alert users to the possibility of factual errors.
Additionally, such a model should not be fine-tuned to generate toxic content.
## Additional Information

### Dataset Curators

Melissa Dell (Harvard), Jacob Carlson (Harvard), Tom Bryan (Harvard), Emily Silcock (Harvard), Abhishek Arora (Harvard), Zejiang Shen (MIT), Luca D'Amico-Wong (Harvard), Quan Le (Princeton), Pablo Querubin (NYU), Leander Heldring (Kellogg School of Management)

### Licensing Information

The dataset has a CC-BY 4.0 license.

### Citation Information

Coming Soon

### Contributions

Coming Soon