---
license: cc-by-4.0
task_categories:
- text-classification
- text-generation
- text-retrieval
- summarization
- question-answering
language:
- en
tags:
- social science
- economics
- news
- newspaper
- large language modeling
- nlp
pretty_name: AmericanStories
size_categories:
- 100M<n<1B
---
# Dataset Card for the American Stories dataset

## Dataset Description

- **Homepage:** Coming Soon
- **Repository:** https://github.com/dell-research-harvard/AmericanStories 
- **Paper:** Coming Soon
- **Point of Contact:** [email protected]

### Dataset Summary

  The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets.

  The dataset was created using a novel deep learning pipeline that incorporates layout detection, legibility classification, custom OCR, and association of article texts spanning multiple bounding boxes. It employs efficient architectures specifically designed for mobile phones to ensure high scalability.

  The American Stories dataset offers high-quality data that can be utilized for various purposes. It can be used to pre-train large language models and improve their understanding of historical English and world knowledge. The dataset can also be integrated into retrieval-augmented language models, making historical information more accessible, including interpretations of political events and details about people's ancestors.

  Additionally, the structured article texts in the dataset enable the use of transformer-based methods for applications such as detecting reproduced content. This significantly enhances accuracy compared to relying solely on existing OCR techniques.

  The American Stories dataset serves as an invaluable resource for developing multimodal layout analysis models and other multimodal applications. Its vast size and silver quality make it ideal for innovation and research in this domain.




### Languages

English (en)

## Dataset Structure
The raw data in this repo consists of compressed chunks of newspaper scans for each year. Each scan has its own JSON file, named {scan_id}.json.
The data loading script handles downloading, extraction, and parsing into two kinds of output:

+ Article-level output: the unit of the DatasetDict is an associated article
+ Scan-level output: the unit of the DatasetDict is an entire scan with all of the raw, unparsed data

### Data Instances
Here are examples of what the output looks like.

#### Article level 
```python
{
  'article_id': '1_1870-01-01_p1_sn82014899_00211105483_1870010101_0773',
  'newspaper_name': 'The weekly Arizona miner.',
  'edition': '01',
  'date': '1870-01-01',
  'page': 'p1',
  'headline': '',
  'byline': '',
  'article': 'PREyors 10 leaving San Francisco for Wash ington City, our Governor, A. r. K. Saford. called upon Generals Thomas and Ord and nt the carrying out of what (truncated)'
}
```
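For illustration, the `article_id` above can be split into its underscore-delimited components. Note that the interpretation of each piece (article index, date, page, LCCN, scan identifiers) is an assumption based on this single example, not a documented schema:

```python
# Split the example article_id into its underscore-delimited components.
# The meaning assigned to each piece below is inferred from the example
# record, not from a documented schema.
article_id = "1_1870-01-01_p1_sn82014899_00211105483_1870010101_0773"
parts = article_id.split("_")
date, page, lccn = parts[1], parts[2], parts[3]
print(date, page, lccn)  # 1870-01-01 p1 sn82014899
```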

#### Scan level
```python
{'raw_data_string': '{"lccn": {"title": "The Massachusetts spy, or, Thomas\'s Boston journal.", "geonames_ids": ["4930956"],....other_keys:values}
```
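Because the scan-level output is a raw JSON string, it can be parsed with the standard library. The snippet below uses a shortened stand-in for `raw_data_string`, keeping only the keys visible in the example above; real strings contain many more keys:

```python
import json

# Shortened stand-in for a scan's raw_data_string, keeping only the keys
# shown in the example above; real strings contain many more keys.
raw_data_string = (
    '{"lccn": {"title": "The Massachusetts spy, or, '
    'Thomas\'s Boston journal.", "geonames_ids": ["4930956"]}}'
)

scan = json.loads(raw_data_string)
print(scan["lccn"]["title"])
```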

### Data Fields


#### Article Level

+ "article_id": Unique Id for an assocaited article
+ "newspaper_name": Newspaper Name
+ "edition": Edition number
+ "date": Date of publication
+ "page": Page number
+ "headline": Headline Text
+ "byline": Byline Text
+ "article": Article Text

#### Scan Level

"raw_data_string": Unparsed scan-level data tha contains scan metadata from Library of Congress, all content regions with their bounding boxes, OCR text and legibility classification


### Data Splits

There are no train, test, or validation splits. Because the dataset contains a massive number of units (articles or newspaper scans), the data is instead split by year. Once the dataset is loaded,
specific years are accessed with the syntax dataset["year"] rather than the usual dataset["train"], where year can be any year between 1774 and 1963 with at least one scan.
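The year-keyed access pattern can be sketched as follows. The plain dict below stands in for the loaded DatasetDict (so the sketch runs without downloading anything); the record values are copied from the article-level example above:

```python
# Stand-in for the DatasetDict returned by the loading script: splits are
# keyed by year strings rather than "train"/"test"/"validation".
# The record values are copied from the article-level example above.
dataset = {
    "1870": [
        {
            "article_id": "1_1870-01-01_p1_sn82014899_00211105483_1870010101_0773",
            "newspaper_name": "The weekly Arizona miner.",
            "date": "1870-01-01",
            "page": "p1",
        }
    ],
}

# Access a year exactly as a named split would normally be accessed.
articles_1870 = dataset["1870"]
print(articles_1870[0]["date"])  # 1870-01-01
```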


## Dataset Creation

### Curation Rationale

The dataset was created to provide researchers with a large, high-quality corpus of structured and transcribed newspaper article texts from historical local American newspapers. 
These texts provide a massive repository of information about topics ranging from political polarization to the construction of national and cultural identities to the minutiae of the daily lives of people's ancestors. 
The dataset will be useful to a wide variety of researchers including historians, other social scientists, and NLP practitioners.

### Source Data

#### Initial Data Collection and Normalization

The dataset is drawn entirely from image scans in the public domain that are freely available for download from the Library of Congress's website.
We processed all images as described in the associated paper. 

#### Who are the source language producers?

The source language was produced by newspaper editors, columnists, and other contributors to the original newspapers.

### Annotations

#### Annotation process

Not Applicable

#### Who are the annotators?

Not Applicable

### Personal and Sensitive Information

Not Applicable

## Considerations for Using the Data

### Social Impact of Dataset

The dataset provides high-quality data that could be used for pre-training a large language model to achieve a better understanding of historical English and historical world knowledge.
The dataset could also be added to the external database of a retrieval-augmented language model to make historical information, ranging from interpretations of political events to minutiae about the lives of people's ancestors, more widely accessible.
Furthermore, structured article texts facilitate using transformer-based methods for popular applications like detection of reproduced content, significantly improving accuracy relative to using the existing OCR alone.
Finally, American Stories provides a massive silver-quality dataset for innovating on multimodal layout analysis models and other multimodal applications.
 
### Discussion of Biases

This dataset contains unfiltered content composed by newspaper editors, columnists, and other sources. 
In addition to other potentially harmful content, the corpus may contain factual errors and intentional misrepresentations of news events. 
All content should be viewed as individuals' opinions and not as a purely factual account of events of the day. 

### Other Known Limitations

As a large corpus of news articles, this dataset could hypothetically be used to train a model to generate realistic news articles.
While this would be acceptable use, outputs from this model would need to be carefully labeled as AI generated to avoid confusion and alert users to the possibility of factual errors.
Additionally, this model should not be finetuned to generate toxic content.


## Additional Information

### Dataset Curators

Melissa Dell (Harvard), Jacob Carlson (Harvard), Tom Bryan (Harvard), Emily Silcock (Harvard), Abhishek Arora (Harvard), Zejiang Shen (MIT), Luca D'Amico-Wong (Harvard), Quan Le (Princeton), Pablo Querubin (NYU), Leander Heldring (Kellogg School of Management)

### Licensing Information

The dataset is released under a CC BY 4.0 license.

### Citation Information

Coming Soon

### Contributions

Coming Soon