English (en)

## Dataset Structure

The raw data in this repo consists of compressed chunks of newspaper scans for each year. Each scan has its own JSON file, named {scan_id}.json.
The data loading script takes care of downloading, extracting, and parsing the data into outputs of two kinds:

+ Article-Level Output: The unit of the Dataset Dict is an associated article
+ Scan-Level Output: The unit of the Dataset Dict is an entire scan, with all the raw, unparsed data
### Data Instances

Here are some examples of what the output looks like.

#### Article level

```
{
  'article_id': '1_1870-01-01_p1_sn82014899_00211105483_1870010101_0773',
  'newspaper_name': 'The weekly Arizona miner.',
  ...
  'headline': '',
  'byline': '',
  'article': 'PREyors 10 leaving San Francisco for Wash ington City, our Governor, A. r. K. Saford. called upon Generals Thomas and Ord and nt the carrying out of what (truncated)'
}
```
#### Scan level

```
{'raw_data_string': '{"lccn": {"title": "The Massachusetts spy, or, Thomas\'s Boston journal.", "geonames_ids": ["4930956"],....other_keys:values}
```
### Data Fields

#### Article Level

+ "article_id": Unique ID for an associated article
+ "newspaper_name": Newspaper name
+ "edition": Edition number
+ "date": Date of publication
+ "page": Page number
+ "headline": Headline text (empty if none was detected)
+ "byline": Byline text (empty if none was detected)
+ "article": Transcribed article text
#### Scan Level

+ "raw_data_string": Unparsed scan-level data containing the scan metadata from the Library of Congress and all content regions, with their bounding boxes, OCR text, and legibility classifications
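Since "raw_data_string" is itself a JSON string, it can be parsed with the standard library. A minimal sketch, assuming a dataset loaded with one of the scan-level configs described under "Accessing the Data" below, and using only the keys visible in the example instance above:

```
import json

# Parse the raw record for the first scan of a loaded year
scan = json.loads(dataset["1809"][0]["raw_data_string"])

# Scan-level metadata from the Library of Congress
print(scan["lccn"]["title"])         # newspaper title
print(scan["lccn"]["geonames_ids"])  # e.g. ["4930956"]
```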
### Data Splits

There are no train, test, or validation splits. Since the dataset has a massive number of units (articles or newspaper scans), we have split the data by year. Once the dataset is loaded, instead of accessing a split in the usual way as dataset["train"], specific years can be accessed using the syntax dataset["year"], where year can be any year between 1774 and 1963, as long as there is at least one scan for that year.
The data loading script provides options to download either a subset of years or all years at once.
### Accessing the Data

There are 4 config options that can be used to access the data, depending on the use case.

```
from datasets import load_dataset

# Download data for the years 1809 and 1810 at the associated article level (default)
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "subset_years",
                       year_list=["1809", "1810"])

# Download and process data for all years at the article level
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "all_years")

# Download and process data for 1809 at the scan level
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "subset_years_content_regions",
                       year_list=["1809"])

# Download and process data for all years at the scan level
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "all_years_content_regions")
```

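Once loaded, each year behaves like a named split. A short usage sketch (the field names follow the article-level schema above; the printed values depend on what was downloaded):

```
# Years are accessed like splits: dataset["1809"] instead of dataset["train"]
print(dataset["1809"].num_rows)

# Inspect the first associated article of 1809
example = dataset["1809"][0]
print(example["article_id"])
print(example["newspaper_name"])
```
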
## Dataset Creation
### Social Impact of Dataset

This dataset provides high-quality data that could be used for pre-training a large language model to achieve a better understanding of historical English and historical world knowledge.
The dataset could also be added to the external database of a retrieval-augmented language model to make historical information - ranging from interpretations of political events to minutiae about the lives of people's ancestors - more widely accessible.
Furthermore, the structured article texts it provides facilitate the use of transformer-based methods for popular applications like detecting reproduced content, significantly improving accuracy relative to using the existing OCR.
It can also be used to develop innovative multimodal layout analysis models and other multimodal applications.
### Discussion of Biases

All content should be viewed as individuals' opinions and not as a purely factual account of events.
### Other Known Limitations

As a large corpus of news articles, this dataset could hypothetically be used to train a model to generate realistic news articles.
While this would be an acceptable use, outputs from such a model would need to be clearly labeled as AI-generated to avoid confusion and to alert users to the possibility of factual errors.
Additionally, such a model should not be fine-tuned to generate toxic content.
### Licensing Information

The dataset is released under a CC-BY 4.0 license.
### Citation Information