96abhishekarora committed
Commit a92ad3d
1 Parent(s): aa3ecce

Update README.md

Files changed (1)
  1. README.md +5 -11
README.md CHANGED
@@ -31,18 +31,12 @@ size_categories:
 ### Dataset Summary
 
 The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets.
-
-The dataset was created using a novel deep learning pipeline that incorporates layout detection, legibility classification, custom OCR, and association of article texts spanning multiple bounding boxes. It employs efficient architectures specifically designed for mobile phones to ensure high scalability.
-
-The American Stories dataset offers high-quality data that can be utilized for various purposes. It can be used to pre-train large language models and improve their understanding of historical English and world knowledge. The dataset can also be integrated into retrieval-augmented language models, making historical information more accessible, including interpretations of political events and details about people's ancestors.
-
+It was created using a novel deep learning pipeline that incorporates layout detection, legibility classification, custom OCR, and the association of article texts spanning multiple bounding boxes. It employs efficient architectures specifically designed for mobile phones to ensure high scalability.
+The dataset offers high-quality data that can be utilized for various purposes. It can be used to pre-train large language models and improve their understanding of historical English and world knowledge.
+The dataset can also be integrated into retrieval-augmented language models, making historical information more accessible, including interpretations of political events and details about people's ancestors.
 Additionally, the structured article texts in the dataset enable the use of transformer-based methods for applications such as detecting reproduced content. This significantly enhances accuracy compared to relying solely on existing OCR techniques.
-
 The American Stories dataset serves as an invaluable resource for developing multimodal layout analysis models and other multimodal applications. Its vast size and silver quality make it ideal for innovation and research in this domain.
 
-
-
-
 ### Languages
 
 English (en)
@@ -51,8 +45,8 @@ English (en)
 The raw data in this repo contains compressed chunks of newspaper scans for each year. Each scan has its own JSON file named {scan_id}.json.
 The data loading script handles downloading, extraction, and parsing into two kinds of output:
 
-+ Article Level Output : The unit of the Dataset Dict is an associated article
-+ Scan Level Output : The unit of the Dataset Dict is an entire scan with all the raw unparsed data
++ Article-Level Output: The unit of the Dataset Dict is an associated article
++ Scan-Level Output: The unit of the Dataset Dict is an entire scan with all the raw, unparsed data
 
 ### Data Instances
 Here are some examples of what the output looks like.
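The article-level vs. scan-level distinction described in the diff can be sketched with a toy example. This is an illustration only: the record and field names used here (`scan_id`, `articles`, `article_id`, `text`) are assumptions for the sketch, not the dataset's documented schema.

```python
# Toy sketch of the two output granularities: a scan-level unit holds an
# entire scan's raw data, while an article-level unit is one associated
# article. Field names here are illustrative assumptions, not the schema.

def to_article_level(scans):
    """Flatten scan-level records so that each unit is a single article."""
    articles = []
    for scan in scans:
        for article in scan["articles"]:
            articles.append({
                "scan_id": scan["scan_id"],       # which scan it came from
                "article_id": article["article_id"],
                "text": article["text"],
            })
    return articles

# One scan-level unit containing two associated articles
scans = [
    {"scan_id": "example_scan_0001",
     "articles": [
         {"article_id": "a1", "text": "First article text."},
         {"article_id": "a2", "text": "Second article text."},
     ]},
]

article_units = to_article_level(scans)
print(len(article_units))  # prints 2: one unit per article
```

In practice, both granularities are produced by the dataset's own loading script, which performs the downloading, extraction, and parsing described above.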