Commit 3234a60 (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.7.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.7.0

Files changed (5):
  1. .gitattributes +27 -0
  2. README.md +231 -0
  3. dataset_infos.json +1 -0
  4. dummy/qasper/0.1.0/dummy_data.zip +3 -0
  5. qasper.py +130 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,231 @@
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
languages:
- en-US
licenses:
- cc-by-4-0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|s2orc
task_categories:
- question-answering
task_ids:
- closed-domain-qa
---

# Dataset Card for Qasper

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
    - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
    - [Who are the source language producers?](#who-are-the-source-language-producers)
  - [Annotations](#annotations)
    - [Annotation process](#annotation-process)
    - [Who are the annotators?](#who-are-the-annotators)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://allenai.org/data/qasper](https://allenai.org/data/qasper)
- **Demo:** [https://qasper-demo.apps.allenai.org/](https://qasper-demo.apps.allenai.org/)
- **Paper:** [https://arxiv.org/abs/2105.03011](https://arxiv.org/abs/2105.03011)
- **Blogpost:** [https://medium.com/ai2-blog/question-answering-on-scientific-research-papers-f6d6da9fd55c](https://medium.com/ai2-blog/question-answering-on-scientific-research-papers-f6d6da9fd55c)
- **Leaderboards:** [https://paperswithcode.com/dataset/qasper](https://paperswithcode.com/dataset/qasper)

### Dataset Summary

QASPER is a dataset for question answering on scientific research papers. It consists of 5,049 questions over 1,585 Natural Language Processing papers. Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text. The questions are then answered by a separate set of NLP practitioners who also provide supporting evidence for their answers.

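The dataset can be loaded with the `datasets` library; a minimal sketch (the config name and splits match the loader script `qasper.py` below):

```python
from datasets import load_dataset

# "qasper" resolves to this repository's loader script (config "qasper", version 0.1.0).
dataset = load_dataset("qasper")

print(dataset["train"].num_rows)       # 888 papers
print(dataset["validation"].num_rows)  # 281 papers
```
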
### Supported Tasks and Leaderboards

- `question-answering`: The dataset can be used to train a model for Question Answering. Success on this task is typically measured by achieving a *high* [F1 score](https://huggingface.co/metrics/f1). The [official baseline model](https://github.com/allenai/qasper-led-baseline) currently achieves a Token F1 score of 33.63 (a rough sketch of this metric follows the list) and uses [Longformer](https://huggingface.co/transformers/model_doc/longformer.html). This task has an active leaderboard, which can be found [here](https://paperswithcode.com/sota/question-answering-on-qasper).

- `evidence-selection`: The dataset can be used to train a model for Evidence Selection. Success on this task is typically measured by achieving a *high* [F1 score](https://huggingface.co/metrics/f1). The [official baseline model](https://github.com/allenai/qasper-led-baseline) currently achieves an F1 score of 39.37 and uses [Longformer](https://huggingface.co/transformers/model_doc/longformer.html). This task has an active leaderboard, which can be found [here](https://paperswithcode.com/sota/evidence-selection-on-qasper).

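Both tasks are scored with overlap F1: over answer tokens for question answering, and over selected paragraphs for evidence selection. A rough, unofficial sketch of SQuAD-style token F1 follows (the exact evaluation script lives in the baseline repository linked above):

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """SQuAD-style token-overlap F1 between two answer strings."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```
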
### Languages

English, as it is used in research papers.

## Dataset Structure

### Data Instances

A typical instance in the dataset:

```
{
  'id': "Paper ID (string)",
  'title': "Paper Title",
  'abstract': "paper abstract ...",
  'full_text': {
    'paragraphs': [["section1_paragraph1_text", "section1_paragraph2_text", ...],
                   ["section2_paragraph1_text", "section2_paragraph2_text", ...]],
    'section_name': ["section1_title", "section2_title", ...]},
  'qas': {
    'answers': [{
      'annotation_id': ["q1_answer1_annotation_id", "q1_answer2_annotation_id"],
      'answer': [{
        'unanswerable': False,
        'extractive_spans': ["q1_answer1_extractive_span1", "q1_answer1_extractive_span2"],
        'yes_no': False,
        'free_form_answer': "q1_answer1",
        'evidence': ["q1_answer1_evidence1", "q1_answer1_evidence2", ...],
        'highlighted_evidence': ["q1_answer1_highlighted_evidence1", "q1_answer1_highlighted_evidence2", ...]
      },
      {
        'unanswerable': False,
        'extractive_spans': ["q1_answer2_extractive_span1", "q1_answer2_extractive_span2"],
        'yes_no': False,
        'free_form_answer': "q1_answer2",
        'evidence': ["q1_answer2_evidence1", "q1_answer2_evidence2", ...],
        'highlighted_evidence': ["q1_answer2_highlighted_evidence1", "q1_answer2_highlighted_evidence2", ...]
      }],
      'worker_id': ["q1_answer1_worker_id", "q1_answer2_worker_id"]
    }, {...question2's answers...}, {...question3's answers...}],
    'question': ["question1", "question2", "question3", ...],
    'question_id': ["question1_id", "question2_id", "question3_id", ...],
    'question_writer': ["question1_writer_id", "question2_writer_id", "question3_writer_id", ...],
    'nlp_background': ["question1_writer_nlp_background", "question2_writer_nlp_background", ...],
    'topic_background': ["question1_writer_topic_background", "question2_writer_topic_background", ...],
    'paper_read': ["question1_writer_paper_read_status", "question2_writer_paper_read_status", ...],
    'search_query': ["question1_search_query", "question2_search_query", "question3_search_query", ...]
  }
}
```

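Because `qas` is stored as parallel lists, a paper's questions and answers line up by index. A minimal sketch of walking one instance, assuming the dataset was loaded as shown earlier:

```python
paper = dataset["train"][0]
print(paper["title"])

qas = paper["qas"]
for question, answers in zip(qas["question"], qas["answers"]):
    print("Q:", question)
    # Each question carries multiple independent annotations under "answer".
    for ans in answers["answer"]:
        if ans["unanswerable"]:
            print("  A: <unanswerable>")
        elif ans["free_form_answer"]:
            print("  A:", ans["free_form_answer"])
        else:
            print("  A:", ans["extractive_spans"] or ans["yes_no"])
```
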
### Data Fields

The following is an excerpt from the dataset README:

Within "qas", some fields should be obvious. Here is an explanation of the others:

#### Fields specific to questions:

- "nlp_background" shows the experience the question writer had. The values can be "zero" (no experience), "two" (0 - 2 years of experience), "five" (2 - 5 years of experience), and "infinity" (> 5 years of experience). The field may also be empty, indicating that the writer chose not to share this information.

- "topic_background" shows how familiar the question writer was with the topic of the paper. The values are "unfamiliar", "familiar", "research" (meaning that the topic is the writer's research area), or null.

- "paper_read", when specified, shows whether the question writer had read the paper.

- "search_query", if not empty, is the query the question writer used to find the abstract of the paper from a large pool of abstracts we made available to them.

#### Fields specific to answers

Unanswerable answers have "unanswerable" set to true. The remaining answers have exactly one of the following fields non-empty:

- "extractive_spans" are spans in the paper which serve as the answer.
- "free_form_answer" is a written-out answer.
- "yes_no" is true iff the answer is Yes, and false iff the answer is No.

"evidence" is the set of paragraphs, figures, or tables used to arrive at the answer. Tables and figures start with the string "FLOAT SELECTED".

"highlighted_evidence" is the set of sentences the answer providers selected as evidence if they chose textual evidence. The text in the "evidence" field is a mapping from these sentences to the paragraph level. That is, textual evidence in the "evidence" field is guaranteed to consist of entire paragraphs, while that is not the case with "highlighted_evidence".

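A minimal sketch of applying these rules to a single answer record (the helper names are illustrative, not part of the dataset):

```python
def resolve_answer(ans: dict) -> str:
    """Collapse a Qasper answer record into one string, following the
    convention that exactly one answer field is non-empty."""
    if ans["unanswerable"]:
        return "Unanswerable"
    if ans["extractive_spans"]:
        return ", ".join(ans["extractive_spans"])
    if ans["free_form_answer"]:
        return ans["free_form_answer"]
    return "Yes" if ans["yes_no"] else "No"

def paragraph_evidence(ans: dict) -> list:
    """Keep only textual (paragraph-level) evidence, dropping tables and
    figures, which are prefixed with "FLOAT SELECTED"."""
    return [e for e in ans["evidence"] if not e.startswith("FLOAT SELECTED")]
```
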
### Data Splits

|                     | Train | Validation |
| ------------------- | ----- | ---------- |
| Number of papers    | 888   | 281        |
| Number of questions | 2593  | 1005       |
| Number of answers   | 2675  | 1764       |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

NLP papers: The full text of the papers is extracted from [S2ORC](https://huggingface.co/datasets/s2orc) (Lo et al., 2020).

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

"The annotators are NLP practitioners, not expert researchers, and it is likely that an expert would score higher."

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Crowdsourced NLP practitioners

### Licensing Information

[CC BY 4.0](https://creativecommons.org/licenses/by/4.0)

### Citation Information

```
@inproceedings{Dasigi2021ADO,
    title={A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers},
    author={Pradeep Dasigi and Kyle Lo and Iz Beltagy and Arman Cohan and Noah A. Smith and Matt Gardner},
    year={2021}
}
```

### Contributions

Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"qasper": {"description": "A dataset containing 1585 papers with 5049 information-seeking questions asked by regular readers of NLP papers, and answered by a separate set of NLP practitioners.\n", "citation": "@inproceedings{Dasigi2021ADO,\n title={A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers},\n author={Pradeep Dasigi and Kyle Lo and Iz Beltagy and Arman Cohan and Noah A. Smith and Matt Gardner},\n year={2021}\n}\n", "homepage": "https://allenai.org/data/qasper", "license": "CC BY 4.0", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "abstract": {"dtype": "string", "id": null, "_type": "Value"}, "full_text": {"feature": {"section_name": {"dtype": "string", "id": null, "_type": "Value"}, "paragraphs": [{"dtype": "string", "id": null, "_type": "Value"}]}, "length": -1, "id": null, "_type": "Sequence"}, "qas": {"feature": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "question_id": {"dtype": "string", "id": null, "_type": "Value"}, "nlp_background": {"dtype": "string", "id": null, "_type": "Value"}, "topic_background": {"dtype": "string", "id": null, "_type": "Value"}, "paper_read": {"dtype": "string", "id": null, "_type": "Value"}, "search_query": {"dtype": "string", "id": null, "_type": "Value"}, "question_writer": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"answer": {"unanswerable": {"dtype": "bool", "id": null, "_type": "Value"}, "extractive_spans": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "yes_no": {"dtype": "bool", "id": null, "_type": "Value"}, "free_form_answer": {"dtype": "string", "id": null, "_type": "Value"}, "evidence": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "highlighted_evidence": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "annotation_id": {"dtype": "string", "id": null, "_type": "Value"}, "worker_id": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "qasper", "config_name": "qasper", "version": {"version_str": "0.1.0", "description": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 27277970, "num_examples": 888, "dataset_name": "qasper"}, "validation": {"name": "validation", "num_bytes": 9535330, "num_examples": 281, "dataset_name": "qasper"}}, "download_checksums": {"https://qasper-dataset.s3-us-west-2.amazonaws.com/qasper-train-dev-v0.1.tgz": {"num_bytes": 10359737, "checksum": "cd0cb8911342966fcc3eb91947af149cb7cf80b4f253ff9a6f0333f4752080dd"}}, "download_size": 10359737, "post_processing_size": null, "dataset_size": 36813300, "size_in_bytes": 47173037}}
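The infos file is machine-generated by the `datasets` library, but it can be inspected directly; a small sketch reading the split metadata:

```python
import json

with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

for name, split in infos["qasper"]["splits"].items():
    print(name, split["num_examples"], "examples,", split["num_bytes"], "bytes")
```
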
dummy/qasper/0.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b220315af309990e51221f07dfac6fad8b43b00b7d8f267b1f555c797bb5c2e
size 15066
qasper.py ADDED
@@ -0,0 +1,130 @@
# coding=utf-8
# Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""Qasper: A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers."""


import json
import os

import datasets


logger = datasets.logging.get_logger(__name__)


_CITATION = """\
@inproceedings{Dasigi2021ADO,
    title={A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers},
    author={Pradeep Dasigi and Kyle Lo and Iz Beltagy and Arman Cohan and Noah A. Smith and Matt Gardner},
    year={2021}
}
"""
_LICENSE = "CC BY 4.0"
_DESCRIPTION = """\
A dataset containing 1585 papers with 5049 information-seeking questions asked by regular readers of NLP papers, and answered by a separate set of NLP practitioners.
"""

_HOMEPAGE = "https://allenai.org/data/qasper"
_DOWNLOAD_URLS = {"data": "https://qasper-dataset.s3-us-west-2.amazonaws.com/qasper-train-dev-v0.1.tgz"}
data_files = {"train": "qasper-train-v0.1.json", "dev": "qasper-dev-v0.1.json"}

_VERSION = "0.1.0"


class Qasper(datasets.GeneratorBasedBuilder):
    """Qasper: A Dataset of Information-Seeking Q&A Anchored in Research Papers."""

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="qasper",
            version=datasets.Version(_VERSION),
            description=_DESCRIPTION,
        )
    ]

    def _info(self):
        features = datasets.Features(
            {
                "id": datasets.Value("string"),
                "title": datasets.Value("string"),
                "abstract": datasets.Value("string"),
                "full_text": datasets.features.Sequence(
                    {
                        "section_name": datasets.Value("string"),
                        "paragraphs": [datasets.Value("string")],
                    }
                ),
                "qas": datasets.features.Sequence(
                    {
                        "question": datasets.Value("string"),
                        "question_id": datasets.Value("string"),
                        "nlp_background": datasets.Value("string"),
                        "topic_background": datasets.Value("string"),
                        "paper_read": datasets.Value("string"),
                        "search_query": datasets.Value("string"),
                        "question_writer": datasets.Value("string"),
                        "answers": datasets.features.Sequence(
                            {
                                "answer": {
                                    "unanswerable": datasets.Value("bool"),
                                    "extractive_spans": datasets.features.Sequence(datasets.Value("string")),
                                    "yes_no": datasets.Value("bool"),
                                    "free_form_answer": datasets.Value("string"),
                                    "evidence": datasets.features.Sequence(datasets.Value("string")),
                                    "highlighted_evidence": datasets.features.Sequence(datasets.Value("string")),
                                },
                                "annotation_id": datasets.Value("string"),
                                "worker_id": datasets.Value("string"),
                            }
                        ),
                    }
                ),
            }
        )

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        # Download and extract the tarball that contains both the train and dev JSON files.
        downloaded_files = dl_manager.download_and_extract(_DOWNLOAD_URLS)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": os.path.join(downloaded_files["data"], data_files["train"])},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={"filepath": os.path.join(downloaded_files["data"], data_files["dev"])},
            ),
        ]

    def _generate_examples(self, filepath):
        """This function returns the examples in the raw (text) form."""
        logger.info("generating examples from = %s", filepath)
        with open(filepath, encoding="utf-8") as f:
            qasper = json.load(f)
        # The source JSON maps paper IDs to records; surface each ID as a field.
        for id_ in qasper:
            qasper[id_]["id"] = id_
            yield id_, qasper[id_]
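
During development, the loader can be exercised without publishing it, since `load_dataset` also accepts a local script path; a sketch, assuming it is run from this repository's root:

```python
from datasets import load_dataset

# Runs _split_generators and _generate_examples from the local script.
dataset = load_dataset("./qasper.py")
print(dataset)
print(dataset["train"].features["qas"])
```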