phlobo committed on
Commit
1c12d6b
1 Parent(s): e0ca639

Update bioasq_task_b based on git version c0b0d85

.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ language:
+ - en
+ bigbio_language:
+ - English
+ license: other
+ multilinguality: monolingual
+ bigbio_license_shortname: NLM_LICENSE
+ pretty_name: BioASQ Task B
+ homepage: http://participants-area.bioasq.org/datasets/
+ bigbio_pubmed: true
+ bigbio_public: false
+ bigbio_tasks:
+ - QUESTION_ANSWERING
+ ---
+
+
+ # Dataset Card for BioASQ Task B
+
+ ## Dataset Description
+
+ - **Homepage:** http://participants-area.bioasq.org/datasets/
+ - **Pubmed:** True
+ - **Public:** False
+ - **Tasks:** QA
+
+
+ The BioASQ corpus contains multiple question
+ answering tasks annotated by biomedical experts, including yes/no, factoid, list,
+ and summary questions. Pertaining to our objective of comparing neural language
+ models, we focus on the yes/no questions (Task 7b), and leave the inclusion
+ of other tasks to future work. Each question is paired with a reference text
+ containing multiple sentences from a PubMed abstract and a yes/no answer. We use
+ the official train/dev/test split of 670/75/140 questions.
+
+ See 'Domain-Specific Language Model Pretraining for Biomedical
+ Natural Language Processing'
+
+
+ ## Citation Information
+
+ ```
+ @article{tsatsaronis2015overview,
+   title = {
+     An overview of the BIOASQ large-scale biomedical semantic indexing and
+     question answering competition
+   },
+   author = {
+     Tsatsaronis, George and Balikas, Georgios and Malakasiotis, Prodromos
+     and Partalas, Ioannis and Zschunke, Matthias and Alvers, Michael R and
+     Weissenborn, Dirk and Krithara, Anastasia and Petridis, Sergios and
+     Polychronopoulos, Dimitris and others
+   },
+   year = 2015,
+   journal = {BMC bioinformatics},
+   publisher = {BioMed Central Ltd},
+   volume = 16,
+   number = 1,
+   pages = 138
+ }
+ ```
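The yes/no task described in the card above is drawn from the official BioASQ training JSON. As a minimal sketch (field names follow the BioASQ files, but the sample records here are invented for illustration), the data and the yes/no filtering look like this:

```python
# Illustrative miniature of a BioASQ training file; only the fields
# relevant to the yes/no task are shown, and the records are synthetic.
sample = {
    "questions": [
        {
            "id": "deadbeef00000000000000aa",
            "type": "yesno",
            "body": "Is protein X secreted?",
            "exact_answer": "yes",
            "snippets": [{"text": "Protein X is a secreted protein ..."}],
        },
        {
            "id": "deadbeef00000000000000ab",
            "type": "factoid",
            "body": "Which enzyme is inhibited by drug Y?",
            "exact_answer": [["enzyme Z", "EZ"]],
        },
    ]
}

# The BLURB task keeps only the yes/no questions.
yesno = [q for q in sample["questions"] if q["type"] == "yesno"]
print(len(yesno))  # 1
```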
.ipynb_checkpoints/bioasq_task_b-checkpoint.py ADDED
@@ -0,0 +1,819 @@
+ # coding=utf-8
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """
+ BioASQ Task B On Biomedical Semantic QA (Involves IR, QA, Summarization and
+ More). This task uses benchmark datasets containing development and test
+ questions, in English, along with gold standard (reference) answers constructed
+ by a team of biomedical experts. The participants have to respond with relevant
+ concepts, articles, snippets and RDF triples, from designated resources, as well
+ as exact and 'ideal' answers.
+
+ For more information about the challenge, the organisers and the relevant
+ publications please visit: http://bioasq.org/
+ """
+ import glob
+ import json
+ import os
+ import re
+
+ import datasets
+
+ from .bigbiohub import BigBioConfig, Tasks, qa_features
+
+ _LANGUAGES = ["English"]
+ _PUBMED = True
+ _LOCAL = True
+ _CITATION = """\
+ @article{tsatsaronis2015overview,
+   title = {
+     An overview of the BIOASQ large-scale biomedical semantic indexing and
+     question answering competition
+   },
+   author = {
+     Tsatsaronis, George and Balikas, Georgios and Malakasiotis, Prodromos
+     and Partalas, Ioannis and Zschunke, Matthias and Alvers, Michael R and
+     Weissenborn, Dirk and Krithara, Anastasia and Petridis, Sergios and
+     Polychronopoulos, Dimitris and others
+   },
+   year = 2015,
+   journal = {BMC bioinformatics},
+   publisher = {BioMed Central Ltd},
+   volume = 16,
+   number = 1,
+   pages = 138
+ }
+ """
+
+ _DATASETNAME = "bioasq_task_b"
+ _DISPLAYNAME = "BioASQ Task B"
+
+ _BIOASQ_11B_DESCRIPTION = """\
+ The data are intended to be used as training and development data for BioASQ
+ 11, which will take place during 2023. There is one file containing the data:
+  - training11b.json
+
+ The file contains the data of the first ten editions of the challenge: 4719
+ questions [1] with their relevant documents, snippets, concepts and RDF
+ triples, exact and ideal answers.
+
+ Differences with BioASQ-training10b.json
+ - 485 new questions added from BioASQ10
+ - The question with id 621ecf1a3a8413c653000061 had identical body with
+   5ac0a36f19833b0d7b000002. All relevant elements from both questions
+   are available in the merged question with id 5ac0a36f19833b0d7b000002.
+
+ [1] The distribution of 4719 questions : 1417 factoid, 1271 yesno, 1130 summary, 901 list
+ """
+
+ _BIOASQ_10B_DESCRIPTION = """\
+ The data are intended to be used as training and development data for BioASQ
+ 10, which will take place during 2022. There is one file containing the data:
+  - training10b.json
+
+ The file contains the data of the first nine editions of the challenge: 4234
+ questions [1] with their relevant documents, snippets, concepts and RDF
+ triples, exact and ideal answers.
+
+ Differences with BioASQ-training9b.json
+ - 492 new questions added from BioASQ9
+ - The question with id 56c1f01eef6e394741000046 had identical body with
+   602498cb1cb411341a00009e. All relevant elements from both questions
+   are available in the merged question with id 602498cb1cb411341a00009e.
+ - The question with id 5c7039207c78d69471000065 had identical body with
+   601c317a1cb411341a000014. All relevant elements from both questions
+   are available in the merged question with id 601c317a1cb411341a000014.
+ - The question with id 5e4b540b6d0a27794100001c had identical body with
+   602828b11cb411341a0000fc. All relevant elements from both questions
+   are available in the merged question with id 602828b11cb411341a0000fc.
+ - The question with id 5fdb42fba43ad31278000027 had identical body with
+   5d35eb01b3a638076300000f. All relevant elements from both questions
+   are available in the merged question with id 5d35eb01b3a638076300000f.
+ - The question with id 601d76311cb411341a000045 had identical body with
+   6060732b94d57fd87900003d. All relevant elements from both questions
+   are available in the merged question with id 6060732b94d57fd87900003d.
+
+ [1] 4234 questions : 1252 factoid, 1148 yesno, 1018 summary, 816 list
+ """
+
+ _BIOASQ_9B_DESCRIPTION = """\
+ The data are intended to be used as training and development data for BioASQ 9,
+ which will take place during 2021. There is one file containing the data:
+  - training9b.json
+
+ The file contains the data of the first eight editions of the challenge: 3742
+ questions [1] with their relevant documents, snippets, concepts and RDF triples,
+ exact and ideal answers.
+
+ Differences with BioASQ-training8b.json
+ - 499 new questions added from BioASQ8
+ - The question with id 5e30e689fbd6abf43b00003a had identical body with
+   5880e417713cbdfd3d000001. All relevant elements from both questions
+   are available in the merged question with id 5880e417713cbdfd3d000001.
+
+ [1] 3742 questions : 1091 factoid, 1033 yesno, 899 summary, 719 list
+ """
+
+ _BIOASQ_8B_DESCRIPTION = """\
+ The data are intended to be used as training and development data for BioASQ 8,
+ which will take place during 2020. There is one file containing the data:
+  - training8b.json
+
+ The file contains the data of the first seven editions of the challenge: 3243
+ questions [1] with their relevant documents, snippets, concepts and RDF triples,
+ exact and ideal answers.
+
+ Differences with BioASQ-training7b.json
+ - 500 new questions added from BioASQ7
+ - 4 questions were removed
+ - The question with id 5717fb557de986d80d000009 had identical body with
+   571e06447de986d80d000016. All relevant elements from both questions
+   are available in the merged question with id 571e06447de986d80d000016.
+ - The question with id 5c589ddb86df2b917400000b had identical body with
+   5c6b7a9e7c78d69471000029. All relevant elements from both questions
+   are available in the merged question with id 5c6b7a9e7c78d69471000029.
+ - The question with id 52ffb5d12059c6d71c00007c had identical body with
+   52e7870a98d023950500001a. All relevant elements from both questions
+   are available in the merged question with id 52e7870a98d023950500001a.
+ - The question with id 53359338d6d3ac6a3400004f had identical body with
+   589a246878275d0c4a000030. All relevant elements from both questions
+   are available in the merged question with id 589a246878275d0c4a000030.
+
+ **** UPDATE 25/02/2020 *****
+ The previous version of the dataset contained an inconsistency on question with
+ id "5c9904eaecadf2e73f00002e", where the "ideal_answer" field was missing.
+ This has been fixed.
+ """
+
+ _BIOASQ_7B_DESCRIPTION = """\
+ The data are intended to be used as training and development data for BioASQ 7,
+ which will take place during 2019. There is one file containing the data:
+  - BioASQ-trainingDataset7b.json
+
+ The file contains the data of the first six editions of the challenge: 2747
+ questions [1] with their relevant documents, snippets, concepts and RDF triples,
+ exact and ideal answers.
+
+ Differences with BioASQ-trainingDataset6b.json
+ - 500 new questions added from BioASQ6
+ - 4 questions were removed
+ - The question with id 569ed752ceceede94d000004 had identical body with
+   a new question from BioASQ6. All relevant elements from both questions
+   are available in the merged question with id 5abd31e0fcf456587200002c
+ - 3 questions were removed as incomplete: 54d643023706e89528000007,
+   532819afd6d3ac6a3400000f, 517545168ed59a060a00002b
+ - 4 questions were revised for various confusions that have been identified
+ - In 2 questions the ideal answer has been revised :
+   51406e6223fec90375000009, 5172f8118ed59a060a000019
+ - In 4 questions the snippets and documents list has been revised :
+   51406e6223fec90375000009, 5172f8118ed59a060a000019,
+   51593dc8d24251bc05000099, 5158a5b8d24251bc05000097
+ - In 198 questions the documents list has been updated with missing
+   documents from the relevant snippets list. [2]
+
+ [1] 2747 questions : 779 factoid, 745 yesno, 667 summary, 556 list
+ [2] 55031181e9bde69634000014, 51406e6223fec90375000009, 54d643023706e89528000007,
+ 52bf1b0a03868f1b06000009, 52bf19c503868f1b06000001, 51593dc8d24251bc05000099,
+ 530a5117970c65fa6b000007, 553a8d78f321868558000003, 531a3fe3b166e2b806000038,
+ 532819afd6d3ac6a3400000f, 5158a5b8d24251bc05000097, 553653a5bc4f83e828000007,
+ 535d2cf09a4572de6f000004, 53386282d6d3ac6a3400005a, 517a8ce98ed59a060a000045,
+ 55391ce8bc4f83e828000018, 5547d700f35db75526000007, 5713bf261174fb1755000011,
+ 6f15c5a2ac5ed1459000012, 52b2e498f828ad283c000010, 570a7594cf1c325851000026,
+ 530cefaaad0bf1360c000012, 530f685c329f5fcf1e000002, 550c4011a103b78016000009,
+ 552faababc4f83e828000005, 54cf48acf693c3b16b00000b, 550313aae9bde6963400001f,
+ 551177626a8cde6b72000005, 54eded8c94afd6150400000c, 550c3754a103b78016000007,
+ 56f555b609dd18d46b000007, 54c26e29f693c3b16b000003, 54da0c524b1fd0d33c00000b,
+ 52bf1d3c03868f1b0600000d, 5343bdd6aeec6fbd07000001, 52cb9b9b03868f1b0600002d,
+ 55423875ec76f5e50c000002, 571366ba1174fb1755000005, 56c4d14ab04e159d0e000003,
+ 550c44d1a103b7801600000a, 5547a01cf35db75526000005, 55422640ccca0ce74b000004,
+ 54ecb66d445c3b5a5f000002, 553656c4bc4f83e828000009, 5172f8118ed59a060a000019,
+ 513711055274a5fb0700000e, 54d892ee014675820d000005, 52e6c92598d0239505000019,
+ 5353aedb288f4dae47000006, 52bf1f1303868f1b06000014, 5519113b622b19434500000f,
+ 52b2f1724003448f5500000b, 5525317687ecba3764000007, 554a0cadf35db7552600000f,
+ 55152bd246478f2f2c000002, 516c3960298dcd4e51000073, 571e417bbb137a4b0c00000a,
+ 551910d3622b194345000008, 54dc8ed6c0bb8dce23000002, 511a4ec01159fa8212000004,
+ 54d8ea2c4b1fd0d33c000002, 5148e1d6d24251bc0500003a, 515dbb3b298dcd4e51000018,
+ 56f7c15a09dd18d46b000012, 51475d5cd24251bc0500001b, 54db7c4ac0bb8dce23000001,
+ 57152ebbcb4ef8864c000002, 57134d511174fb1755000002, 55149f156a8cde6b72000013,
+ 56bcd422d36b5da378000005, 54ede5c394afd61504000006, 517545168ed59a060a00002b,
+ 5710ed19a5ed216440000003, 53442472aeec6fbd07000008, 55088e412e93f0133a000001,
+ 54d762653706e89528000014, 550aef0ec2af5d5b7000000a, 552435602c8b63434a000009,
+ 552446612c8b63434a00000c, 54d901ec4b1fd0d33c000006, 54cf45e7f693c3b16b00000a,
+ 52fc8b772059c6d71c00006e, 5314d05adae131f84700000d, 5512c91b6a8cde6b7200000b,
+ 56c5a7605795f9a73e000002, 55030a6ce9bde6963400000f, 553fac39c6a5098552000001,
+ 531a3a58b166e2b806000037, 5509bd6a1180f13250000002, 54f9c40ddd3fc62544000001,
+ 553c8fd1f32186855800000a, 56bce51cd36b5da37800000a, 550316a6e9bde69634000029,
+ 55031286e9bde6963400001b, 536e46f27d100faa09000012, 5502abd1e9bde69634000008,
+ 551af9106b348bb82c000002, 54edeb4394afd6150400000b, 5717cdd2070aa3d072000001,
+ 56c5ade15795f9a73e000003, 531464a6e3eabad021000014, 58a0d87a78275d0c4a000053,
+ 58a3160d60087bc10a00000a, 58a5d54860087bc10a000025, 58a0da5278275d0c4a000054,
+ 58a3264e60087bc10a00000d, 589c8ef878275d0c4a000042, 58a3428d60087bc10a00001b,
+ 58a3196360087bc10a00000b, 58a341eb60087bc10a000018, 58a3275960087bc10a00000f,
+ 58a342e760087bc10a00001c, 58bd645702b8c60953000010, 58bc8e5002b8c60953000006,
+ 58bc8e7a02b8c60953000007, 58a1da4e78275d0c4a000059, 58bcb83d02b8c6095300000f,
+ 58bc9a5002b8c60953000008, 589dee3778275d0c4a000050, 58a32efe60087bc10a000013,
+ 58a327bf60087bc10a000011, 58bca08702b8c6095300000a, 58bc9dbb02b8c60953000009,
+ 58c99fcc02b8c60953000029, 58bca2f302b8c6095300000c, 58cbf1f402b8c60953000036,
+ 58cdb41302b8c60953000042, 58cdb80302b8c60953000043, 58cdbaf302b8c60953000044,
+ 58cb305c02b8c60953000032, 58caf86f02b8c60953000030, 58c1b2f702b8c6095300001e,
+ 58bde18b02b8c60953000014, 58eb7898eda5a57672000006, 58caf88c02b8c60953000031,
+ 58e11bf76fddd3e83e00000c, 58cdbbd102b8c60953000045, 58df779d6fddd3e83e000001,
+ 58dbb4f08acda3452900001a, 58dbb8968acda3452900001b, 58add7699ef3c34033000009,
+ 58dbbbf08acda3452900001d, 58dbba438acda3452900001c, 58dd2cb08acda34529000029,
+ 58eb9542eda5a57672000007, 58f3ca5c70f9fc6f0f00000d, 58e9e7aa3e8b6dc87c00000d,
+ 58e3d9ab3e8b6dc87c000002, 58eb4ce7eda5a57672000004, 58f3c8f470f9fc6f0f00000c,
+ 58f3c62970f9fc6f0f00000b, 58adca6d9ef3c34033000007, 58f4b3ee70f9fc6f0f000013,
+ 593ff22b70f9fc6f0f000023, 5a679875b750ff4455000004, 5a774585faa1ab7d2e000005,
+ 5a6f7245b750ff4455000050, 5a787544faa1ab7d2e00000b, 5a74d9980384be9551000008,
+ 5a6a02a3b750ff4455000021, 5a6e47b1b750ff4455000049, 5a87124561bb38fb24000001,
+ 5a6e42f1b750ff4455000046, 5a8b1264fcd1d6a10c00001d, 5a981e66fcd1d6a10c00002f,
+ 5a8718c861bb38fb24000008, 5a7615af83b0d9ea6600001f, 5a87140a61bb38fb24000003,
+ 5a77072c9e632bc06600000a, 5a897601fcd1d6a10c000008, 5a871a6861bb38fb24000009,
+ 5a74e9ad0384be955100000a, 5a79d25dfaa1ab7d2e00000f, 5a6900ebb750ff445500001d,
+ 5a87145861bb38fb24000004, 5a871b8d61bb38fb2400000a, 5a897a06fcd1d6a10c00000b,
+ 5a8dc6b4fcd1d6a10c000026, 5a8712af61bb38fb24000002, 5a8714e261bb38fb24000005,
+ 5aa304f1d6d6b54f79000004, 5a981bcffcd1d6a10c00002d, 5aa3fa73d6d6b54f79000008,
+ 5aa55b45d6d6b54f7900000d, 5a981dd0fcd1d6a10c00002e, 5a9700adfcd1d6a10c00002c,
+ 5a9d8ffe1d1251d03b000022, 5a96c74cfcd1d6a10c000029, 5aa50086d6d6b54f7900000c,
+ 5a95765bfcd1d6a10c000028, 5a96f40cfcd1d6a10c00002b, 5ab144fefcf4565872000012,
+ 5aa67b4fd6d6b54f7900000f, 5abd5a62fcf4565872000031, 5abbe429fcf456587200001c,
+ 5aaef38dfcf456587200000f, 5abce6acfcf4565872000022, 5aae6499fcf456587200000c
+ """
+
+ _BIOASQ_6B_DESCRIPTION = """\
+ The data are intended to be used as training and development data for BioASQ 6,
+ which will take place during 2018. There is one file containing the data:
+  - BioASQ-trainingDataset6b.json
+
+ Differences with BioASQ-trainingDataset5b.json
+ - 500 new questions added from BioASQ5
+ - 48 pairs of questions with identical bodies have been merged into one
+   question having only one question-id, but all the documents, snippets,
+   concepts, RDF triples and answers of both questions of the pair.
+   - This normalization led to the removal of 48 deprecated question
+     ids [2] from the dataset and to the update of the 48 remaining
+     questions [3].
+   - In cases where a pair of questions with identical bodies had some
+     inconsistency (e.g. different question type), the inconsistency has
+     been resolved by manually merging the pair in consultation with the
+     BioASQ expert team.
+ - 12 questions were revised for various confusions that have been
+   identified
+   - In 8 questions the question type has been changed to better suit
+     the question body. The change of type led to corresponding changes
+     in exact answers existence and format : 54fc4e2e6ea36a810c000003,
+     530b01a6970c65fa6b000008, 530cf54dab4de4de0c000009,
+     531b2fc3b166e2b80600003c, 532819afd6d3ac6a3400000f,
+     532aad53d6d3ac6a34000010, 5710ade4cf1c32585100002c,
+     52f65f372059c6d71c000027
+   - In 6 questions the ideal answer has been revised :
+     532aad53d6d3ac6a34000010, 5710ade4cf1c32585100002c,
+     53147b52e3eabad021000015, 5147c8a6d24251bc05000027,
+     5509bd6a1180f13250000002, 58bbb71f22d3005309000016
+   - In 5 questions the exact answer has been revised :
+     5314bd7ddae131f847000006, 53130a77e3eabad02100000f,
+     53148a07dae131f847000002, 53147b52e3eabad021000015,
+     5147c8a6d24251bc05000027
+   - In 2 questions the question body has been revised :
+     52f65f372059c6d71c000027, 5503145ee9bde69634000022
+ - In lists of ideal answers, documents, snippets, concepts and RDF triples
+   any duplicate identical elements have been removed.
+ - Ideal answers in format of one string have been converted to a list with
+   one element for consistency with cases where more than one golden ideal
+   answer is available. (i.e. "ideal_ans1" converted to ["ideal_ans1"])
+ - For yesno questions: All exact answers have been normalized to "yes" or
+   "no" (replacing "Yes", "YES" and "No")
+ - For factoid questions: The format of the exact answer was normalized to a
+   list of strings for each question, representing a set of synonyms
+   answering the question (i.e. [`ans1`, `syn11`, ... ]).
+ - For list questions: The format of the exact answer was normalized to a
+   list of lists. Each internal list represents one element of the answer
+   as a set of synonyms
+   (i.e. [[`ans1`, `syn11`, `syn12`], [`ans2`], [`ans3`, `syn31`] ...]).
+ - Empty elements, e.g. empty lists of documents have been removed.
+
+ [1] 2251 questions : 619 factoid, 616 yesno, 531 summary, 485 list
+ [2] The 48 deprecated question ids are : 52f8b2902059c6d71c000053,
+ 52f11bf22059c6d71c000005, 52f77edb2059c6d71c000028, 52ed795098d0239505000032,
+ 56d1a9baab2fed4a47000002, 52f7d3472059c6d71c00002f, 52fbe2bf2059c6d71c00006c,
+ 52ec961098d023950500002a, 52e8e98298d0239505000020, 56cae5125795f9a73e000024,
+ 530cefaaad0bf1360c000007, 530cefaaad0bf1360c000005, 52d63b2803868f1b0600003a,
+ 530cefaaad0bf1360c00000a, 516425ff298dcd4e51000051, 55191149622b194345000010,
+ 52fa70142059c6d71c000056, 52f77f4d2059c6d71c00002a, 52efc016c8da89891000001a,
+ 52efc001c8da898910000019, 52f896ae2059c6d71c000045, 52eceada98d023950500002d,
+ 52efc05cc8da89891000001c, 515e078e298dcd4e51000031, 52fe54252059c6d71c000079,
+ 514217a6d24251bc05000005, 52d1389303868f1b06000032, 530cf4d5e2bfff940c000003,
+ 52fc946d2059c6d71c000071, 52e8e99e98d0239505000021, 52ef7786c8da898910000015,
+ 52d8494698d0239505000007, 530cf51d5610acba0c000001, 52f637972059c6d71c000025,
+ 52e9f99798d0239505000025, 515de572298dcd4e51000021, 52fe4ad52059c6d71c000077,
+ 52f65bf02059c6d71c000026, 52e8e9d298d0239505000022, 52fa74052059c6d71c00005a,
+ 52ffbddf2059c6d71c00007d, 56bc932aac7ad1001900001c, 56c02883ef6e394741000017,
+ 52d2b75403868f1b06000035, 52f118aa2059c6d71c000003, 52e929eb98d0239505000023,
+ 532c12f2d6d3ac6a3400001d, 52d8466298d0239505000006
+ [3] The 48 questions resulting from merging with their pair have the
+ following ids: 5149aafcd24251bc05000045, 515db020298dcd4e51000011,
+ 515db54c298dcd4e51000016, 51680a49298dcd4e51000062, 52b06a68f828ad283c000005,
+ 52bf1aa503868f1b06000006, 52bf1af803868f1b06000008, 52bf1d6003868f1b0600000e,
+ 52cb9b9b03868f1b0600002d, 52d2818403868f1b06000033, 52df887498d023950500000c,
+ 52e0c9a298d0239505000010, 52e203bc98d0239505000011, 52e62bae98d0239505000015,
+ 52e6c92598d0239505000019, 52e7bbf698d023950500001d, 52ea605098d0239505000028,
+ 52ece29f98d023950500002c, 52ecf2dd98d023950500002e, 52ef7754c8da898910000014,
+ 52f112bb2059c6d71c000002, 52f65f372059c6d71c000027, 52f77f752059c6d71c00002b,
+ 52f77f892059c6d71c00002c, 52f89ee42059c6d71c00004d, 52f89f4f2059c6d71c00004e,
+ 52f89fba2059c6d71c00004f, 52f89fc62059c6d71c000050, 52f89fd32059c6d71c000051,
+ 52fa6ac72059c6d71c000055, 52fa73c62059c6d71c000058, 52fa73e82059c6d71c000059,
+ 52fa74252059c6d71c00005b, 52fc8b772059c6d71c00006e, 52fc94572059c6d71c000070,
+ 52fc94ae2059c6d71c000073, 52fc94db2059c6d71c000074, 52fe52702059c6d71c000078,
+ 52fe58f82059c6d71c00007a, 530cefaaad0bf1360c000008, 530cefaaad0bf1360c000010,
+ 533ba218fd9a95ea0d000007, 534bb147aeec6fbd07000014, 55167dec46478f2f2c00000a,
+ 56c04412ef6e39474100001b, 56c1f01eef6e394741000046, 56c81fd15795f9a73e00000c,
+ 587d016ed673c3eb14000002
+ """
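The answer normalizations listed above (lowercased yes/no answers, string ideal answers wrapped into lists, factoid answers as synonym lists) can be sketched as a standalone helper. The field names follow the BioASQ JSON, but `normalize_question` itself is illustrative, not the official cleanup code, and the factoid case is simplified to wrapping a bare string:

```python
def normalize_question(q: dict) -> dict:
    """Apply BioASQ 6b-style answer normalizations to one question record."""
    q = dict(q)  # leave the caller's record untouched
    # Ideal answers: a bare string becomes a one-element list.
    if isinstance(q.get("ideal_answer"), str):
        q["ideal_answer"] = [q["ideal_answer"]]
    if q["type"] == "yesno":
        # "Yes"/"YES"/"No" all collapse to lowercase.
        q["exact_answer"] = q["exact_answer"].lower()
    elif q["type"] == "factoid" and isinstance(q["exact_answer"], str):
        # Simplification: a single answer string becomes a synonym list.
        q["exact_answer"] = [q["exact_answer"]]
    return q

print(normalize_question(
    {"type": "yesno", "exact_answer": "YES", "ideal_answer": "It does."}
))
```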
+
+ _BIOASQ_5B_DESCRIPTION = """\
+ The data are intended to be used as training and development data for BioASQ 5,
+ which will take place during 2017. There is one file containing the data:
+  - BioASQ-trainingDataset5b.json
+
+ The file contains the data of the first four editions of the challenge: 1799
+ questions with their relevant documents, snippets, concepts and rdf triples,
+ exact and ideal answers.
+ """
+
+ _BIOASQ_4B_DESCRIPTION = """\
+ The data are intended to be used as training and development data for BioASQ 4,
+ which will take place during 2016. There is one file containing the data:
+  - BioASQ-trainingDataset4b.json
+
+ The file contains the data of the first three editions of the challenge: 1307
+ questions with their relevant documents, snippets, concepts and rdf triples,
+ exact and ideal answers from the first two editions and 497 questions with
+ similar annotations from the third edition of the challenge.
+ """
+
+ _BIOASQ_3B_DESCRIPTION = """No README provided."""
+
+ _BIOASQ_2B_DESCRIPTION = """No README provided."""
+
+ _BIOASQ_BLURB_DESCRIPTION = """The BioASQ corpus contains multiple question
+ answering tasks annotated by biomedical experts, including yes/no, factoid, list,
+ and summary questions. Pertaining to our objective of comparing neural language
+ models, we focus on the yes/no questions (Task 7b), and leave the inclusion
+ of other tasks to future work. Each question is paired with a reference text
+ containing multiple sentences from a PubMed abstract and a yes/no answer. We use
+ the official train/dev/test split of 670/75/140 questions.
+
+ See 'Domain-Specific Language Model Pretraining for Biomedical
+ Natural Language Processing' """
+
+ _DESCRIPTION = {
+     "bioasq_11b": _BIOASQ_11B_DESCRIPTION,
+     "bioasq_10b": _BIOASQ_10B_DESCRIPTION,
+     "bioasq_9b": _BIOASQ_9B_DESCRIPTION,
+     "bioasq_8b": _BIOASQ_8B_DESCRIPTION,
+     "bioasq_7b": _BIOASQ_7B_DESCRIPTION,
+     "bioasq_6b": _BIOASQ_6B_DESCRIPTION,
+     "bioasq_5b": _BIOASQ_5B_DESCRIPTION,
+     "bioasq_4b": _BIOASQ_4B_DESCRIPTION,
+     "bioasq_3b": _BIOASQ_3B_DESCRIPTION,
+     "bioasq_2b": _BIOASQ_2B_DESCRIPTION,
+     "bioasq_blurb": _BIOASQ_BLURB_DESCRIPTION,
+ }
+
+ _HOMEPAGE = "http://participants-area.bioasq.org/datasets/"
+
+ # Data access requires registering with BioASQ.
+ # See http://participants-area.bioasq.org/accounts/register/
+ _LICENSE = "NLM_LICENSE"
+
+ _URLs = {
+     "bioasq_11b": ["BioASQ-training11b.zip", "Task11BGoldenEnriched.zip"],
+     "bioasq_10b": ["BioASQ-training10b.zip", "Task10BGoldenEnriched.zip"],
+     "bioasq_9b": ["BioASQ-training9b.zip", "Task9BGoldenEnriched.zip"],
+     "bioasq_8b": ["BioASQ-training8b.zip", "Task8BGoldenEnriched.zip"],
+     "bioasq_7b": ["BioASQ-training7b.zip", "Task7BGoldenEnriched.zip"],
+     "bioasq_6b": ["BioASQ-training6b.zip", "Task6BGoldenEnriched.zip"],
+     "bioasq_5b": ["BioASQ-training5b.zip", "Task5BGoldenEnriched.zip"],
+     "bioasq_4b": ["BioASQ-training4b.zip", "Task4BGoldenEnriched.zip"],
+     "bioasq_3b": ["BioASQ-trainingDataset3b.zip", "Task3BGoldenEnriched.zip"],
+     "bioasq_2b": ["BioASQ-trainingDataset2b.zip", "Task2BGoldenEnriched.zip"],
+     "bioasq_blurb": ["BioASQ-training7b.zip", "Task7BGoldenEnriched.zip"],
+ }
+
+ # BLURB train and dev contain all yesno questions from the official training split;
+ # test is all yesno questions from the official test split
+ _BLURB_SPLITS = {
+     "dev": {
+         "5313b049e3eabad021000013",
+         "553a8d78f321868558000003",
+         "5158a5b8d24251bc05000097",
+         "571e3d42bb137a4b0c000007",
+         "5175b97a8ed59a060a00002f",
+         "56c9e9d15795f9a73e00001d",
+         "56d19ffaab2fed4a47000001",
+         "518ccac0310faafe0800000b",
+         "56f12ca92ac5ed145900000e",
+         "51680a49298dcd4e51000062",
+         "5339ed7bd6d3ac6a34000060",
+         "516e5f33298dcd4e5100007e",
+         "5327139ad6d3ac6a3400000d",
+         "54e12ae3ae9738404b000004",
+         "5321b8579b2d7acc7e000008",
+         "514a4679d24251bc0500005b",
+         "54c12fd1f693c3b16b000001",
+         "52df887498d023950500000c",
+         "52f20d802059c6d71c00000a",
+         "532f0c4ed6d3ac6a3400002e",
+         "52b2f3b74003448f5500000c",
+         "52b2f1724003448f5500000b",
+         "515d9a42298dcd4e5100000d",
+         "5159b990d24251bc050000a3",
+         "54e12c30ae9738404b000005",
+         "553a6a9fbc4f83e82800001c",
+         "5509ec41c2af5d5b70000006",
+         "56cae40b5795f9a73e000022",
+         "51680b0e298dcd4e51000065",
+         "515df89e298dcd4e5100002f",
+         "54f49e56d0d681a040000004",
+         "571e3e2abb137a4b0c000008",
+         "515debe7298dcd4e51000026",
+         "56f6ab7009dd18d46b00000d",
+         "53302bced6d3ac6a34000039",
+         "5322de919b2d7acc7e000012",
+         "5709f212cf1c325851000020",
+         "5502abd1e9bde69634000008",
+         "516c220e298dcd4e51000071",
+         "5894597e7d9090f353000004",
+         "5895ec5e7d9090f353000015",
+         "58bbb8ae22d3005309000018",
+         "58bc58c302b8c60953000001",
+         "58c276bc02b8c60953000020",
+         "58c0825502b8c6095300001b",
+         "58ab1f6c9ef3c34033000002",
+         "58adbe999ef3c34033000005",
+         "58df3e408acda3452900002d",
+         "58dfec676fddd3e83e000006",
+         "58d8d0cc8acda34529000008",
+         "58b67fae22d3005309000009",
+         "58dbbbf08acda3452900001d",
+         "58dbba438acda3452900001c",
+         "58dbbdac8acda3452900001e",
+         "58dcbb8c8acda34529000021",
+         "5a468785966455904c00000d",
+         "5a70de5199e2c3af26000005",
+         "5a67a550b750ff4455000009",
+         "5a679875b750ff4455000004",
+         "5a7a44b4faa1ab7d2e000010",
+         "5a67ade5b750ff445500000c",
+         "5a8881118cb19eca6b000006",
+         "5a67b48cb750ff4455000010",
+         "5a679be1b750ff4455000005",
+         "5a7340962dc08e987e000017",
+         "5a737e233b9d13c70800000d",
+         "5a8dc57ffcd1d6a10c000025",
+         "5a6d186db750ff4455000031",
+         "5a70d43b99e2c3af26000003",
+         "5a70ec6899e2c3af2600000c",
+         "5a9ac4161d1251d03b000010",
+         "5a733d2a2dc08e987e000015",
+         "5a74acd80384be9551000006",
+         "5aa6800ad6d6b54f79000011",
+         "5a9d9ab94e03427e73000003",
+     }
+ }
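A set of pinned question ids like `_BLURB_SPLITS["dev"]` carves the official training questions into BLURB train and dev by membership. A minimal standalone sketch of that partitioning (the records and ids here are invented for illustration):

```python
# Hypothetical question records; only "id" matters for the split.
questions = [{"id": "q1"}, {"id": "q2"}, {"id": "q3"}]

# Ids pinned to the dev split, analogous to _BLURB_SPLITS["dev"].
dev_ids = {"q2"}

train = [q for q in questions if q["id"] not in dev_ids]
dev = [q for q in questions if q["id"] in dev_ids]
print(len(train), len(dev))  # 2 1
```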
+
+ _SUPPORTED_TASKS = [Tasks.QUESTION_ANSWERING]
+ _SOURCE_VERSION = "1.0.0"
+ _BIGBIO_VERSION = "1.0.0"
+
+
+ class BioasqTaskBDataset(datasets.GeneratorBasedBuilder):
+     """
+     BioASQ Task B On Biomedical Semantic QA.
+     Creates configs for BioASQ2 through BioASQ11.
+     """
+
+     DEFAULT_CONFIG_NAME = "bioasq_9b_source"
+     SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)
+     BIGBIO_VERSION = datasets.Version(_BIGBIO_VERSION)
+
+     # BioASQ2 through BioASQ11
+     BUILDER_CONFIGS = []
+     for version in range(2, 12):
+         BUILDER_CONFIGS.append(
+             BigBioConfig(
+                 name=f"bioasq_{version}b_source",
+                 version=SOURCE_VERSION,
+                 description=f"bioasq{version} Task B source schema",
+                 schema="source",
+                 subset_id=f"bioasq_{version}b",
+             )
+         )
+
+         BUILDER_CONFIGS.append(
+             BigBioConfig(
+                 name=f"bioasq_{version}b_bigbio_qa",
+                 version=BIGBIO_VERSION,
+                 description=f"bioasq{version} Task B in simplified BigBio schema",
+                 schema="bigbio_qa",
+                 subset_id=f"bioasq_{version}b",
+             )
+         )
+
+     # BLURB Benchmark config https://microsoft.github.io/BLURB/
+     BUILDER_CONFIGS.append(
+         BigBioConfig(
+             name="bioasq_blurb_bigbio_qa",
+             version=BIGBIO_VERSION,
+             description="BLURB benchmark in simplified BigBio schema",
+             schema="bigbio_qa",
+             subset_id="bioasq_blurb",
+         )
+     )
543
+
544
+ def _info(self):
545
+
546
+ # BioASQ Task B source schema
547
+ if self.config.schema == "source":
548
+ features = datasets.Features(
549
+ {
550
+ "id": datasets.Value("string"),
551
+ "type": datasets.Value("string"),
552
+ "body": datasets.Value("string"),
553
+ "documents": datasets.Sequence(datasets.Value("string")),
554
+ "concepts": datasets.Sequence(datasets.Value("string")),
555
+ "ideal_answer": datasets.Sequence(datasets.Value("string")),
556
+ "exact_answer": datasets.Sequence(datasets.Value("string")),
557
+ "triples": [
558
+ {
559
+ "p": datasets.Value("string"),
560
+ "s": datasets.Value("string"),
561
+ "o": datasets.Value("string"),
562
+ }
563
+ ],
564
+ "snippets": [
565
+ {
566
+ "offsetInBeginSection": datasets.Value("int32"),
567
+ "offsetInEndSection": datasets.Value("int32"),
568
+ "text": datasets.Value("string"),
569
+ "beginSection": datasets.Value("string"),
570
+ "endSection": datasets.Value("string"),
571
+ "document": datasets.Value("string"),
572
+ }
573
+ ],
574
+ }
575
+ )
576
+ # simplified schema for QA tasks
577
+ elif self.config.schema == "bigbio_qa":
578
+ features = qa_features
579
+
580
+ return datasets.DatasetInfo(
581
+ description=_DESCRIPTION[self.config.subset_id],
582
+ features=features,
583
+ supervised_keys=None,
584
+ homepage=_HOMEPAGE,
585
+ license=str(_LICENSE),
586
+ citation=_CITATION,
587
+ )
588
+
589
+ def _dump_gold_json(self, data_dir):
590
+ """
591
+ BioASQ test data is split into multiple records {9B1_golden.json,...,9B5_golden.json}
592
+ We combine these files into a single test set file 9Bx_golden.json
593
+ """
594
+ # BLURB is based on version 7
595
+ version = (
596
+ re.search(r"bioasq_([0-9]+)b", self.config.subset_id).group(1) if "blurb" not in self.config.name else "7"
597
+ )
598
+ gold_fpath = os.path.join(data_dir, f"Task{version}BGoldenEnriched/bx_golden.json")
599
+
600
+ if not os.path.exists(gold_fpath):
601
+ # combine all gold json files
602
+ filelist = glob.glob(os.path.join(data_dir, "*/*.json"))
603
+ data = {"questions": []}
604
+ for fname in sorted(filelist):
605
+ with open(fname, "rt", encoding="utf-8") as file:
606
+ data["questions"].extend(json.load(file)["questions"])
607
+ # dump gold to json
608
+ with open(gold_fpath, "wt", encoding="utf-8") as file:
609
+ json.dump(data, file, indent=2)
610
+
611
+ return f"Task{version}BGoldenEnriched/bx_golden.json"
612
+
613
+ def _blurb_split_generator(self, train_dir, test_dir):
614
+ """
615
+ Create splits for BLURB Benchmark
616
+ """
617
+ gold_fpath = self._dump_gold_json(test_dir)
618
+
619
+ # create train/dev splits from yesno questions
620
+ train_fpath = os.path.join(train_dir, "blurb_bioasq_train.json")
621
+ dev_fpath = os.path.join(train_dir, "blurb_bioasq_dev.json")
622
+
623
+ blurb_splits = {
624
+ "train": {"questions": []},
625
+ "dev": {"questions": []},
626
+ "test": {"questions": []},
627
+ }
628
+
629
+ if not os.path.exists(train_fpath):
630
+ data_fpath = os.path.join(train_dir, "BioASQ-training7b/trainining7b.json")
631
+ with open(data_fpath, "rt", encoding="utf-8") as file:
632
+ data = json.load(file)
633
+
634
+ for record in data["questions"]:
635
+ if record["type"] != "yesno":
636
+ continue
637
+ if record["id"] in _BLURB_SPLITS["dev"]:
638
+ blurb_splits["dev"]["questions"].append(record)
639
+ else:
640
+ blurb_splits["train"]["questions"].append(record)
641
+
642
+ with open(train_fpath, "wt", encoding="utf-8") as file:
643
+ json.dump(blurb_splits["train"], file, indent=2)
644
+
645
+ with open(dev_fpath, "wt", encoding="utf-8") as file:
646
+ json.dump(blurb_splits["dev"], file, indent=2)
647
+
648
+ # create test split from yesno questions
649
+ with open(os.path.join(test_dir, gold_fpath), "rt", encoding="utf-8") as file:
650
+ data = json.load(file)
651
+
652
+ for record in data["questions"]:
653
+ if record["type"] != "yesno":
654
+ continue
655
+ blurb_splits["test"]["questions"].append(record)
656
+
657
+ test_fpath = os.path.join(test_dir, "blurb_bioasq_test.json")
658
+ with open(test_fpath, "wt", encoding="utf-8") as file:
659
+ json.dump(blurb_splits["test"], file, indent=2)
660
+
661
+ return [
662
+ datasets.SplitGenerator(
663
+ name=datasets.Split.TRAIN,
664
+ gen_kwargs={
665
+ "filepath": train_fpath,
666
+ "split": "train",
667
+ },
668
+ ),
669
+ datasets.SplitGenerator(
670
+ name=datasets.Split.VALIDATION,
671
+ gen_kwargs={
672
+ "filepath": dev_fpath,
673
+ "split": "dev",
674
+ },
675
+ ),
676
+ datasets.SplitGenerator(
677
+ name=datasets.Split.TEST,
678
+ gen_kwargs={
679
+ "filepath": test_fpath,
680
+ "split": "test",
681
+ },
682
+ ),
683
+ ]
684
+
685
+ def _split_generators(self, dl_manager):
686
+ """Returns SplitGenerators."""
687
+
688
+ if self.config.data_dir is None:
689
+ raise ValueError("This is a local dataset. Please pass the data_dir kwarg to load_dataset.")
690
+
691
+ train_dir, test_dir = dl_manager.download_and_extract(
692
+ [os.path.join(self.config.data_dir, _url) for _url in _URLs[self.config.subset_id]]
693
+ )
694
+ # create gold dump and get path
695
+ gold_fpath = self._dump_gold_json(test_dir)
696
+
697
+ # older versions of bioasq have different folder formats
698
+ train_fpaths = {
699
+ "bioasq_2b": "BioASQ_2013_TaskB/BioASQ-trainingDataset2b.json",
700
+ "bioasq_3b": "BioASQ-trainingDataset3b.json",
701
+ "bioasq_4b": "BioASQ-training4b/BioASQ-trainingDataset4b.json",
702
+ "bioasq_5b": "BioASQ-training5b/BioASQ-trainingDataset5b.json",
703
+ "bioasq_6b": "BioASQ-training6b/BioASQ-trainingDataset6b.json",
704
+ "bioasq_7b": "BioASQ-training7b/trainining7b.json",
705
+ "bioasq_8b": "training8b.json", # HACK - this zipfile strips the dirname
706
+ "bioasq_9b": "BioASQ-training9b/training9b.json",
707
+ "bioasq_10b": "training10b.json",
708
+ "bioasq_11b": "BioASQ-training11b/training11b.json",
709
+ }
710
+
711
+ # BLURB has custom train/dev/test splits based on Task 7B
712
+ if "blurb" in self.config.name:
713
+ return self._blurb_split_generator(train_dir, test_dir)
714
+
715
+ return [
716
+ datasets.SplitGenerator(
717
+ name=datasets.Split.TRAIN,
718
+ gen_kwargs={
719
+ "filepath": os.path.join(train_dir, train_fpaths[self.config.subset_id]),
720
+ "split": "train",
721
+ },
722
+ ),
723
+ datasets.SplitGenerator(
724
+ name=datasets.Split.TEST,
725
+ gen_kwargs={
726
+ "filepath": os.path.join(test_dir, gold_fpath),
727
+ "split": "test",
728
+ },
729
+ ),
730
+ ]
731
+
732
+ def _get_exact_answer(self, record):
733
+ """The value exact_answer can be in different formats based on question type."""
734
+ if record["type"] == "yesno":
735
+ exact_answer = [record["exact_answer"]]
736
+ elif record["type"] == "summary":
737
+ exact_answer = []
738
+ # summary question types only have an ideal answer, so use that for bigbio
739
+ if self.config.schema == "bigbio_qa":
740
+ exact_answer = (
741
+ record["ideal_answer"] if isinstance(record["ideal_answer"], list) else [record["ideal_answer"]]
742
+ )
743
+
744
+ elif record["type"] == "list":
745
+ exact_answer = record["exact_answer"]
746
+ elif record["type"] == "factoid":
747
+ # older version of bioasq sometimes represent this as as string
748
+ exact_answer = (
749
+ record["exact_answer"] if isinstance(record["exact_answer"], list) else [record["exact_answer"]]
750
+ )
751
+ return exact_answer
752
+
753
+ @staticmethod
754
+ def _normalize_yesno(yesno):
755
+ assert len(yesno) == 1, "There should be only one answer."
756
+ yesno = yesno[0]
757
+ # normalize answers like "Yes."
758
+ yesno = yesno.lower()
759
+ if yesno.startswith("yes"):
760
+ return ["yes"]
761
+ elif yesno.startswith("no"):
762
+ return ["no"]
763
+ else:
764
+ raise ValueError(f"Unrecognized yesno value: {yesno}")
765
+
766
+ def _generate_examples(self, filepath, split):
767
+ """Yields examples as (key, example) tuples."""
768
+
769
+ if self.config.schema == "source":
770
+ with open(filepath, encoding="utf-8") as file:
771
+ data = json.load(file)
772
+ for i, record in enumerate(data["questions"]):
773
+ yield i, {
774
+ "id": record["id"],
775
+ "type": record["type"],
776
+ "body": record["body"],
777
+ "documents": record["documents"],
778
+ "concepts": record["concepts"] if "concepts" in record else [],
779
+ "triples": record["triples"] if "triples" in record else [],
780
+ "ideal_answer": record["ideal_answer"]
781
+ if isinstance(record["ideal_answer"], list)
782
+ else [record["ideal_answer"]],
783
+ "exact_answer": self._get_exact_answer(record),
784
+ "snippets": record["snippets"] if "snippets" in record else [],
785
+ }
786
+
787
+ elif self.config.schema == "bigbio_qa":
788
+ # NOTE: Years 2014-2016 (BioASQ2-BioASQ4) have duplicate records
789
+ cache = set()
790
+ with open(filepath, encoding="utf-8") as file:
791
+ uid = 0
792
+ data = json.load(file)
793
+ for record in data["questions"]:
794
+ # for questions that do not have snippets, skip
795
+ if "snippets" not in record:
796
+ continue
797
+
798
+ choices = []
799
+ answer = self._get_exact_answer(record)
800
+ if record["type"] == "yesno":
801
+ choices = ["yes", "no"]
802
+ answer = self._normalize_yesno(answer)
803
+
804
+ for i, snippet in enumerate(record["snippets"]):
805
+ key = f'{record["id"]}_{i}'
806
+ # ignore duplicate records
807
+ if key not in cache:
808
+ cache.add(key)
809
+ yield uid, {
810
+ "id": key,
811
+ "document_id": snippet["document"],
812
+ "question_id": record["id"],
813
+ "question": record["body"],
814
+ "type": record["type"],
815
+ "choices": choices,
816
+ "context": snippet["text"],
817
+ "answer": answer,
818
+ }
819
+ uid += 1
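The `_dump_gold_json` helper above merges the per-batch gold files (e.g. `9B1_golden.json`, ..., `9B5_golden.json`) into one combined test file. The core merge pattern can be sketched standalone; the file names and question IDs below are made up for illustration:

```python
import glob
import json
import os
import tempfile

# Create two hypothetical per-batch gold files, each with a "questions" list.
data_dir = tempfile.mkdtemp()
for batch, qids in enumerate([["q1", "q2"], ["q3"]], start=1):
    with open(os.path.join(data_dir, f"7B{batch}_golden.json"), "wt", encoding="utf-8") as f:
        json.dump({"questions": [{"id": qid} for qid in qids]}, f)

gold_fpath = os.path.join(data_dir, "bx_golden.json")
if not os.path.exists(gold_fpath):
    # Concatenate the "questions" lists from every batch file, in sorted order.
    merged = {"questions": []}
    for fname in sorted(glob.glob(os.path.join(data_dir, "7B*_golden.json"))):
        with open(fname, "rt", encoding="utf-8") as f:
            merged["questions"].extend(json.load(f)["questions"])
    with open(gold_fpath, "wt", encoding="utf-8") as f:
        json.dump(merged, f, indent=2)

with open(gold_fpath, encoding="utf-8") as f:
    n_questions = len(json.load(f)["questions"])
```

Because the merged file is written to disk and guarded by `os.path.exists`, the merge runs only once per extracted archive.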
bioasq_task_b.py CHANGED
@@ -61,6 +61,24 @@ _CITATION = """\
 _DATASETNAME = "bioasq_task_b"
 _DISPLAYNAME = "BioASQ Task B"
 
+_BIOASQ_11B_DESCRIPTION = """\
+The data are intended to be used as training and development data for BioASQ
+11, which will take place during 2023. There is one file containing the data:
+ - training11b.json
+
+The file contains the data of the first ten editions of the challenge: 4719
+questions [1] with their relevant documents, snippets, concepts and RDF
+triples, exact and ideal answers.
+
+Differences with BioASQ-training10b.json
+ - 485 new questions added from BioASQ10
+ - The question with id 621ecf1a3a8413c653000061 had identical body with
+   5ac0a36f19833b0d7b000002. All relevant elements from both questions
+   are available in the merged question with id 5ac0a36f19833b0d7b000002.
+
+[1] The distribution of 4719 questions : 1417 factoid, 1271 yesno, 1130 summary, 901 list
+"""
+
 _BIOASQ_10B_DESCRIPTION = """\
 The data are intended to be used as training and development data for BioASQ
 10, which will take place during 2022. There is one file containing the data:
@@ -361,6 +379,7 @@ See 'Domain-Specific Language Model Pretraining for Biomedical
 Natural Language Processing' """
 
 _DESCRIPTION = {
+    "bioasq_11b": _BIOASQ_11B_DESCRIPTION,
     "bioasq_10b": _BIOASQ_10B_DESCRIPTION,
     "bioasq_9b": _BIOASQ_9B_DESCRIPTION,
     "bioasq_8b": _BIOASQ_8B_DESCRIPTION,
@@ -380,6 +399,7 @@ _HOMEPAGE = "http://participants-area.bioasq.org/datasets/"
 _LICENSE = "NLM_LICENSE"
 
 _URLs = {
+    "bioasq_11b": ["BioASQ-training11b.zip", "Task11BGoldenEnriched.zip"],
     "bioasq_10b": ["BioASQ-training10b.zip", "Task10BGoldenEnriched.zip"],
     "bioasq_9b": ["BioASQ-training9b.zip", "Task9BGoldenEnriched.zip"],
     "bioasq_8b": ["BioASQ-training8b.zip", "Task8BGoldenEnriched.zip"],
@@ -489,9 +509,9 @@ class BioasqTaskBDataset(datasets.GeneratorBasedBuilder):
     SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)
     BIGBIO_VERSION = datasets.Version(_BIGBIO_VERSION)
 
-    # BioASQ2 through BioASQ10
+    # BioASQ2 through BioASQ11
     BUILDER_CONFIGS = []
-    for version in range(2, 11):
+    for version in range(2, 12):
         BUILDER_CONFIGS.append(
             BigBioConfig(
                 name=f"bioasq_{version}b_source",
@@ -695,7 +715,8 @@ class BioasqTaskBDataset(datasets.GeneratorBasedBuilder):
             "bioasq_7b": "BioASQ-training7b/trainining7b.json",
             "bioasq_8b": "training8b.json",  # HACK - this zipfile strips the dirname
             "bioasq_9b": "BioASQ-training9b/training9b.json",
-            "bioasq_10b": "BioASQ-training10b/training10b.json",
+            "bioasq_10b": "training10b.json",
+            "bioasq_11b": "BioASQ-training11b/training11b.json",
         }
 
         # BLURB has custom train/dev/test splits based on Task 7B
@@ -746,6 +767,19 @@ class BioasqTaskBDataset(datasets.GeneratorBasedBuilder):
             )
         return exact_answer
 
+    @staticmethod
+    def _normalize_yesno(yesno):
+        assert len(yesno) == 1, "There should be only one answer."
+        yesno = yesno[0]
+        # normalize answers like "Yes."
+        yesno = yesno.lower()
+        if yesno.startswith('yes'):
+            return ['yes']
+        elif yesno.startswith('no'):
+            return ['no']
+        else:
+            raise ValueError(f'Unrecognized yesno value: {yesno}')
+
     def _generate_examples(self, filepath, split):
         """Yields examples as (key, example) tuples."""
 
@@ -777,6 +811,13 @@ class BioasqTaskBDataset(datasets.GeneratorBasedBuilder):
                     # for questions that do not have snippets, skip
                     if "snippets" not in record:
                         continue
+
+                    choices = []
+                    answer = self._get_exact_answer(record)
+                    if record["type"] == 'yesno':
+                        choices = ['yes', 'no']
+                        answer = self._normalize_yesno(answer)
+
                     for i, snippet in enumerate(record["snippets"]):
                         key = f'{record["id"]}_{i}'
                         # ignore duplicate records
@@ -788,8 +829,8 @@ class BioasqTaskBDataset(datasets.GeneratorBasedBuilder):
                                 "question_id": record["id"],
                                 "question": record["body"],
                                 "type": record["type"],
-                                "choices": [],
+                                "choices": choices,
                                 "context": snippet["text"],
-                                "answer": self._get_exact_answer(record),
+                                "answer": answer,
                             }
                             uid += 1
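With these changes, a yes/no question in the `bigbio_qa` schema now carries explicit `choices` and a normalized `answer` instead of an empty choice list and the raw exact answer. A minimal sketch of the resulting record shape (the input record below is invented for illustration):

```python
# Invented yes/no record mimicking the BioASQ source JSON format.
record = {
    "id": "q123",
    "type": "yesno",
    "body": "Is gene X associated with disease Y?",
    "exact_answer": "Yes.",
    "snippets": [{"document": "pubmed/123", "text": "Gene X is associated with disease Y."}],
}

# Mirror the new choices/answer handling from _generate_examples:
# yes/no questions get fixed choices and a normalized single-element answer.
choices = []
answer = [record["exact_answer"]]
if record["type"] == "yesno":
    choices = ["yes", "no"]
    # _normalize_yesno collapses free-form strings like "Yes." to "yes"/"no".
    answer = ["yes" if answer[0].lower().startswith("yes") else "no"]

example = {
    "id": f'{record["id"]}_0',
    "document_id": record["snippets"][0]["document"],
    "question_id": record["id"],
    "question": record["body"],
    "type": record["type"],
    "choices": choices,
    "context": record["snippets"][0]["text"],
    "answer": answer,
}
```

Normalizing here means downstream consumers can compare `answer` directly against `choices` without worrying about capitalization or trailing punctuation in the source annotations.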