piotr-rybak committed on
Commit cec5231
1 Parent(s): 456188b

add readme

Files changed (2):
  1. .gitattributes +1 -0
  2. README.md +138 -27
.gitattributes CHANGED
@@ -56,3 +56,4 @@ polqa_v1.0.csv filter=lfs diff=lfs merge=lfs -text
 data/valid.csv filter=lfs diff=lfs merge=lfs -text
 data/test.csv filter=lfs diff=lfs merge=lfs -text
 data/train.csv filter=lfs diff=lfs merge=lfs -text
+data/passages.jsonl filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -2,98 +2,204 @@
 task_categories:
 - question-answering
 - text-retrieval
 language:
 - pl
 pretty_name: PolQA
 size_categories:
 - 10K<n<100K
 ---
 
-# Dataset Card for Dataset Name
 
 ## Dataset Description
 
-- **Homepage:**
-- **Repository:**
-- **Paper:**
-- **Leaderboard:**
-- **Point of Contact:**
 
 ### Dataset Summary
 
-This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
 
 ### Supported Tasks and Leaderboards
 
-[More Information Needed]
 
 ### Languages
 
-[More Information Needed]
 
 ## Dataset Structure
 
 ### Data Instances
 
-[More Information Needed]
 
 ### Data Fields
 
-[More Information Needed]
 
 ### Data Splits
 
-[More Information Needed]
 
 ## Dataset Creation
 
 ### Curation Rationale
 
-[More Information Needed]
 
 ### Source Data
 
 #### Initial Data Collection and Normalization
 
-[More Information Needed]
 
 #### Who are the source language producers?
 
-[More Information Needed]
 
 ### Annotations
 
 #### Annotation process
 
-[More Information Needed]
 
 #### Who are the annotators?
 
-[More Information Needed]
 
 ### Personal and Sensitive Information
 
-[More Information Needed]
 
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
-[More Information Needed]
 
 ### Discussion of Biases
 
-[More Information Needed]
 
 ### Other Known Limitations
 
-[More Information Needed]
 
 ## Additional Information
 
 ### Dataset Curators
 
-[More Information Needed]
 
 ### Licensing Information
 
@@ -101,8 +207,13 @@ This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
 
 ### Citation Information
 
-[More Information Needed]
-
-### Contributions
-
-[More Information Needed]

 task_categories:
 - question-answering
 - text-retrieval
+- text2text-generation
+task_ids:
+- open-domain-qa
+- document-retrieval
+- abstractive-qa
 language:
 - pl
 pretty_name: PolQA
 size_categories:
 - 10K<n<100K
+annotations_creators:
+- expert-generated
 ---
 
+# Dataset Card for PolQA
 
 ## Dataset Description
 
+- **Paper:** [Improving Question Answering Performance through Manual Annotation: Costs, Benefits and Strategies](https://arxiv.org/abs/2212.08897)
+- **Point of Contact:** [Piotr Rybak](mailto:[email protected])
 
 ### Dataset Summary
 
+PolQA is the first Polish dataset for open-domain question answering. It consists of 7,000 questions, 87,525 manually labeled evidence passages, and a corpus of over 7 million candidate passages. The dataset can be used to train both a passage retriever and an abstractive reader.
31
  ### Supported Tasks and Leaderboards
32
 
33
+ - `open-domain-qa`: The dataset can be used to train a model for open-domain question answering. Success on this task is typically measured using [metric defined during PolEval 2021](https://2021.poleval.pl/tasks/task4).
34
+ - `document-retrieval`: The dataset can be used to train a model for document retrieval. Success on this task is typically measured by [top-k retrieval accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.top_k_accuracy_score.html) or [NDCG](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.ndcg_score.html).
35
+ - `abstractive-qa`: The dataset can be used to train a model for abstractive question answering. Success on this task is typically measured using [metric defined during PolEval 2021](https://2021.poleval.pl/tasks/task4).
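For readers unfamiliar with the retrieval metric, top-k accuracy can be computed in a few lines of Python. The ranked passage ids below are hypothetical, made up purely for illustration:

```python
def top_k_accuracy(ranked_ids, relevant_ids, k):
    """Fraction of questions for which at least one relevant
    passage appears among the top-k retrieved passages."""
    hits = sum(
        1 for ranked, relevant in zip(ranked_ids, relevant_ids)
        if set(ranked[:k]) & set(relevant)
    )
    return hits / len(ranked_ids)

# Hypothetical per-question rankings and gold relevant passage ids.
ranked = [["42609-0", "7-1", "9-2"], ["1-0", "2-0", "3-0"]]
relevant = [["42609-0"], ["3-0"]]

print(top_k_accuracy(ranked, relevant, k=1))  # 0.5
print(top_k_accuracy(ranked, relevant, k=3))  # 1.0
```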
 
 ### Languages
 
+The text is in Polish, as spoken by the host of the [Jeden z Dziesięciu](https://pl.wikipedia.org/wiki/Jeden_z_dziesi%C4%99ciu) TV show (questions) and by [Polish Wikipedia](https://pl.wikipedia.org/) editors (passages). The BCP-47 code for Polish is pl-PL.
 
 ## Dataset Structure
 
 ### Data Instances
 
+The main part of the dataset consists of manually annotated question-passage pairs. Each instance contains a `question`, a passage (`passage_id`, `passage_title`, `passage_text`), and a boolean indicator of whether the passage is `relevant` to the given question (i.e. whether it contains the answer).
+
+For each `question` there is a list of possible `answers` formulated in natural language, the way a Polish speaker would answer the question. This means the answers may contain prepositions, be inflected, and include punctuation. In some cases, a question may have multiple correct answer variants, e.g. numbers written both as numerals and as words, synonyms, or abbreviations and their expansions.
+
+Additionally, each question-answer pair is classified by `question_formulation`, `question_type`, and `entity_type`/`entity_subtype`, following the taxonomy proposed by [Maciej Ogrodniczuk and Piotr Przybyła (2021)](http://nlp.ipipan.waw.pl/Bib/ogr:prz:21:poleval.pdf).
+
+```
+{
+  'question_id': 6,
+  'passage_title': 'Mumbaj',
+  'passage_text': 'Mumbaj lub Bombaj (marathi मुंबई, trb.: Mumbaj; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim.',
+  'passage_wiki': 'Mumbaj lub Bombaj (mr. मुंबई, trb.: "Mumbaj"; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim. Wraz z miastami satelitarnymi tworzy najludniejszą po Delhi aglomerację liczącą 23 miliony mieszkańców. Dzięki naturalnemu położeniu jest to największy port morski kraju. Znajdują się tutaj także najsilniejsze giełdy Azji Południowej: National Stock Exchange of India i Bombay Stock Exchange.',
+  'passage_id': '42609-0',
+  'duplicate': False,
+  'question': 'W którym państwie leży Bombaj?',
+  'relevant': True,
+  'annotated_by': 'Igor',
+  'answers': "['w Indiach', 'Indie']",
+  'question_formulation': 'QUESTION',
+  'question_type': 'SINGLE ENTITY',
+  'entity_type': 'NAMED',
+  'entity_subtype': 'COUNTRY',
+  'split': 'train',
+  'passage_source': 'human'
+}
+```
+
+The second part of the dataset is a corpus of passages from the Polish Wikipedia (March 2022 snapshot). The raw Wikipedia snapshot was parsed using [WikiExtractor](https://github.com/attardi/wikiextractor) and split into passages at paragraph boundaries, or when a passage grew longer than 500 characters.
+
+```
+{
+  'id': '42609-0',
+  'title': 'Mumbaj',
+  'text': 'Mumbaj lub Bombaj (mr. मुंबई, trb.: "Mumbaj"; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim. Wraz z miastami satelitarnymi tworzy najludniejszą po Delhi aglomerację liczącą 23 miliony mieszkańców. Dzięki naturalnemu położeniu jest to największy port morski kraju. Znajdują się tutaj także najsilniejsze giełdy Azji Południowej: National Stock Exchange of India i Bombay Stock Exchange.'
+}
+```
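Since the corpus ships as a JSON Lines file (`data/passages.jsonl`, added to Git LFS in this commit), it can be streamed line by line without loading all 7 million passages into memory. A minimal sketch, with a single abbreviated record inlined for illustration:

```python
import json

def iter_passages(lines):
    """Yield one passage dict per non-empty JSONL line."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# In practice, `lines` would come from open("data/passages.jsonl", encoding="utf-8").
sample = ['{"id": "42609-0", "title": "Mumbaj", "text": "Mumbaj lub Bombaj..."}']
for passage in iter_passages(sample):
    print(passage["id"], passage["title"])  # 42609-0 Mumbaj
```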
 
 ### Data Fields
 
+Question-passage pairs:
+
+- `question_id`: an integer id of the question
+- `passage_title`: a string containing the title of the Wikipedia article
+- `passage_text`: a string containing the passage text as extracted by the human annotator
+- `passage_wiki`: a string containing the passage text as it can be found in the provided Wikipedia corpus. Empty if the passage doesn't exist in the corpus.
+- `passage_id`: a string containing the id of the passage in the provided Wikipedia corpus. Empty if the passage doesn't exist in the corpus.
+- `duplicate`: a boolean flag indicating whether the question-passage pair is duplicated in the dataset. This occurs when the same passage was found in multiple passage sources.
+- `question`: a string containing the question
+- `relevant`: a boolean flag indicating whether the passage is relevant to the question (i.e. whether it contains the answer)
+- `annotated_by`: a string containing the name of the annotator who verified the relevance of the pair
+- `answers`: a string containing a list of possible short answers to the question
+- `question_formulation`: a string describing the kind of expression used to request information. One of the following:
+  - `QUESTION`, e.g. *What is the name of the first letter of the Greek alphabet?*
+  - `COMMAND`, e.g. *Expand the abbreviation 'CIA'.*
+  - `COMPOUND`, e.g. *This French writer, born in the 19th century, is considered a pioneer of sci-fi literature. What is his name?*
+- `question_type`: a string indicating what type of information is sought by the question. One of the following:
+  - `SINGLE ENTITY`, e.g. *Who is the hero in the Tomb Raider video game series?*
+  - `MULTIPLE ENTITIES`, e.g. *Which two seas are linked by the Corinth Canal?*
+  - `ENTITY CHOICE`, e.g. *Is "Sombrero" a type of dance, a hat, or a dish?*
+  - `YES/NO`, e.g. *When the term of office of the Polish Sejm is terminated, does it apply to the Senate as well?*
+  - `OTHER NAME`, e.g. *What was the nickname of Louis I, the King of the Franks?*
+  - `GAP FILLING`, e.g. *Finish the proverb: "If you fly with the crows... ".*
+- `entity_type`: a string containing the type of the sought entity. One of the following: `NAMED`, `UNNAMED`, or `YES/NO`.
+- `entity_subtype`: a string containing the subtype of the sought entity. Takes one of 34 different values.
+- `split`: a string containing the split of the dataset. One of the following: `train`, `valid`, or `test`.
+- `passage_source`: a string containing the source of the passage. One of the following:
+  - `human`: the passage was proposed by a human annotator using any internal (i.e. Wikipedia search) or external (e.g. Google) search engine and any keywords or queries they considered useful
+  - `hard-negatives`: the passage was proposed by a neural retriever trained on the passages found by the human annotators
+  - `zero-shot`: the passage was proposed by a BM25 retriever and re-ranked using a [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2)
+
+Corpus of passages:
+
+- `id`: a string combining the Wikipedia article id and the index of the extracted passage. Matches `passage_id` from the main part of the dataset.
+- `title`: a string containing the title of the Wikipedia article. Matches `passage_title` from the main part of the dataset.
+- `text`: a string containing the passage text. Matches `passage_wiki` from the main part of the dataset.
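Note that `answers` holds the *string representation* of a Python list rather than a list, so it should be parsed before use, e.g. with the standard library's `ast.literal_eval`:

```python
import ast

# The `answers` value from the example instance above.
raw_answers = "['w Indiach', 'Indie']"
answers = ast.literal_eval(raw_answers)

print(answers)       # ['w Indiach', 'Indie']
print(len(answers))  # 2
```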
 
 ### Data Splits
 
+The questions are assigned to one of three splits: `train`, `validation`, and `test`. The `validation` and `test` questions are randomly sampled from the `test-B` dataset of the [PolEval 2021](https://2021.poleval.pl/tasks/task4) competition.
+
+|            | # questions | # positive passages | # negative passages |
+|------------|------------:|--------------------:|--------------------:|
+| train      |       5,000 |              27,131 |              34,904 |
+| validation |       1,000 |               5,839 |               6,927 |
+| test       |       1,000 |               5,938 |               6,786 |
+
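As a quick consistency check, the per-split counts in the table sum to the 7,000 questions and 87,525 manually labeled passages quoted in the dataset summary:

```python
# Counts copied from the table above.
questions = {"train": 5_000, "validation": 1_000, "test": 1_000}
positives = {"train": 27_131, "validation": 5_839, "test": 5_938}
negatives = {"train": 34_904, "validation": 6_927, "test": 6_786}

print(sum(questions.values()))                            # 7000
print(sum(positives.values()) + sum(negatives.values()))  # 87525
```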
 
 ## Dataset Creation
 
 ### Curation Rationale
 
+The PolQA dataset was created to support and promote research in open-domain question answering for Polish. It also serves as a benchmark for evaluating OpenQA systems.
145
 
146
  ### Source Data
147
 
148
  #### Initial Data Collection and Normalization
149
 
150
+ The majority of questions come from two existing resources, the
151
+ 6,000 questions from the [PolEval 2021 shared task on QA](https://2021.poleval.pl/tasks/task4) and additional 1,000 questions gathered by one of the shared
152
+ task [participants](http://poleval.pl/files/poleval2021.pdf#page=151). Originally, the questions come from collections associated with TV shows, both officially published and gathered online by their fans, as well as questions used in actual quiz competitions, on TV or online.
153
+
154
+ The evidence passages come from the Polish Wikipedia (March 2022 snapshot). The raw Wikipedia snapshot was parsed using [WikiExtractor](https://github.com/attardi/wikiextractor) and split into passages at the ends of the paragraphs or if the passage was longer than 500 characters.
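The exact splitting script is not published with the card; under the rule described above (split at paragraph ends, or once a passage grows past 500 characters), a rough, greedy sketch might look like this:

```python
def split_into_passages(article_text, max_len=500):
    """Greedily pack consecutive paragraphs into passages, starting
    a new passage when max_len would be exceeded (an approximation
    of the described procedure)."""
    passages, current = [], ""
    for paragraph in article_text.split("\n"):
        paragraph = paragraph.strip()
        if not paragraph:
            continue
        candidate = f"{current} {paragraph}".strip()
        if current and len(candidate) > max_len:
            passages.append(current)
            current = paragraph
        else:
            current = candidate
    if current:
        passages.append(current)
    return passages

# Three paragraphs of 300, 300, and 100 characters.
article = "\n".join(["a" * 300, "b" * 300, "c" * 100])
print([len(p) for p in split_into_passages(article)])  # [300, 401]
```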
 
 #### Who are the source language producers?
 
+The questions come from various sources and their authors are unknown, but they are mostly analogous (or even identical) to questions asked during the [Jeden z Dziesięciu](https://pl.wikipedia.org/wiki/Jeden_z_dziesi%C4%99ciu) TV show.
+
+The passages were written by the editors of the Polish Wikipedia.
 
 ### Annotations
 
 #### Annotation process
 
+Two approaches were used to annotate the question-passage pairs. Each consists of two phases: retrieval of candidate passages and manual verification of their relevance.
+
+In the first approach, we asked annotators to use internal (i.e. Wikipedia search) or external (e.g. Google) search engines to find up to five relevant passages, using any keywords or queries they considered useful (`passage_source="human"`). Based on those passages, we trained a neural retriever to extend the set of relevant passages, as well as to retrieve hard negatives (`passage_source="hard-negatives"`).
+
+In the second approach, candidate passages were proposed by a BM25 retriever and re-ranked using a [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) (`passage_source="zero-shot"`).
+
+In both cases, all proposed question-passage pairs were manually verified by the annotators.
 
 #### Who are the annotators?
 
+The annotation team consisted of 16 annotators, all native Polish speakers, most of them with a linguistic background and previous annotation experience.
 
 ### Personal and Sensitive Information
 
+The dataset does not contain any personal or sensitive information.
 
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
+This dataset was created to promote research in open-domain question answering for Polish and to enable the development of question answering systems.
 
 ### Discussion of Biases
 
+The passages proposed by the `hard-negatives` and `zero-shot` methods are bound to be easier for retrievers to find, since retrievers proposed them in the first place. To mitigate this bias, we also include passages found by human annotators in an unconstrained way (`passage_source="human"`), which we hypothesize results in more unbiased and diverse examples. Moreover, we asked the annotators to find not one but up to five passages, preferably from different articles, to further increase passage diversity.
 
 ### Other Known Limitations
 
+The PolQA dataset focuses on trivia questions, which might limit its usefulness in real-world applications, since neural retrievers trained on trivia questions generalize poorly to other domains.
 
 ## Additional Information
 
 ### Dataset Curators
 
+The PolQA dataset was developed by Piotr Rybak, Piotr Przybyła, and Maciej Ogrodniczuk from the [Institute of Computer Science, Polish Academy of Sciences](http://zil.ipipan.waw.pl/).
 
 ### Licensing Information
 
 ### Citation Information
 
+```
+@misc{rybak2022improving,
+  title={Improving Question Answering Performance through Manual Annotation: Costs, Benefits and Strategies},
+  author={Piotr Rybak and Piotr Przybyła and Maciej Ogrodniczuk},
+  year={2022},
+  eprint={2212.08897},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL}
+}
+```