qinchuanhui committed on
Commit 9234e36
1 Parent(s): d373dd2

Update README.md

Files changed (1)
  1. README.md +93 -1
README.md CHANGED
@@ -124,4 +124,96 @@ configs:
  data_files:
  - split: test
    path: tat/test*
- ---
+ ---
+ # Dataset Card for UDA
+
+ UDA is a benchmark suite for Retrieval-Augmented Generation (RAG) in real-world document analysis, comprising 2,965 documents and 29,590 expert-annotated Q&A pairs.
+ It includes six sub-datasets across three pivotal domains: finance, academia, and knowledge bases.
+ Each data item within UDA is structured as a document-question-answer triplet.
+ The documents are kept in their original file formats, without parsing or segmentation, to mirror the authenticity of real-world applications.
+
+ ## Dataset Details
+
+ ### Dataset Description
+
+ <!-- Provide a longer summary of what this dataset is. -->
+
+ - **Curated by:** Yulong Hui, Tsinghua University
+ - **Language(s) (NLP):** English
+ - **License:** CC-BY-SA-4.0
+ - **Repository:** https://github.com/qinchuanhui/UDA-Benchmark
+
+ ## Uses
+
+ ### Direct Use
+
+ Question-answering tasks on complete, unstructured documents.
+
+ After loading the dataset, you should also **download the source document files from the folder `src_doc_files`**, as shown in the sketch below.
+
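+ A minimal sketch of this workflow, assuming the dataset is hosted under the Hugging Face id `qinchuanhui/UDA-QA` (adjust to the actual repository id) and that each sub-dataset is a config exposing a `test` split:
+
+ ```python
+ from datasets import load_dataset
+ from huggingface_hub import snapshot_download
+
+ REPO_ID = "qinchuanhui/UDA-QA"  # assumed dataset id; adjust if different
+
+ # Load one sub-dataset, e.g. "tat"; per the config above it has a "test" split.
+ ds = load_dataset(REPO_ID, "tat", split="test")
+ print(ds[0]["question"])
+
+ # Fetch only the original source documents from the `src_doc_files` folder.
+ local_dir = snapshot_download(
+     repo_id=REPO_ID,
+     repo_type="dataset",
+     allow_patterns="src_doc_files/*",
+ )
+ print("Source documents are under:", local_dir)
+ ```
+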
+ ### Extended Use
+
+ Evaluate the effectiveness of retrieval strategies using the evidence provided in the `extended_qa_info` folder (a scoring sketch follows below).
+
+ Directly assess the performance of LLMs in numerical reasoning and table reasoning, using the evidence in the `extended_qa_info` folder as context.
+
+ Assess the effectiveness of parsing strategies on unstructured PDF documents.
+
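+ A hit-rate-style scoring sketch for the retrieval use case: `retrieve` and the `gold_evidence` mapping are placeholders for your own retriever and the evidence loaded from `extended_qa_info`, and substring matching is only a crude proxy for a real evidence-matching metric:
+
+ ```python
+ def evidence_hit_rate(items, retrieve, gold_evidence, k=5):
+     """Fraction of questions whose gold evidence appears among the top-k retrieved chunks."""
+     hits = 0
+     for item in items:
+         chunks = retrieve(item["question"], k=k)  # your retriever: question -> top-k text chunks
+         gold = gold_evidence[item["q_uid"]]       # annotated evidence for this question
+         if any(gold in chunk for chunk in chunks):
+             hits += 1
+     return hits / len(items)
+ ```
+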
+ ## Dataset Structure
+
+ <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
+
+ Field Name | Field Type | Description | Example
+ --- | --- | --- | ---
+ doc_name | string | name of the source document | 1912.01214
+ q_uid | string | unique id of the question | 9a05a5f4351db75da371f7ac12eb0b03607c4b87
+ question | string | the raised question | which datasets did they experiment with?
+ answer <br />or answer_1, answer_2 <br />or short_answer, long_answer | string | ground-truth answer(s) | Europarl, MultiUN
+
+ **Additional Notes:** Some sub-datasets have multiple ground-truth answers, organized as `answer_1`, `answer_2` (in fin, paper_tab, and paper_text) or `short_answer`, `long_answer` (in nq). In the tat sub-dataset, the answer is organized as a sequence, due to the multi-span Q&A type.
+
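+ Because the answer field differs across sub-datasets, a small helper (a sketch against the schema described above) can normalize it into a list:
+
+ ```python
+ def gold_answers(item):
+     """Return the ground-truth answers of a data item as a list (assumed field layout)."""
+     if "answer_1" in item:                        # fin, paper_tab, paper_text
+         return [item["answer_1"], item["answer_2"]]
+     if "short_answer" in item:                    # nq
+         return [item["short_answer"], item["long_answer"]]
+     answer = item["answer"]                       # tat stores a sequence (multi-span Q&A)
+     return list(answer) if isinstance(answer, (list, tuple)) else [answer]
+ ```
+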
+ ## Dataset Creation
+
+ ### Source Data
+
+ <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
+
+ #### Data Collection and Processing
+
+ We collect the Q&A labels from openly released datasets (i.e., the source datasets), all of which are annotated by human participants.
+ We then perform a series of essential construction steps, including source-document identification, categorization, filtering, and data transformation.
+
+ #### Who are the source data producers?
+
+ [1] Chen, Z., Chen, W., Smiley, C., Shah, S., Borova, I., Langdon, D., Moussa, R., Beane, M., Huang, T.-H., Routledge, B., et al. FinQA: A dataset of numerical reasoning over financial data. arXiv preprint arXiv:2109.00122 (2021).
+
+ [2] Zhu, F., Lei, W., Feng, F., Wang, C., Zhang, H., and Chua, T.-S. Towards complex document understanding by discrete reasoning. In Proceedings of the 30th ACM International Conference on Multimedia (2022), pp. 4857–4866.
+
+ [3] Dasigi, P., Lo, K., Beltagy, I., Cohan, A., Smith, N. A., and Gardner, M. A dataset of information-seeking questions and answers anchored in research papers. arXiv preprint arXiv:2105.03011 (2021).
+
+ [4] Nan, L., Hsieh, C., Mao, Z., Lin, X. V., Verma, N., Zhang, R., Kryściński, W., Schoelkopf, H., Kong, R., Tang, X., et al. FeTaQA: Free-form table question answering. Transactions of the Association for Computational Linguistics 10 (2022), 35–49.
+
+ [5] Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Devlin, J., Lee, K., et al. Natural Questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics 7 (2019), 453–466.
+
+ ## Considerations for Using the Data
+
+ #### Personal and Sensitive Information
+
+ The dataset doesn't contain data that might be considered personal, sensitive, or private. The sources of data are publicly available reports, papers, and Wikipedia pages.
+
+ <!-- ## Citation [optional] -->
+
+ <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
+
+ <!-- **BibTeX:** -->
+
+ ## Dataset Card Contact
+