---
license: cc-by-sa-4.0
task_categories:
- question-answering
- table-question-answering
- text-generation
language:
- en
tags:
- croissant
pretty_name: UDA-QA
size_categories:
- 10K<n<100K
config_names:
- feta
- nq
- paper_text
- paper_tab
- fin
- tat
dataset_info:
- config_name: feta
features:
- name: doc_name
dtype: string
- name: q_uid
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: doc_url
dtype: string
- config_name: nq
features:
- name: doc_name
dtype: string
- name: q_uid
dtype: string
- name: question
dtype: string
- name: short_answer
dtype: string
- name: long_answer
dtype: string
- name: doc_url
dtype: string
- config_name: paper_text
features:
- name: doc_name
dtype: string
- name: q_uid
dtype: string
- name: question
dtype: string
- name: answer_1
dtype: string
- name: answer_2
dtype: string
- name: answer_3
dtype: string
- config_name: paper_tab
features:
- name: doc_name
dtype: string
- name: q_uid
dtype: string
- name: question
dtype: string
- name: answer_1
dtype: string
- name: answer_2
dtype: string
- name: answer_3
dtype: string
- config_name: fin
features:
- name: doc_name
dtype: string
- name: q_uid
dtype: string
- name: question
dtype: string
- name: answer_1
dtype: string
- name: answer_2
dtype: string
# - config_name: tat
# features:
# - name: doc_name
# dtype: string
# - name: q_uid
# dtype: string
# - name: question
# dtype: string
# - name: answer
#   sequence: string
# - name: answer_scale
# dtype: string
# - name: answer_type
# dtype: string
configs:
- config_name: feta
data_files:
- split: test
path: feta/test*
- config_name: nq
data_files:
- split: test
path: nq/test*
- config_name: paper_text
data_files:
- split: test
path: paper_text/test*
- config_name: paper_tab
data_files:
- split: test
path: paper_tab/test*
- config_name: fin
data_files:
- split: test
path: fin/test*
- config_name: tat
data_files:
- split: test
path: tat/test*
---
# Dataset Card for UDA-QA
UDA is a benchmark suite for Retrieval-Augmented Generation (RAG) in real-world document analysis, comprising 2,965 documents and 29,590 expert-annotated Q&A pairs.
It includes six sub-datasets across three pivotal domains: finance, academia, and knowledge bases.
Each data item in UDA is structured as a document-question-answer triplet.
The documents are retained in their original file formats, without parsing or segmentation, to mirror the authenticity of real-world applications.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Yulong Hui, Tsinghua University
- **Language(s) (NLP):** English
- **License:** CC-BY-SA-4.0
- **Repository:** https://github.com/qinchuanhui/UDA-Benchmark
## Uses
### Direct Use
Question-answering tasks on complete, unstructured documents.
After loading the Q&A data, you should also **download the source document files from the folder `src_doc_files`**.
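For example, a sub-dataset can be loaded with the 🤗 `datasets` library. A minimal sketch follows; the repository ID `qinchuanhui/UDA-QA` is inferred from this card's namespace and is an assumption if your copy lives elsewhere.
```python
# Minimal loading sketch; the repo ID "qinchuanhui/UDA-QA" is inferred from
# this card and may need adjusting for your copy of the dataset.
from datasets import load_dataset

# Load one sub-dataset (config) by name; all configs expose a "test" split.
feta = load_dataset("qinchuanhui/UDA-QA", "feta", split="test")
print(feta[0]["question"], "->", feta[0]["answer"])

# Each Q&A item references its raw file by `doc_name`; fetch the matching
# source documents from the `src_doc_files` folder separately.
```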
### Extended Use
- Evaluate the effectiveness of retrieval strategies, using the evidence provided in the `extended_qa_info` folder (a scoring sketch follows this list).
- Directly assess the performance of LLMs on numerical and table reasoning, using the evidence in the `extended_qa_info` folder as context.
- Assess the effectiveness of parsing strategies on unstructured PDF documents.
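As an illustration of the first item, here is a toy sketch of evidence-recall scoring. The `gold` mapping below only mimics the kind of `q_uid`-to-evidence lookup you might build from `extended_qa_info`; the folder's actual file layout may differ, so treat every name here as hypothetical.
```python
# Toy sketch: score a retriever by whether it recovers annotated evidence.
# The `gold` dict is a stand-in for data loaded from `extended_qa_info`.

def evidence_hit(retrieved_chunks, evidence):
    """True if any retrieved chunk contains the annotated evidence span."""
    return any(evidence in chunk for chunk in retrieved_chunks)

# Hypothetical gold evidence, keyed by q_uid.
gold = {"q-001": "net revenue increased by 12%"}

# Stand-in for your own retriever's output for question q-001.
retrieved = ["... the net revenue increased by 12% year over year ..."]

recall = sum(evidence_hit(retrieved, ev) for ev in gold.values()) / len(gold)
print(f"evidence recall: {recall:.2f}")
```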
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Field Name | Type | Description | Example
--- | --- | --- | ---
doc_name | string | name of the source document | 1912.01214
q_uid | string | unique ID of the question | 9a05a5f4351db75da371f7ac12eb0b03607c4b87
question | string | the raised question | which datasets did they experiment with?
answer <br />or answer_1, answer_2 <br />or short_answer, long_answer | string | ground-truth answer(s) | Europarl, MultiUN
**Additional Notes:** Some sub-datasets have multiple ground-truth answers, organized as `answer_1`, `answer_2` (in fin, paper_tab, and paper_text) or `short_answer`, `long_answer` (in nq). In the tat sub-dataset, the answer is organized as a sequence because of the multi-span Q&A type. A helper for normalizing these fields is sketched below.
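The following small helper (not part of the dataset itself) is a sketch, assuming the field names in the table above, of flattening each config's answer fields into one list of gold answers:
```python
# Normalize the per-config answer fields into a list of gold answers.
def gold_answers(example, config):
    if config == "fin":
        keys = ("answer_1", "answer_2")
    elif config in ("paper_tab", "paper_text"):
        keys = ("answer_1", "answer_2", "answer_3")
    elif config == "nq":
        keys = ("short_answer", "long_answer")
    elif config == "tat":
        # tat answers are multi-span and already arrive as a sequence
        return list(example["answer"])
    else:  # feta: a single answer string
        return [example["answer"]]
    # Drop fields that are absent or empty for this example.
    return [example[k] for k in keys if example.get(k)]
```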
## Dataset Creation
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
We collect the Q&A labels from openly released datasets (i.e., the source datasets), all of which are annotated by human participants.
We then perform a series of essential construction steps, including source-document identification, categorization, filtering, and data transformation.
#### Who are the source data producers?
[1] Chen, Z., Chen, W., Smiley, C., Shah, S., Borova, I., Langdon, D., Moussa, R., Beane, M., Huang, T.-H., Routledge, B., et al. FinQA: A dataset of numerical reasoning over financial data. arXiv preprint arXiv:2109.00122 (2021).
[2] Zhu, F., Lei, W., Feng, F., Wang, C., Zhang, H., and Chua, T.-S. Towards complex document understanding by discrete reasoning. In Proceedings of the 30th ACM International Conference on Multimedia (2022), pp. 4857–4866.
[3] Dasigi, P., Lo, K., Beltagy, I., Cohan, A., Smith, N. A., and Gardner, M. A dataset of information-seeking questions and answers anchored in research papers. arXiv preprint arXiv:2105.03011 (2021).
[4] Nan, L., Hsieh, C., Mao, Z., Lin, X. V., Verma, N., Zhang, R., Kryściński, W., Schoelkopf, H., Kong, R., Tang, X., et al. FeTaQA: Free-form table question answering. Transactions of the Association for Computational Linguistics 10 (2022), 35–49.
[5] Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Devlin, J., Lee, K., et al. Natural Questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics 7 (2019), 453–466.
## Considerations for Using the Data
#### Personal and Sensitive Information
The dataset does not contain data that might be considered personal, sensitive, or private. The sources are publicly available reports, academic papers, and Wikipedia pages.
## Dataset Card Contact
[email protected] |