|
--- |
|
license: cc-by-sa-4.0 |
|
task_categories: |
|
- question-answering |
|
- table-question-answering |
|
- text-generation |
|
language: |
|
- en |
|
tags: |
|
- croissant |
|
pretty_name: UDA-QA |
|
size_categories: |
|
- 10K<n<100K |
|
config_names: |
|
- feta |
|
- nq |
|
- paper_text |
|
- paper_tab |
|
- fin |
|
- tat |
|
dataset_info: |
|
- config_name: feta |
|
features: |
|
- name: doc_name |
|
dtype: string |
|
- name: q_uid |
|
dtype: string |
|
- name: question |
|
dtype: string |
|
- name: answer |
|
dtype: string |
|
- name: doc_url |
|
dtype: string |
|
- config_name: nq |
|
features: |
|
- name: doc_name |
|
dtype: string |
|
- name: q_uid |
|
dtype: string |
|
- name: question |
|
dtype: string |
|
- name: short_answer |
|
dtype: string |
|
- name: long_answer |
|
dtype: string |
|
- name: doc_url |
|
dtype: string |
|
- config_name: paper_text |
|
features: |
|
- name: doc_name |
|
dtype: string |
|
- name: q_uid |
|
dtype: string |
|
- name: question |
|
dtype: string |
|
- name: answer_1 |
|
dtype: string |
|
- name: answer_2 |
|
dtype: string |
|
- name: answer_3 |
|
dtype: string |
|
- config_name: paper_tab |
|
features: |
|
- name: doc_name |
|
dtype: string |
|
- name: q_uid |
|
dtype: string |
|
- name: question |
|
dtype: string |
|
- name: answer_1 |
|
dtype: string |
|
- name: answer_2 |
|
dtype: string |
|
- name: answer_3 |
|
dtype: string |
|
- config_name: fin |
|
features: |
|
- name: doc_name |
|
dtype: string |
|
- name: q_uid |
|
dtype: string |
|
- name: question |
|
dtype: string |
|
- name: answer_1 |
|
dtype: string |
|
- name: answer_2 |
|
dtype: string |
|
configs: |
|
- config_name: feta |
|
data_files: |
|
- split: test |
|
path: feta/test* |
|
- config_name: nq |
|
data_files: |
|
- split: test |
|
path: nq/test* |
|
- config_name: paper_text |
|
data_files: |
|
- split: test |
|
path: paper_text/test* |
|
- config_name: paper_tab |
|
data_files: |
|
- split: test |
|
path: paper_tab/test* |
|
- config_name: fin |
|
data_files: |
|
- split: test |
|
path: fin/test* |
|
- config_name: tat |
|
data_files: |
|
- split: test |
|
path: tat/test* |
|
--- |
|
# Dataset Card for UDA-QA
|
|
|
UDA is a benchmark suite for Retrieval-Augmented Generation (RAG) in real-world document analysis, comprising 2,965 documents and 29,590 expert-annotated Q&A pairs.
|
It includes six sub-datasets across three pivotal domains: finance, academia, and knowledge bases. |
|
Each data item within UDA is structured as a document-question-answer triplet.
|
The documents are retained in their original file formats without parsing or segmentation, to mirror the authenticity of real-world applications. |
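
Each sub-dataset is exposed as a separate config with a single `test` split. Below is a minimal loading sketch with the `datasets` library; the hub repo id is an assumption, so substitute the actual id of this repository:

```python
from datasets import load_dataset

# Each UDA config (feta, nq, paper_text, paper_tab, fin, tat) ships only a
# "test" split; load one of them.
# NOTE: the hub id is an assumption -- replace it with this repository's id.
feta = load_dataset("qinchuanhui/UDA-QA", name="feta", split="test")

sample = feta[0]
print(sample["doc_name"], sample["q_uid"])
print(sample["question"], "->", sample["answer"])
```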
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
|
|
|
|
|
|
|
- **Curated by:** Yulong Hui, Tsinghua University |
|
- **Language(s) (NLP):** English |
|
- **License:** CC-BY-SA-4.0 |
|
- **Repository:** https://github.com/qinchuanhui/UDA-Benchmark |
|
|
|
## Uses |
|
|
|
### Direct Use |
|
|
|
Question-answering tasks on complete unstructured documents. |
|
|
|
After loading the dataset, you should also **download the source document files from the folder `src_doc_files`**.
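
The snippet below sketches fetching one of those files with `huggingface_hub`; the repo id and the file path under `src_doc_files` are assumptions, so check the actual folder layout in this repository:

```python
from huggingface_hub import hf_hub_download

# Download a single source document from the dataset repository.
# NOTE: the repo id and file path are assumptions -- adjust them to the
# actual layout of the `src_doc_files` folder.
doc_path = hf_hub_download(
    repo_id="qinchuanhui/UDA-QA",                     # assumed hub id
    repo_type="dataset",
    filename="src_doc_files/fin/example_report.pdf",  # hypothetical path
)
print("Saved to:", doc_path)
```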
|
|
|
|
|
### Extended Use |
|
|
|
- Evaluate the effectiveness of retrieval strategies, using the evidence provided in the `extended_qa_info` folder as ground truth (see the sketch after this list).

- Directly assess the performance of LLMs in numerical reasoning and table reasoning, using the evidence in the `extended_qa_info` folder as context.

- Assess the effectiveness of parsing strategies on unstructured PDF documents.
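
A minimal sketch of the first use case, assuming the evidence in `extended_qa_info` can be read as a list of gold evidence strings per `q_uid` (the exact file format is not specified here):

```python
def evidence_recall(retrieved_chunks, gold_evidence):
    """Fraction of gold evidence snippets found in the retrieved chunks.

    Uses plain string containment as a simple heuristic; token-overlap or
    fuzzy matching may be preferable in practice.
    """
    if not gold_evidence:
        return 0.0
    hits = sum(
        any(evidence in chunk for chunk in retrieved_chunks)
        for evidence in gold_evidence
    )
    return hits / len(gold_evidence)


# Hypothetical usage, where `retrieve` is the strategy under test and
# `gold` maps q_uid -> list of annotated evidence strings:
#   score = evidence_recall(retrieve(question, doc_path), gold[q_uid])
```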
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
|
Field Name | Type | Description | Example
--- | --- | --- | ---
doc_name | string | name of the source document | 1912.01214
q_uid | string | unique id of the question | 9a05a5f4351db75da371f7ac12eb0b03607c4b87
question | string | the raised question | which datasets did they experiment with?
answer <br />(or answer_1, answer_2, ... <br />or short_answer, long_answer) | string | ground-truth answer(s) | Europarl, MultiUN
|
|
|
**Additional Notes:** Some sub-datasets provide multiple ground-truth answers, organized as `answer_1`, `answer_2`, ... (in fin, paper_tab, and paper_text) or as `short_answer` and `long_answer` (in nq). In the tat sub-dataset, each answer is organized as a sequence, because tat involves multi-span Q&A.
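
Since the answer schema varies across configs, evaluation code has to branch on the config name. A small helper sketch based on the fields above:

```python
def gold_answers(example, config_name):
    """Collect the ground-truth answer field(s) of a UDA example."""
    if config_name in ("fin", "paper_tab", "paper_text"):
        keys = ("answer_1", "answer_2", "answer_3")  # fin has no answer_3
    elif config_name == "nq":
        keys = ("short_answer", "long_answer")
    else:  # feta has a single answer; tat stores a sequence of answers
        keys = ("answer",)
    answers = []
    for key in keys:
        value = example.get(key)
        if isinstance(value, list):   # multi-span answers (tat)
            answers.extend(value)
        elif value:                   # skip missing or empty fields
            answers.append(value)
    return answers
```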
|
|
|
|
|
## Dataset Creation |
|
|
|
### Source Data |
|
|
|
|
|
|
#### Data Collection and Processing |
|
|
|
We collect the Q&A labels from openly released datasets (i.e., the source datasets), all of which are annotated by human participants.

Then we conduct a series of essential construction steps, including source-document identification, categorization, filtering, and data transformation.
|
|
|
#### Who are the source data producers? |
|
|
|
[1] Chen, Z., Chen, W., Smiley, C., Shah, S., Borova, I., Langdon, D., Moussa, R., Beane, M., Huang, T.-H., Routledge, B., et al. FinQA: A dataset of numerical reasoning over financial data. arXiv preprint arXiv:2109.00122 (2021).

[2] Zhu, F., Lei, W., Feng, F., Wang, C., Zhang, H., and Chua, T.-S. Towards complex document understanding by discrete reasoning. In Proceedings of the 30th ACM International Conference on Multimedia (2022), pp. 4857–4866.

[3] Dasigi, P., Lo, K., Beltagy, I., Cohan, A., Smith, N. A., and Gardner, M. A dataset of information-seeking questions and answers anchored in research papers. arXiv preprint arXiv:2105.03011 (2021).

[4] Nan, L., Hsieh, C., Mao, Z., Lin, X. V., Verma, N., Zhang, R., Kryściński, W., Schoelkopf, H., Kong, R., Tang, X., et al. FeTaQA: Free-form table question answering. Transactions of the Association for Computational Linguistics 10 (2022), 35–49.

[5] Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Devlin, J., Lee, K., et al. Natural Questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics 7 (2019), 453–466.
|
|
|
|
|
## Considerations for Using the Data |
|
### Personal and Sensitive Information
|
|
|
The dataset does not contain data that might be considered personal, sensitive, or private. The data sources are publicly available reports, academic papers, and Wikipedia pages.
|
|
|
|
|
|
## Dataset Card Contact |
|
|
|
[email protected] |