Dataset Card for UKP ASPECT

Dataset Description

Dataset Summary

The UKP ASPECT Corpus includes 3,595 sentence pairs over 28 controversial topics. The sentences were extracted from a large web crawl and identified as arguments for a given topic using the ArgumenText system. The sampling and matching of the sentence pairs is described in the paper. The argument similarity annotations were then collected via crowdsourcing: each crowd worker could choose from four annotation options (the exact guidelines are provided in the appendix of the paper).

Supported Tasks and Leaderboards

This dataset supports the following tasks:

  • Sentence pair classification
  • Topic classification

Languages

English

Dataset Structure

Data Instances

Each instance consists of a topic, a pair of sentences, and an argument similarity label.

{"topic": "3d printing", "sentence_1": "This could greatly increase the quality of life of those currently living in less than ideal conditions.", "sentence_2": "The advent and spread of new technologies, like that of 3D printing can transform our lives in many ways.", "label": "DTORCD"}

Data Fields

  • topic: the topic keywords used to retrieve the documents
  • sentence_1: the first sentence of the pair
  • sentence_2: the second sentence of the pair
  • label: the consolidated crowdsourced gold-standard annotation of the sentence pair (DTORCD, NS, SS, HS)
    • Different Topic/Can’t decide (DTORCD): Either one or both of the sentences belong to a topic different from the given one, or you can’t understand one or both sentences. If you choose this option, you need to very briefly explain why you chose it (e.g. “The second sentence is not grammatical”, “The first sentence is from a different topic”).
    • No Similarity (NS): The two arguments belong to the same topic, but they don’t show any similarity, i.e. they speak about completely different aspects of the topic.
    • Some Similarity (SS): The two arguments belong to the same topic, showing semantic similarity on a few aspects, but the central message is rather different, or one argument is far less specific than the other.
    • High Similarity (HS): The two arguments belong to the same topic, and they speak about the same aspect, e.g. using different words.
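For classification experiments, the four string labels can be mapped to integer ids. A minimal sketch (the particular id ordering here is an arbitrary choice for illustration, not part of the dataset):

```python
# Map the four gold-standard labels to integer ids for classifiers.
# The ordering is arbitrary; any consistent mapping works.
LABEL2ID = {"DTORCD": 0, "NS": 1, "SS": 2, "HS": 3}
ID2LABEL = {i: label for label, i in LABEL2ID.items()}

def encode_label(label: str) -> int:
    """Return the integer id for a gold-standard label string."""
    return LABEL2ID[label]
```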

Data Splits

The dataset currently does not contain standard data splits.
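Since no official splits are provided, one common choice for argument-similarity work is a cross-topic split, where entire topics are held out for evaluation. A minimal sketch, assuming the field names listed above (the number of held-out topics is an arbitrary choice):

```python
import random

def cross_topic_split(examples, n_test_topics=4, seed=42):
    """Split a list of {"topic": ..., ...} dicts by topic.

    Entire topics are held out for the test set, so no topic
    appears in both splits.
    """
    topics = sorted({ex["topic"] for ex in examples})
    rng = random.Random(seed)
    test_topics = set(rng.sample(topics, n_test_topics))
    train = [ex for ex in examples if ex["topic"] not in test_topics]
    test = [ex for ex in examples if ex["topic"] in test_topics]
    return train, test
```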

Dataset Creation

Curation Rationale

This dataset contains sentence pairs annotated with argument similarity labels that can be used to evaluate argument clustering.

Source Data

Initial Data Collection and Normalization

The UKP ASPECT corpus consists of sentences which have been identified as arguments for given topics using the ArgumenText system (Stab et al., 2018). The ArgumenText system expects as input an arbitrary topic (query) and searches a large web crawl for relevant documents. Finally, it classifies all sentences contained in the most relevant documents for a given query into pro, con or non-arguments (with regard to the given topic).

We picked 28 topics related to currently discussed issues from technology and society. To balance the selection of argument pairs with regard to their similarity, we applied a weak supervision approach. For each of the 28 topics, a sampling strategy picks two pro or con argument sentences at random, calculates their similarity using the system by Misra et al. (2016), and keeps the pair with a probability chosen to balance diversity across the entire similarity scale. This was repeated until we reached 3,595 argument pairs, about 130 pairs per topic.
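The sampling strategy above can be sketched as follows. The similarity scorer and the acceptance probabilities here are placeholders, not the actual Misra et al. (2016) system or the exact weights used:

```python
import random

def sample_balanced_pairs(sentences, sim_fn, n_pairs, n_bins=5, seed=0):
    """Repeatedly draw random sentence pairs, score their similarity,
    and keep each pair with a probability that favours under-filled
    similarity bins, so kept pairs cover the whole similarity scale.
    """
    rng = random.Random(seed)
    bin_counts = [0] * n_bins
    kept = []
    while len(kept) < n_pairs:
        s1, s2 = rng.sample(sentences, 2)
        sim = sim_fn(s1, s2)  # assumed to return a score in [0, 1]
        b = min(int(sim * n_bins), n_bins - 1)
        # Accept with probability inversely related to how full this
        # similarity bin already is (floored so sampling terminates).
        target = (len(kept) + 1) / n_bins
        accept_p = max(0.05, 1.0 - bin_counts[b] / target)
        if rng.random() < accept_p:
            bin_counts[b] += 1
            kept.append((s1, s2, sim))
    return kept
```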

Who are the source language producers?

Unidentified contributors to the world wide web.

Annotations

Annotation process

The argument pairs were annotated on a range of three degrees of similarity (no, some, and high similarity) with the help of crowd workers on the Amazon Mechanical Turk platform. To account for unrelated pairs due to the sampling process, crowd workers could choose a fourth option. We collected seven assignments per pair and used Multi-Annotator Competence Estimation (MACE) with a threshold of 1.0 (Hovy et al., 2013) to consolidate votes into a gold standard.
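MACE itself estimates per-annotator competence with an EM procedure; as a simplified illustration of consolidating the seven votes per pair into one label, a plain majority vote looks like this (this is not the actual MACE algorithm):

```python
from collections import Counter

def majority_label(votes):
    """Return the most frequent label among annotator votes.

    Ties are broken alphabetically for determinism; MACE instead
    weights each vote by the annotator's estimated competence.
    """
    counts = Counter(votes)
    top = max(counts.values())
    return min(label for label, c in counts.items() if c == top)
```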

Who are the annotators?

Crowd workers on Amazon Mechanical Turk

Personal and Sensitive Information

This dataset is fully anonymized.

Additional Information

You can download the data via:

```python
from datasets import load_dataset

dataset = load_dataset("UKPLab/UKP_ASPECT")
```

More information about the code and how the data was collected can be found in the paper.

Dataset Curators

Curation is managed by our data manager at UKP.

Licensing Information

CC BY-NC 3.0

Citation Information

Please cite this data using:

@inproceedings{reimers2019classification,
  title={Classification and Clustering of Arguments with Contextualized Word Embeddings},
  author={Reimers, Nils and Schiller, Benjamin and Beck, Tilman and Daxenberger, Johannes and Stab, Christian and Gurevych, Iryna},
  booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
  pages={567--578},
  year={2019}
}

Contributions

Thanks to @buenalaune for adding this dataset.

Tags

annotations_creators:

  • crowdsourced

language:

  • en

language_creators:

  • found

license:

  • cc-by-nc-3.0

multilinguality:

  • monolingual

pretty_name: UKP ASPECT Corpus

size_categories:

  • 1K<n<10K

source_datasets:

  • original

tags:

  • argument pair
  • argument similarity

task_categories:

  • text-classification

task_ids:

  • topic-classification
  • multi-input-text-classification
  • semantic-similarity-classification