---
language:
- en
pretty_name: CANNOT
---

# Dataset Card for CANNOT

## Dataset Description

- **Homepage:** https://github.com/dmlls/cannot-dataset
- **Repository:** https://github.com/dmlls/cannot-dataset
- **Paper:** tba
### Dataset Summary

**CANNOT** is a dataset that focuses on negated textual pairs. It currently
contains **77,376 samples**, of which roughly half are negated pairs of
sentences, and the other half are not (they are paraphrased versions of each
other).

The most frequent type of negation in the dataset is verbal negation (e.g.,
will → won't), although it also contains pairs with antonyms (e.g., cold → hot).

### Languages

CANNOT includes exclusively texts in **English**.
## Dataset Structure

The dataset is given as a
[`.tsv`](https://en.wikipedia.org/wiki/Tab-separated_values) file with the
following structure:

| premise     | hypothesis                                         | label |
|:------------|:---------------------------------------------------|:-----:|
| A sentence. | An equivalent, non-negated sentence (paraphrased). |   0   |
| A sentence. | The sentence negated.                              |   1   |
The dataset can be easily loaded into a Pandas DataFrame by running:

```python
import pandas as pd

# The file is tab-separated, with the columns 'premise', 'hypothesis', 'label'.
dataset = pd.read_csv('negation_dataset_v1.0.tsv', sep='\t')
```
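
As a quick sanity check, the label distribution of the loaded DataFrame can be
inspected directly (a minimal sketch building on the snippet above):

```python
# Label 1 marks negated pairs, label 0 paraphrased (non-negated) pairs.
print(dataset['label'].value_counts())
```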
## Dataset Creation

The dataset has been created by cleaning up and merging the following datasets:

1. _Not another Negation Benchmark: The NaN-NLI Test Suite for Sub-clausal
   Negation_ (see
   [`datasets/nan-nli`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/nan-nli)).

2. _GLUE Diagnostic Dataset_ (see
   [`datasets/glue-diagnostic`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/glue-diagnostic)).

3. _Automated Fact-Checking of Claims from Wikipedia_ (see
   [`datasets/wikifactcheck-english`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/wikifactcheck-english)).

4. _From Group to Individual Labels Using Deep Features_ (see
   [`datasets/sentiment-labelled-sentences`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/sentiment-labelled-sentences)).
   In this case, the negated sentences were obtained with the Python module
   [`negate`](https://github.com/dmlls/negate); a minimal usage sketch follows
   this list.
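
The following sketch assumes the `Negator` class and its `negate_sentence`
method described in the `negate` documentation:

```python
from negate import Negator

# Load the negator (uses the default English model).
negator = Negator()

# Produce the negated version of a sentence.
print(negator.negate_sentence("An apple a day keeps the doctor away."))
# e.g., "An apple a day doesn't keep the doctor away."
```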
Additionally, for each of the negated samples, another pair of non-negated
sentences has been added by paraphrasing them with the pre-trained model
[`🤗tuner007/pegasus_paraphrase`](https://huggingface.co/tuner007/pegasus_paraphrase).
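
As an illustration, the sketch below uses the standard 🤗 Transformers Pegasus
classes to generate paraphrases with this model; it is an approximation, not
necessarily the exact script used to build the dataset:

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "tuner007/pegasus_paraphrase"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

def paraphrase(sentence, num_return_sequences=1):
    # Hypothetical helper (not from the dataset's build scripts): tokenize
    # the input and generate paraphrases with beam search.
    batch = tokenizer([sentence], truncation=True, padding="longest",
                      max_length=60, return_tensors="pt")
    generated = model.generate(**batch, max_length=60, num_beams=10,
                               num_return_sequences=num_return_sequences)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

print(paraphrase("The sky is blue today."))
```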
Furthermore, the dataset from _It Is Not Easy To Detect Paraphrases: Analysing
Semantic Similarity With Antonyms and Negation Using the New SemAntoNeg
Benchmark_ (see
[`datasets/antonym-substitution`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/antonym-substitution))
has also been included. This dataset already provides both the paraphrased and
negated version of each premise, so no further processing was needed.

Finally, the swapped version of each pair (premise ⇋ hypothesis) has also been
included, and any duplicates have been removed, as sketched below.
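
The following is a minimal sketch of this augmentation step, assuming the
pairs are held in a Pandas DataFrame such as `dataset` from the loading
example above (the actual build script may differ):

```python
import pandas as pd

# Swap premise and hypothesis for every pair; the label is unchanged.
swapped = dataset.rename(columns={"premise": "hypothesis",
                                  "hypothesis": "premise"})

# Combine original and swapped pairs, then drop exact duplicates.
augmented = pd.concat([dataset, swapped], ignore_index=True)
augmented = augmented.drop_duplicates(subset=["premise", "hypothesis"])
```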
The contribution of each of these individual datasets to the final CANNOT
dataset is:

| Dataset                                             |    Samples |
|:----------------------------------------------------|-----------:|
| Not another Negation Benchmark                      |        118 |
| GLUE Diagnostic Dataset                             |        154 |
| Automated Fact-Checking of Claims from Wikipedia    |     14,970 |
| From Group to Individual Labels Using Deep Features |      2,110 |
| It Is Not Easy To Detect Paraphrases                |      8,597 |
| <p align="right"><b>Total</b></p>                   | **25,949** |

_Note_: The numbers above include only the original samples taken from each
dataset; the paraphrased and swapped pairs described above are not counted in
these totals.
## Additional Information

### Licensing Information

TODO

### Citation Information

tba

### Contributions

Contributions to the dataset can be submitted through the [project repository](https://github.com/dmlls/cannot-dataset). |