---
language:
- en
pretty_name: CANNOT
---
# Dataset Card for CANNOT

## Dataset Description

- **Homepage:** https://github.com/dmlls/cannot-dataset
- **Repository:** https://github.com/dmlls/cannot-dataset
- **Paper:** tba

### Dataset Summary


**CANNOT** is a dataset that focuses on negated textual pairs. It currently
contains **77,376 samples**, of which roughly half are negated pairs of
sentences, and the other half are not (they are paraphrased versions of each
other).

The most frequent type of negation in the dataset is verbal negation (e.g.,
will → won't), although it also contains pairs negated with antonyms (e.g.,
cold → hot).

### Languages
CANNOT includes exclusively texts in **English**.

## Dataset Structure

The dataset is given as a
[`.tsv`](https://en.wikipedia.org/wiki/Tab-separated_values) file with the
following structure:

| premise     | hypothesis                                         | label |
|:------------|:---------------------------------------------------|:-----:|
| A sentence. | An equivalent, non-negated sentence (paraphrased). | 0     |
| A sentence. | The sentence negated.                              | 1     |


The dataset can be easily loaded into a Pandas DataFrame by running:

```python
import pandas as pd

# Read the tab-separated file into a DataFrame.
dataset = pd.read_csv('negation_dataset_v1.0.tsv', sep='\t')
```
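
Alternatively, a local copy of the file can be loaded with the 🤗 Datasets
library through the generic `csv` builder. This is a minimal sketch; the file
name is assumed to match the one above:

```python
from datasets import load_dataset

# Parse the tab-separated file with the generic "csv" loading script
# (the delimiter is forwarded to pandas under the hood).
dataset = load_dataset(
    'csv',
    data_files='negation_dataset_v1.0.tsv',
    delimiter='\t',
)

print(dataset['train'][0])  # {'premise': ..., 'hypothesis': ..., 'label': ...}
```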

## Dataset Creation


The dataset has been created by cleaning up and merging the following datasets:

1. _Not another Negation Benchmark: The NaN-NLI Test Suite for Sub-clausal
Negation_ (see
[`datasets/nan-nli`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/nan-nli)).

2. _GLUE Diagnostic Dataset_ (see
[`datasets/glue-diagnostic`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/glue-diagnostic)).

3. _Automated Fact-Checking of Claims from Wikipedia_ (see
[`datasets/wikifactcheck-english`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/wikifactcheck-english)).

4. _From Group to Individual Labels Using Deep Features_ (see
[`datasets/sentiment-labelled-sentences`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/sentiment-labelled-sentences)).
In this case, the negated sentences were obtained by using the Python module
[`negate`](https://github.com/dmlls/negate); a brief usage sketch is shown
below the list.
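
For illustration only (this is a minimal sketch, not the script used to build
the dataset), negating a single sentence with `negate` looks roughly as
follows; the `Negator` class and `negate_sentence` method follow the module's
README, but exact parameters may vary between versions:

```python
from negate import Negator

# Rule-based sentence negator (assumed default configuration).
negator = Negator()

sentence = "An apple a day keeps the doctor away."
print(negator.negate_sentence(sentence))
# Expected output (approximately): "An apple a day doesn't keep the doctor away."
```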


Additionally, for each of the negated samples, another pair of non-negated
sentences has been added by paraphrasing them with the pre-trained model
[`🤗tuner007/pegasus_paraphrase`](https://huggingface.co/tuner007/pegasus_paraphrase).
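
For reference, a paraphrase can be generated with this model roughly as shown
below. This sketch follows the usage pattern from the model card; the
generation parameters (`max_length`, `num_beams`) are assumptions, not the
exact values used to build CANNOT:

```python
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = 'tuner007/pegasus_paraphrase'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

def paraphrase(sentence, num_return_sequences=1):
    """Generate paraphrases of `sentence` with beam search."""
    batch = tokenizer([sentence], truncation=True, padding='longest',
                      max_length=60, return_tensors='pt')
    with torch.no_grad():
        generated = model.generate(**batch, max_length=60, num_beams=10,
                                   num_return_sequences=num_return_sequences)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

print(paraphrase('The weather is not looking good today.'))
```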

Furthermore, the dataset from _It Is Not Easy To Detect Paraphrases: Analysing
Semantic Similarity With Antonyms and Negation Using the New SemAntoNeg
Benchmark_ (see
[`datasets/antonym-substitution`](https://github.com/dmlls/cannot-dataset/tree/main/datasets/antonym-substitution))
has also been included. This dataset already provides both the paraphrased and
negated version for each premise, so no further processing was needed.

Finally, the swapped version of each pair (premise ⇋ hypothesis) has also been
included, and any duplicates have been removed.
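
As a rough illustration of this last step (a sketch only, using pandas and the
column names from the table above; `pairs` stands for the merged pairs before
swapping):

```python
import pandas as pd

# `pairs` is assumed to hold the merged premise/hypothesis/label pairs.
pairs = pd.read_csv('negation_dataset_v1.0.tsv', sep='\t')

# Build the swapped version of each pair (premise ⇋ hypothesis) ...
swapped = pairs.rename(columns={'premise': 'hypothesis',
                                'hypothesis': 'premise'})
swapped = swapped[['premise', 'hypothesis', 'label']]

# ... append it to the original pairs and drop exact duplicates.
final = pd.concat([pairs, swapped], ignore_index=True).drop_duplicates()
```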

The contribution of each of these individual datasets to the final CANNOT
dataset is:

| Dataset                                                                   | Samples    |
|:--------------------------------------------------------------------------|-----------:|
| Not another Negation Benchmark                                            |      118   |
| GLUE Diagnostic Dataset                                                   |      154   |
| Automated Fact-Checking of Claims from Wikipedia                          |   14,970   |
| From Group to Individual Labels Using Deep Features                       |    2,110   |
| It Is Not Easy To Detect Paraphrases                                      |    8,597   |
| <p align="right"><b>Total</b></p>                                         | **25,949** |

_Note_: The numbers above include only the original samples taken from each
source dataset, i.e., before the paraphrased and swapped pairs described above
were added.


## Additional Information

### Licensing Information

TODO

### Citation Information

tba

### Contributions

Contributions to the dataset can be submitted through the [project repository](https://github.com/dmlls/cannot-dataset).