---
annotations_creators:
- crowdsourced
- other
language_creators:
- other
- crowdsourced
languages:
- es
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ESnli
size_categories:
- unknown
source_datasets:
- extended|snli
- extended|xnli
- extended|multi_nli
task_categories:
- text-classification
task_ids:
- natural-language-inference
---

# Dataset Card for nli-es

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** https://huggingface.co/datasets/hackathon-pln-es/nli-es/
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

A Spanish Natural Language Inference (NLI) dataset compiled from the following sources:
 - the Spanish slice of the XNLI dataset;
 - a machine-translated Spanish version of the SNLI dataset;
 - a machine-translated Spanish version of the MultiNLI dataset.

### Supported Tasks and Leaderboards

[Needs More Information]

### Languages

The dataset is in Spanish. A small percentage of it contains original Spanish text written by human speakers; the rest was generated by automatic translation.

## Dataset Structure

### Data Instances

Each instance consists of four values: sentence1 (the premise), sentence2 (the hypothesis), a label specifying the relationship between the two ("gold_label"), and the ID of the sentence pair as given in the original dataset ("pairID").

Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it, or "neutral" if it neither implies nor denies it.

{  
 "gold_label": "neutral",  
 "pairID": 1,  
 "sentence1": "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos.",  
 "sentence2": "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."  
}
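
To make the record layout concrete, here is a minimal loading sketch. It assumes the dataset id `hackathon-pln-es/nli-es` (taken from the repository link above) and the Hugging Face `datasets` library; the single `train` split is an assumption based on the Data Splits section below.

```python
from datasets import load_dataset

# Dataset id taken from the repository URL in this card; "train" is assumed to be
# the only available split (see "Data Splits" below).
dataset = load_dataset("hackathon-pln-es/nli-es", split="train")

# Inspect one premise/hypothesis pair and its label.
example = dataset[0]
print(example["sentence1"])
print(example["sentence2"])
print(example["gold_label"])
```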

### Data Fields

gold_label: A string defining the relation between the sentence pair. Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it, or "neutral" if it neither implies nor denies it.

pairID: A string identifying the sentence pair, inherited from the original datasets. NOTE: at the moment we have trouble loading this column, so every string has been replaced with the integer 0 as a placeholder. We hope to have the pairID values back up soon.

sentence1: A string containing one sentence in Spanish, the premise. (See gold_label.)

sentence2: A string containing one sentence in Spanish, the hypothesis. (See gold_label.)
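
Because gold_label is stored as a plain string, a common preprocessing step is to map it to an integer class id before training. A minimal sketch follows; the specific label-to-id mapping is an illustrative convention, not something fixed by the dataset.

```python
from datasets import load_dataset

# Illustrative mapping; the dataset itself does not prescribe integer ids.
label2id = {"entailment": 0, "neutral": 1, "contradiction": 2}

dataset = load_dataset("hackathon-pln-es/nli-es", split="train")

def encode_label(example):
    # Any unexpected label value (e.g. a pair without annotator consensus) maps to -1.
    example["label"] = label2id.get(example["gold_label"], -1)
    return example

dataset = dataset.map(encode_label)
```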



### Data Splits



The whole dataset was used for training. We did not create a separate evaluation split, since evaluation was done on SemEval-2015 Task 2 instead.
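
If you need a held-out set for your own experiments, you can carve one out of the single training split. A hedged sketch using `datasets`' train_test_split; the 10% size and the seed are arbitrary choices:

```python
from datasets import load_dataset

dataset = load_dataset("hackathon-pln-es/nli-es", split="train")

# Hold out 10% of the corpus for validation; the proportion and seed are arbitrary.
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_ds, valid_ds = splits["train"], splits["test"]
print(len(train_ds), len(valid_ds))
```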



## Dataset Creation



### Curation Rationale



This corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating the original SNLI dataset into Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it helps enlarge the amount of data available.
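
For illustration, the English-to-Spanish translation step could look roughly like the sketch below using Argos Translate. This is an assumption about the setup, not the curators' actual script, and the exact API may differ across argostranslate versions; the example sentence is the well-known first premise of SNLI.

```python
import argostranslate.package
import argostranslate.translate

# One-time setup: download and install the English -> Spanish translation package.
argostranslate.package.update_package_index()
available = argostranslate.package.get_available_packages()
en_es = next(p for p in available if p.from_code == "en" and p.to_code == "es")
argostranslate.package.install_from_path(en_es.download())

# Translate a premise from the English source corpus into Spanish.
premise_es = argostranslate.translate.translate(
    "A person on a horse jumps over a broken down airplane.", "en", "es"
)
print(premise_es)
```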



### Source Data



#### Initial Data Collection and Normalization



Please refer to the respective documentation of the original datasets:  

https://nlp.stanford.edu/projects/snli/  

https://arxiv.org/pdf/1809.05053.pdf  

https://cims.nyu.edu/~sbowman/multinli/





#### Who are the source language producers?



Please refer to the respective documentation of the original datasets:  

https://nlp.stanford.edu/projects/snli/  

https://arxiv.org/pdf/1809.05053.pdf  

https://cims.nyu.edu/~sbowman/multinli/



### Annotations



#### Annotation process



Please refer to the respective documentation of the original datasets:  

https://nlp.stanford.edu/projects/snli/  

https://arxiv.org/pdf/1809.05053.pdf  

https://cims.nyu.edu/~sbowman/multinli/



#### Who are the annotators?



Please refer to the respective documentation of the original datasets:  

https://nlp.stanford.edu/projects/snli/  

https://arxiv.org/pdf/1809.05053.pdf  

https://cims.nyu.edu/~sbowman/multinli/



### Personal and Sensitive Information



In general, no sensitive information is conveyed in the sentences.

Please refer to the respective documentation of the original datasets:  

https://nlp.stanford.edu/projects/snli/  

https://arxiv.org/pdf/1809.05053.pdf  

https://cims.nyu.edu/~sbowman/multinli/



## Considerations for Using the Data



### Social Impact of Dataset



The purpose of this dataset is to offer new tools for semantic textual similarity analysis of Spanish sentences.



### Discussion of Biases



Please refer to the respective documentation of the original datasets:  

https://nlp.stanford.edu/projects/snli/  

https://arxiv.org/pdf/1809.05053.pdf  

https://cims.nyu.edu/~sbowman/multinli/



### Other Known Limitations



The translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.

For discussion of the biases and limitations of the original datasets, please refer to their respective documentation:  

https://nlp.stanford.edu/projects/snli/  

https://arxiv.org/pdf/1809.05053.pdf  

https://cims.nyu.edu/~sbowman/multinli/



## Additional Information



### Dataset Curators



The nli-es dataset was put together by Anibal Pérez, Lautaro Gesuelli, Mauricio Mazuecos and Emilio Tomás Ariza.



### Licensing Information



This corpus is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0).

Please refer to the respective documentation of the original datasets for information on their licenses:  

https://nlp.stanford.edu/projects/snli/  

https://arxiv.org/pdf/1809.05053.pdf  

https://cims.nyu.edu/~sbowman/multinli/



### Citation Information



If you need to cite this dataset, you can link to this README.