carmentano committed on
Commit
5bb7e76
1 Parent(s): 47aee9d

Upload 5 files

Files changed (5)
  1. NLUCat.py +81 -0
  2. README.md +278 -0
  3. dev.json +0 -0
  4. test.json +0 -0
  5. train.json +0 -0
NLUCat.py ADDED
@@ -0,0 +1,81 @@
"""NLUCat dataset."""

import json

import datasets

_HOMEPAGE = ""

_CITATION = """\
"""

_DESCRIPTION = """\
NLUCat - Natural Language Understanding in Catalan
"""

_TRAIN_FILE = "train.json"
_DEV_FILE = "dev.json"
_TEST_FILE = "test.json"


class Nlucat(datasets.GeneratorBasedBuilder):
    """Builder for NLUCat: intent- and slot-annotated Catalan instructions."""

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "example": datasets.Value("string"),
                    "intent": datasets.Value("string"),
                    "slot_text": datasets.Sequence(datasets.Value("string")),
                    "slot_tag": datasets.Sequence(datasets.Value("string")),
                    # Character offsets are integers in the source JSON.
                    "start_char": datasets.Sequence(datasets.Value("int32")),
                    "end_char": datasets.Sequence(datasets.Value("int32")),
                }
            ),
            homepage=_HOMEPAGE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        urls_to_download = {
            "train": _TRAIN_FILE,
            "dev": _DEV_FILE,
            "test": _TEST_FILE,
        }
        downloaded_files = dl_manager.download_and_extract(urls_to_download)

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"], "split": "train"}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"], "split": "validation"}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"], "split": "test"}),
        ]

    def _generate_examples(self, filepath, split):
        """Yields examples."""
        with open(filepath, encoding="utf-8") as f:
            dataset = json.load(f)
        for row in dataset["data"]:
            # Flatten the nested annotation into parallel columns.
            slots = row["annotation"]["slots"]
            yield row["id"], {
                "example": row["example"],
                "intent": row["annotation"]["intent"],
                "slot_text": [slot["Text"] for slot in slots],
                "slot_tag": [slot["Tag"] for slot in slots],
                "start_char": [slot["Start_char"] for slot in slots],
                "end_char": [slot["End_char"] for slot in slots],
            }
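The flattening that `_generate_examples` performs can be sketched standalone on a single record that follows the documented JSON layout. The record below is illustrative: it reuses the example sentence from the dataset card, and its `id` value and single-slot annotation are assumptions for the sketch.

```python
import json

# Illustrative record following the dataset's documented JSON layout;
# the "id" value and single-slot annotation are assumptions for the sketch.
raw = json.loads("""
{"data": [{"id": 0,
           "example": "Demana una ambul\u00e0ncia; la meva dona est\u00e0 de part.",
           "annotation": {"intent": "call_emergency",
                          "slots": [{"Tag": "service", "Text": "ambul\u00e0ncia",
                                     "Start_char": 11, "End_char": 21}]}}]}
""")

rows = {}
for row in raw["data"]:
    slots = row["annotation"]["slots"]
    # Flatten the nested annotation into parallel columns, as the builder does.
    rows[row["id"]] = {
        "example": row["example"],
        "intent": row["annotation"]["intent"],
        "slot_text": [s["Text"] for s in slots],
        "slot_tag": [s["Tag"] for s in slots],
        "start_char": [s["Start_char"] for s in slots],
        "end_char": [s["End_char"] for s in slots],
    }

print(rows[0]["intent"])     # call_emergency
print(rows[0]["slot_text"])  # ['ambulància']
```

Each yielded row thus carries the example text, its intent, and four aligned lists describing the slots.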
README.md CHANGED
@@ -1,3 +1,281 @@
---
annotations_creators:
- expert-generated
language:
- ca
language_creators:
- expert-generated
license: cc-by-4.0
multilinguality:
- monolingual
pretty_name: NLUCat - Natural Language Understanding in Catalan
size_categories:
- 10K<n<100K
source_datasets: []
tags: []
task_categories:
- text-classification
- token-classification
- text-generation
task_ids:
- intent-classification
- named-entity-recognition
- language-modeling
---

# Dataset Card for NLUCat

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [email protected]

### Dataset Summary

NLUCat is a dataset for natural language understanding (NLU) in Catalan. It consists of nearly 12,000 instructions annotated with the most relevant intents and spans. Each instruction is also accompanied by the instructions given to the annotator who wrote it.

The intents cover the usual tasks of a virtual home assistant (activity calendar, IoT, list management, leisure, etc.), but specific intents have also been added to address the social and healthcare needs of vulnerable people (information on administrative procedures, menu and medication reminders, etc.).

The spans are annotated with a tag describing the type of information they contain. The tags are fine-grained, but they can easily be grouped for use in robust systems.

The examples are not only written in Catalan; they also reflect the geographical and cultural reality of the speakers of this language (geographic points, cultural references, etc.).

This dataset can be used to train models for intent classification, span identification, and example generation.

<b>This is a simplified version of the dataset for training and evaluating intent classifiers. The full dataset and the annotation guidelines can be found on [Zenodo](https://zenodo.org/records/10362026).</b>

This dataset can be used for any purpose, whether academic or commercial, under the terms of [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/):
give appropriate credit, provide a link to the license, and indicate if changes were made.

### Supported Tasks and Leaderboards

Intent classification, span identification, and example generation.

### Languages

The dataset is in Catalan (ca-ES).

## Dataset Structure

### Data Instances

Three JSON files, one for each split.

### Data Fields

* example: `str`. The example text
* annotation: `dict`. Annotation of the example
  * intent: `str`. Intent tag
  * slots: `list`. List of slots
    * Tag: `str`. Tag of the slot
    * Text: `str`. Text of the slot
    * Start_char: `int`. Index of the first character of the span
    * End_char: `int`. Index one past the last character of the span (exclusive, as in Python slicing)

#### Example

An example looks as follows:

```
{
  "example": "Demana una ambulància; la meva dona està de part.",
  "annotation": {
    "intent": "call_emergency",
    "slots": [
      {
        "Tag": "service",
        "Text": "ambulància",
        "Start_char": 11,
        "End_char": 21
      },
      {
        "Tag": "situation",
        "Text": "la meva dona està de part",
        "Start_char": 23,
        "End_char": 48
      }
    ]
  }
},
```
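Judging from the example above, the `Start_char`/`End_char` offsets follow Python slicing conventions (start inclusive, end exclusive), so each slot text can be recovered directly from the example string. A minimal sanity check on that record:

```python
example = "Demana una ambulància; la meva dona està de part."
slots = [
    {"Tag": "service", "Text": "ambulància", "Start_char": 11, "End_char": 21},
    {"Tag": "situation", "Text": "la meva dona està de part", "Start_char": 23, "End_char": 48},
]

for slot in slots:
    # Slicing with the annotated offsets must reproduce the slot text exactly.
    assert example[slot["Start_char"]:slot["End_char"]] == slot["Text"]

print("all spans match")
```

The same check can be run over a whole split to validate the annotations before training a span-identification model.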

### Data Splits

* NLUCat.train: 9128 examples
* NLUCat.dev: 1441 examples
* NLUCat.test: 1441 examples

### Statistics

| Intent | test | dev | train | Total |
|---|---|---|---|---|
| alarm_query | 14 | 9 | 68 | 91 |
| alarm_remove | 10 | 12 | 68 | 90 |
| alarm_set | 11 | 10 | 69 | 90 |
| app_end | 8 | 9 | 43 | 60 |
| app_launch | 9 | 7 | 47 | 63 |
| audio_volume_down | 15 | 16 | 105 | 136 |
| audio_volume_mute | 8 | 9 | 62 | 79 |
| audio_volume_up | 14 | 16 | 101 | 131 |
| book restaurant | 31 | 27 | 182 | 240 |
| calendar_query | 34 | 38 | 227 | 299 |
| calendar_remove | 31 | 33 | 211 | 275 |
| calendar_set | 50 | 53 | 340 | 443 |
| call_emergency | 14 | 18 | 111 | 143 |
| call_medicalService | 14 | 11 | 70 | 95 |
| call_person | 23 | 18 | 116 | 157 |
| call_service | 6 | 9 | 45 | 60 |
| compare_places | 6 | 7 | 47 | 60 |
| contact_add | 20 | 22 | 138 | 180 |
| contact_query | 16 | 16 | 89 | 121 |
| cooking_query | 13 | 12 | 65 | 90 |
| cooking_recipe | 9 | 10 | 74 | 93 |
| datetime_convert | 14 | 14 | 95 | 123 |
| datetime_query | 18 | 17 | 112 | 147 |
| general_affirm | 6 | 6 | 18 | 30 |
| general_commandstop | 13 | 13 | 75 | 101 |
| general_confirm | 6 | 6 | 48 | 60 |
| general_dontcare | 8 | 6 | 46 | 60 |
| general_explain | 5 | 5 | 7 | 17 |
| general_greet | 13 | 10 | 67 | 90 |
| general_joke | 10 | 11 | 69 | 90 |
| general_negate | 12 | 9 | 69 | 90 |
| general_praise | 15 | 10 | 65 | 90 |
| general_quirky | 15 | 14 | 99 | 128 |
| general_repeat | 11 | 14 | 65 | 90 |
| generat_explain | 8 | 7 | 58 | 73 |
| iot_cleaning | 11 | 9 | 70 | 90 |
| iot_coffee | 10 | 12 | 68 | 90 |
| iot_hue_lightchange | 9 | 12 | 69 | 90 |
| iot_hue_lightdim | 14 | 12 | 64 | 90 |
| iot_hue_lightoff | 10 | 11 | 70 | 91 |
| iot_hue_lighton | 11 | 14 | 66 | 91 |
| iot_hue_lightup | 10 | 9 | 70 | 89 |
| iot_wemo_off | 11 | 13 | 65 | 89 |
| iot_wemo_on | 6 | 8 | 46 | 60 |
| lists_createoradd | 19 | 16 | 115 | 150 |
| lists_query | 15 | 15 | 92 | 122 |
| lists_remove | 14 | 14 | 91 | 119 |
| medReminder_query | 18 | 17 | 108 | 143 |
| medReminder_set | 17 | 17 | 113 | 147 |
| medicalAppointment_query | 20 | 19 | 114 | 153 |
| medicalAppointment_set | 24 | 23 | 165 | 212 |
| menu_query | 15 | 17 | 113 | 145 |
| message_query | 21 | 20 | 140 | 181 |
| message_send | 26 | 24 | 162 | 212 |
| music_dislikeness | 10 | 9 | 69 | 88 |
| music_likeness | 11 | 9 | 71 | 91 |
| music_query | 22 | 23 | 135 | 180 |
| music_settings | 9 | 9 | 63 | 81 |
| news_query | 19 | 22 | 149 | 190 |
| play_audiobook | 12 | 15 | 93 | 120 |
| play_game | 12 | 11 | 67 | 90 |
| play_music | 41 | 45 | 271 | 357 |
| play_podcasts | 20 | 19 | 121 | 160 |
| play_radio | 20 | 20 | 115 | 155 |
| play_video | 15 | 15 | 90 | 120 |
| qa_currency | 12 | 9 | 69 | 90 |
| qa_definition | 19 | 23 | 147 | 189 |
| qa_factoid | 26 | 24 | 143 | 193 |
| qa_maths | 13 | 12 | 95 | 120 |
| qa_medicalService | 20 | 21 | 117 | 158 |
| qa_procedures | 36 | 33 | 220 | 289 |
| qa_service | 16 | 18 | 112 | 146 |
| qa_sports | 9 | 9 | 72 | 90 |
| qa_stock | 13 | 10 | 67 | 90 |
| recommendation_events | 22 | 22 | 143 | 187 |
| recommendation_locations | 23 | 24 | 157 | 204 |
| recommendation_movies | 18 | 23 | 139 | 180 |
| share_currentLocation | 15 | 13 | 92 | 120 |
| social_post | 19 | 20 | 112 | 151 |
| social_query | 14 | 14 | 96 | 124 |
| takeaway_order | 20 | 25 | 135 | 180 |
| takeaway_query | 7 | 9 | 50 | 66 |
| transport_directions | 28 | 24 | 181 | 233 |
| transport_query | 31 | 31 | 185 | 247 |
| transport_taxi | 26 | 22 | 132 | 180 |
| transport_ticket | 25 | 25 | 160 | 210 |
| transport_traffic | 15 | 17 | 88 | 120 |
| weather_query | 31 | 29 | 189 | 249 |
| *Total* | *1440* | *1440* | *9117* | *11997* |

## Dataset Creation

### Curation Rationale

We created this dataset to contribute to the development of language models in Catalan, a low-resource language.

When creating this dataset, we took into account not only the language but the entire socio-cultural reality of the Catalan-speaking population. Special consideration was also given to the needs of the vulnerable population.

### Source Data

#### Initial Data Collection and Normalization

We commissioned a company to create fictitious examples for this dataset.

#### Who are the source language producers?

We commissioned the writing of the examples to the company [m47 labs](https://www.m47labs.com/).

### Annotations

#### Annotation process

This dataset was elaborated in three steps, taking as a model the process followed by the [NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data) dataset, as explained in the [paper](https://arxiv.org/abs/1903.05566).
* First step: translation or elaboration of the instructions given to the annotators to write the examples.
* Second step: writing the examples. This step also included grammatical correction and normalization of the texts.
* Third step: recording the intents and the slots of each example. In this step, some modifications were made to the annotation guidelines to adjust them to real situations.

#### Who are the annotators?

The drafting of the examples and their annotation were entrusted to the company [m47 labs](https://www.m47labs.com/) through a public tender process.

### Personal and Sensitive Information

No personal or sensitive information is included.

The examples used in the preparation of this dataset are fictitious, and therefore the information shown is not real.

## Considerations for Using the Data

### Social Impact of Dataset

We hope that this dataset will help the development of virtual assistants in Catalan, a language that is often overlooked, and that it will especially help to improve the quality of life of people with special needs.

### Discussion of Biases

When writing the examples, the annotators were asked to take into account the socio-cultural reality (geographic points, artists, cultural references, etc.) of the Catalan-speaking population.
Likewise, they were asked to avoid examples that reinforce the stereotypes that exist in this society: for example, to be careful with the gender or origin of personal names associated with certain activities.

### Other Known Limitations

[N/A]

## Additional Information

### Dataset Curators

Language Technologies Unit at the Barcelona Supercomputing Center ([email protected])

This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).

### Licensing Information

This dataset can be used for any purpose, whether academic or commercial, under the terms of [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/):
give appropriate credit, provide a link to the license, and indicate if changes were made.

### Citation Information

[DOI](https://doi.org/10.5281/zenodo.10362026)

### Contributions

The drafting of the examples and their annotation were entrusted to the company [m47 labs](https://www.m47labs.com/) through a public tender process.
dev.json ADDED
The diff for this file is too large to render. See raw diff
 
test.json ADDED
The diff for this file is too large to render. See raw diff
 
train.json ADDED
The diff for this file is too large to render. See raw diff