---
license: unknown
task_categories:
- text-classification
language:
- nl
pretty_name: Dutch CoLA
tags:
- croissant
---

**Dutch CoLA** is a **c**orpus **o**f **l**inguistic **a**cceptability for Dutch: a dataset consisting of sentences in Dutch, each marked as either acceptable (class 1) or unacceptable (class 0). These sentences are collected from existing descriptions of Dutch grammar (see sources below) with expert-annotated acceptability labels.

Dutch CoLA is part of a group project by students of the BA Information Science program at the University of Groningen. People involved (alphabetical order):



* Abdi, Silvana
* Brouwer, Hylke
* Elzinga, Martine
* Gunput, Shenza
* Huisman, Sem
* Krooneman, Collin
* Poot, David
* Top, Jelmer
* Weideman, Cain
* Lisa Bylinina (supervisor)

The dataset format roughly follows that of [English CoLA](https://nyu-mll.github.io/CoLA/) and contains the following fields:



1. **Source** of the example (encoded as defined below)
2. **Original ID:** example number in the original source (encoded as defined below)
3. **Acceptability**: 0 (unacceptable) or 1 (acceptable)
4. **Original annotation**: acceptability label of the sentence in the original source (can be empty, ‘*’, ‘??’, ‘?’ etc.)
5. **Sentence**: the actual sentence. It may appear exactly as in the source, or have some linguistic notation removed and/or some material added to complete it to a full sentence.
6. **Material added**: 0 (if the original example didn’t have to be completed to be a full sentence) or 1 (if some material was added compared to the example in the source to make it a full sentence)

The dataset is split into 4 subsets:



* **Train** (train.csv): 19,925 rows (unbalanced)
* **Validation** (val.csv): 2,400 rows (balanced)
* **Test** (test.csv): 2,400 rows (balanced)
* **Intermediate** (intermediate.csv): 1,200 rows; examples with intermediate original acceptability labels (‘?’ and ‘(?)’). The ‘Acceptability’ field contains 0 for all of them.
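The splits are plain CSV files, so they can be inspected with standard tools. A minimal sketch with pandas, using a toy inline snippet (the exact column headers in the released files are an assumption based on the field list above):

```python
import io

import pandas as pd

# Toy CSV snippet mimicking the schema described above; the column
# headers are an assumption, not copied from the released files.
csv_text = """Source,Original ID,Acceptability,Original annotation,Sentence,Material added
SoD-Zw,2.1,1,,Dat wist ik niet.,0
SoD-Zw,2.1,0,*,Dit wist ik niet.,0
"""

# keep_default_na=False keeps an empty 'Original annotation' as "" rather than NaN
df = pd.read_csv(io.StringIO(csv_text), keep_default_na=False)
print(df["Acceptability"].value_counts().to_dict())  # label distribution
```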

Legend for source encoding:


<table>
<tr>
<td><strong>Source code</strong>
</td>
<td><strong>Source</strong>
</td>
</tr>
<tr>
<td>SoD-Zw
</td>
<td>Zwart, J. W. (2011). <em>The syntax of Dutch</em>. Cambridge University Press.
</td>
</tr>
<tr>
<td>SoD-Noun1
</td>
<td>Keizer, E., & Broekhuis, H. (2012). <em>Syntax of Dutch: Nouns and Noun Phrases. Volume 1</em>. Amsterdam University Press.
</td>
</tr>
<tr>
<td>SoD-Noun2
</td>
<td>Dikken, M. D., & Broekhuis, H. (2012). <em>Syntax of Dutch: Nouns and Noun Phrases. Volume 2</em>. Amsterdam University Press.
</td>
</tr>
<tr>
<td>SoD-Adj
</td>
<td>Broekhuis, H. (2013). <em>Syntax of Dutch: Adjectives and adjective phrases</em>. Amsterdam University Press.
</td>
</tr>
<tr>
<td>SoD-Adp
</td>
<td>Broekhuis, H. (2013). <em>Syntax of Dutch: Adpositions and adpositional phrases</em>. Amsterdam University Press.
</td>
</tr>
<tr>
<td>SoD-Verb1
</td>
<td>Vos, R., Broekhuis, H., & Corver, N. (2015). <em>Syntax of Dutch: Verbs and Verb Phrases. Volume 1</em>. Amsterdam University Press.
</td>
</tr>
<tr>
<td>SoD-Verb2
</td>
<td>Broekhuis, H., & Corver, N. (2015). <em>Syntax of Dutch: Verbs and Verb Phrases. Volume 2</em>. Amsterdam University Press.
</td>
</tr>
<tr>
<td>SoD-Verb3
</td>
<td>Broekhuis, H., & Corver, N. (2016). <em>Syntax of Dutch: Verbs and Verb Phrases. Volume 3</em>. Amsterdam University Press.
</td>
</tr>
<tr>
<td>SoD-Coord
</td>
<td>Broekhuis, H., & Corver, N. (2019). <em>Syntax of Dutch: Coordination and Ellipsis</em>. Amsterdam University Press.
</td>
</tr>
</table>


General guidelines that were followed:



* The corpus contains sentences in Dutch, labelled 0 (“not acceptable”) or 1 (“acceptable”). These labels correspond to the original judgments in the sources:
  * **0**: the original acceptability label was *, ?* or ??;
    * We also mark the original labels ? and (?) as 0, but these examples are later split off into a separate file;
  * **1**: the original acceptability label was empty.
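As a sketch, this mapping from original annotation to Dutch CoLA label looks like the following (hypothetical helpers, not the project's actual tooling):

```python
INTERMEDIATE = {"?", "(?)"}      # 0, but routed to intermediate.csv
NOT_COLLECTED = {"#", "%", "$"}  # never collected (see the next guideline)

def map_label(original_annotation):
    """Map an original source annotation to a 0/1 acceptability label.

    Returns None for annotations that are not collected at all.
    An empty annotation means acceptable (1); anything else is 0.
    """
    if original_annotation in NOT_COLLECTED:
        return None
    return 1 if original_annotation == "" else 0

def goes_to_intermediate(original_annotation):
    """True if the example belongs in the separate intermediate split."""
    return original_annotation in INTERMEDIATE
```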
* We don’t collect examples that are marked with #, % or $.
* We ignore sentences that are marked as dialectal, colloquial or otherwise non-standard Dutch. We don’t record them at all. Alas!
* We aim to collect **full sentences**. If an example in the source is not a full sentence, but a noun phrase or some other fragment (_three dogs_ or something like _…that we called him_), we make it into a full sentence in the most neutral way possible, and mark this fact in a separate column.
* We keep only plain text, written in the simple conventional way. This means we remove **boldface**, _italics_, <span style="text-decoration:underline;">underlining</span> etc.
* If the example contains a translation into English, morpheme-by-morpheme glosses etc., we don’t include any of this – just the actual example sentence in Dutch. Sometimes the example has morphemes separated from each other with a dash – we remove these dashes too. Example:

		Tasman	heeft		Nieuw Zeeland	ontdek-t

		Tasman	have:3SG	New Zealand		GE:discover-D

		‘Tasman discovered New Zealand.’

We record this sentence as ‘Tasman heeft Nieuw Zeeland ontdekt.’
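Removing morpheme dashes can be sketched with a regex, though a blind rule like this would also join genuine hyphenated compounds, so in practice it takes manual judgement:

```python
import re

def strip_morpheme_dashes(sentence):
    # Join morphemes that the source separated with a dash
    # ("ontdek-t" -> "ontdekt"). Note that a real compound such as
    # "Noord-Holland" would also be joined, so apply with care.
    return re.sub(r"(?<=\w)-(?=\w)", "", sentence)

print(strip_morpheme_dashes("Tasman heeft Nieuw Zeeland ontdek-t."))
# -> Tasman heeft Nieuw Zeeland ontdekt.
```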


* We remove constituency brackets and other linguistic annotation – we just keep plain Dutch!
* If the example in the source uses shortcuts to write more than one Dutch sentence in a compact way, with parentheses, slashes etc., we expand this notation and end up with more than one sentence. Example:

        Jan laat [Marie (*te) vertrekken].

We record this as two sentences:

        ‘Jan laat Marie vertrekken’ – with acceptability label 1

        ‘Jan laat Marie te vertrekken’ – with acceptability label 0

Another example:

        {Dat / *dit} wist ik niet.

This is recorded as two sentences:

        ‘Dat wist ik niet.’ – with acceptability label 1

        ‘Dit wist ik niet.’ – with acceptability label 0

Yet another example:

        Jan stuurt <de koningin> een verzoekschrift <aan de koningin>.

This is recorded as two sentences:

        ‘Jan stuurt de koningin een verzoekschrift’

        ‘Jan stuurt een verzoekschrift aan de koningin’
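For the simplest brace-and-slash case, the expansion can be sketched as follows (an illustrative helper covering only a single `{A / *B}` group; real source examples often need manual handling):

```python
import re

def expand_braces(example):
    """Expand a single '{A / *B}' alternation into (sentence, label) pairs.

    Starred options are recorded as unacceptable (0), unstarred ones
    as acceptable (1). Examples without braces pass through as-is.
    """
    m = re.search(r"\{([^}]*)\}", example)
    if m is None:
        return [(example, 1)]
    results = []
    for option in (o.strip() for o in m.group(1).split("/")):
        label = 0 if option.startswith("*") else 1
        text = example[:m.start()] + option.lstrip("*") + example[m.end():]
        if m.start() == 0:  # re-capitalize a sentence-initial alternation
            text = text[0].upper() + text[1:]
        results.append((text, label))
    return results

print(expand_braces("{Dat / *dit} wist ik niet."))
# -> [('Dat wist ik niet.', 1), ('Dit wist ik niet.', 0)]
```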

* We ignore empty words that are added to the example as a means to make explicit its hidden grammatical structure. In the following example, PRO is a sort of unpronounced, silent, pronoun:

        Jan probeert [PRO morgen te komen].

We record this sentence as ‘Jan probeert morgen te komen’, without ‘PRO’ (or square brackets, for that matter!).
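Stripping this kind of annotation can be sketched like so (the token list is illustrative; real sources use more empty categories than just PRO):

```python
import re

def strip_silent_elements(sentence):
    # Drop square brackets and the silent pronoun 'PRO',
    # then collapse any leftover whitespace.
    s = sentence.replace("[", "").replace("]", "")
    s = re.sub(r"\bPRO\b", "", s)
    return re.sub(r"\s+", " ", s).strip()

print(strip_silent_elements("Jan probeert [PRO morgen te komen]."))
# -> Jan probeert morgen te komen.
```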

* If a sentence is indicated as acceptable under some of its potential readings, we record it as acceptable.

        a. *Slaap ze! (sleep them)

        b. Slaap ze! (sleep well)

This sentence is recorded with acceptability label 1.


* **Example numbering**: We keep track of the number of the example in the original source. SoD-Zw has example numbers that track the chapter number; we keep them as they are. Other sources (SoD-Coord, for instance) restart example numbers from 1 in each chapter. In this case, we prepend the chapter number to the original ID: example 2 from Chapter 2 then becomes 2.2.
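The numbering rule can be sketched as follows (a hypothetical helper mirroring the convention above, not the project's actual tooling):

```python
def original_id(example_number, chapter=None):
    """Build the 'Original ID' field.

    Sources whose numbering restarts each chapter get the chapter
    number prepended; sources with book-wide numbering (like SoD-Zw)
    keep the number as-is.
    """
    if chapter is None:
        return str(example_number)
    return f"{chapter}.{example_number}"

print(original_id(2, chapter=2))  # -> 2.2
print(original_id(45))           # -> 45
```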



The sentences in this dataset are extracted from the published works listed above, and copyright (where applicable) remains with the original authors or publishers. We expect that research use is legal, but make no guarantee of this.