juletxara committed
Commit 4f8b771
1 Parent(s): c4102ec

add script and readme

Files changed (2)
  1. README.md +187 -1
  2. euscrawl.py +99 -0
README.md CHANGED
@@ -1,3 +1,189 @@
  ---
- license: cc
+ annotations_creators:
+ - no-annotation
+ language:
+ - eu
+ language_creators:
+ - found
+ license:
+ - cc
+ multilinguality:
+ - monolingual
+ pretty_name: EusCrawl
+ size_categories:
+ - 10M<n<100M
+ source_datasets:
+ - original
+ tags:
+ - high-quality
+ - scraping
+ task_categories:
+ - text-generation
+ - fill-mask
+ task_ids:
+ - language-modeling
+ - masked-language-modeling
+ dataset_info:
+   features:
+   - name: id
+     dtype: int32
+   - name: title
+     dtype: string
+   - name: text
+     dtype: string
+   - name: source
+     dtype: string
+   - name: license
+     dtype: string
+   - name: url
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 2314407002
+     num_examples: 1724544
+   download_size: 728281801
+   dataset_size: 2314407002
  ---
+
+ # Dataset Card for EusCrawl
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://ixa.ehu.eus/euscrawl/
+ - **Repository:**
+ - **Paper:** https://arxiv.org/abs/2203.08111
+ - **Leaderboard:**
+ - **Point of Contact:** [email protected]
+
+ ### Dataset Summary
+
+ EusCrawl (http://www.ixa.eus/euscrawl/) is a high-quality corpus for
+ Basque comprising 12.5 million documents and 423 million tokens,
+ totalling 2.1 GiB of uncompressed text. EusCrawl was built using
+ ad-hoc scrapers to extract text from 33 Basque websites with
+ high-quality content, resulting in cleaner text compared to
+ general-purpose approaches.
+
+ ### Supported Tasks and Leaderboards
+
+ EusCrawl is intended for pretraining models for language modeling or masked language modeling.
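The card does not show a loading snippet, so here is a minimal sketch using the `datasets` library. The repository id `juletxara/euscrawl` is an assumption based on the uploader's username, not confirmed by this card; adjust it to the actual Hub location.

```python
def load_euscrawl(repo_id: str = "juletxara/euscrawl", split: str = "train"):
    """Load EusCrawl from the Hugging Face Hub.

    The default repo_id is a guess; change it if the dataset lives
    under a different namespace.
    """
    from datasets import load_dataset  # pip install datasets

    return load_dataset(repo_id, split=split)


# Columns yielded by the loader, per the dataset card metadata:
EUSCRAWL_COLUMNS = ["id", "title", "text", "source", "license", "url"]
```

Note that the first call downloads and extracts the full archive (roughly 728 MB, per the `download_size` in the metadata above).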
+
+ ### Languages
+
+ Basque (eu)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance corresponds to one scraped document and carries the document's title, text, source website, license variant, and original URL, along with an integer identifier.
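A hypothetical instance (all field values invented for illustration) would look like:

```json
{
  "id": 0,
  "title": "Adibide artikulu bat",
  "text": "Hau euskarazko adibide-testu bat da.",
  "source": "example-website",
  "license": "cc-by",
  "url": "https://example.eus/artikulua"
}
```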
+
+ ### Data Fields
+
+ - `id`: example identifier (int32)
+ - `title`: document title (string)
+ - `text`: document text (string)
+ - `source`: name of the source website (string)
+ - `license`: Creative Commons license variant under which the document was published (string)
+ - `url`: URL of the original document (string)
+
+ ### Data Splits
+
+ The dataset has a single `train` split with 1,724,544 examples (2,314,407,002 bytes of data; 728,281,801 bytes to download).
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ The dataset contains no annotations.
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ We do not claim ownership of any document in the corpus. All documents
+ we collected were published under a Creative Commons license on their
+ original website, and the specific variant can be found in the
+ "license" field of each document. If you believe that our data
+ contains material that you own and do not wish to be reproduced here,
+ please contact Aitor Soroa at [email protected].
+
+ ### Citation Information
+
+ If you use our corpus or models for academic research, please cite the paper in question:
+
+ @misc{artetxe2022euscrawl,
+   title={Does corpus quality really matter for low-resource languages?},
+   author={Mikel Artetxe and Itziar Aldabe and Rodrigo Agerri and
+           Olatz Perez-de-Viñaspre and Aitor Soroa},
+   year={2022},
+   eprint={2203.08111},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+
+ ### Contributions
+
+ Thanks to [@juletx](https://github.com/juletx) for adding this dataset.
euscrawl.py ADDED
@@ -0,0 +1,99 @@
+ """EusCrawl dataset."""
+
+ import json
+
+ import datasets
+
+
+ _DESCRIPTION = """\
+ EusCrawl (http://www.ixa.eus/euscrawl/) is a high-quality corpus for
+ Basque comprising 12.5 million documents and 423 million tokens,
+ totalling 2.1 GiB of uncompressed text. EusCrawl was built using
+ ad-hoc scrapers to extract text from 33 Basque websites with
+ high-quality content, resulting in cleaner text compared to
+ general-purpose approaches.
+
+ We do not claim ownership of any document in the corpus. All documents
+ we collected were published under a Creative Commons license on their
+ original website, and the specific variant can be found in the
+ "license" field of each document. If you believe that our data
+ contains material that you own and do not wish to be reproduced here,
+ please contact Aitor Soroa at [email protected].
+
+ For more details about the corpus, refer to our paper "Artetxe M.,
+ Aldabe I., Agerri R., Perez-de-Viñaspre O., Soroa A. (2022). Does
+ Corpus Quality Really Matter for Low-Resource Languages?"
+ https://arxiv.org/abs/2203.08111
+
+ If you use our corpus or models for academic research, please cite the paper in question:
+ @misc{artetxe2022euscrawl,
+   title={Does corpus quality really matter for low-resource languages?},
+   author={Mikel Artetxe and Itziar Aldabe and Rodrigo Agerri and Olatz Perez-de-Viñaspre and Aitor Soroa},
+   year={2022},
+   eprint={2203.08111},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+
+ For questions please contact Aitor Soroa at [email protected].
+ """
+
+ _HOMEPAGE_URL = "https://ixa.ehu.eus/euscrawl/"
+
+ _CITATION = """\
+ @misc{artetxe2022euscrawl,
+   title={Does corpus quality really matter for low-resource languages?},
+   author={Mikel Artetxe and Itziar Aldabe and Rodrigo Agerri and
+           Olatz Perez-de-Viñaspre and Aitor Soroa},
+   year={2022},
+   eprint={2203.08111},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ """
+
+ _URL = "http://ixa.ehu.eus/euscrawl/files/euscrawl-v1-free-jsonl.tar.bz2"
+ _FILEPATH = "euscrawl-v1-free-jsonl/euscrawl-v1.free.jsonl"
+
+
+ class EusCrawl(datasets.GeneratorBasedBuilder):
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("int32"),
+                     "title": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                     "source": datasets.Value("string"),
+                     "license": datasets.Value("string"),
+                     "url": datasets.Value("string"),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE_URL,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         path = dl_manager.download_and_extract(_URL)
+         filepath = f"{path}/{_FILEPATH}"
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": filepath},
+             )
+         ]
+
+     def _generate_examples(self, filepath):
+         with open(filepath, encoding="utf-8") as f:
+             for idx, line in enumerate(f):
+                 data = json.loads(line)
+                 # default to an empty string if a field is missing
+                 yield idx, {
+                     "id": idx,
+                     "title": data.get("title", ""),
+                     "text": data.get("text", ""),
+                     "source": data.get("source", ""),
+                     "license": data.get("license", ""),
+                     "url": data.get("url", ""),
+                 }
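The per-document parsing in `_generate_examples` can be exercised on a small in-memory sample without downloading the archive. The JSONL records below are invented for illustration; the second one is missing `title` to show the empty-string fallback.

```python
import json

# Two hypothetical JSONL records mimicking the EusCrawl format.
sample_jsonl = [
    '{"title": "Kaixo", "text": "Kaixo mundua!", "source": "example", '
    '"license": "cc-by", "url": "https://example.eus/1"}',
    '{"text": "Adibide bat.", "source": "example", '
    '"license": "cc-by-sa", "url": "https://example.eus/2"}',
]

examples = []
for idx, line in enumerate(sample_jsonl):
    data = json.loads(line)
    # Mirror the loader: every field defaults to "" when absent.
    examples.append({
        "id": idx,
        "title": data.get("title", ""),
        "text": data.get("text", ""),
        "source": data.get("source", ""),
        "license": data.get("license", ""),
        "url": data.get("url", ""),
    })
```

After running this, `examples[1]["title"]` is `""` because the second record has no `title` key.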