---
license: cc-by-nc-nd-4.0
---

# esCorpius: A Massive Spanish Crawling Corpus

## Introduction

In recent years, transformer-based models have led to significant advances in language modelling for natural language processing. However, they require vast amounts of data for (pre-)training, and corpora in languages other than English are scarce. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling, but their Spanish portions have important shortcomings: they are either too small in comparison with other languages, or of low quality owing to sub-optimal cleaning and deduplication. In this paper, we introduce esCorpius, a Spanish crawling corpus obtained from nearly 1 PB of Common Crawl data. It is the most extensive Spanish corpus with this level of quality in the extraction, cleaning and deduplication of web textual content. Our data curation process involves a novel, highly parallel cleaning pipeline and a series of deduplication mechanisms that together preserve the integrity of both document and paragraph boundaries. Additionally, we retain both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius has been released under the CC BY-NC-ND 4.0 license.
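The paragraph above mentions deduplication that preserves document and paragraph boundaries. The sketch below is *not* the esCorpius pipeline; it is a minimal illustration of one way paragraph-level deduplication can keep document structure intact, assuming documents are newline-separated strings and using a simple hash set over normalised paragraphs.

```python
import hashlib

def dedup_paragraphs(docs):
    """Illustrative paragraph-level deduplication (not the esCorpius
    pipeline): drop paragraphs already seen in earlier documents,
    while keeping each surviving document's internal structure."""
    seen = set()
    cleaned = []
    for doc in docs:
        kept = []
        for para in doc.split("\n"):
            # Light normalisation before hashing so trivial whitespace
            # or casing differences do not defeat deduplication.
            key = hashlib.sha1(para.strip().lower().encode("utf-8")).hexdigest()
            if para.strip() and key not in seen:
                seen.add(key)
                kept.append(para)
        if kept:  # drop documents emptied by deduplication
            cleaned.append("\n".join(kept))
    return cleaned

docs = [
    "Hola mundo.\nEste es un párrafo.",
    "Hola mundo.\nOtro párrafo distinto.",
]
print(dedup_paragraphs(docs))
# The repeated "Hola mundo." paragraph is removed from the second document only.
```

A production pipeline at Common Crawl scale would instead use distributed, approximate techniques (e.g. MinHash or suffix-based matching), but the boundary-preserving idea is the same.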

## Statistics

| Corpus | Size (ES) | Docs (ES) | Words (ES) | Lang. | Lang. identifier | Deduplication | License |
|---|---|---|---|---|---|---|---|
| OSCAR 22.01 | 381.9 GB | 51M | 42,829M | Multi | fastText | No | CC-BY-4.0 |
| mC4 | 1,600.0 GB | 416M | 433,000M | Multi | CLD3 | Line | ODC-By-v1.0 |
| CC-100 | 53.3 GB | - | 9,374M | Multi | fastText | - | Common Crawl |
| ParaCrawl v9 | 24.0 GB | - | 4,374M | Multi | CLD2 | Bicleaner | CC0 |
| esCorpius (ours) | 322.5 GB | 104M | 50,773M | ES | CLD2 + fastText | dLHF | CC-BY-NC-ND |

## Citation