---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "masked-lm"
- "RoBERTa-large-ca"
- "CaText"
- "Catalan Textual Corpus"
widget:
- text: "El Català és una llengua molt <mask>."
- text: "Salvador Dalí va viure a <mask>."
- text: "La Costa Brava té les millors <mask> d'Espanya."
- text: "El cacaolat és un batut de <mask>."
- text: "<mask> és la capital de la Garrotxa."
- text: "Vaig al <mask> a buscar bolets."
- text: "Antoni Gaudí vas ser un <mask> molt important per la ciutat."
- text: "Catalunya és una referència en <mask> a nivell europeu."

---

# Catalan BERTa large model (roberta-large-ca)

## Table of Contents
<details>
<summary>Click to expand</summary>

- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
  - [Training Data](#training-data)
  - [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
  - [CLUB Benchmark](#club-benchmark)
  - [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)

</details>

## Model description

**roberta-large-ca** is a transformer-based masked language model for the Catalan language. 
It is based on the [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) large architecture 
and was trained on a medium-sized corpus collected from publicly available corpora and web crawls.

## Intended Uses and Limitations

The **roberta-large-ca** model is ready to use out of the box only for masked language modeling, i.e. the Fill Mask task (try the inference API or see the next section).
However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition.
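As an illustration of such fine-tuning, the sketch below uses the Hugging Face `Trainer` API on the TeCla text-classification dataset (see the Evaluation section). The column names (`text`, `label`), split names, and hyperparameters are assumptions for the sketch, not the exact setup used for the reported results.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed dataset layout: 'train'/'validation' splits with 'text' and 'label' columns
dataset = load_dataset("projecte-aina/tecla")
tokenizer = AutoTokenizer.from_pretrained("projecte-aina/roberta-large-ca")

def tokenize(batch):
    # Truncate to the model's maximum sequence length
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

# Number of classes taken from the (assumed) ClassLabel feature
num_labels = dataset["train"].features["label"].num_classes
model = AutoModelForSequenceClassification.from_pretrained(
    "projecte-aina/roberta-large-ca", num_labels=num_labels)

args = TrainingArguments(
    output_dir="roberta-large-ca-tecla",  # hypothetical output path
    learning_rate=2e-5,                   # illustrative hyperparameters
    per_device_train_batch_size=16,
    num_train_epochs=3,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"],
                  tokenizer=tokenizer)
trainer.train()
```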

## How to Use

Here is how to use this model:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline
from pprint import pprint

# Load the tokenizer and the masked language model from the Hugging Face Hub
tokenizer_hf = AutoTokenizer.from_pretrained('projecte-aina/roberta-large-ca')
model = AutoModelForMaskedLM.from_pretrained('projecte-aina/roberta-large-ca')
model.eval()

# Build a fill-mask pipeline and predict the masked token
pipeline = FillMaskPipeline(model, tokenizer_hf)
text = "Em dic <mask>."
res_hf = pipeline(text)
pprint([r['token_str'] for r in res_hf])
```
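Equivalently, the higher-level `pipeline` helper wraps the same steps; a minimal sketch:

```python
from transformers import pipeline

# Build a fill-mask pipeline directly from the Hub model id
unmasker = pipeline("fill-mask", model="projecte-aina/roberta-large-ca")

# Each prediction is a dict with 'sequence', 'score', 'token' and 'token_str'
for pred in unmasker("Em dic <mask>."):
    print(f"{pred['token_str']!r:>12}  {pred['score']:.3f}")
```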

## Training

### Training data

The training corpus combines several corpora gathered from web crawling and from publicly available sources.


| Corpus                  | Size in GB |
|-------------------------|------------|
| Catalan Crawling        | 13.00      |
| Wikipedia               | 1.10       |
| DOGC                    | 0.78       |
| Catalan Open Subtitles  | 0.02       |
| Catalan Oscar           | 4.00       |
| CaWaC                   | 3.60       |
| Cat. General Crawling   | 2.50       |
| Cat. Government Crawling | 0.24      |
| ACN                     | 0.42       |
| Padicat                 | 0.63       |
| Racó Català             | 8.10       |
| Nació Digital           | 0.42       |
| Vilaweb                 | 0.06       |
| Tweets                  | 0.02       |

### Training Procedure

The training corpus was tokenized using a byte-level version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2),
as used in the original [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model, with a vocabulary size of 52,000 tokens. 
Pretraining consists of masked language modeling following the approach of the original RoBERTa-large model,
with the same hyperparameters as in the original work.
Training took a total of 96 hours on 32 NVIDIA V100 GPUs with 16GB of memory each.
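
As a quick sanity check of the tokenizer described above (assuming the tokenizer published with the model matches the one used for pretraining):

```python
from transformers import AutoTokenizer

# Load the byte-level BPE tokenizer released with the model
tok = AutoTokenizer.from_pretrained("projecte-aina/roberta-large-ca")

print(len(tok))                    # vocabulary size, expected around 52,000
print(tok.tokenize("Catalunya"))   # byte-level BPE subword pieces
```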


## Evaluation

### CLUB Benchmark

The BERTa-large model was fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB),
which was created along with the model.

It contains the following tasks and their related datasets:

 1. Named Entity Recognition (NER)

    
    **[NER (AnCora)](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: named entities extracted from the original [AnCora](https://doi.org/10.5281/zenodo.4762030) version,
    filtering out some unconventional entities, such as book titles, and transcribed into a standard CoNLL-IOB format


 2. Part-of-Speech Tagging (POS)
    
    **[POS (AnCora)](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: from the [Universal Dependencies treebank](https://github.com/UniversalDependencies/UD_Catalan-AnCora) of the well-known AnCora corpus.

 3. Text Classification (TC)
     
    **[TeCla](https://huggingface.co/datasets/projecte-aina/tecla)**: consisting of 137k news pieces from the Catalan News Agency ([ACN](https://www.acn.cat/)) corpus, with 30 labels.

 4. Textual Entailment (TE)
     
    **[TE-ca](https://huggingface.co/datasets/projecte-aina/teca)**: consisting of 21,163 pairs of premises and hypotheses, annotated according to the inference relation they have (implication, contradiction, or neutral), extracted from the [Catalan Textual Corpus](https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus).

 5. Semantic Textual Similarity (STS)
    
    **[STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca)**: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them, scraped from the [Catalan Textual Corpus](https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus).

 6. Question Answering (QA):
    
    **[VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad)**: contains 6,282 pairs of questions and answers, sourced from 2,095 Catalan-language articles from VilaWeb newswire text.
    
    **[ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad)**: consists of more than 15,000 questions sourced from Catalan Wikipedia, randomly chosen from a set of 596 articles originally written in Catalan.
   
    **[CatalanQA](https://huggingface.co/datasets/projecte-aina/catalanqa)**: an aggregation of the two previous datasets (VilaQuAD and ViquiQuAD), with 21,427 Q/A pairs balanced by question type, containing one question and one answer per context, although contexts can be repeated multiple times.
    
    **[XQuAD-ca](https://huggingface.co/datasets/projecte-aina/xquad-ca)**: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia used only as a _test set_.
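
Most of these datasets are published on the Hugging Face Hub under the `projecte-aina` organization (see the links above); a minimal loading sketch, assuming the split and column layouts shown on each dataset card:

```python
from datasets import load_dataset

# Text classification, textual entailment and QA datasets from CLUB
tecla = load_dataset("projecte-aina/tecla")
teca = load_dataset("projecte-aina/teca")
catalanqa = load_dataset("projecte-aina/catalanqa")

# Each object is a DatasetDict whose splits match the table below
print(tecla)
print(catalanqa["train"][0])
```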
    
Here are the train/dev/test splits of the datasets:

| Task (Dataset) | Total | Train | Dev  | Test |
|:--|:--|:--|:--|:--|
| NER (AnCora)   | 13,581  | 10,628  | 1,427  | 1,526  |
| POS (AnCora)   | 16,678  | 13,123  | 1,709  | 1,846  |
| STS (STS-ca)   | 3,073   | 2,073   | 500    | 500    |
| TC (TeCla)     | 137,775 | 110,203 | 13,786 | 13,786 |
| TE (TE-ca)     | 21,163  | 16,930  | 2,116  | 2,117  |
| QA (VilaQuAD) | 6,282  | 3,882  | 1,200  | 1,200 |
| QA (ViquiQuAD) | 14,239  | 11,255  | 1,492  | 1,429 |
| QA (CatalanQA) | 21,427  | 17,135  | 2,157  | 2,135 |

### Evaluation Results

| Model       | NER (F1)      | POS (F1)   | STS-ca (Comb)   | TeCla (Acc.) | TE-ca (Acc.) | VilaQuAD (F1/EM)| ViquiQuAD (F1/EM) | CatalanQA (F1/EM) | XQuAD-ca <sup>1</sup> (F1/EM) | 
| ------------|:-------------:| -----:|:------|:------|:-------|:------|:----|:----|:----|
| RoBERTa-large-ca        | **89.82** | **99.02** | **83.41** | **75.46** | **83.61** | **89.34**/75.50 | **89.20**/75.77 | **90.72/79.06** | **73.79**/55.34 |
| RoBERTa-base-ca-v2      | 89.29 | 98.96 | 79.07 | 74.26 | 83.14 | 87.74/72.58 | 88.72/**75.91** | 89.50/76.63 | 73.64/**55.42** |
| BERTa                   | 89.76 | 98.96 | 80.19 | 73.65 | 79.26 | 85.93/70.58 | 87.12/73.11 | 89.17/77.14 | 69.20/51.47 |
| mBERT                   | 86.87 | 98.83 | 74.26 | 69.90 | 74.63 | 82.78/67.33 | 86.89/73.53 | 86.90/74.19 | 68.79/50.80 |
| XLM-RoBERTa             | 86.31 | 98.89 | 61.61 | 70.14 | 33.30 | 86.29/71.83 | 86.88/73.11 | 88.17/75.93 | 72.55/54.16 |

<sup>1</sup> : Trained on CatalanQA, tested on XQuAD-ca.

## Licensing Information

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Citation Information 

If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi  and
      Carrino, Casimiro Pio  and
      Rodriguez-Penagos, Carlos  and
      de Gibert Bonet, Ona  and
      Armentano-Oller, Carme  and
      Gonzalez-Agirre, Aitor  and
      Melero, Maite  and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```

## Funding

This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).

## Contributions

[N/A]