# ELECTRICIDAD: The Spanish Electra
**Electricidad-base-discriminator** (uncased) is a base-size, ELECTRA-like model (the discriminator, in this case) trained on a large Spanish corpus (a.k.a. BETO's corpus).
As mentioned in the original paper:

> ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.
For a detailed description and experimental results, please refer to the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://arxiv.org/abs/2003.10555).
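To make the replaced-token-detection objective concrete, here is a minimal, self-contained sketch with toy numbers (not the actual training code): the discriminator emits one logit per token position and is trained with binary cross-entropy against real/fake labels over every position.

```python
import torch

# Toy sketch of ELECTRA's replaced-token-detection objective.
# "logits" is one score per token position from the discriminator;
# "labels" marks generator-replaced tokens with 1, originals with 0.
logits = torch.tensor([[-2.1, -1.8, 3.0, -2.5]])  # hypothetical scores
labels = torch.tensor([[0.0, 0.0, 1.0, 0.0]])     # token 2 was swapped

# The discriminator is trained with binary cross-entropy over ALL positions,
# which is why ELECTRA is more sample-efficient than BERT's 15%-mask objective.
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)
print(loss.item())
```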
## Model details ⚙️
| Name   | Value |
|--------|-------|
| Layers | 12    |
| Hidden | 768   |
| Params | 110M  |
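These numbers can be checked directly against the checkpoint with the standard `transformers` config and parameter-count idioms, e.g.:

```python
from transformers import ElectraConfig, ElectraForPreTraining

config = ElectraConfig.from_pretrained("mrm8488/electricidad-base-discriminator")
print(config.num_hidden_layers)  # 12
print(config.hidden_size)        # 768

model = ElectraForPreTraining.from_pretrained("mrm8488/electricidad-base-discriminator")
print(f"{sum(p.numel() for p in model.parameters()):,}")  # roughly 110M parameters
```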
## Evaluation metrics (for discriminator) 🧾
| Metric    | Score |
|-----------|-------|
| Accuracy  | 0.985 |
| Precision | 0.726 |
| AUC       | 0.922 |
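These are the standard per-position binary-classification metrics for the discriminator (each token is a real-vs-fake decision). A sketch of how such numbers could be computed, with made-up labels and logits for illustration:

```python
import torch
from sklearn.metrics import accuracy_score, precision_score, roc_auc_score

# Hypothetical data: true real(0)/fake(1) flags and discriminator logits
labels = torch.tensor([0, 0, 1, 0, 1, 0, 0, 1]).numpy()
logits = torch.tensor([-3.0, -2.0, 2.5, -1.0, 0.8, -2.2, 0.3, 1.9])

probs = torch.sigmoid(logits).numpy()  # per-token fake probability
preds = (probs > 0.5).astype(int)      # hard decisions at a 0.5 threshold

print(accuracy_score(labels, preds))   # fraction of positions classified correctly
print(precision_score(labels, preds))  # of the positions flagged fake, how many were
print(roc_auc_score(labels, probs))    # threshold-free ranking quality
```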
## Fast example of usage 🚀
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

discriminator = ElectraForPreTraining.from_pretrained("mrm8488/electricidad-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/electricidad-base-discriminator")

sentence = "El rápido zorro marrón salta sobre el perro perezoso"
fake_sentence = "El rápido zorro marrón amar sobre el perro perezoso"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)

# Map each per-token logit to 0.0 (real) or 1.0 (fake)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

for token in fake_tokens:
    print("%7s" % token, end="")
print()
for prediction in predictions.squeeze().tolist():
    print("%7s" % prediction, end="")

# Output:
#     el rapido  zorro  marro    ##n   amar  sobre     el  perro   pere ##zoso
#    0.0    0.0    0.0    0.0    0.0    0.0    1.0    1.0    0.0    0.0    0.0    0.0    0.0
```
As you can see, there are 1.0s at the positions where the model detected a fake token (the two extra scores correspond to the special [CLS] and [SEP] tokens the tokenizer adds). So, it works! 🎉
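If you want calibrated scores instead of the hard sign-rounding above, the same logits can be passed through a sigmoid. This is an alternative reading of the same outputs, not part of the original example:

```python
probs = torch.sigmoid(discriminator_outputs[0]).squeeze()
for token, p in zip(fake_tokens, probs.tolist()[1:-1]):  # skip [CLS]/[SEP] scores
    print(f"{token:>10s}  fake probability = {p:.3f}")
```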
## Some models fine-tuned on a downstream task 🛠️
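As a hedged sketch of how a downstream fine-tune typically starts from this checkpoint (the task head and `num_labels` below are illustrative, not taken from any of the fine-tuned models):

```python
from transformers import ElectraForSequenceClassification, ElectraTokenizerFast

# Load the pre-trained discriminator body with a fresh classification head
model = ElectraForSequenceClassification.from_pretrained(
    "mrm8488/electricidad-base-discriminator", num_labels=2
)
tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/electricidad-base-discriminator")
# From here, train with a standard loop or the transformers Trainer.
```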
## Spanish LM model comparison 📊
| Dataset     | Metric   | RoBERTa-b | RoBERTa-l | BETO   | mBERT  | BERTIN | Electricidad-b |
|-------------|----------|-----------|-----------|--------|--------|--------|----------------|
| UD-POS      | F1       | 0.9907    | 0.9901    | 0.9900 | 0.9886 | 0.9904 | 0.9818         |
| Conll-NER   | F1       | 0.8851    | 0.8772    | 0.8759 | 0.8691 | 0.8627 | 0.7954         |
| Capitel-POS | F1       | 0.9846    | 0.9851    | 0.9836 | 0.9839 | 0.9826 | 0.9816         |
| Capitel-NER | F1       | 0.8959    | 0.8998    | 0.8771 | 0.8810 | 0.8741 | 0.8035         |
| STS         | Combined | 0.8423    | 0.8420    | 0.8216 | 0.8249 | 0.7822 | 0.8065         |
| MLDoc       | Accuracy | 0.9595    | 0.9600    | 0.9650 | 0.9560 | 0.9673 | 0.9490         |
| PAWS-X      | F1       | 0.9035    | 0.9000    | 0.8915 | 0.9020 | 0.8820 | 0.9045         |
| XNLI        | Accuracy | 0.8016    | 0.7958    | 0.8130 | 0.7876 | 0.7864 | 0.7878         |
## Acknowledgments
I thank the 🤗/transformers team for allowing me to train the model (especially Julien Chaumond).
## Citation
If you want to cite this model, you can use this:

```bibtex
@misc{mromero2020electricidad-base-discriminator,
  title={Spanish Electra by Manuel Romero},
  author={Romero, Manuel},
  publisher={Hugging Face},
  journal={Hugging Face Hub},
  howpublished={\url{https://huggingface.co/mrm8488/electricidad-base-discriminator/}},
  year={2020}
}
```
Created by Manuel Romero/@mrm8488
Made with ♥ in Spain