---
language: es
tags:
- Spanish
- Electra
- Legal
datasets:
- Spanish-legal-corpora
---

## LEGALECTRA ⚖️

**LEGALECTRA** (small) is an ELECTRA-like model (the discriminator, in this case) trained on [A collection of corpora of Spanish legal domain](https://zenodo.org/record/5495529#.YZItp3vMLJw).

As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):

**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.

For a detailed description and experimental results, please refer to the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).

## Training details

The model was trained using the official ELECTRA codebase for 3 days on a single Tesla V100 (16 GB).

## Model details ⚙

| Param  | Value |
|--------|-------|
| Layers | 12    |
| Hidden | 256   |
| Params | 14M   |

## Evaluation metrics (for discriminator) 🧾

| Metric    | Score |
|-----------|-------|
| Accuracy  | 0.955 |
| Precision | 0.790 |
| AUC       | 0.971 |

(A sketch of how these per-token scores are defined is given at the end of this card.)

## Benchmarks 🔨

WIP 🚧

## How to use the discriminator in `transformers`

TBA. In the meantime, a minimal, unofficial usage sketch is provided at the end of this card.

## Acknowledgments

TBA

## Citation

If you want to cite this model, you can use this BibTeX entry:

```bibtex
@misc{mromero2022legalectra,
  title={Spanish Legal Electra (small)},
  author={Romero, Manuel},
  publisher={Hugging Face},
  journal={Hugging Face Hub},
  howpublished={\url{https://huggingface.co/mrm8488/legalectra-small-spanish}},
  year={2022}
}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with ♥ in Spain
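## Appendix: how the discriminator metrics are defined

The accuracy, precision, and AUC reported in the evaluation table above are standard binary-classification scores over per-token real/fake decisions. The card does not publish the evaluation script, so this is only a minimal sketch of how such scores could be computed with `scikit-learn`; the label and logit arrays below are toy values, not the model's actual outputs:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, roc_auc_score

# Hypothetical per-token gold labels (1 = replaced/"fake" token) and the
# discriminator's raw logits for the same tokens (toy values for illustration).
labels = np.array([0, 0, 1, 0, 1, 0, 0, 1])
logits = np.array([-2.1, -1.3, 0.8, -0.5, 1.9, -3.0, 0.2, 1.1])

probs = 1 / (1 + np.exp(-logits))  # sigmoid: probability a token is "fake"
preds = (probs > 0.5).astype(int)  # hard decision at the 0.5 threshold

print("Accuracy :", accuracy_score(labels, preds))
print("Precision:", precision_score(labels, preds))
print("AUC      :", roc_auc_score(labels, probs))
```

In practice the labels and logits would be gathered over a held-out corpus of corrupted sentences rather than hand-written as above.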
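## Appendix: unofficial usage sketch

Since the "How to use" section above is still TBA, here is a minimal sketch of how the discriminator could be queried, assuming the model follows the generic ELECTRA replaced-token-detection API in 🤗 `transformers` (`ElectraForPreTraining`). The model ID is taken from the citation URL above, and the Spanish sentences are illustrative only:

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

# Model ID taken from the citation URL in this card
model_id = "mrm8488/legalectra-small-spanish"

discriminator = ElectraForPreTraining.from_pretrained(model_id)
tokenizer = ElectraTokenizerFast.from_pretrained(model_id)

# A corrupted sentence where one token has been swapped ("cocinó" for "leyó")
fake_sentence = "El juez cocinó el sumario del caso"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")

with torch.no_grad():
    logits = discriminator(fake_inputs).logits

# Positive logit -> token flagged as "fake" (replaced); negative -> "real".
# Drop the first and last positions ([CLS]/[SEP]) to align with fake_tokens.
predictions = (logits.squeeze() > 0).long().tolist()[1:-1]

for token, label in zip(fake_tokens, predictions):
    print(f"{token:>12} -> {'fake' if label else 'real'}")
```

A well-trained discriminator should flag the out-of-place token (here, ideally "cocinó") while leaving the rest marked as real.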