---
base_model: bert-base-multilingual-uncased
# model-index:
# - name: lang-recogn-model
# results:
# - task:
# type: text-classification
# dataset:
# name: language-detection
# type: language-detection
# metrics:
# - name: accuracy
# type: accuracy
# value: 0.9836
# source:
# name: Language recognition using BERT
# url: >-
# https://www.kaggle.com/code/sergeypolivin/language-recognition-using-bert
language:
- ar
- da
- nl
- en
- fr
- de
- el
- hi
- it
- kn
- ml
- pt
- ru
- es
- sv
- ta
- tr
pipeline_tag: text-classification
widget:
- text: "I have seen it somewhere..."
example_title: "English"
- text: "Ik heb het al gezien"
example_title: "Dutch"
- text: "Интересная идея"
example_title: "Russian"
- text: "Que vamos a hacer?"
example_title: "Spanish"
- text: "Hvor er der en pengeautomat?"
example_title: "Danish"
- text: "إنه مشوق جدا"
example_title: "Arabic"
- text: "Es ist sehr interessant"
example_title: "German"
- text: "c'est très intéressant"
example_title: "French"
- text: "Non ho mai visto una tale bellezza"
example_title: "Italian"
- text: "Jag har aldrig sett en sådan skönhet"
example_title: "Swedish"
- text: "Böyle bir güzellik görmedim"
example_title: "Turkish"
- text: "ಅದ್ಭುತ ಕಲ್ಪನೆ"
example_title: "Kannada"
- text: "அற்புதமான யோசனை"
example_title: "Tamil"
- text: "Υπέροχη ιδέα"
example_title: "Greek"
- text: "Eu nunca estive aqui"
example_title: "Portugeese"
- text: "मैं यहां कभी नहीं गया"
example_title: "Hindi"
- text: "ഞാൻ ഇവിടെ പോയിട്ടില്ല"
example_title: "Malayam"
license: mit
---
# Language Detection Model
This repository contains a fine-tuned version of `BertForSequenceClassification`
pretrained on [multilingual texts](https://huggingface.co/bert-base-multilingual-uncased).
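
Since the model is a standard text-classification checkpoint, it can be loaded with the 🤗 `transformers` pipeline. A minimal inference sketch is shown below; the model id is a placeholder for this repository's id on the Hub.

```python
from transformers import pipeline

# Placeholder: replace with this repository's id on the Hugging Face Hub.
model_id = "lang-recogn-model"

# Each prediction returns a language label and a confidence score.
classifier = pipeline("text-classification", model=model_id)

print(classifier("Ik heb het al gezien"))   # expected label: Dutch
print(classifier("Que vamos a hacer?"))     # expected label: Spanish
```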
## Training/fine-tuning
The model has been fine-tuned on the [Language Detection](https://www.kaggle.com/datasets/basilb2s/language-detection)
dataset from *Kaggle*. The full dataset analysis and a complete description of the training procedure
can be found in [one of my *Kaggle* notebooks](https://www.kaggle.com/code/sergeypolivin/language-recognition-using-bert),
which was used to speed up training on a *GPU*. A rough sketch of the fine-tuning recipe is given below.
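
The sketch below illustrates the general `Trainer`-based fine-tuning setup, not the notebook's exact code; the CSV file name, column names, and hyperparameters are assumptions.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Assumed file and column names for the Kaggle "Language Detection" CSV.
dataset = load_dataset("csv", data_files="Language Detection.csv")["train"]
labels = sorted(set(dataset["Language"]))
label2id = {label: i for i, label in enumerate(labels)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")

def preprocess(batch):
    # Tokenize the text and attach integer class labels.
    enc = tokenizer(batch["Text"], truncation=True, max_length=128)
    enc["label"] = [label2id[lang] for lang in batch["Language"]]
    return enc

tokenized = dataset.map(preprocess, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-uncased",
    num_labels=len(labels),
    id2label={i: label for label, i in label2id.items()},
    label2id=label2id,
)

# Hyperparameters here are illustrative, not the notebook's settings.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lang-recogn-model",
                           num_train_epochs=2,
                           per_device_train_batch_size=16),
    train_dataset=tokenized,
)
trainer.train()
```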
## Supported languages
The model has been fine-tuned to detect one of the following 17 languages:
- Arabic
- Danish
- Dutch
- English
- French
- German
- Greek
- Hindi
- Italian
- Kannada
- Malayalam
- Portuguese
- Russian
- Spanish
- Swedish
- Tamil
- Turkish
## References
1. [BERT multilingual base model (uncased)](https://huggingface.co/bert-base-multilingual-uncased)
2. [Language Detection Dataset](https://www.kaggle.com/datasets/basilb2s/language-detection)