---
base_model: bert-base-multilingual-uncased
---
# Language Detection Model
This repository contains a fine-tuned `BertForSequenceClassification` model, pretrained on multilingual texts.
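A minimal inference sketch using the `transformers` library. For illustration it loads the base checkpoint named in the metadata; to get meaningful language predictions, substitute the id of this fine-tuned repository. The French example sentence and the use of `id2label` for decoding are illustrative assumptions, not details from this repository.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Base checkpoint from the metadata; replace with this repo's model id
# to use the fine-tuned language-detection head.
checkpoint = "bert-base-multilingual-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

inputs = tokenizer("Ceci est une phrase en français.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The classification head maps each index to a label via model.config.id2label;
# for the fine-tuned model these labels are language names.
predicted = model.config.id2label[logits.argmax(dim=-1).item()]
print(predicted)
```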
## Training/fine-tuning
The model has been fine-tuned on the Language Detection dataset from Kaggle. The dataset analysis and a complete description of the training procedure can be found in my Kaggle notebook, which was used to train the model faster on a GPU.
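The fine-tuning setup can be sketched roughly as below. This is an assumption-laden outline, not the notebook's actual code: the Kaggle CSV (with `Text`/`Language` columns) is replaced by a tiny in-memory sample so the snippet stays self-contained, and the optimizer, learning rate, and epoch count are illustrative placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Toy stand-in for the Kaggle Language Detection data (assumed columns:
# Text, Language); the real dataset covers many more samples and languages.
texts = ["This is English.", "Ceci est du français.", "Das ist Deutsch."]
labels = ["English", "French", "German"]
label2id = {lang: i for i, lang in enumerate(sorted(set(labels)))}

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-uncased",
    num_labels=len(label2id),
    id2label={i: lang for lang, i in label2id.items()},
    label2id=label2id,
)

enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
targets = torch.tensor([label2id[lang] for lang in labels])

# Illustrative hyperparameters; the notebook's actual values may differ.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(2):  # a real run would batch over the full dataset
    optimizer.zero_grad()
    out = model(**enc, labels=targets)  # cross-entropy loss computed internally
    out.loss.backward()
    optimizer.step()
```

A full run would additionally hold out a validation split and save the fine-tuned checkpoint with `model.save_pretrained(...)`.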