---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---

# BERT BASE (cased) finetuned on Bulgarian named-entity-recognition data

Pretrained model on the Bulgarian language using a masked language modeling (MLM) objective. BERT was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is cased: it makes a difference between bulgarian and Bulgarian.

The training data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).

The model was fine-tuned on public Bulgarian named-entity-recognition data. Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925).

### How to use

Here is how to use this model in PyTorch:

```python
>>> from transformers import pipeline
>>>
>>> model = pipeline(
>>>     'ner',
>>>     model='rmihaylov/bert-base-ner-theseus-bg',
>>>     tokenizer='rmihaylov/bert-base-ner-theseus-bg',
>>>     device=0,
>>>     revision=None)
>>> output = model('Здравей, аз се казвам Иван.')
>>> print(output)

[{'end': 26,
  'entity': 'B-PER',
  'index': 6,
  'score': 0.9937722,
  'start': 21,
  'word': '▁Иван'}]
```
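
If you want to work below the `pipeline` abstraction, the same checkpoint can also be loaded with the generic `AutoTokenizer` and `AutoModelForTokenClassification` classes. The snippet below is a minimal sketch and assumes the checkpoint's `id2label` config maps class ids to the BIO entity tags shown in the output above:

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>>
>>> tokenizer = AutoTokenizer.from_pretrained('rmihaylov/bert-base-ner-theseus-bg')
>>> model = AutoModelForTokenClassification.from_pretrained('rmihaylov/bert-base-ner-theseus-bg')
>>>
>>> # Tokenize the example sentence and run a forward pass without gradients
>>> inputs = tokenizer('Здравей, аз се казвам Иван.', return_tensors='pt')
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>>
>>> # Pick the highest-scoring class per token; id2label is assumed to hold the BIO tags
>>> predicted_ids = logits.argmax(dim=-1)[0]
>>> tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'][0].tolist())
>>> print(list(zip(tokens, [model.config.id2label[int(i)] for i in predicted_ids])))
```

The `pipeline` call above performs the same steps internally and additionally maps each predicted token back to character offsets in the input string.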