---
inference: false
license: cc-by-4.0
datasets:
- wikiann
language:
- bg
metrics:
- accuracy
---
# 🇧🇬 KeyBERT-BG - Bulgarian Keyword Extraction
KeyBERT-BG is a model trained for keyword extraction in Bulgarian.
## Usage
Import the libraries:
```python
import re
from typing import Dict
from pprint import pprint
from transformers import AutoTokenizer, AutoModelForTokenClassification
```
First, you'll have to define this function, since the text preprocessing is custom and the standard `pipeline` approach won't suffice:
```python
def get_keywords(
    text: str,
    model_id: str = "auhide/keybert-bg",
    max_len: int = 300,
    id2group: Dict[int, str] = {
        # Indicates that this is not a keyword.
        0: "O",
        # Beginning of a keyword.
        1: "B-KWD",
        # Continuation of a keyword (may also mark the end of a keyword sequence).
        # You can merge these with the beginning keyword `B-KWD`.
        2: "I-KWD",
    }
):
    # Initialize the tokenizer and model.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    keybert = AutoModelForTokenClassification.from_pretrained(model_id)

    # Preprocess the text.
    # Surround punctuation with whitespace and collapse repeated whitespace
    # into a single space.
    text = re.sub(r"([,\.?!;:\'\"\(\)\[\]„”])", r" \1 ", text)
    text = re.sub(r"\s+", r" ", text)
    words = text.split()

    # Tokenize the processed `text` (this includes padding or truncation).
    tokens_data = tokenizer(
        text.strip(),
        padding="max_length",
        max_length=max_len,
        truncation=True,
        return_tensors="pt"
    )
    input_ids = tokens_data.input_ids
    attention_mask = tokens_data.attention_mask

    # Predict the keywords.
    out = keybert(input_ids, attention_mask=attention_mask).logits
    # Softmax the last dimension so that the probabilities add up to 1.0.
    out = out.softmax(-1)
    # Based on the probabilities, pick the most probable label for each token.
    out_argmax = out.argmax(-1)
    prediction = out_argmax.squeeze(0).tolist()
    probabilities = out.squeeze(0)

    return [
        {
            # Since the list of words does not have a [CLS] token, the index `i`
            # is one step ahead, so the matching word is at index `i - 1`.
            "entity": words[i - 1],
            "entity_group": id2group[idx],
            "score": float(probabilities[i, idx])
        }
        for i, idx in enumerate(prediction)
        if idx == 1 or idx == 2
    ]
```
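To see what the preprocessing inside `get_keywords` does, here it is run in isolation on a short sentence (the input text is my own example, not from the article):

```python
import re

text = "Шофьорът катастрофира в Пловдив, каза полицията."
# Surround punctuation with whitespace, then collapse repeated whitespace,
# so that punctuation marks become standalone "words".
text = re.sub(r"([,\.?!;:\'\"\(\)\[\]„”])", r" \1 ", text)
text = re.sub(r"\s+", r" ", text)
words = text.split()
print(words)
# ['Шофьорът', 'катастрофира', 'в', 'Пловдив', ',', 'каза', 'полицията', '.']
```

This `words` list is what the `i - 1` indexing at the end of `get_keywords` maps predictions back onto.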
Choose a text to run the model on. For example, I've used [this](https://www.24chasa.bg/bulgaria/article/14466321) article.
Then you can call `get_keywords` on it and extract its keywords:
```python
# Read the text from a file, since it is an article and the text is long.
with open("input_text.txt", "r", encoding="utf-8") as f:
text = f.read()
keywords = get_keywords(text)
print("Keywords:")
pprint(keywords)
```
```sh
Keywords:
[{'entity': 'Пловдив', 'entity_group': 'B-KWD', 'score': 0.7669068574905396},
{'entity': 'Шофьорът', 'entity_group': 'B-KWD', 'score': 0.9119699597358704},
{'entity': 'катастрофа', 'entity_group': 'B-KWD', 'score': 0.8441269993782043}]
```
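As the comments in `get_keywords` note, `I-KWD` entries can be merged with the preceding `B-KWD` entry to form multi-word keywords. Here is a minimal post-processing sketch; the helper name `merge_keywords` and the sample entries are my own, not part of the model's API:

```python
def merge_keywords(entities):
    # Merge each B-KWD entry with the I-KWD entries that follow it,
    # joining the words and averaging the scores.
    merged = []
    for ent in entities:
        if ent["entity_group"] == "B-KWD" or not merged:
            merged.append({"keyword": ent["entity"], "scores": [ent["score"]]})
        else:
            # I-KWD continues the previous keyword.
            merged[-1]["keyword"] += " " + ent["entity"]
            merged[-1]["scores"].append(ent["score"])
    return [
        {"keyword": m["keyword"], "score": sum(m["scores"]) / len(m["scores"])}
        for m in merged
    ]


# Hypothetical model output with a two-word keyword.
keywords = [
    {"entity": "Пловдив", "entity_group": "B-KWD", "score": 0.77},
    {"entity": "окръжен", "entity_group": "B-KWD", "score": 0.75},
    {"entity": "съд", "entity_group": "I-KWD", "score": 0.25},
]
print(merge_keywords(keywords))
# [{'keyword': 'Пловдив', 'score': 0.77}, {'keyword': 'окръжен съд', 'score': 0.5}]
```

Averaging the scores is one reasonable choice; taking the `B-KWD` score alone, or the minimum, would work just as well depending on how you rank keywords afterwards.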