# m2m100_1.2B_ft_ru-kbd_50K
This model is a fine-tuned version of facebook/m2m100_1.2B on the anzorq/ru-kbd dataset for Russian → Kabardian translation.
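For reference, the fine-tuning corpus can be loaded directly from the Hub. A minimal sketch, assuming only that the dataset is public; its column layout is not documented in this card, so the snippet just inspects the first record:

```python
from datasets import load_dataset

# Pull the corpus used for fine-tuning from the Hugging Face Hub.
ds = load_dataset("anzorq/ru-kbd")

print(ds)              # shows the available splits and columns
print(ds["train"][0])  # assumes a "train" split; inspect the first example
```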
## Model description
More information needed
## Intended uses & limitations
More information needed
## Eval
- predict_bleu = 23.3736
- predict_gen_len = 16.8114
- predict_loss = 0.9729
- predict_runtime = 0:03:29.00
- predict_samples = 1034
- predict_samples_per_second = 4.947
- predict_steps_per_second = 0.211
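The exact evaluation script is not included in this card. As a minimal sketch of how a corpus BLEU like `predict_bleu` is conventionally computed, using sacrebleu with hypothetical inputs:

```python
import sacrebleu

# Hypothetical placeholders: model outputs and the gold reference translations.
hypotheses = ["model output 1", "model output 2"]
references = [["gold translation 1", "gold translation 2"]]  # one inner list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.4f}")
```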
## Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
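To approximate this environment, something along these lines should work (a sketch: the right PyTorch CUDA build depends on your system, and 4.34.0.dev0 was a development build of Transformers, so it is installed from source):

```bash
pip install torch==2.0.1 datasets==2.14.5 tokenizers==0.14.0
pip install git+https://github.com/huggingface/transformers
```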
## Inference

```bash
pip install transformers sentencepiece
```
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_path = "anzorq/m2m100_1.2B_ft_ru-kbd_50K"
tgt_lang = "zu"  # M2M100 has no Kabardian language code; this checkpoint uses "zu" as the target code

# src_lang makes the tokenizer prepend the Russian language token to the input.
tokenizer = AutoTokenizer.from_pretrained("facebook/m2m100_1.2B", src_lang="ru")
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

def translate(text, num_beams=4, num_return_sequences=4):
    inputs = tokenizer(text, return_tensors="pt").to(device)
    num_return_sequences = min(num_return_sequences, num_beams)

    translated_tokens = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.lang_code_to_id[tgt_lang],
        num_beams=num_beams,
        num_return_sequences=num_return_sequences,
    )

    return [tokenizer.decode(t, skip_special_tokens=True) for t in translated_tokens]
```
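Usage is then a single call; the function returns a list of candidate translations, one per returned beam:

```python
print(translate("Добрый день!"))  # "Good day!" in Russian, as example input
```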