dariast committed cb9d01c (parent: f8c570f): create README.md
# PRISM Model for Multilingual Machine Translation

This repository contains the `Prism` model, a state-of-the-art multilingual neural machine translation (NMT) system developed for both translation and machine translation evaluation. The Prism model supports translation across 39 languages, leveraging a zero-shot paraphrasing approach that does not require human judgments for training.

The model was trained with a focus on multilingual performance and excels at tasks such as translation quality estimation and evaluation, making it a versatile choice for research and practical use across many language pairs.

## Model Description
The `Prism` model was designed to be a lexically/syntactically unbiased paraphraser. The core idea is to treat paraphrasing as a zero-shot translation task, which allows the model to cover a wide range of languages effectively.

### BLEU Score Performance
The `Prism` model achieved competitive or superior performance across various language pairs in the WMT 2019 shared metrics task. It outperformed existing evaluation metrics in many cases, showing robustness in both high-resource and low-resource settings.
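At a high level, Prism scores a system output by how probable the model finds it as a paraphrase of (or translation of) the reference, in both directions. The sketch below illustrates the symmetric, length-normalized log-probability combination in the spirit of the paper; the per-token log-probabilities are hypothetical placeholders, not real model outputs, and the exact normalization in the released metric may differ:

```python
def length_normalized_logprob(token_logprobs):
    # Average log-probability per token (higher is better).
    return sum(token_logprobs) / len(token_logprobs)

def prism_score(sys_given_ref, ref_given_sys):
    # Symmetric score: average the two forced-decoding directions,
    # i.e. the model score of the system output given the reference
    # and of the reference given the system output.
    return 0.5 * (length_normalized_logprob(sys_given_ref)
                  + length_normalized_logprob(ref_given_sys))

# Hypothetical per-token log-probs obtained by forced decoding:
sys_given_ref = [-0.2, -0.5, -0.1, -0.4]
ref_given_sys = [-0.3, -0.2, -0.6]

print(prism_score(sys_given_ref, ref_given_sys))
```

Because the score is an average log-probability, it is comparable across outputs of different lengths, which is what makes it usable as a reference-based quality metric.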

## Installation
To use `PrismTokenizer`, ensure that the `sentencepiece` package is installed, as it is a required dependency for handling multilingual tokenization.
```bash
pip install sentencepiece
```

## Usage Example

```python
from transformers import PrismForConditionalGeneration, PrismTokenizer

uk_text = "Життя як коробка шоколаду."
ja_text = "人生はチョコレートの箱のようなもの。"

model = PrismForConditionalGeneration.from_pretrained("facebook/prism")
tokenizer = PrismTokenizer.from_pretrained("facebook/prism")

# translate Ukrainian to French
tokenizer.src_lang = "uk"
encoded_uk = tokenizer(uk_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_uk, forced_bos_token_id=tokenizer.get_lang_id("fr"), max_new_tokens=20)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => ['<fr> La vie comme une boîte de chocolat.']

# translate Japanese to English
tokenizer.src_lang = "ja"
encoded_ja = tokenizer(ja_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_ja, forced_bos_token_id=tokenizer.get_lang_id("en"), max_new_tokens=20)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => ['<en> Life is like a box of chocolate.']
```

## Languages Covered
Albanian (sq), Arabic (ar), Bengali (bn), Bulgarian (bg), Catalan; Valencian (ca), Chinese (zh), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Esperanto (eo), Estonian (et), Finnish (fi), French (fr), German (de), Greek, Modern (el), Hebrew (modern) (he), Hungarian (hu), Indonesian (id), Italian (it), Japanese (ja), Kazakh (kk), Latvian (lv), Lithuanian (lt), Macedonian (mk), Norwegian (no), Polish (pl), Portuguese (pt), Romanian, Moldovan (ro), Russian (ru), Serbian (sr), Slovak (sk), Slovene (sl), Spanish; Castilian (es), Swedish (sv), Turkish (tr), Ukrainian (uk), Vietnamese (vi).
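Before requesting a translation, it can help to check that both language codes are among the 39 supported ones rather than failing inside the model call. A minimal sketch (the set below is transcribed from the list above; the helper name is illustrative):

```python
# ISO 639-1 codes supported by Prism, transcribed from the list above.
PRISM_LANGS = {
    "sq", "ar", "bn", "bg", "ca", "zh", "hr", "cs", "da", "nl",
    "en", "eo", "et", "fi", "fr", "de", "el", "he", "hu", "id",
    "it", "ja", "kk", "lv", "lt", "mk", "no", "pl", "pt", "ro",
    "ru", "sr", "sk", "sl", "es", "sv", "tr", "uk", "vi",
}

def check_pair(src, tgt):
    # Raise early with a clear message instead of a cryptic model error.
    unsupported = {src, tgt} - PRISM_LANGS
    if unsupported:
        raise ValueError(f"Unsupported language code(s): {sorted(unsupported)}")
    return True

check_pair("uk", "fr")  # OK: both codes are supported
```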

## Citation
If you use this model in your research, please cite the original paper:
```
@inproceedings{thompson-post-2020-automatic,
    title = "Automatic Machine Translation Evaluation in Many Languages via Zero-Shot Paraphrasing",
    author = "Thompson, Brian and Post, Matt",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```

The README's YAML front matter declares the license, supported languages, and tags:

---
license: mit
language:
- ar
- bg
- bn
- ca
- cs
- da
- de
- el
- en
- es
- et
- eo
- fi
- fr
- he
- hr
- hu
- id
- it
- ja
- kk
- lt
- lv
- mk
- nl
- 'no'
- pl
- pt
- ro
- ru
- sk
- sl
- sq
- sr
- sv
- tr
- uk
- vi
- zh
tags:
- text-generation-inference
---