tiedeman committed
Commit 05d0a88
1 Parent(s): b36ff78

Initial commit

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.spm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,154 @@
+ ---
+ language:
+ - fi
+ - ru
+ - uk
+ - zle
+
+ tags:
+ - translation
+
+ license: cc-by-4.0
+ model-index:
+ - name: opus-mt-tc-big-zle-fi
+   results:
+   - task:
+       name: Translation rus-fin
+       type: translation
+       args: rus-fin
+     dataset:
+       name: flores101-devtest
+       type: flores_101
+       args: rus fin devtest
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 17.4
+   - task:
+       name: Translation ukr-fin
+       type: translation
+       args: ukr-fin
+     dataset:
+       name: flores101-devtest
+       type: flores_101
+       args: ukr fin devtest
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 18.0
+   - task:
+       name: Translation rus-fin
+       type: translation
+       args: rus-fin
+     dataset:
+       name: tatoeba-test-v2021-08-07
+       type: tatoeba_mt
+       args: rus-fin
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 42.2
+ ---
+ # opus-mt-tc-big-zle-fi
+
+ Neural machine translation model for translating from East Slavic languages (zle) to Finnish (fi).
+
+ This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models were originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++, and have been converted to PyTorch using Hugging Face's transformers library. Training data is taken from [OPUS](https://opus.nlpl.eu/) and the training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
+
+ * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (please cite them if you use this model)
+
+ ```bibtex
+ @inproceedings{tiedemann-thottingal-2020-opus,
+     title = "{OPUS}-{MT} {--} Building open translation services for the World",
+     author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
+     booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
+     month = nov,
+     year = "2020",
+     address = "Lisboa, Portugal",
+     publisher = "European Association for Machine Translation",
+     url = "https://aclanthology.org/2020.eamt-1.61",
+     pages = "479--480",
+ }
+
+ @inproceedings{tiedemann-2020-tatoeba,
+     title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
+     author = {Tiedemann, J{\"o}rg},
+     booktitle = "Proceedings of the Fifth Conference on Machine Translation",
+     month = nov,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2020.wmt-1.139",
+     pages = "1174--1182",
+ }
+ ```
+
+ ## Model info
+
+ * Release: 2022-03-07
+ * source language(s): rus ukr
+ * target language(s): fin
+ * model: transformer-big
+ * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
+ * tokenization: SentencePiece (spm32k,spm32k)
+ * original model: [opusTCv20210807+bt_transformer-big_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fin/opusTCv20210807+bt_transformer-big_2022-03-07.zip)
+ * more information on released models: [OPUS-MT zle-fin README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-fin/README.md)
+
+ ## Usage
+
+ A short code example. Note that the model accepts both Russian and Ukrainian input directly; no language token is needed, since Finnish is the only target language:
+
+ ```python
+ from transformers import MarianMTModel, MarianTokenizer
+
+ src_text = [
+     "Мы уже проголосовали.",  # Russian
+     "Один, два, три, чотири, п'ять, шість, сім, вісім, дев'ять, десять."  # Ukrainian
+ ]
+
+ # local path of the converted model; on the Hugging Face hub use "Helsinki-NLP/opus-mt-tc-big-zle-fi"
+ model_name = "pytorch-models/opus-mt-tc-big-zle-fi"
+ tokenizer = MarianTokenizer.from_pretrained(model_name)
+ model = MarianMTModel.from_pretrained(model_name)
+ translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
+
+ for t in translated:
+     print(tokenizer.decode(t, skip_special_tokens=True))
+
+ # expected output:
+ # Olemme jo äänestäneet.
+ # Yksi, kaksi, kolme, neljä, viisi, kuusi, seitsemän, kahdeksan, yhdeksän, kymmenen.
+ ```
+
+ You can also use OPUS-MT models with the transformers pipelines, for example:
+
+ ```python
+ from transformers import pipeline
+ pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-fi")
+ print(pipe("Мы уже проголосовали."))
+
+ # expected output: [{'translation_text': 'Olemme jo äänestäneet.'}]
+ ```
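+
+ The decoding defaults stored in `config.json` (beam search with `num_beams` 4, `max_length` 512) are applied by `generate()` automatically, but they can be overridden per call. A minimal sketch, reusing `model`, `tokenizer`, and `src_text` from the first example; the parameter values here are illustrative assumptions, not tuned recommendations:
+
+ ```python
+ # Override the generation defaults from config.json for a single call.
+ # num_beams=8 and max_new_tokens=128 are illustrative values, not recommendations.
+ batch = tokenizer(src_text, return_tensors="pt", padding=True)
+ translated = model.generate(**batch, num_beams=8, max_new_tokens=128)
+ print(tokenizer.batch_decode(translated, skip_special_tokens=True))
+ ```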
+
+ ## Benchmarks
+
+ * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fin/opusTCv20210807+bt_transformer-big_2022-03-07.test.txt)
+ * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fin/opusTCv20210807+bt_transformer-big_2022-03-07.eval.txt)
+ * benchmark results: [benchmark_results.txt](benchmark_results.txt)
+ * benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
+
+ | langpair | testset | chr-F | BLEU | #sent | #words |
+ |----------|---------|-------|------|-------|--------|
+ | rus-fin | tatoeba-test-v2021-08-07 | 0.66334 | 42.2 | 3643 | 19319 |
+ | rus-fin | flores101-devtest | 0.52577 | 17.4 | 1012 | 18781 |
+ | ukr-fin | flores101-devtest | 0.53440 | 18.0 | 1012 | 18781 |
+
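+ The scores can be recomputed with [sacrebleu](https://github.com/mjpost/sacrebleu) from the test set translations linked above. A minimal sketch; `hyp.txt` and `ref.txt` are hypothetical file names for the system output and the reference, one sentence per line:
+
+ ```python
+ # Recompute BLEU and chr-F with sacrebleu (hyp.txt / ref.txt are placeholders).
+ import sacrebleu
+
+ with open("hyp.txt", encoding="utf-8") as f:
+     hyps = [line.rstrip("\n") for line in f]
+ with open("ref.txt", encoding="utf-8") as f:
+     refs = [line.rstrip("\n") for line in f]
+
+ print(sacrebleu.corpus_bleu(hyps, [refs]).score)  # BLEU, e.g. 42.2
+ # sacrebleu reports chr-F on a 0-100 scale; the table above uses 0-1
+ print(sacrebleu.corpus_chrf(hyps, [refs]).score)
+ ```
+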
+ ## Acknowledgements
+
+ The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and by the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
+
+ ## Model conversion info
+
+ * transformers version: 4.16.2
+ * OPUS-MT git hash: 42126b6
+ * port time: Thu Mar 24 09:28:52 EET 2022
+ * port machine: LM0-400-22516.local
benchmark_results.txt ADDED
@@ -0,0 +1,7 @@
+ rus-fin flores101-dev 0.53231 17.7 997 17938
+ rus-fin flores101-devtest 0.52577 17.4 1012 18781
+ ukr-fin flores101-devtest 0.53440 18.0 1012 18781
+ ukr-fin flores101-dev 0.53075 18.7 997 17938
+ rus-fin tatoeba-test-v2020-07-28 0.66334 42.2 3643 19319
+ rus-fin tatoeba-test-v2021-03-30 0.66334 42.2 3643 19319
+ rus-fin tatoeba-test-v2021-08-07 0.66334 42.2 3643 19319
benchmark_translations.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ed59cdf92e8ed500112ef6df8c497e1ee923c2ad52f65bcefc9b166d66410d9
+ size 1133647
config.json ADDED
@@ -0,0 +1,45 @@
+ {
+   "activation_dropout": 0.0,
+   "activation_function": "relu",
+   "architectures": [
+     "MarianMTModel"
+   ],
+   "attention_dropout": 0.0,
+   "bad_words_ids": [
+     [
+       61259
+     ]
+   ],
+   "bos_token_id": 0,
+   "classifier_dropout": 0.0,
+   "d_model": 1024,
+   "decoder_attention_heads": 16,
+   "decoder_ffn_dim": 4096,
+   "decoder_layerdrop": 0.0,
+   "decoder_layers": 6,
+   "decoder_start_token_id": 61259,
+   "decoder_vocab_size": 61260,
+   "dropout": 0.1,
+   "encoder_attention_heads": 16,
+   "encoder_ffn_dim": 4096,
+   "encoder_layerdrop": 0.0,
+   "encoder_layers": 6,
+   "eos_token_id": 23070,
+   "forced_eos_token_id": 23070,
+   "init_std": 0.02,
+   "is_encoder_decoder": true,
+   "max_length": 512,
+   "max_position_embeddings": 1024,
+   "model_type": "marian",
+   "normalize_embedding": false,
+   "num_beams": 4,
+   "num_hidden_layers": 6,
+   "pad_token_id": 61259,
+   "scale_embedding": true,
+   "share_encoder_decoder_embeddings": true,
+   "static_position_embeddings": true,
+   "torch_dtype": "float16",
+   "transformers_version": "4.18.0.dev0",
+   "use_cache": true,
+   "vocab_size": 61260
+ }
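
These values are read by transformers when the model is instantiated. A minimal sketch of inspecting them programmatically, assuming the hub ID `Helsinki-NLP/opus-mt-tc-big-zle-fi` used in the README:

```python
# Inspect the configuration above without downloading the full model weights.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-tc-big-zle-fi")
print(config.model_type)  # "marian"
print(config.d_model)     # 1024, the transformer-big hidden size
print(config.num_beams)   # 4, the default beam size used by generate()
```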
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ade5b5b759e7929aa126be91e5d69b3d98b21ae3d2584e5bab7c7b85541959d1
+ size 603849923
source.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a03e2e58a26037120f0b48fb209e238608d79f7092c6e90c7a30a5ba2b29874
+ size 997169
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dc41cc04b755b4479e1e585bde98f3342065ef81dbf021eb05666ba61c0c6d27
+ size 824682
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"source_lang": "zle", "target_lang": "fi", "unk_token": "<unk>", "eos_token": "</s>", "pad_token": "<pad>", "model_max_length": 512, "sp_model_kwargs": {}, "separate_vocabs": false, "special_tokens_map_file": null, "name_or_path": "marian-models/opusTCv20210807+bt_transformer-big_2022-03-07/zle-fi", "tokenizer_class": "MarianTokenizer"}
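
A quick check that the tokenizer picks up these settings; a minimal sketch, assuming the same hub ID as above:

```python
# Verify the special tokens and length limit declared in tokenizer_config.json.
from transformers import MarianTokenizer

tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-big-zle-fi")
print(tok.model_max_length)                         # 512
print(tok.eos_token, tok.unk_token, tok.pad_token)  # </s> <unk> <pad>
```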
vocab.json ADDED
The diff for this file is too large to render. See raw diff