cecilemacaire committed
Commit c3e9edc
1 Parent(s): dc9c459

Update README.md

Files changed (1):
  1. README.md (+4, -4)
README.md CHANGED

@@ -10,7 +10,7 @@ tags:
 - pictograms
 - translation
 metrics:
-- bleu
+- sacrebleu
 inference: false
 ---

@@ -60,7 +60,7 @@ fairseq-train \

 ### Evaluation

-The model was evaluated with BLEU, where we compared the reference pictogram translation with the model hypothesis.
+The model was evaluated with sacreBLEU, where we compared the reference pictogram translation with the model hypothesis.

 ```bash
 fairseq-generate exp_commonvoice/data-bin/commonvoice.tokenized.fr-frp \

@@ -82,8 +82,8 @@ Generate test with beam=5: BLEU4 = 82.60, 92.5/85.5/79.5/74.1 (BP=1.000, ratio=1
 Comparison to other translation models :
 | **Model** | **validation** | **test** |
 |:-----------:|:-----------------------:|:-----------------------:|
-| **t2p-t5-large-commonvoice** | 86.3 | 86.5 |
-| t2p-nmt-commonvoice | 86.0 | 82.6 |
+| t2p-t5-large-commonvoice | 86.3 | 86.5 |
+| **t2p-nmt-commonvoice** | 86.0 | 82.6 |
 | t2p-mbart-large-cc25-commonvoice | 72.3 | 72.3 |
 | t2p-nllb-200-distilled-600M-commonvoice | **87.4** | **87.6** |
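Since this commit switches the reported metric from BLEU to sacreBLEU, a minimal sketch of how such a score can be computed with the sacrebleu CLI; the file names below are placeholders, not paths from this repo:

```bash
# Placeholder files: hyp.frp holds the model's detokenized output,
# ref.frp the reference pictogram sequences, one segment per line.
sacrebleu ref.frp -i hyp.frp -m bleu
```

Unlike raw BLEU, sacreBLEU applies a standardized tokenization and reference handling, which is what makes scores like those in the comparison table above comparable across models.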