
FSMT

Model description

This is a ported version of the fairseq WMT19 transformer for de-en.

For more details, please see Facebook FAIR's WMT19 News Translation Task Submission.

The abbreviation FSMT stands for FairSeqMachineTranslation.

All four models are available:

  • wmt19-en-ru
  • wmt19-ru-en
  • wmt19-en-de
  • wmt19-de-en

Intended uses & limitations

How to use

from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-de-en"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

# Encode the German source sentence, translate, and decode the result.
src_text = "Maschinelles Lernen ist großartig, oder?"
input_ids = tokenizer.encode(src_text, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded)  # Machine learning is great, isn't it?
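
The tokenizer can also batch several sentences; here is a minimal sketch continuing the snippet above (the second sentence and the beam size are illustrative choices, not from the original card):

src_texts = [
    "Maschinelles Lernen ist großartig, oder?",
    "Der Himmel ist heute blau.",
]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)  # pads and builds an attention mask
outputs = model.generate(**batch, num_beams=5)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))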

Limitations and bias

  • The original model (and this ported version) does not seem to handle inputs with repeated sub-phrases well; content gets truncated. A sketch probing this follows.
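
A minimal sketch to probe this behavior (the repeated German input is an illustrative construction, not a test case from the original card):

from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-de-en"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

# Repeat the same sub-phrase several times in the source sentence.
src = "Das Haus ist klein, das Haus ist klein, das Haus ist klein."
input_ids = tokenizer.encode(src, return_tensors="pt")
outputs = model.generate(input_ids)
# The repetitions may come back collapsed or truncated rather than translated in full.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))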

Training data

Pretrained weights were left identical to the original model released by fairseq. For more details, please see the paper.

Eval results

pair     fairseq (BLEU)   transformers (BLEU)
de-en    42.3             41.35

The score is slightly below the one reported by fairseq, since `transformers` currently doesn't support:

  • model ensembling, therefore the best-performing checkpoint (model4.pt) was ported
  • re-ranking (a rough sketch of both ideas follows this list)
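
Since only model4.pt was ported, neither feature can be reproduced exactly from the hub. Purely as a hedged sketch: if the remaining checkpoints were also available as FSMT models, one rough approximation would be to generate an n-best list with beam search and re-rank it by the sequence log-probability averaged across the ensemble members. The single-member `ensemble` list below is the only part that is real today; any extra members are hypothetical.

import torch
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-de-en"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

# Only model4.pt was ported, so this "ensemble" has a single member; with more
# ported checkpoints it would hold several FSMTForConditionalGeneration instances.
ensemble = [model]

src = "Maschinelles Lernen ist großartig, oder?"
input_ids = tokenizer.encode(src, return_tensors="pt")

# n-best list from beam search
n_best = model.generate(input_ids, num_beams=15, num_return_sequences=15)

def sequence_logprob(m, input_ids, seq):
    # seq starts with the decoder start token produced by generate():
    # feed everything but the last token, score everything but the first.
    decoder_input_ids = seq[:-1].unsqueeze(0)
    targets = seq[1:].unsqueeze(0)
    with torch.no_grad():
        logits = m(input_ids=input_ids, decoder_input_ids=decoder_input_ids).logits
    token_logprobs = logits.log_softmax(-1).gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    mask = targets.ne(tokenizer.pad_token_id)  # ignore padding in shorter hypotheses
    return (token_logprobs * mask).sum().item()

# Re-rank hypotheses by the ensemble-averaged log-probability.
best = max(
    n_best,
    key=lambda seq: sum(sequence_logprob(m, input_ids, seq) for m in ensemble) / len(ensemble),
)
print(tokenizer.decode(best, skip_special_tokens=True))

With one member this merely re-scores the beam hypotheses with the same model; the point of the sketch is to show where additional checkpoints would plug in.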

The score was calculated using this code:

git clone https://github.com/huggingface/transformers
cd transformers

export PAIR=de-en
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=15
mkdir -p $DATA_DIR

# Fetch the WMT19 source and reference sides of the test set via sacrebleu.
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target

echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py \
  facebook/wmt19-$PAIR \
  $DATA_DIR/val.source \
  $SAVE_DIR/test_translations.txt \
  --reference_path $DATA_DIR/val.target \
  --score_path $SAVE_DIR/test_bleu.json \
  --bs $BS \
  --task translation \
  --num_beams $NUM_BEAMS

Note: fairseq reports using a beam of 50, so you should get a slightly higher score if you re-run with --num_beams 50.

Data Sources

  • WMT19 news translation shared task: http://www.statmt.org/wmt19/translation-task.html

BibTeX entry and citation info

@inproceedings{...,
  year={2020},
  title={Facebook FAIR's WMT19 News Translation Task Submission},
  author={Ng, Nathan and Yee, Kyra and Baevski, Alexei and Ott, Myle and Auli, Michael and Edunov, Sergey},
  booktitle={Proc. of WMT},
}

TODO

  • port model ensemble (fairseq uses 4 model checkpoints)