
opus-mt-lg-en-finetuned-lm-to-en

This model is a fine-tuned version of Helsinki-NLP/opus-mt-lg-en on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3798
  • Bleu: 91.6784
  • Gen Len: 10.7258

Model description

This model fine-tunes Helsinki-NLP/opus-mt-lg-en, a MarianMT transformer for translating Luganda (lg) into English (en). Beyond what the base model provides, more information is needed.

Intended uses & limitations

Like its base model, this checkpoint is presumably intended for Luganda-to-English machine translation. Because the fine-tuning dataset is undocumented, its domain coverage, biases, and failure modes are unknown; the unusually high evaluation BLEU may reflect a narrow or in-domain evaluation set, so results on general text could be lower.
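
As a quick start, here is a minimal inference sketch. It assumes the checkpoint is published on the Hugging Face Hub as Eyesiga/opus-mt-lg-en-finetuned-lm-to-en (the repo this card belongs to); the Luganda input sentence is illustrative only.

```python
from transformers import pipeline

# Load the fine-tuned Luganda-to-English checkpoint from the Hub.
translator = pipeline(
    "translation",
    model="Eyesiga/opus-mt-lg-en-finetuned-lm-to-en",
)

# Illustrative Luganda input ("How are you?").
result = translator("Oli otya?", max_length=64)
print(result[0]["translation_text"])
```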

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
  • mixed_precision_training: Native AMP
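
For readers reproducing the run, a hedged sketch of how these values map onto transformers' Seq2SeqTrainingArguments; output_dir, evaluation_strategy, and predict_with_generate are assumptions not stated in this card (the Adam betas and epsilon above are the optimizer defaults).

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; flags marked "assumption"
# are not documented in this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-lg-en-finetuned-lm-to-en",  # assumption: placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,                      # "Native AMP" mixed precision
    evaluation_strategy="epoch",    # assumption: table logs metrics per epoch
    predict_with_generate=True,     # assumption: required for Bleu / Gen Len
)
```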

Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log        | 1.0   | 143  | 2.9523          | 6.6676  | 10.7022 |
| No log        | 2.0   | 286  | 2.2556          | 13.1367 | 11.2939 |
| No log        | 3.0   | 429  | 1.7848          | 22.8311 | 10.5799 |
| 2.933         | 4.0   | 572  | 1.4150          | 32.5314 | 10.8738 |
| 2.933         | 5.0   | 715  | 1.1421          | 45.7271 | 10.6075 |
| 2.933         | 6.0   | 858  | 0.9408          | 57.5952 | 10.6489 |
| 1.3478        | 7.0   | 1001 | 0.7880          | 65.9701 | 10.716  |
| 1.3478        | 8.0   | 1144 | 0.6725          | 73.9167 | 10.5937 |
| 1.3478        | 9.0   | 1287 | 0.6007          | 78.7528 | 10.645  |
| 1.3478        | 10.0  | 1430 | 0.5479          | 81.3326 | 10.643  |
| 0.6179        | 11.0  | 1573 | 0.5037          | 83.8208 | 10.6036 |
| 0.6179        | 12.0  | 1716 | 0.4726          | 85.822  | 10.6489 |
| 0.6179        | 13.0  | 1859 | 0.4489          | 87.3239 | 10.6529 |
| 0.3054        | 14.0  | 2002 | 0.4261          | 88.3752 | 10.6844 |
| 0.3054        | 15.0  | 2145 | 0.4220          | 89.6516 | 10.6923 |
| 0.3054        | 16.0  | 2288 | 0.4019          | 90.5766 | 10.6805 |
| 0.3054        | 17.0  | 2431 | 0.3967          | 91.0437 | 10.6982 |
| 0.1578        | 18.0  | 2574 | 0.3903          | 91.0284 | 10.714  |
| 0.1578        | 19.0  | 2717 | 0.3889          | 90.7559 | 10.7554 |
| 0.1578        | 20.0  | 2860 | 0.3846          | 91.3402 | 10.7101 |
| 0.0886        | 21.0  | 3003 | 0.3842          | 91.3279 | 10.7318 |
| 0.0886        | 22.0  | 3146 | 0.3888          | 91.6286 | 10.7199 |
| 0.0886        | 23.0  | 3289 | 0.3829          | 91.3329 | 10.7613 |
| 0.0886        | 24.0  | 3432 | 0.3824          | 91.42   | 10.7337 |
| 0.0536        | 25.0  | 3575 | 0.3849          | 91.5798 | 10.7298 |
| 0.0536        | 26.0  | 3718 | 0.3809          | 91.5241 | 10.7258 |
| 0.0536        | 27.0  | 3861 | 0.3810          | 91.7031 | 10.7199 |
| 0.0392        | 28.0  | 4004 | 0.3807          | 91.6602 | 10.712  |
| 0.0392        | 29.0  | 4147 | 0.3801          | 91.6217 | 10.7199 |
| 0.0392        | 30.0  | 4290 | 0.3798          | 91.6784 | 10.7258 |
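
The Bleu and Gen Len columns follow the usual Transformers translation-example convention: corpus BLEU from sacrebleu over the decoded generations, and the mean generated length in tokens. Below is a hedged sketch of such a compute_metrics function, assuming the standard evaluate/sacrebleu setup rather than the exact script used for this run.

```python
import numpy as np
import evaluate

sacrebleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds, tokenizer):
    """Sketch of the usual seq2seq metric hook (tokenizer passed in for
    self-containment; the stock example captures it from the outer scope)."""
    preds, labels = eval_preds

    # Labels use -100 for padding; restore pad tokens before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    # sacrebleu expects a list of reference lists, one per prediction.
    bleu = sacrebleu.compute(
        predictions=[p.strip() for p in decoded_preds],
        references=[[ref.strip()] for ref in decoded_labels],
    )

    # Gen Len: mean count of non-pad tokens in the raw generations.
    gen_len = np.mean(
        [np.count_nonzero(p != tokenizer.pad_token_id) for p in preds]
    )
    return {"bleu": bleu["score"], "gen_len": gen_len}
```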

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2

Model size: 75.1M parameters (F32 tensors, safetensors format)
