speecht5_finetuned_emirhan_tr

This model is a fine-tuned version of MohsenABG/speecht5_finetuned_emirhan_tr on the common_voice_13_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5274

Model description

More information needed

Intended uses & limitations

More information needed
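
As a starting point, here is a minimal text-to-speech inference sketch using the standard Transformers SpeechT5 API. The hub id is taken from this repository; the Turkish prompt and the zero speaker embedding are illustrative assumptions (a real 512-dim x-vector, e.g. from the Matthijs/cmu-arctic-xvectors dataset, is needed for a natural voice):

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Load the fine-tuned model and the standard SpeechT5 HiFi-GAN vocoder.
processor = SpeechT5Processor.from_pretrained("rahafvii/speecht5_finetuned_emirhan_tr")
model = SpeechT5ForTextToSpeech.from_pretrained("rahafvii/speecht5_finetuned_emirhan_tr")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Merhaba, nasılsınız?", return_tensors="pt")

# Placeholder speaker embedding (assumption): SpeechT5 expects a 512-dim x-vector.
# Replace with a real embedding for usable audio.
speaker_embeddings = torch.zeros(1, 512)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```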

Training and evaluation data

More information needed
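
The card does not state which language split was used; given the "_tr" suffix in the model name, the Turkish subset of Common Voice 13.0 is the likely source. A hedged loading sketch (the dataset id is the official Mozilla release on the Hub, which is gated, so you must accept its terms and be logged in):

```python
from datasets import load_dataset

# Assumption: "_tr" in the model name refers to the Turkish ("tr") split.
# mozilla-foundation/common_voice_13_0 is gated; accept its terms on the Hub first.
dataset = load_dataset("mozilla-foundation/common_voice_13_0", "tr", split="train")
print(dataset)
```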

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a minimal configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 4
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 3000
  • mixed_precision_training: Native AMP
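
A minimal Seq2SeqTrainingArguments sketch reconstructing the list above; the output_dir and the 100-step evaluation cadence (inferred from the results table below) are assumptions not stated in the card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_emirhan_tr",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,   # 4 x 8 = effective train batch of 32
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=3000,
    fp16=True,                       # "Native AMP"
    eval_strategy="steps",           # assumed from the 100-step eval cadence
    eval_steps=100,
)
```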

Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.6208        | 0.3972  | 100  | 0.6033          |
| 0.6239        | 0.7944  | 200  | 0.5985          |
| 0.616         | 1.1917  | 300  | 0.5867          |
| 0.6057        | 1.5889  | 400  | 0.5781          |
| 0.5971        | 1.9861  | 500  | 0.5718          |
| 0.5975        | 2.3833  | 600  | 0.5701          |
| 0.5979        | 2.7805  | 700  | 0.5746          |
| 0.5913        | 3.1778  | 800  | 0.5651          |
| 0.5943        | 3.5750  | 900  | 0.5609          |
| 0.5924        | 3.9722  | 1000 | 0.5610          |
| 0.5879        | 4.3694  | 1100 | 0.5578          |
| 0.5768        | 4.7666  | 1200 | 0.5505          |
| 0.5816        | 5.1639  | 1300 | 0.5511          |
| 0.5779        | 5.5611  | 1400 | 0.5518          |
| 0.5773        | 5.9583  | 1500 | 0.5456          |
| 0.5729        | 6.3555  | 1600 | 0.5479          |
| 0.5656        | 6.7527  | 1700 | 0.5459          |
| 0.5724        | 7.1500  | 1800 | 0.5402          |
| 0.5769        | 7.5472  | 1900 | 0.5366          |
| 0.5622        | 7.9444  | 2000 | 0.5389          |
| 0.5569        | 8.3416  | 2100 | 0.5390          |
| 0.5626        | 8.7388  | 2200 | 0.5358          |
| 0.5602        | 9.1360  | 2300 | 0.5372          |
| 0.5533        | 9.5333  | 2400 | 0.5326          |
| 0.5515        | 9.9305  | 2500 | 0.5325          |
| 0.5495        | 10.3277 | 2600 | 0.5328          |
| 0.5456        | 10.7249 | 2700 | 0.5300          |
| 0.5413        | 11.1221 | 2800 | 0.5312          |
| 0.5468        | 11.5194 | 2900 | 0.5290          |
| 0.5386        | 11.9166 | 3000 | 0.5274          |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.3.0+cu118
  • Datasets 3.0.0
  • Tokenizers 0.19.1