
Whisper Tiny En - speechocean762

This model is a fine-tuned version of openai/whisper-tiny.en on the speechocean762 dataset. It achieves the following results on the evaluation set (best checkpoint, step 280):

  • Loss: 0.6837
  • WER: 28.3422
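
As a minimal usage sketch, the checkpoint can be loaded for transcription with the transformers ASR pipeline. This assumes the model is published under the repository id Pooya-Fallah/whisper-tiny-finetune, and `sample.wav` is a placeholder audio path:

```python
# Hedged sketch: transcribe an audio file with this fine-tuned checkpoint
# via the transformers ASR pipeline. The repo id and the audio path are
# assumptions/placeholders.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Pooya-Fallah/whisper-tiny-finetune",
)

# Decoding local audio files requires ffmpeg to be installed.
print(asr("sample.wav")["text"])
```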

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 128
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 250
  • training_steps: 500
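
For reference, a minimal sketch of how these settings map onto transformers.Seq2SeqTrainingArguments; output_dir is a placeholder, the batch size is assumed to be per-device with no gradient accumulation, and anything not listed above is left at its default:

```python
# Hedged sketch: the hyperparameters above expressed as Trainer arguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-speechocean762",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=128,  # assumed per-device, no accumulation
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=250,
    max_steps=500,
)
```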

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER (%) |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 4.0683        | 0.2778  | 10   | 4.0618          | 38.3139 |
| 3.9216        | 0.5556  | 20   | 3.8951          | 37.4961 |
| 3.7275        | 0.8333  | 30   | 3.6337          | 45.8635 |
| 3.3621        | 1.1111  | 40   | 3.2960          | 36.4580 |
| 2.9818        | 1.3889  | 50   | 2.8749          | 40.7046 |
| 2.5404        | 1.6667  | 60   | 2.3590          | 44.1648 |
| 1.9537        | 1.9444  | 70   | 1.7972          | 49.4495 |
| 1.4184        | 2.2222  | 80   | 1.3603          | 66.3731 |
| 1.1875        | 2.5     | 90   | 1.1660          | 55.5206 |
| 1.1203        | 2.7778  | 100  | 1.0743          | 44.8569 |
| 1.024         | 3.0556  | 110  | 1.0085          | 44.2277 |
| 0.905         | 3.3333  | 120  | 0.9581          | 42.4347 |
| 0.8787        | 3.6111  | 130  | 0.9169          | 40.9877 |
| 0.8677        | 3.8889  | 140  | 0.8844          | 37.2130 |
| 0.7563        | 4.1667  | 150  | 0.8573          | 36.4895 |
| 0.7497        | 4.4444  | 160  | 0.8324          | 35.9862 |
| 0.7283        | 4.7222  | 170  | 0.8097          | 35.2941 |
| 0.7055        | 5.0     | 180  | 0.7907          | 30.6071 |
| 0.6259        | 5.2778  | 190  | 0.7770          | 30.9531 |
| 0.6115        | 5.5556  | 200  | 0.7601          | 30.3555 |
| 0.5998        | 5.8333  | 210  | 0.7457          | 29.8207 |
| 0.5752        | 6.1111  | 220  | 0.7368          | 29.9465 |
| 0.5031        | 6.3889  | 230  | 0.7284          | 29.7892 |
| 0.5079        | 6.6667  | 240  | 0.7140          | 29.0028 |
| 0.4969        | 6.9444  | 250  | 0.7006          | 29.3174 |
| 0.4285        | 7.2222  | 260  | 0.6951          | 32.5889 |
| 0.466         | 7.5     | 270  | 0.6886          | 31.6766 |
| 0.4101        | 7.7778  | 280  | 0.6837          | 28.3422 |
| 0.4021        | 8.0556  | 290  | 0.6755          | 31.4250 |
| 0.359         | 8.3333  | 300  | 0.6763          | 32.5260 |
| 0.3281        | 8.6111  | 310  | 0.6727          | 32.2114 |
| 0.3604        | 8.8889  | 320  | 0.6695          | 36.1120 |
| 0.3085        | 9.1667  | 330  | 0.6698          | 32.1799 |
| 0.3007        | 9.4444  | 340  | 0.6698          | 32.3372 |
| 0.3313        | 9.7222  | 350  | 0.6659          | 35.7974 |
| 0.2862        | 10.0    | 360  | 0.6638          | 32.0226 |
| 0.278         | 10.2778 | 370  | 0.6639          | 31.9912 |
| 0.2645        | 10.5556 | 380  | 0.6639          | 32.0856 |
| 0.2708        | 10.8333 | 390  | 0.6649          | 32.0541 |
| 0.257         | 11.1111 | 400  | 0.6620          | 32.1799 |
| 0.2455        | 11.3889 | 410  | 0.6621          | 31.8025 |
| 0.2506        | 11.6667 | 420  | 0.6636          | 38.9745 |
| 0.2545        | 11.9444 | 430  | 0.6635          | 38.9116 |
| 0.2266        | 12.2222 | 440  | 0.6644          | 31.8339 |
| 0.2072        | 12.5    | 450  | 0.6652          | 32.1799 |
| 0.2382        | 12.7778 | 460  | 0.6661          | 31.9597 |
| 0.219         | 13.0556 | 470  | 0.6653          | 38.7858 |
| 0.2256        | 13.3333 | 480  | 0.6649          | 38.9431 |
| 0.2178        | 13.6111 | 490  | 0.6652          | 38.9431 |
| 0.2229        | 13.8889 | 500  | 0.6654          | 38.8487 |
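
The lowest WER (28.3422, at step 280) matches the best result reported at the top of this card, even though validation loss continued to drop slightly through step 400. As a sketch, WER figures like these can be computed with the Hugging Face evaluate library; the prediction/reference strings below are placeholders, not model outputs:

```python
# Hedged sketch: word error rate with the evaluate library.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["the quick brown fox", "hello world"]  # placeholder outputs
references = ["the quick brown fox jumps", "hello world"]  # placeholder labels

# compute() returns a fraction; the table above reports WER as a percentage.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```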

Framework versions

  • Transformers 4.41.1
  • PyTorch 2.3.0+cu121
  • Datasets 2.19.2.dev0
  • Tokenizers 0.19.1
