whisper-small-sw-ndizi-158_2

This model is a PEFT adapter fine-tuned from pplantinga/whisper-small-sw on an unknown dataset (a usage sketch follows the results below). It achieves the following results on the evaluation set:

  • Loss: 1.7989
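
Because the repository ships a PEFT adapter rather than a full set of model weights, inference requires loading the base checkpoint and applying the adapter on top. The snippet below is a minimal sketch of that pattern, assuming the adapter is published as smutuvi/whisper-small-sw-ndizi-158_2; the audio file path and the Swahili language/task settings are placeholders to adjust.

```python
# Minimal inference sketch: load the base Whisper checkpoint, apply this PEFT
# adapter on top, and transcribe a local audio file (placeholder path).
import librosa
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "pplantinga/whisper-small-sw"
adapter_id = "smutuvi/whisper-small-sw-ndizi-158_2"

processor = WhisperProcessor.from_pretrained(base_id)
model = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Whisper expects 16 kHz mono audio; "sample.wav" is a placeholder file.
audio, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

# Language/task prompts are an assumption (the card does not state them);
# the "sw" in the model name suggests Swahili transcription.
forced_ids = processor.get_decoder_prompt_ids(language="swahili", task="transcribe")
with torch.no_grad():
    generated = model.generate(
        input_features=inputs.input_features, forced_decoder_ids=forced_ids
    )
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```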

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough code equivalent is sketched after the list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 100
  • mixed_precision_training: Native AMP
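
For reference, the listed hyperparameters correspond roughly to the Seq2SeqTrainingArguments sketch below. Anything not stated above (output directory, evaluation cadence, PEFT-specific flags) is an assumption, and the dataset, data collator, and LoRA configuration are not documented in this card.

```python
# Rough reconstruction of the training arguments from the hyperparameters
# listed above; values not listed in the card are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-sw-ndizi-158_2",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=100,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumed; validation loss is reported once per epoch
    remove_unused_columns=False,  # commonly required when training PEFT adapters
    label_names=["labels"],       # commonly required when training PEFT adapters
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the optimizer defaults,
# so no explicit optimizer arguments are needed here.
```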

Training results

| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| No log | 1.0 | 18 | 2.8008 |
| 2.5386 | 2.0 | 36 | 2.7557 |
| 2.4516 | 3.0 | 54 | 2.6801 |
| 2.4516 | 4.0 | 72 | 2.5842 |
| 2.363 | 5.0 | 90 | 2.5062 |
| 2.2145 | 6.0 | 108 | 2.4334 |
| 2.1358 | 7.0 | 126 | 2.3611 |
| 2.1358 | 8.0 | 144 | 2.2867 |
| 2.059 | 9.0 | 162 | 2.2158 |
| 1.8708 | 10.0 | 180 | 2.1452 |
| 1.8708 | 11.0 | 198 | 2.0739 |
| 1.8568 | 12.0 | 216 | 2.0307 |
| 1.8349 | 13.0 | 234 | 2.0092 |
| 1.7117 | 14.0 | 252 | 1.9914 |
| 1.7117 | 15.0 | 270 | 1.9788 |
| 1.6683 | 16.0 | 288 | 1.9669 |
| 1.6639 | 17.0 | 306 | 1.9548 |
| 1.6639 | 18.0 | 324 | 1.9432 |
| 1.6765 | 19.0 | 342 | 1.9338 |
| 1.6148 | 20.0 | 360 | 1.9265 |
| 1.5875 | 21.0 | 378 | 1.9149 |
| 1.5875 | 22.0 | 396 | 1.9058 |
| 1.5664 | 23.0 | 414 | 1.9013 |
| 1.6485 | 24.0 | 432 | 1.8953 |
| 1.535 | 25.0 | 450 | 1.8917 |
| 1.535 | 26.0 | 468 | 1.8849 |
| 1.5579 | 27.0 | 486 | 1.8783 |
| 1.5141 | 28.0 | 504 | 1.8746 |
| 1.5141 | 29.0 | 522 | 1.8707 |
| 1.5943 | 30.0 | 540 | 1.8663 |
| 1.4296 | 31.0 | 558 | 1.8630 |
| 1.4895 | 32.0 | 576 | 1.8595 |
| 1.4895 | 33.0 | 594 | 1.8536 |
| 1.5366 | 34.0 | 612 | 1.8525 |
| 1.4573 | 35.0 | 630 | 1.8488 |
| 1.4573 | 36.0 | 648 | 1.8489 |
| 1.4729 | 37.0 | 666 | 1.8441 |
| 1.4758 | 38.0 | 684 | 1.8384 |
| 1.4386 | 39.0 | 702 | 1.8391 |
| 1.4386 | 40.0 | 720 | 1.8356 |
| 1.3773 | 41.0 | 738 | 1.8343 |
| 1.4994 | 42.0 | 756 | 1.8328 |
| 1.4994 | 43.0 | 774 | 1.8331 |
| 1.4342 | 44.0 | 792 | 1.8287 |
| 1.4047 | 45.0 | 810 | 1.8283 |
| 1.3758 | 46.0 | 828 | 1.8261 |
| 1.3758 | 47.0 | 846 | 1.8234 |
| 1.3856 | 48.0 | 864 | 1.8201 |
| 1.3815 | 49.0 | 882 | 1.8210 |
| 1.4364 | 50.0 | 900 | 1.8197 |
| 1.4364 | 51.0 | 918 | 1.8188 |
| 1.4035 | 52.0 | 936 | 1.8183 |
| 1.3368 | 53.0 | 954 | 1.8176 |
| 1.3368 | 54.0 | 972 | 1.8155 |
| 1.424 | 55.0 | 990 | 1.8158 |
| 1.3782 | 56.0 | 1008 | 1.8159 |
| 1.3057 | 57.0 | 1026 | 1.8122 |
| 1.3057 | 58.0 | 1044 | 1.8136 |
| 1.3615 | 59.0 | 1062 | 1.8142 |
| 1.4013 | 60.0 | 1080 | 1.8091 |
| 1.4013 | 61.0 | 1098 | 1.8099 |
| 1.2894 | 62.0 | 1116 | 1.8102 |
| 1.3972 | 63.0 | 1134 | 1.8089 |
| 1.3564 | 64.0 | 1152 | 1.8096 |
| 1.3564 | 65.0 | 1170 | 1.8075 |
| 1.2808 | 66.0 | 1188 | 1.8078 |
| 1.3871 | 67.0 | 1206 | 1.8061 |
| 1.3871 | 68.0 | 1224 | 1.8060 |
| 1.267 | 69.0 | 1242 | 1.8070 |
| 1.2978 | 70.0 | 1260 | 1.8046 |
| 1.3657 | 71.0 | 1278 | 1.8062 |
| 1.3657 | 72.0 | 1296 | 1.8061 |
| 1.342 | 73.0 | 1314 | 1.8066 |
| 1.2504 | 74.0 | 1332 | 1.8063 |
| 1.3003 | 75.0 | 1350 | 1.8031 |
| 1.3003 | 76.0 | 1368 | 1.8053 |
| 1.2927 | 77.0 | 1386 | 1.8057 |
| 1.2653 | 78.0 | 1404 | 1.8032 |
| 1.2653 | 79.0 | 1422 | 1.8031 |
| 1.3574 | 80.0 | 1440 | 1.8036 |
| 1.2253 | 81.0 | 1458 | 1.8061 |
| 1.3348 | 82.0 | 1476 | 1.8036 |
| 1.3348 | 83.0 | 1494 | 1.8034 |
| 1.2846 | 84.0 | 1512 | 1.8033 |
| 1.2671 | 85.0 | 1530 | 1.8032 |
| 1.2671 | 86.0 | 1548 | 1.8038 |
| 1.3102 | 87.0 | 1566 | 1.8031 |
| 1.2603 | 88.0 | 1584 | 1.8011 |
| 1.286 | 89.0 | 1602 | 1.8029 |
| 1.286 | 90.0 | 1620 | 1.8026 |
| 1.2761 | 91.0 | 1638 | 1.8029 |
| 1.2416 | 92.0 | 1656 | 1.8014 |
| 1.2416 | 93.0 | 1674 | 1.8035 |
| 1.2798 | 94.0 | 1692 | 1.8008 |
| 1.3043 | 95.0 | 1710 | 1.8009 |
| 1.2969 | 96.0 | 1728 | 1.8004 |
| 1.2969 | 97.0 | 1746 | 1.8014 |
| 1.3087 | 98.0 | 1764 | 1.7992 |
| 1.2364 | 99.0 | 1782 | 1.7997 |
| 1.2748 | 100.0 | 1800 | 1.7989 |

Framework versions

  • PEFT 0.7.2.dev0
  • Transformers 4.37.0.dev0
  • Pytorch 2.0.0
  • Datasets 2.16.1
  • Tokenizers 0.15.0
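
The PEFT and Transformers builds above are development snapshots rather than tagged releases, so it is worth verifying the installed versions before loading the adapter. A minimal check, assuming nearby stable releases behave equivalently:

```python
# Print the installed versions of the libraries listed above. The .dev0 builds
# used for training are not on PyPI; nearby stable releases are assumed to work.
import datasets
import peft
import tokenizers
import torch
import transformers

for name, module in [
    ("PEFT", peft),
    ("Transformers", transformers),
    ("PyTorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```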