
finetune_wav2vec2_960h_six_second

This model is a fine-tuned version of facebook/wav2vec2-base-960h on an unspecified dataset. It achieves the following results on the evaluation set (a minimal inference example follows the list):

  • Loss: 0.8664
  • WER: 34.7919
  • CER: 18.1492
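
The snippet below is a minimal inference sketch, assuming this checkpoint keeps the standard Wav2Vec2 CTC setup inherited from facebook/wav2vec2-base-960h (16 kHz mono input, greedy CTC decoding). The repository id is taken from this page; the audio file path is a placeholder.

```python
# Minimal inference sketch: assumes a standard Wav2Vec2 CTC checkpoint and 16 kHz mono audio.
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "ImanNalia/finetune_wav2vec2_960h_six_second"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# "sample.wav" is a placeholder path; audio is resampled to 16 kHz mono on load.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```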

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative mapping onto TrainingArguments follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2000
  • training_steps: 10000
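
For readers who want to set up a comparable run, the block below is a hedged sketch of how these values map onto transformers.TrainingArguments. The use of the Hugging Face Trainer and the output directory are assumptions for illustration; the original training script is not included in this card.

```python
# Hypothetical mapping of the reported hyperparameters onto TrainingArguments
# (not the card author's actual training script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetune_wav2vec2_960h_six_second",  # illustrative output directory
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    max_steps=10000,
)
```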

Training results

| Training Loss | Epoch    | Step  | Validation Loss | WER     | CER     |
|:-------------:|:--------:|:-----:|:---------------:|:-------:|:-------:|
| 0.9855        | 18.5185  | 1000  | 0.8664          | 34.7919 | 18.1492 |
| 0.5055        | 37.0370  | 2000  | 0.9980          | 34.5251 | 18.1828 |
| 0.3066        | 55.5556  | 3000  | 1.0063          | 33.3511 | 17.2474 |
| 0.2186        | 74.0741  | 4000  | 1.1086          | 32.3372 | 16.9617 |
| 0.1628        | 92.5926  | 5000  | 1.1707          | 31.4835 | 16.5416 |
| 0.1362        | 111.1111 | 6000  | 1.1494          | 31.2700 | 16.4351 |
| 0.1069        | 129.6296 | 7000  | 1.2482          | 31.8837 | 16.4295 |
| 0.1004        | 148.1481 | 8000  | 1.3189          | 31.5635 | 16.9393 |
| 0.0851        | 166.6667 | 9000  | 1.3079          | 30.8965 | 16.3343 |
| 0.0794        | 185.1852 | 10000 | 1.3297          | 30.8698 | 16.1214 |
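
WER and CER above are word- and character-error rates. The snippet below is a small illustration of how such numbers are typically computed with the evaluate library; the example strings are made up, and the exact evaluation code behind this card is not published.

```python
# Illustrative WER/CER computation with the `evaluate` library (requires `jiwer`);
# not the card's original evaluation script.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["the cat sat on the mat"]    # ground-truth transcripts (made-up example)
predictions = ["the cat sit on the mat"]   # model outputs (made-up example)

# Both metrics return a fraction; multiply by 100 to express them as percentages
# like the values reported in the table above.
print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```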

Framework versions

  • Transformers 4.40.2
  • PyTorch 1.12.1+cu116
  • Datasets 2.19.1
  • Tokenizers 0.19.1
Model size

  • 94.4M parameters (F32 tensors, Safetensors format)
