
Whisper Large V3 pl preprocessed - Chee Li

This model is a fine-tuned version of openai/whisper-large-v3 on the Google Fleurs dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1369
  • WER: 332.9440
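
A minimal transcription sketch using the Transformers pipeline API. The checkpoint ID is this repository's; the audio path is a placeholder (any 16 kHz Polish recording readable by ffmpeg should work):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from this repository.
asr = pipeline(
    "automatic-speech-recognition",
    model="CheeLi03/whisper-large-v3-pl-preprocessed",
)

# "sample.wav" is a placeholder path to a local audio file.
result = asr("sample.wav")
print(result["text"])
```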

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
  • mixed_precision_training: Native AMP
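
As a rough sketch, these settings map onto Seq2SeqTrainingArguments as shown below. The output directory and the evaluation/save cadence are assumptions, not taken from this card; the optimizer line above matches the Trainer's default AdamW configuration:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir and the
# eval/save cadence are assumptions about the training script.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-pl-preprocessed",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # Native AMP mixed-precision training
)
```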

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER      |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.006         | 5.0251  | 1000 | 0.1174          | 340.0275 |
| 0.0002        | 10.0503 | 2000 | 0.1296          | 200.6596 |
| 0.0001        | 15.0754 | 3000 | 0.1343          | 331.7760 |
| 0.0001        | 20.1005 | 4000 | 0.1369          | 332.9440 |
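
Note that WER can exceed 100 because the metric counts substitutions, deletions, and insertions against the number of reference words. For reference, it can be computed with the evaluate library; the sentence pair below is purely illustrative:

```python
import evaluate

# Word Error Rate: (substitutions + deletions + insertions) / reference words,
# so heavy insertion errors can push the score above 100%.
wer_metric = evaluate.load("wer")

# Hypothetical reference/prediction pair for illustration only.
wer = wer_metric.compute(
    references=["dzień dobry wszystkim"],
    predictions=["dzień dobry dobry wszystkim wszystkim"],
)
print(f"WER: {100 * wer:.2f}%")
```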

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.3.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1
