---
tags:
  - generated_from_trainer
datasets:
  - common_voice_1_0
metrics:
  - wer
model-index:
  - name: fineturning-without-pretraining-2
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: common_voice_1_0
          type: common_voice_1_0
          config: en
          split: validation
          args: en
        metrics:
          - name: Wer
            type: wer
            value: 0.9999353420406052
---

# fineturning-without-pretraining-2

This model is a fine-tuned version of an unspecified base model (the base model name is missing from the original card) on the common_voice_1_0 dataset. It achieves the following results on the evaluation set:

- Loss: 779.5451
- Wer: 0.9999
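
WER (word error rate) is the fraction of words substituted, inserted, or deleted relative to the reference transcript, so a value this close to 1.0 indicates the model transcribes essentially no words correctly. Below is a minimal sketch of how WER is typically computed with the `evaluate` library; the strings are hypothetical, and the card's actual evaluation script is not included:

```python
import evaluate

# Load the WER metric; lower is better, 0.0 is a perfect transcript.
wer_metric = evaluate.load("wer")

predictions = ["hello wrld"]   # hypothetical model output
references = ["hello world"]   # hypothetical reference transcript

# One substituted word out of two reference words -> WER = 0.5
print(wer_metric.compute(predictions=predictions, references=references))
```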

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 35
- mixed_precision_training: Native AMP
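
For reference, here is a minimal sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; the `output_dir` and the surrounding model/`Trainer` wiring are assumptions, not taken from the original training script:

```python
from transformers import TrainingArguments

# Sketch only: arguments mirroring the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="fineturning-without-pretraining-2",  # assumed; not stated in the card
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 16 x 2 = total_train_batch_size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=35,
    fp16=True,  # "Native AMP" mixed precision
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults
# (adam_beta1, adam_beta2, adam_epsilon), so they need no explicit setting.
```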

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|--------------:|------:|-----:|----------------:|-------:|
| 1829.2385     | 4.29  | 500  | 781.0485        | 0.9999 |
| 1459.4001     | 8.58  | 1000 | 777.1782        | 0.9999 |
| 1454.826      | 12.88 | 1500 | 777.3484        | 0.9999 |
| 1448.8867     | 17.17 | 2000 | 788.0052        | 0.9999 |
| 1445.467      | 21.46 | 2500 | 779.9430        | 0.9999 |
| 1438.5691     | 25.75 | 3000 | 786.7927        | 0.9999 |
| 1445.318      | 30.04 | 3500 | 789.1374        | 0.9999 |
| 1442.6181     | 34.33 | 4000 | 779.5451        | 0.9999 |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
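
To reproduce this environment, the pinned versions above can be installed with pip (package names: `transformers`, `torch`, `datasets`, `tokenizers`):

```bash
pip install transformers==4.39.3 torch==2.1.2 datasets==2.18.0 tokenizers==0.15.2
```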