---
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-2.0-wol-v1
  results: []
---

# w2v-bert-2.0-wol-v1

This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1008
- WER: 0.0792

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP

A hedged reconstruction of these settings as `TrainingArguments` appears after the framework versions below.

### Training results

| Training Loss | Epoch  | Step | Validation Loss | WER    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.6351        | 0.6857 | 300  | 0.2974          | 0.3040 |
| 0.4591        | 1.3714 | 600  | 0.2215          | 0.2307 |
| 0.3833        | 2.0571 | 900  | 0.1950          | 0.1900 |
| 0.329         | 2.7429 | 1200 | 0.1637          | 0.1614 |
| 0.2797        | 3.4286 | 1500 | 0.1515          | 0.1479 |
| 0.2558        | 4.1143 | 1800 | 0.1435          | 0.1337 |
| 0.2166        | 4.8    | 2100 | 0.1296          | 0.1295 |
| 0.1876        | 5.4857 | 2400 | 0.1178          | 0.1129 |
| 0.1695        | 6.1714 | 2700 | 0.1107          | 0.1005 |
| 0.137         | 6.8571 | 3000 | 0.1064          | 0.0933 |
| 0.1078        | 7.5429 | 3300 | 0.1049          | 0.0929 |
| 0.0904        | 8.2286 | 3600 | 0.1002          | 0.0871 |
| 0.0685        | 8.9143 | 3900 | 0.0973          | 0.0810 |
| 0.049         | 9.6    | 4200 | 0.1008          | 0.0792 |

### Framework versions

- Transformers 4.41.2
- PyTorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
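
### Training arguments sketch

The hyperparameters listed above map one-to-one onto 🤗 Transformers `TrainingArguments`. The snippet below is a hedged reconstruction, not the actual training script: the `output_dir` and the evaluation/save cadence (every 300 steps, inferred from the results table) are assumptions.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above.
# output_dir and the eval/save cadence are assumptions, not taken from the card.
training_args = TrainingArguments(
    output_dir="w2v-bert-2.0-wol-v1",
    learning_rate=5e-5,
    per_device_train_batch_size=16,  # train_batch_size: 16
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    gradient_accumulation_steps=2,   # total_train_batch_size: 16 * 2 = 32
    seed=42,
    adam_beta1=0.9,                  # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,                       # "Native AMP" mixed precision
    eval_strategy="steps",           # assumed: evaluation every 300 steps per the table
    eval_steps=300,
    save_steps=300,
)
```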
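
## How to use

Since w2v-bert-2.0 fine-tuned with a CTC head loads through the standard `Wav2Vec2BertForCTC`/`AutoProcessor` classes, transcription can be sketched as below. This is a minimal sketch under two assumptions: the Hub id `your-username/w2v-bert-2.0-wol-v1` is a placeholder for wherever this checkpoint is actually hosted, and the silent placeholder waveform stands in for real audio resampled to 16 kHz.

```python
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2BertForCTC

# Placeholder Hub id; point this at the actual location of the checkpoint.
model_id = "your-username/w2v-bert-2.0-wol-v1"

processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2BertForCTC.from_pretrained(model_id)
model.eval()

# Placeholder audio: one second of silence at 16 kHz. In practice, load a real
# waveform (e.g. with torchaudio or librosa) and resample it to 16 kHz.
speech = np.zeros(16_000, dtype=np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: pick the best token per frame, then let the tokenizer
# collapse repeated tokens and blanks into the final transcription.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```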