---
base_model: facebook/wav2vec2-base
datasets:
- common_voice_13_0
license: apache-2.0
metrics:
- wer
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-vi-colab
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: common_voice_13_0
type: common_voice_13_0
config: vi
split: test[:50%]
args: vi
metrics:
- type: wer
value: 1.0
name: Wer
---
# wav2vec2-large-xls-r-vi-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the Vietnamese (`vi`) subset of the Common Voice 13.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4884
- Wer: 1.0
- Cer: 1.0
## Model description
A [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) checkpoint fine-tuned for Vietnamese automatic speech recognition on Common Voice 13.0 using the Hugging Face `Trainer`.
## Intended uses & limitations
Intended for Vietnamese speech-to-text experiments. Note that the evaluation WER and CER are both 1.0: this training run did not learn to transcribe speech, so the checkpoint should be treated as a training artifact rather than a usable ASR model. A minimal inference sketch is shown below.
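A hedged inference sketch using the Transformers ASR pipeline; the repo id below is a placeholder for wherever this checkpoint is published, and `sample.wav` stands for any local audio file (16 kHz mono suits wav2vec2 best):
```python
from transformers import pipeline

# The repo id is a placeholder; point it at the actual checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-large-xls-r-vi-colab",
)

# Transcribe a local audio file.
print(asr("sample.wav")["text"])
```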
## Training and evaluation data
The model was trained on the Vietnamese (`vi`) configuration of Common Voice 13.0 and evaluated on the first half of its test split (`test[:50%]`), as recorded in the model index above. A sketch of loading that split is shown below.
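A hedged loading sketch with the `datasets` library; the frontmatter only names `common_voice_13_0`, so the `mozilla-foundation/common_voice_13_0` Hub id is an assumption (the dataset is gated and requires an authenticated Hugging Face session):
```python
from datasets import load_dataset

# Load the evaluation split named in the model index.
eval_ds = load_dataset(
    "mozilla-foundation/common_voice_13_0",
    "vi",
    split="test[:50%]",
)
print(eval_ds)
```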
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.1
- num_epochs: 80
- mixed_precision_training: Native AMP
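A hedged reconstruction of this configuration as Transformers `TrainingArguments` (argument names per Transformers 4.41). `output_dir` is a placeholder, and the warmup value is read as a ratio, since 0.1 steps is not meaningful:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-vi-colab",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 32 x 2 = effective batch size 64
    lr_scheduler_type="linear",
    warmup_ratio=0.1,  # the card lists "warmup_steps: 0.1"; interpreted here as a ratio
    num_train_epochs=80,
    fp16=True,  # "Native AMP" mixed-precision training
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```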
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:----:|:---------------:|:---:|:---:|
| 9.4752 | 7.1111 | 160 | 4.4992 | 1.0 | 1.0 |
| 4.2035 | 14.2222 | 320 | 3.9228 | 1.0 | 1.0 |
| 3.7611 | 21.3333 | 480 | 3.6584 | 1.0 | 1.0 |
| 3.5825 | 28.4444 | 640 | 3.5584 | 1.0 | 1.0 |
| 3.5044 | 35.5556 | 800 | 3.5285 | 1.0 | 1.0 |
| 3.4669 | 42.6667 | 960 | 3.5226 | 1.0 | 1.0 |
| 3.4382 | 49.7778 | 1120 | 3.5093 | 1.0 | 1.0 |
| 3.4183 | 56.8889 | 1280 | 3.4942 | 1.0 | 1.0 |
| 3.4002 | 64.0 | 1440 | 3.4957 | 1.0 | 1.0 |
| 3.3871 | 71.1111 | 1600 | 3.4896 | 1.0 | 1.0 |
| 3.382 | 78.2222 | 1760 | 3.4884 | 1.0 | 1.0 |
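WER and CER stay pinned at 1.0 across every checkpoint, which is what these metrics report when the hypotheses share no words or characters with the references (for example, empty output). A small illustration with the `evaluate` library; the Vietnamese reference string is made up:
```python
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

references = ["xin chào thế giới"]
predictions = [""]  # empty hypothesis: every word/character counts as a deletion

print(wer.compute(predictions=predictions, references=references))  # 1.0
print(cer.compute(predictions=predictions, references=references))  # 1.0
```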
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1