Thienpkae committed
Commit c0d6f77
Parent: ee6cc66

End of training

Files changed (1)
  1. README.md +28 -23
README.md CHANGED
@@ -1,18 +1,18 @@
  ---
+ license: apache-2.0
  base_model: facebook/wav2vec2-base
+ tags:
+ - generated_from_trainer
  datasets:
  - common_voice_13_0
- license: apache-2.0
  metrics:
  - wer
- tags:
- - generated_from_trainer
  model-index:
  - name: wav2vec2-large-xls-r-vi-colab
    results:
    - task:
-       type: automatic-speech-recognition
        name: Automatic Speech Recognition
+       type: automatic-speech-recognition
      dataset:
        name: common_voice_13_0
        type: common_voice_13_0
@@ -20,9 +20,9 @@ model-index:
        split: test[:50%]
        args: vi
      metrics:
-     - type: wer
-       value: 0.9155054191550542
-       name: Wer
+     - name: Wer
+       type: wer
+       value: 1.0
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,9 +32,9 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice_13_0 dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.0995
- - Wer: 0.9155
- - Cer: 0.4345
+ - Loss: 3.4884
+ - Wer: 1.0
+ - Cer: 1.0

  ## Model description

@@ -53,28 +53,33 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 3e-05
- - train_batch_size: 16
+ - learning_rate: 1e-05
+ - train_batch_size: 32
  - eval_batch_size: 8
  - seed: 42
  - gradient_accumulation_steps: 2
- - total_train_batch_size: 32
+ - total_train_batch_size: 64
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 400
- - num_epochs: 30
+ - lr_scheduler_warmup_steps: 0.1
+ - num_epochs: 80
  - mixed_precision_training: Native AMP

  ### Training results

- | Training Loss | Epoch   | Step | Validation Loss | Wer    | Cer    |
- |:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
- | 14.8797       | 4.4444  | 200  | 4.6129          | 1.0    | 1.0    |
- | 3.9436        | 8.8889  | 400  | 3.5521          | 1.0    | 1.0    |
- | 3.4845        | 13.3333 | 600  | 3.4997          | 1.0    | 1.0    |
- | 3.1358        | 17.7778 | 800  | 2.7899          | 1.0011 | 0.7023 |
- | 2.0727        | 22.2222 | 1000 | 2.2606          | 0.9600 | 0.4680 |
- | 1.5218        | 26.6667 | 1200 | 2.0995          | 0.9155 | 0.4345 |
+ | Training Loss | Epoch   | Step | Validation Loss | Wer | Cer |
+ |:-------------:|:-------:|:----:|:---------------:|:---:|:---:|
+ | 9.4752        | 7.1111  | 160  | 4.4992          | 1.0 | 1.0 |
+ | 4.2035        | 14.2222 | 320  | 3.9228          | 1.0 | 1.0 |
+ | 3.7611        | 21.3333 | 480  | 3.6584          | 1.0 | 1.0 |
+ | 3.5825        | 28.4444 | 640  | 3.5584          | 1.0 | 1.0 |
+ | 3.5044        | 35.5556 | 800  | 3.5285          | 1.0 | 1.0 |
+ | 3.4669        | 42.6667 | 960  | 3.5226          | 1.0 | 1.0 |
+ | 3.4382        | 49.7778 | 1120 | 3.5093          | 1.0 | 1.0 |
+ | 3.4183        | 56.8889 | 1280 | 3.4942          | 1.0 | 1.0 |
+ | 3.4002        | 64.0    | 1440 | 3.4957          | 1.0 | 1.0 |
+ | 3.3871        | 71.1111 | 1600 | 3.4896          | 1.0 | 1.0 |
+ | 3.382         | 78.2222 | 1760 | 3.4884          | 1.0 | 1.0 |


  ### Framework versions
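
The updated hyperparameter list in this diff maps fairly directly onto `transformers.TrainingArguments`. The sketch below is a hedged reconstruction for illustration only: the numeric values come straight from the card, while `output_dir`, the `fp16` flag (for "Native AMP"), and reading the card's `lr_scheduler_warmup_steps: 0.1` as a warmup ratio are assumptions.

```python
# Hedged reconstruction of the card's new hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-vi-colab",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 32 * 2 = 64 effective train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,               # card lists "lr_scheduler_warmup_steps: 0.1"; read as a ratio (assumption)
    num_train_epochs=80,
    fp16=True,                      # mixed_precision_training: Native AMP
)
```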
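
The Wer and Cer figures in the card are reported on the first half of the Common Voice 13.0 Vietnamese test split. Below is a minimal sketch of how they could be recomputed with `transformers`, `datasets`, and `evaluate`; the Hub repo id `Thienpkae/wav2vec2-large-xls-r-vi-colab` is an assumption, the dataset is gated and requires accepting its terms on the Hub, and because the Trainer's text normalization is not documented in the card, the resulting scores may not match it exactly.

```python
# Hedged sketch: recompute WER/CER on Common Voice 13.0 (vi), split test[:50%].
import torch
import evaluate
from datasets import Audio, load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Thienpkae/wav2vec2-large-xls-r-vi-colab"  # assumed Hub repo for this commit
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

# Same split the card reports on; requires accepting the dataset's terms on the Hub.
ds = load_dataset("mozilla-foundation/common_voice_13_0", "vi", split="test[:50%]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # wav2vec2 expects 16 kHz input

wer, cer = evaluate.load("wer"), evaluate.load("cer")
predictions, references = [], []
for sample in ds:
    inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    predictions.append(processor.batch_decode(pred_ids)[0])
    references.append(sample["sentence"])

print("WER:", wer.compute(predictions=predictions, references=references))
print("CER:", cer.compute(predictions=predictions, references=references))
```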