RitchieP committed
Commit
79a1507
1 Parent(s): e975c6b

End of training

Files changed (1)
  1. README.md +30 -3
README.md CHANGED
@@ -1,13 +1,28 @@
 ---
 license: apache-2.0
+base_model: openai/whisper-small
 tags:
 - generated_from_trainer
-base_model: openai/whisper-small
 datasets:
 - verba_lex_voice
+metrics:
+- wer
 model-index:
 - name: verbalex-zh
-  results: []
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: verba_lex_voice
+      type: verba_lex_voice
+      config: zh
+      split: test
+      args: zh
+    metrics:
+    - name: Wer
+      type: wer
+      value: 4.670558798999166
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,6 +31,9 @@ should probably proofread and complete it, then remove this comment. -->
 # verbalex-zh
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the verba_lex_voice dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.1147
+- Wer: 4.6706
 
 ## Model description
 
@@ -41,9 +59,18 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 1000
+- training_steps: 3000
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch   | Step | Validation Loss | Wer    |
+|:-------------:|:-------:|:----:|:---------------:|:------:|
+| 0.0025        | 5.0505  | 1000 | 0.1035          | 8.5071 |
+| 0.0002        | 10.1010 | 2000 | 0.1130          | 4.7540 |
+| 0.0002        | 15.1515 | 3000 | 0.1147          | 4.6706 |
+
+
 ### Framework versions
 
 - Transformers 4.40.2
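The last hunk above bumps `training_steps` from 1000 to 3000. A minimal sketch of how the hyperparameters visible in this diff map onto `transformers` `Seq2SeqTrainingArguments` follows; values not shown in the hunk (output directory, learning rate, batch sizes) are placeholders, not the author's actual settings, and the 1000-step evaluation cadence is only inferred from the results table.

```python
# Sketch of the training configuration implied by the hyperparameters in the diff.
# Placeholders are marked; only the values visible in this commit are taken from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./verbalex-zh",   # placeholder
    learning_rate=1e-5,           # placeholder; not visible in this diff
    lr_scheduler_type="linear",   # from the card
    warmup_steps=500,             # from the card
    max_steps=3000,               # training_steps after this commit
    fp16=True,                    # "Native AMP" mixed precision
    adam_beta1=0.9,               # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,            # epsilon=1e-08
    evaluation_strategy="steps",
    eval_steps=1000,              # inferred from the 1000/2000/3000-step rows in the results table
)
```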
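The `Wer` values in the card (4.6706 on the test split, 4.670558798999166 in the model-index) read as percentages, i.e. 100 times the raw word error rate, which is the convention in the standard Whisper fine-tuning recipe. A sketch of that computation with the `evaluate` library, using hypothetical transcripts rather than the actual evaluation data:

```python
# WER sketch, assuming the percentage convention (100 * raw WER) used by the
# common Whisper fine-tuning examples. The strings below are illustrative only.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["hello world from the model"]   # hypothetical model transcriptions
references = ["hello world from the model"]    # hypothetical ground-truth transcripts

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")   # the card reports 4.6706 on the verba_lex_voice test split
```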
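For readers who want to try the updated checkpoint, a minimal inference sketch with the `transformers` ASR pipeline is shown below. The repository ID `RitchieP/verbalex-zh` is an assumption pieced together from the committer name and the `verbalex-zh` model name in the card; substitute the actual repository ID, and supply your own audio file.

```python
# Minimal inference sketch. "RitchieP/verbalex-zh" is a hypothetical repo ID;
# replace it with the real one before running.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="RitchieP/verbalex-zh",  # hypothetical repository ID
)

# Whisper checkpoints are multilingual; the card's dataset config/args are "zh",
# so force Chinese transcription instead of relying on language auto-detection.
result = asr(
    "sample.wav",  # path to your own audio file
    generate_kwargs={"language": "chinese", "task": "transcribe"},
)
print(result["text"])
```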