TKU410410103 committed
Commit 439ddda
1 Parent(s): 7cb282c

Update README.md

Files changed (1):
1. README.md +13 -12
README.md CHANGED
@@ -107,18 +107,7 @@ The following hyperparameters were used during training:
 - num_train_epochs: 10
 - lr_scheduler_type: linear
 
-### Test results
-The final model was evaluated as follows:
-
-On reazonspeech(tiny):
-- WER: 40.519700%
-- CER: 23.220979%
-
-On common_voice_11_0:
-- WER: 22.705487%
-- CER: 9.399390%
-
-### How to use the model
+### How to evaluate the model
 
 ```python
 from transformers import HubertForCTC, Wav2Vec2Processor
@@ -201,6 +190,18 @@ cer_result = cer.compute(predictions=result["pred_strings"], references=result["
 print("WER: {:2f}%".format(100 * wer_result))
 print("CER: {:2f}%".format(100 * cer_result))
 ```
+
+### Test results
+The final model was evaluated as follows:
+
+On reazonspeech(tiny):
+- WER: 40.519700%
+- CER: 23.220979%
+
+On common_voice_11_0:
+- WER: 22.705487%
+- CER: 9.399390%
+
 ### Framework versions
 
 - Transformers 4.39.1
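The WER and CER figures in the moved "Test results" section are computed with the `evaluate` library's metrics, as the diff's evaluation snippet shows. As a rough illustration of what those metrics measure, here is a minimal pure-Python sketch (not the actual `evaluate` implementation): both rates are a Levenshtein edit distance divided by the reference length, over words for WER and over characters for CER. The function names and the sample sentences below are illustrative assumptions, not part of the model card.

```python
def edit_distance(ref, hyp):
    # Classic Levenshtein distance via dynamic programming,
    # using a single rolling row of the DP table.
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                          # deletion
                dp[j - 1] + 1,                      # insertion
                prev + (ref[i - 1] != hyp[j - 1]),  # substitution
            )
            prev = cur
    return dp[n]

def wer(reference: str, prediction: str) -> float:
    # Word error rate: edits over words, normalized by reference word count.
    # Assumes a non-empty, whitespace-tokenizable reference.
    ref_words = reference.split()
    return edit_distance(ref_words, prediction.split()) / len(ref_words)

def cer(reference: str, prediction: str) -> float:
    # Character error rate: edits over characters, normalized by reference length.
    return edit_distance(reference, prediction) / len(reference)

# Same formatting as the snippet in the diff.
print("WER: {:2f}%".format(100 * wer("the cat sat", "the cat sit")))
print("CER: {:2f}%".format(100 * cer("abc", "axc")))
```

For Japanese ASR output such as this model's, CER is usually the more informative metric, since whitespace word boundaries are not well defined.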