TKU410410103 committed 3d3ddd9 (parent: d857619): Update README.md

README.md, as updated:
# hubert-large-asr

This model is a fine-tuned version of [rinna/japanese-hubert-large](https://huggingface.co/rinna/japanese-hubert-large) for ASR. It was first fine-tuned on the [Reazonspeech (small) dataset](https://huggingface.co/datasets/reazon-research/reazonspeech) and subsequently fine-tuned on the [common_voice_11_0 dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/ja) for ASR tasks.

## Acknowledgments

This model's fine-tuning approach was inspired by and references the training methodology used in [vumichien/wav2vec2-large-xlsr-japanese-hiragana](https://huggingface.co/vumichien/wav2vec2-large-xlsr-japanese-hiragana).
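A usage sketch for transcription with the model. This is an assumption-laden example, not an official snippet from the card: the repo id `TKU410410103/hubert-large-asr`, the `sample.wav` path, and the use of `librosa` for 16 kHz loading are all illustrative and should be verified against the published model.

```python
import torch
import librosa
from transformers import HubertForCTC, Wav2Vec2Processor

# Assumed repo id; adjust if the model is published under a different name.
MODEL_ID = "TKU410410103/hubert-large-asr"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = HubertForCTC.from_pretrained(MODEL_ID)
model.eval()

# Load and resample the audio to the 16 kHz rate HuBERT expects.
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```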
## Training procedure

- lr_scheduler_type: linear
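`lr_scheduler_type: linear` corresponds to a warmup-then-linear-decay multiplier on the base learning rate (the behavior of `transformers.get_linear_schedule_with_warmup`). A minimal sketch of that schedule; the warmup and total step counts below are illustrative, not the values used in training:

```python
def linear_schedule(step: int, warmup_steps: int, total_steps: int) -> float:
    """Multiplicative LR factor: ramp up during warmup, then decay linearly to zero."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# Illustrative numbers only: factor rises to 1.0 at step 2, then decays to 0.0.
factors = [linear_schedule(s, warmup_steps=2, total_steps=10) for s in range(11)]
```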

### Test results

The final model was evaluated as follows:

On Reazonspeech:
WER: 22.705487%... corrected: WER: 40.519700%
CER: 23.220979%

On common_voice_11_0:
WER: 22.705487%
CER: 9.399390%
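The WER and CER figures above are standard edit-distance metrics; the card does not say which tool produced them (a library such as `evaluate` or `jiwer` is a plausible assumption). A self-contained sketch of the underlying definitions, with illustrative sample strings that are not from the evaluation sets:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (strings or lists)."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))  # DP row for the previous prefix of ref
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                           # deletion
                dp[j - 1] + 1,                       # insertion
                prev + (ref[i - 1] != hyp[j - 1]),   # substitution or match
            )
            prev = cur
    return dp[n]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edits divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edits divided by reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

For example, `cer("こんにちは", "こんにちわ")` is 1 substitution over 5 reference characters, i.e. 0.2. CER is the more informative metric for Japanese, since word-level WER depends on how the unsegmented text is tokenized.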