If you find this model helpful, please *like* this model and star us on https://github.com/LianjiaTech/BELLE !

# Belle-distilwhisper-large-v2-zh

Fine-tune [distilwhisper-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) to enhance Chinese speech recognition capabilities.

Like distilwhisper-large-v2, Belle-distilwhisper-large-v2-zh is **5.8 times faster** than whisper-large-v2 and has **51% fewer parameters**.

Despite having 51% fewer parameters, Belle-distilwhisper-large-v2-zh achieves a relative improvement of **-3% to 35%** over whisper-large-v2.

Note that distilwhisper-large-v2 cannot transcribe Chinese (it outputs only English) on the Chinese ASR benchmarks AISHELL1, AISHELL2, WENETSPEECH, and HKUST.

## Usage
A minimal sketch using the 🤗 Transformers `pipeline` API; the repo id `BELLE-2/Belle-distilwhisper-large-v2-zh` and the file name `audio.wav` below are assumptions, not confirmed by this page:

```python
from transformers import pipeline

# Build an ASR pipeline from this fine-tuned distil-whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="BELLE-2/Belle-distilwhisper-large-v2-zh",  # assumed repo id
)

# Transcribe a local Chinese speech recording (placeholder file name).
result = asr("audio.wav")
print(result["text"])
```