---
license: apache-2.0
metrics:
  - cer
---

## Welcome

If you find this model helpful, please like it and star our repo at https://github.com/LianjiaTech/BELLE!

# Belle-distilwhisper-large-v2-zh

Belle-distilwhisper-large-v2-zh is fine-tuned from distilwhisper-large-v2 to enhance its Chinese speech recognition capabilities.

Like distilwhisper-large-v2, Belle-distilwhisper-large-v2-zh is 5.8 times faster than whisper-large-v2 and has 51% fewer parameters.

Despite having 51% fewer parameters, Belle-distilwhisper-large-v2-zh achieves a relative CER improvement of -3% to 35% over whisper-large-v2, depending on the test set: on wenetspeech_meeting, for example, CER drops from 26.413% to 17.039% (about 35% lower), while on wenetspeech_net it rises slightly from 12.343% to 12.786%.

It's important to note that the original distilwhisper-large-v2 cannot transcribe Chinese (it only outputs English).

## Usage


```python
from transformers import pipeline

# Load the model through the automatic-speech-recognition pipeline
transcriber = pipeline(
    "automatic-speech-recognition",
    model="BELLE-2/Belle-distilwhisper-large-v2-zh"
)

# Force Chinese transcription (otherwise the language is auto-detected)
transcriber.model.config.forced_decoder_ids = (
    transcriber.tokenizer.get_decoder_prompt_ids(
        language="zh",
        task="transcribe"
    )
)

transcription = transcriber("my_audio.wav")
print(transcription["text"])
```
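For audio longer than 30 seconds, the pipeline's built-in chunking can be used. A minimal sketch, assuming a recent transformers version where `chunk_length_s` and `batch_size` are supported pipeline arguments; the file name is hypothetical:

```python
from transformers import pipeline

# Chunked long-form transcription: the audio is split into 30 s windows
# that are decoded in batches and stitched back together.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="BELLE-2/Belle-distilwhisper-large-v2-zh",
    chunk_length_s=30,  # window length in seconds
    batch_size=8,       # number of windows decoded at once
)

transcriber.model.config.forced_decoder_ids = (
    transcriber.tokenizer.get_decoder_prompt_ids(language="zh", task="transcribe")
)

print(transcriber("my_long_audio.wav")["text"])  # hypothetical file
```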

## Fine-tuning

| Model | (Re)Sample Rate | Train Datasets | Fine-tuning (full or PEFT) |
|---|---|---|---|
| Belle-distilwhisper-large-v2-zh | 16 kHz | AISHELL-1, AISHELL-2, WenetSpeech, HKUST | full fine-tuning |

If you want to fine-tune the model on your own datasets, please refer to the [GitHub repo](https://github.com/LianjiaTech/BELLE).
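Since the model was trained on 16 kHz audio (see the table above), audio at other sampling rates should be resampled before transcription. A minimal sketch, assuming torchaudio is installed; the file name is hypothetical:

```python
import torchaudio

# Load audio and resample it to the 16 kHz rate the model expects
waveform, sr = torchaudio.load("my_audio_44k.wav")  # hypothetical file
if sr != 16000:
    waveform = torchaudio.functional.resample(
        waveform, orig_freq=sr, new_freq=16000
    )

# The ASR pipeline also accepts a raw array plus its sampling rate, e.g.
# (with `transcriber` built as in the Usage section above):
# transcriber({"raw": waveform.squeeze().numpy(), "sampling_rate": 16000})
```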

## CER (%) ↓

| Model | Parameters (M) | Language Tag | aishell_1_test (↓) | aishell_2_test (↓) | wenetspeech_net (↓) | wenetspeech_meeting (↓) | HKUST_dev (↓) |
|---|---|---|---|---|---|---|---|
| whisper-large-v2 | 1550 | Chinese | 8.818% | 6.183% | 12.343% | 26.413% | 31.917% |
| distilwhisper-large-v2 | 756 | Chinese | - | - | - | - | - |
| Belle-distilwhisper-large-v2-zh | 756 | Chinese | 5.958% | 6.477% | 12.786% | 17.039% | 20.771% |
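CER is the character-level edit distance divided by the reference length. A minimal sketch of computing it with the Hugging Face `evaluate` library (requires `pip install evaluate jiwer`); the example strings are illustrative, not drawn from the test sets:

```python
import evaluate

cer = evaluate.load("cer")

predictions = ["今天天气很好"]  # hypothetical model output
references = ["今天天气真好"]   # hypothetical ground truth

# One character out of six differs, so the CER is 1/6 ≈ 0.167
print(f"CER: {cer.compute(predictions=predictions, references=references):.3f}")
```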

## Citation

Please cite our paper and GitHub repo when using our code, data, or model.

```bibtex
@misc{BELLE,
  author = {BELLEGroup},
  title = {BELLE: Be Everyone's Large Language model Engine},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/LianjiaTech/BELLE}},
}
```