---
license: apache-2.0
metrics:
  - cer
---

## Welcome

If you find this model helpful, please like it and star us on https://github.com/LianjiaTech/BELLE!

## Belle-distilwhisper-large-v2-zh

Fine-tuned from distilwhisper-large-v2 to improve Chinese speech recognition.

Like distilwhisper-large-v2, Belle-distilwhisper-large-v2-zh is 5.8 times faster than whisper-large-v2 with 51% fewer parameters.

Note that distilwhisper-large-v2 cannot transcribe Chinese (it outputs only English) on the Chinese ASR benchmarks (AISHELL-1, AISHELL-2, WenetSpeech, HKUST).

## Usage


```python
from transformers import pipeline

# Load the fine-tuned model as a speech-recognition pipeline
transcriber = pipeline(
    "automatic-speech-recognition",
    model="BELLE-2/Belle-distilwhisper-large-v2-zh"
)

# Force Chinese transcription; without this the model may fall back to English
transcriber.model.config.forced_decoder_ids = (
    transcriber.tokenizer.get_decoder_prompt_ids(
        language="zh",
        task="transcribe"
    )
)

transcription = transcriber("my_audio.wav")
```
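
For audio longer than the model's 30-second window, the `transformers` pipeline supports chunked long-form inference. A minimal sketch, not from the original card: the file name `long_audio.wav` and the chunk/batch values are illustrative, not recommendations from the authors.

```python
# A minimal sketch of chunked long-form transcription.
# "long_audio.wav", chunk_length_s, and batch_size are illustrative.
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="BELLE-2/Belle-distilwhisper-large-v2-zh",
    chunk_length_s=30,  # split long inputs into 30 s windows
    batch_size=8,       # transcribe several windows per forward pass
)

# Same language forcing as above
transcriber.model.config.forced_decoder_ids = (
    transcriber.tokenizer.get_decoder_prompt_ids(language="zh", task="transcribe")
)

result = transcriber("long_audio.wav")
print(result["text"])
```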

## Fine-tuning

| Model | (Re)Sample Rate | Train Datasets | Fine-tuning (full or PEFT) |
|---|---|---|---|
| Belle-distilwhisper-large-v2-zh | 16 kHz | AISHELL-1, AISHELL-2, WenetSpeech, HKUST | full fine-tuning |

If you want to fine-tune the model on your own datasets, please refer to the GitHub repo: https://github.com/LianjiaTech/BELLE.
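
As the table above notes, the model expects 16 kHz audio. One common way to resample your own data before fine-tuning is to cast the audio column with the `datasets` library; a minimal sketch, where the dataset name `my_org/my_asr_data` and the `audio` column are hypothetical:

```python
# A minimal sketch of resampling a dataset to 16 kHz before fine-tuning.
# "my_org/my_asr_data" and the "audio" column name are hypothetical.
from datasets import load_dataset, Audio

dataset = load_dataset("my_org/my_asr_data", split="train")

# Decode the audio column at 16 kHz; resampling happens lazily on access
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

sample = dataset[0]["audio"]
print(sample["sampling_rate"])  # 16000
print(sample["array"].shape)    # 1-D waveform as a NumPy array
```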

## CER(%)

| Model | Parameters (M) | Language Tag | aishell_1_test | aishell_2_test | wenetspeech_net | wenetspeech_meeting | HKUST_dev |
|---|---|---|---|---|---|---|---|
| whisper-large-v2 | 1550 | Chinese | 8.818 | 6.183 | 12.343 | 26.413 | 31.917 |
| distilwhisper-large-v2 | 756 | Chinese | - | - | - | - | - |
| Belle-distilwhisper-large-v2-zh | 756 | Chinese | 5.958 | 6.477 | 12.786 | 17.039 | 20.771 |
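
CER (character error rate) can be computed with the `cer` metric from the `evaluate` library, the metric listed in this card's metadata. A minimal sketch; the prediction and reference strings are made up for illustration:

```python
# A minimal sketch of computing CER with the `evaluate` library.
# The prediction/reference strings below are illustrative only.
import evaluate

cer = evaluate.load("cer")

predictions = ["今天天气不错"]
references = ["今天天气很好"]

# CER = (substitutions + insertions + deletions) / reference length
score = cer.compute(predictions=predictions, references=references)
print(f"CER: {score:.3f}")
```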

## Citation

Please cite our paper and GitHub repo when using our code, data, or model.

```bibtex
@misc{BELLE,
  author = {BELLEGroup},
  title = {BELLE: Be Everyone's Large Language model Engine},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/LianjiaTech/BELLE}},
}
```