游雁 committed on
Commit
4f7d8f9
1 Parent(s): 49495c5
Files changed (8)
  1. .gitattributes +1 -0
  2. README.md +174 -0
  3. config.yaml +46 -0
  4. configuration.json +15 -0
  5. example/punc_example.txt +3 -0
  6. fig/struct.png +0 -0
  7. model.pt +3 -0
  8. tokens.json +0 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ example/punc_example.txt filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -3,3 +3,177 @@ license: other
  license_name: model-license
  license_link: https://github.com/alibaba-damo-academy/FunASR
  ---
+
+ # FunASR: A Fundamental End-to-End Speech Recognition Toolkit
+
+ [![PyPI](https://img.shields.io/pypi/v/funasr)](https://pypi.org/project/funasr/)
+
+ <strong>FunASR</strong> aims to build a bridge between academic research and industrial applications of speech recognition. By supporting the training and fine-tuning of industrial-grade speech recognition models, it enables researchers and developers to conduct research on and production of speech recognition models more conveniently, promoting the growth of the speech recognition ecosystem. ASR for Fun!
+
+ [**Highlights**](#highlights)
+ | [**News**](https://github.com/alibaba-damo-academy/FunASR#whats-new)
+ | [**Installation**](#installation)
+ | [**Quick Start**](#quick-start)
+ | [**Runtime**](./runtime/readme.md)
+ | [**Model Zoo**](#model-zoo)
+ | [**Contact**](#contact)
+
+ <a name="highlights"></a>
+ ## Highlights
+ - FunASR is a fundamental speech recognition toolkit offering a wide range of features, including speech recognition (ASR), voice activity detection (VAD), punctuation restoration, language models, speaker verification, speaker diarization, and multi-talker ASR. FunASR provides convenient scripts and tutorials, supporting inference and fine-tuning of pretrained models.
+ - We have released a vast collection of academic and industrial pretrained models on [ModelScope](https://www.modelscope.cn/models?page=1&tasks=auto-speech-recognition) and [Hugging Face](https://huggingface.co/FunASR), which can be accessed through our [Model Zoo](https://github.com/alibaba-damo-academy/FunASR/blob/main/docs/model_zoo/modelscope_models.md). The representative [Paraformer-large](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary), a non-autoregressive end-to-end speech recognition model, offers high accuracy, high efficiency, and convenient deployment, supporting the rapid construction of speech recognition services. For more details on service deployment, please refer to the [service deployment document](runtime/readme_cn.md).
+
+
+ <a name="installation"></a>
+ ## Installation
+
+ ```shell
+ pip3 install -U funasr
+ ```
+ Or install from source code:
+ ```shell
+ git clone https://github.com/alibaba-damo-academy/FunASR.git && cd FunASR
+ pip3 install -e ./
+ ```
+ Install ModelScope for the pretrained models (optional):
+
+ ```shell
+ pip3 install -U modelscope
+ ```
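+ To sanity-check the installation, a minimal sketch (it assumes the package exposes `__version__`, which recent releases do):
+
+ ```python
+ import funasr
+ print(funasr.__version__)
+ ```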
+
+ ## Model Zoo
+ FunASR has open-sourced a large number of pre-trained models on industrial data. You are free to use, copy, modify, and share FunASR models under the [Model License Agreement](./MODEL_LICENSE). Below are some representative models; for more, please refer to the [Model Zoo]().
+
+ (Note: 🤗 denotes the Hugging Face model zoo link; ⭐ denotes the ModelScope model zoo link.)
+
+ | Model Name | Task Details | Training Data | Parameters |
+ |:---:|:---:|:---:|:---:|
+ | paraformer-zh <br> ([⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) [🤗]() ) | speech recognition, with timestamps, non-streaming | 60000 hours, Mandarin | 220M |
+ | <nobr>paraformer-zh-streaming <br> ( [⭐](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online/summary) [🤗]() )</nobr> | speech recognition, streaming | 60000 hours, Mandarin | 220M |
+ | paraformer-en <br> ( [⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-en-16k-common-vocab10020/summary) [🤗]() ) | speech recognition, with timestamps, non-streaming | 50000 hours, English | 220M |
+ | conformer-en <br> ( [⭐](https://modelscope.cn/models/damo/speech_conformer_asr-en-16k-vocab4199-pytorch/summary) [🤗]() ) | speech recognition, non-streaming | 50000 hours, English | 220M |
+ | ct-punc <br> ( [⭐](https://modelscope.cn/models/damo/punc_ct-transformer_cn-en-common-vocab471067-large/summary) [🤗]() ) | punctuation restoration | 100M, Mandarin and English | 1.1G |
+ | fsmn-vad <br> ( [⭐](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/summary) [🤗]() ) | voice activity detection | 5000 hours, Mandarin and English | 0.4M |
+ | fa-zh <br> ( [⭐](https://modelscope.cn/models/damo/speech_timestamp_prediction-v1-16k-offline/summary) [🤗]() ) | timestamp prediction | 5000 hours, Mandarin | 38M |
+ | cam++ <br> ( [⭐](https://modelscope.cn/models/iic/speech_campplus_sv_zh-cn_16k-common/summary) [🤗]() ) | speaker verification/diarization | 5000 hours | 7.2M |
+
+ [//]: # ()
+ [//]: # (FunASR supports pre-trained or further fine-tuned models for deployment as a service. The CPU version of the Chinese offline file conversion service has been released, details can be found in [docs]&#40;funasr/runtime/docs/SDK_tutorial.md&#41;. More detailed information about service deployment can be found in the [deployment roadmap]&#40;funasr/runtime/readme_cn.md&#41;.)
+
+ <a name="quick-start"></a>
+ ## Quick Start
+
+ Below is a quick start tutorial. Test audio files ([Mandarin](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav), [English]()).
+
+ ### Command-line usage
+
+ ```shell
+ funasr +model=paraformer-zh +vad_model="fsmn-vad" +punc_model="ct-punc" +input=asr_example_zh.wav
+ ```
+
+ Note: Recognition of a single audio file is supported, as well as file lists in Kaldi-style wav.scp format: `wav_id wav_path`.
+
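+ For reference, a Kaldi-style wav.scp is a plain-text file with one `wav_id wav_path` pair per line; the IDs and paths below are hypothetical:
+
+ ```text
+ utt_001 /data/audio/utt_001.wav
+ utt_002 /data/audio/utt_002.wav
+ ```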
+ ### Speech Recognition (Non-streaming)
+ ```python
+ from funasr import AutoModel
+
+ # paraformer-zh is a multi-functional ASR model;
+ # enable the VAD, punctuation, and speaker models as needed
+ model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
+                   vad_model="fsmn-vad", vad_model_revision="v2.0.4",
+                   punc_model="ct-punc-c", punc_model_revision="v2.0.4",
+                   # spk_model="cam++", spk_model_revision="v2.0.2",
+                   )
+ res = model.generate(input=f"{model.model_path}/example/asr_example.wav",
+                      batch_size_s=300,
+                      hotword='魔搭')
+ print(res)
+ ```
+ Note: `model_hub` specifies the model repository; `ms` selects downloading from ModelScope, and `hf` selects downloading from Hugging Face.
+
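+ As an illustration of that note (hedged: it assumes `model_hub` is accepted as an `AutoModel` keyword exactly as described, which may differ across FunASR versions):
+
+ ```python
+ from funasr import AutoModel
+
+ # download the model from ModelScope ("ms") rather than Hugging Face ("hf")
+ model = AutoModel(model="paraformer-zh", model_hub="ms")
+ ```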
+ ### Speech Recognition (Streaming)
+ ```python
+ import os
+
+ import soundfile
+ from funasr import AutoModel
+
+ chunk_size = [0, 10, 5]  # [0, 10, 5]: 600 ms; [0, 8, 4]: 480 ms
+ encoder_chunk_look_back = 4  # number of chunks to look back for encoder self-attention
+ decoder_chunk_look_back = 1  # number of encoder chunks to look back for decoder cross-attention
+
+ model = AutoModel(model="paraformer-zh-streaming", model_revision="v2.0.4")
+
+ wav_file = os.path.join(model.model_path, "example/asr_example.wav")
+ speech, sample_rate = soundfile.read(wav_file)
+ chunk_stride = chunk_size[1] * 960  # 600 ms of 16 kHz audio
+
+ cache = {}
+ total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)
+ for i in range(total_chunk_num):
+     speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
+     is_final = i == total_chunk_num - 1
+     res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size,
+                          encoder_chunk_look_back=encoder_chunk_look_back,
+                          decoder_chunk_look_back=decoder_chunk_look_back)
+     print(res)
+ ```
+ Note: `chunk_size` is the configuration for streaming latency. `[0, 10, 5]` indicates that the real-time display granularity is `10*60=600ms` and the lookahead information is `5*60=300ms`. Each inference input is `600ms` (`16000*0.6=9600` sample points), and the output is the corresponding text. For the last speech segment, `is_final=True` must be set to force the output of the last word.
+
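+ To make the latency arithmetic concrete, here is the same calculation for the lower-latency `[0, 8, 4]` configuration (a worked example of the formula above, not a FunASR API call):
+
+ ```python
+ # one chunk unit is 60 ms, i.e. 960 samples at 16 kHz
+ chunk_size = [0, 8, 4]
+ display_ms = chunk_size[1] * 60     # 480 ms real-time display granularity
+ lookahead_ms = chunk_size[2] * 60   # 240 ms lookahead
+ chunk_stride = chunk_size[1] * 960  # 7680 samples per inference input
+ print(display_ms, lookahead_ms, chunk_stride)
+ ```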
+ ### Voice Activity Detection (Non-Streaming)
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
+ wav_file = f"{model.model_path}/example/asr_example.wav"
+ res = model.generate(input=wav_file)
+ print(res)
+ ```
+ ### Voice Activity Detection (Streaming)
+ ```python
+ import soundfile
+ from funasr import AutoModel
+
+ chunk_size = 200  # ms
+ model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
+
+ wav_file = f"{model.model_path}/example/vad_example.wav"
+ speech, sample_rate = soundfile.read(wav_file)
+ chunk_stride = int(chunk_size * sample_rate / 1000)
+
+ cache = {}
+ total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)
+ for i in range(total_chunk_num):
+     speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
+     is_final = i == total_chunk_num - 1
+     res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size)
+     if len(res[0]["value"]):
+         print(res)
+ ```
+ ### Punctuation Restoration
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="ct-punc", model_revision="v2.0.4")
+ res = model.generate(input="那今天的会就到这里吧 happy new year 明年见")
+ print(res)
+ ```
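+ Since this repository ships `example/punc_example.txt`, a batched variant might look like the sketch below (hedged: it assumes `AutoModel.generate` accepts a Kaldi-style `id text` list file, the convention FunASR's example files follow):
+
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="ct-punc", model_revision="v2.0.4")
+ # one "id text" pair per line, as in this repo's example file
+ res = model.generate(input=f"{model.model_path}/example/punc_example.txt")
+ print(res)
+ ```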
+ ### Timestamp Prediction
+ ```python
+ from funasr import AutoModel
+
+ model = AutoModel(model="fa-zh", model_revision="v2.0.4")
+ wav_file = f"{model.model_path}/example/asr_example.wav"
+ text_file = f"{model.model_path}/example/text.txt"
+ res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
+ print(res)
+ ```
+
+ For more examples, refer to the [docs](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining).
config.yaml ADDED
@@ -0,0 +1,46 @@
+ model: CTTransformer
+ model_conf:
+     ignore_id: 0
+     embed_unit: 516
+     att_unit: 516
+     dropout_rate: 0.1
+     punc_list:
+     - <unk>
+     - _
+     - ,
+     - 。
+     - ?
+     - 、
+     punc_weight:
+     - 1.0
+     - 1.0
+     - 1.0
+     - 1.0
+     - 1.0
+     - 1.0
+     sentence_end_id: 3
+
+ encoder: SANMEncoder
+ encoder_conf:
+     input_size: 516
+     output_size: 516
+     attention_heads: 12
+     linear_units: 2048
+     num_blocks: 12
+     dropout_rate: 0.1
+     positional_dropout_rate: 0.1
+     attention_dropout_rate: 0.0
+     input_layer: pe
+     pos_enc_class: SinusoidalPositionEncoder
+     normalize_before: true
+     kernel_size: 11
+     sanm_shfit: 0
+     selfattention_layer_type: sanm
+     padding_idx: 0
+
+ tokenizer: CharTokenizer
+ tokenizer_conf:
+     unk_symbol: <unk>
+
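For a quick look at what this configuration defines, the file can be inspected with PyYAML (a minimal sketch; it assumes `config.yaml` has been downloaded locally and that `pyyaml` is installed):

```python
import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["model"])                    # CTTransformer
print(cfg["model_conf"]["punc_list"])  # ['<unk>', '_', ',', '。', '?', '、']
print(cfg["encoder"])                  # SANMEncoder
```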
configuration.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "framework": "pytorch",
+   "task": "punctuation",
+   "model": {"type": "funasr"},
+   "pipeline": {"type": "funasr-pipeline"},
+   "model_name_in_hub": {
+     "ms": "iic/punc_ct-transformer_cn-en-common-vocab471067-large",
+     "hf": ""},
+   "file_path_metas": {
+     "init_param": "model.pt",
+     "config": "config.yaml",
+     "tokenizer_conf": {"token_list": "tokens.json", "jieba_usr_dict": "jieba_usr_dict"},
+     "jieba_usr_dict": "jieba_usr_dict"
+   }
+ }
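The `model_name_in_hub` and `file_path_metas` entries tell ModelScope how to resolve this repository into a runnable pipeline. A hedged sketch of invoking it (assuming a current `modelscope` install; the exact call signature of the punctuation pipeline may vary by version):

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# hub ID taken from the "ms" entry of "model_name_in_hub" above
punc = pipeline(task=Tasks.punctuation,
                model="iic/punc_ct-transformer_cn-en-common-vocab471067-large")
print(punc("那今天的会就到这里吧happy new year明年见"))
```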
example/punc_example.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd2b636a95cdcdcc09f3be055404025d7867343062318c13e95c378e998ef890
+ size 866
fig/struct.png ADDED
model.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7176cae922a872e130e6b88aef9a1153581711baf79c9124c7c95be383cd6f81
+ size 1125507622
tokens.json ADDED
The diff for this file is too large to render. See raw diff