---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- PyTorch
license: apache-2.0
---

# Wav2vec 2.0 With Open Brazilian Portuguese Datasets v2

This is a demonstration of a Wav2vec model fine-tuned for Brazilian Portuguese using the following datasets:

- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus.
- [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages, based on public-domain audiobook recordings such as those from [LibriVox](https://librivox.org/). It contains a total of 6k hours of transcribed data across its languages. The Portuguese set [used in this work](http://www.openslr.org/94/) (mostly the Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
- [VoxForge](http://www.voxforge.org/): a project aimed at building open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16 kHz to 44.1 kHz.
- [Common Voice 6.1](https://commonvoice.mozilla.org/pt): a project proposed by the Mozilla Foundation with the goal of creating an open, multilingual dataset for training ASR models, in which volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt). The Portuguese set (mostly the Brazilian variant) used in this work is version 6.1 (pt_63h_2020-12-11), which contains about 50 validated hours and 1,120 unique speakers.
- [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 female), each pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audio was recorded at 22.05 kHz without environmental control.

These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev and test sets, which were used for validation and testing, respectively.
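
As an aside, the combination step can be pictured with the `datasets` library. This is a toy sketch only, not how the original training data was actually assembled (the original model was fine-tuned with fairseq); `corpus_a` and `corpus_b` are stand-ins for the real corpora, which would share the same columns:

```python
from datasets import Dataset, concatenate_datasets

# Toy illustration: each corpus is a Dataset with matching "path"/"sentence"
# columns; the real corpora would be loaded from their own sources.
corpus_a = Dataset.from_dict({"path": ["a1.wav"], "sentence": ["olá mundo"]})
corpus_b = Dataset.from_dict({"path": ["b1.wav"], "sentence": ["bom dia"]})
combined = concatenate_datasets([corpus_a, corpus_b])
print(combined.num_rows)  # 2
```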

The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one.

__NOTE: The Common Voice test set reports about 10% WER; however, this model was trained on all validated instances of Common Voice except those in the test set. This means that some speakers from the training set may be present in the test set.__

## Imports and dependencies

```python
%%capture
!pip install datasets
!pip install jiwer
!pip install torchaudio
!pip install transformers
!pip install soundfile
```

```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)
import torch
import re
```

## Preparation

```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]'  # noqa: W605
wer = load_metric("wer")
device = "cuda"
```
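
To make the normalization concrete, here is a tiny illustrative check (not part of the original notebook) of what the regex removes from a reference sentence:

```python
# Illustrative only: the regex strips punctuation before WER is computed.
sample = "Olá, mundo! Tudo bem?"
print(re.sub(chars_to_ignore_regex, '', sample).lower())  # -> "olá mundo tudo bem"
```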

```python
model_name = 'lgris/wav2vec2-large-xlsr-open-brazilian-portuguese-v2'
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
```
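
Before running the batched evaluation below, the model can be sanity-checked on a single recording. This is a minimal sketch, assuming a local mono audio file; `audio.wav` is a placeholder name, not a file shipped with this repo:

```python
# Minimal single-file check; "audio.wav" is a placeholder path.
speech, sample_rate = torchaudio.load("audio.wav")
if sample_rate != 16_000:
    # The model expects 16 kHz input.
    speech = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16_000)(speech)
inputs = processor(speech.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values.to(device)).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0].lower())
```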

```python
def map_to_pred(batch):
    # Extract padded input features and move them to the GPU.
    features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    # Greedy CTC decoding: pick the most likely token at each frame.
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["predicted"] = [pred.lower() for pred in batch["predicted"]]
    batch["target"] = batch["sentence"]
    return batch
```
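
As a quick smoke test (illustrative, not in the original notebook), `map_to_pred` can be called directly on a hand-built batch before mapping it over a full dataset:

```python
import numpy as np

# Hand-built batch of one second of silence; the decoded text will likely be empty.
fake_batch = {"speech": [np.zeros(16_000, dtype=np.float32)],
              "sampling_rate": [16_000],
              "sentence": ["teste"]}
out = map_to_pred(fake_batch)
print(out["predicted"], "|", out["target"])
```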

## Tests

### Test against Common Voice (In-domain)

```python
dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")

# Common Voice clips are 48 kHz; the model expects 16 kHz input.
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    # Normalize the reference: strip punctuation and lowercase.
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```

```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
for pred, target in zip(result["predicted"][:10], result["target"][:10]):
    print(pred, "|", target)
```

**Result**: 10.69%

### Test against [TEDx](http://www.openslr.org/100/) (Out-of-domain)

```python
!gdown --id 1HJEnvthaGYwcV_whHEywgH2daIN4bQna
!tar -xf tedx.tar.gz
```

```python
dataset = load_dataset('csv', data_files={'test': 'test.csv'})['test']

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    # No resampling here: the TEDx audio is assumed to be 16 kHz already.
    batch["speech"] = speech.squeeze(0).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```

```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
for pred, target in zip(result["predicted"][:10], result["target"][:10]):
    print(pred, "|", target)
```

**Result**: 34.53%