jiaye committed
Commit 1a527f1 (1 parent: 01eb33d)

An initial description of the CVSS corpus.

Files changed (1): README.md (+54 -0)

---
license: cc-by-4.0
---

# CVSS: A Massively Multilingual Speech-to-Speech Translation Corpus

*CVSS* is a massively multilingual-to-English speech-to-speech translation corpus, covering sentence-level parallel speech-to-speech translation pairs from 21 languages into English. CVSS is derived from the [Common Voice](https://commonvoice.mozilla.org/) speech corpus and the [CoVoST 2](https://github.com/facebookresearch/covost) speech-to-text translation corpus. The translation speech in CVSS is synthesized with two state-of-the-art TTS models trained on the [LibriTTS](http://www.openslr.org/60/) corpus.

CVSS includes two versions of spoken translation for all 21 x-en language pairs from CoVoST 2, with each version providing unique value:

- *CVSS-C*: All the translation speech is in a single canonical speaker's voice. Despite being synthetic, it is highly natural and clean, and has a consistent speaking style. These properties ease the modeling of the target speech and enable models to produce high-quality translation speech suitable for user-facing applications.

- *CVSS-T*: The translation speech is in voices transferred from the corresponding source speech. Each translation pair has similar voices on its two sides despite being in different languages, making this dataset suitable for building models that preserve speakers' voices when translating speech into different languages.

Together with the source speech originating from Common Voice, they make up two multilingual speech-to-speech translation datasets, each with about 1,900 hours of speech.

In addition to translation speech, CVSS also provides normalized translation text that matches the pronunciation in the translation speech (e.g. for numbers, currencies, and acronyms), which can be used both for model training and for standardizing evaluation.
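
For evaluation, one common recipe for speech-to-speech translation is to transcribe the generated translation speech with an ASR model and score the transcripts against reference text; the normalized translation text can serve directly as that reference. Below is a minimal sketch using [sacrebleu](https://github.com/mjpost/sacrebleu); the hypothesis and reference strings are illustrative placeholders, not taken from the corpus.

```py
# Minimal sketch: scoring hypothetical ASR transcripts of generated translation
# speech against normalized reference text. Placeholder strings only.
import sacrebleu

# Hypothetical ASR transcripts of a model's generated English speech.
hypotheses = ["we will meet again on monday", "it costs ten dollars"]

# In practice, the corresponding normalized translation text from CVSS
# would be used as the references.
references = ["we will meet again on monday", "it costs ten dollars"]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.1f}")
```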

Please check out [our paper](https://arxiv.org/abs/2201.03713) for a detailed description of this corpus and of the baseline models we trained on both datasets.


# Load the data

```py
from datasets import load_dataset

# Load only the ar-en and ja-en language pairs. Omitting the `languages`
# argument would load all the language pairs.
cvss_c = load_dataset('cvss', 'cvss_c', languages=['ar', 'ja'])

# Print the structure of the dataset.
print(cvss_c)
```
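
Once loaded, `cvss_c` is a regular `datasets` `DatasetDict`, so individual examples can be inspected directly. The sketch below only assumes the mapping-of-splits structure that `load_dataset` returns; the exact split and feature names are not listed in this README, so check them against the structure printed above.

```py
# Peek at one example from the first available split. Feature names are not
# spelled out in this README, so we simply list whatever keys are present.
first_split = next(iter(cvss_c))
example = cvss_c[first_split][0]
print(first_split, sorted(example.keys()))
```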

# License

CVSS is released under the very permissive [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.


## Citation

Please cite this paper when referencing the CVSS corpus:

```bibtex
@inproceedings{jia2022cvss,
  title={{CVSS} Corpus and Massively Multilingual Speech-to-Speech Translation},
  author={Jia, Ye and Tadmor Ramanovich, Michelle and Wang, Quan and Zen, Heiga},
  booktitle={Proceedings of Language Resources and Evaluation Conference (LREC)},
  pages={6691--6703},
  year={2022}
}
```