
Aina Project's Catalan multi-speaker text-to-speech model

Model description

This model was trained from scratch using the Coqui TTS toolkit on a combination of three datasets: Festcat, OpenSLR69 and Common Voice v12. For training, we used 487 hours of recordings from 255 speakers. We trimmed and denoised the data; the processed versions of all datasets except Common Voice are published separately as festcat_trimmed_denoised and openslr69_trimmed_denoised.
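For reference, the processed datasets can be inspected with the Hugging Face datasets library. This is a minimal sketch; the repository IDs and split name below are assumptions based on the dataset names above:

# Minimal sketch for inspecting the processed training data.
# Repository IDs and the "train" split are assumptions based on the dataset names above.
from datasets import load_dataset

festcat = load_dataset("projecte-aina/festcat_trimmed_denoised", split="train")
openslr = load_dataset("projecte-aina/openslr69_trimmed_denoised", split="train")

print(festcat)     # column names and number of rows
print(festcat[0])  # first example: audio, transcription, speaker metadata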

A live inference demo can be found in our Hugging Face Spaces.

The model needs our fork of espeak-ng to work correctly. For installation and deployment details, please consult the Dockerfile of our inference demo.

Intended uses and limitations

You can use this model to generate synthetic speech in Catalan with different voices.

How to use

Usage

Required libraries:

pip install git+https://github.com/coqui-ai/TTS@dev#egg=TTS

Synthesize speech using Python:

from TTS.utils.synthesizer import Synthesizer

# Paths to the downloaded model files (placeholders, adjust to your setup).
model_path = "/path/to/checkpoint.pth"        # absolute path to the model checkpoint.pth
config_path = "/path/to/config.json"          # absolute path to the model config.json
speakers_file_path = "/path/to/speakers.pth"  # absolute path to the speakers.pth file

text = "Text to synthesize"
speaker_idx = "Speaker ID"  # one of the speaker IDs listed in speakers.pth

# The remaining arguments (languages file, vocoder checkpoint and config) are not needed for this model.
synthesizer = Synthesizer(
    model_path, config_path, speakers_file_path, None, None, None,
)
wavs = synthesizer.tts(text, speaker_idx)
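The call returns the synthesized waveform as a list of sample values. To write it to disk, the synthesizer's save_wav helper can be used; the output filename below is illustrative:

# Write the synthesized waveform to a WAV file (output path is illustrative).
synthesizer.save_wav(wavs, "output.wav")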

Training

Training Procedure

Data preparation

Hyperparameters

The model is based on VITS, proposed by Kim et al. The following hyperparameters were set in the Coqui framework.

Hyperparameter Value
Model vits
Batch Size 16
Eval Batch Size 8
Mixed Precision false
Window Length 1024
Hop Length 256
FFT Size 1024
Num Mels 80
Phonemizer espeak
Phoneme Language ca
Text Cleaners multilingual_cleaners
Formatter vctk_old
Optimizer adam
Adam betas (0.8, 0.99)
Adam eps 1e-09
Adam weight decay 0.01
Learning Rate Gen 0.0001
LR Scheduler Gen ExponentialLR
LR Scheduler Gamma Gen 0.999875
Learning Rate Disc 0.0001
LR Scheduler Disc ExponentialLR
LR Scheduler Gamma Disc 0.999875

The model was trained for 730962 steps.
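As an illustration of how the table above maps onto the Coqui toolkit, a configuration along the following lines could be built. This is a sketch only: field names assume the VitsConfig and VitsAudioConfig classes of the coqui-ai/TTS dev branch, the sample rate is an assumption (it is not listed above), and the actual training recipe may differ:

# Sketch of a Coqui TTS config matching the hyperparameter table.
# Field names assume the VitsConfig / VitsAudioConfig API of the coqui-ai/TTS dev branch.
from TTS.tts.configs.vits_config import VitsConfig
from TTS.tts.models.vits import VitsAudioConfig

# Spectrogram settings from the table (sample_rate is an assumption).
audio_config = VitsAudioConfig(
    sample_rate=22050,
    win_length=1024,
    hop_length=256,
    fft_size=1024,
    num_mels=80,
)

# Training hyperparameters from the table.
config = VitsConfig(
    audio=audio_config,
    batch_size=16,
    eval_batch_size=8,
    mixed_precision=False,
    use_phonemes=True,
    phonemizer="espeak",
    phoneme_language="ca",
    text_cleaner="multilingual_cleaners",
    optimizer="adam",
    optimizer_params={"betas": [0.8, 0.99], "eps": 1e-09, "weight_decay": 0.01},
    lr_gen=0.0001,
    lr_disc=0.0001,
    lr_scheduler_gen="ExponentialLR",
    lr_scheduler_gen_params={"gamma": 0.999875, "last_epoch": -1},
    lr_scheduler_disc="ExponentialLR",
    lr_scheduler_disc_params={"gamma": 0.999875, "last_epoch": -1},
)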

Additional information

Author

Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center

Contact information

For further information, send an email to [email protected]

Copyright

Copyright (c) 2023 Language Technologies Unit (LangTech) at Barcelona Supercomputing Center

Licensing Information

Apache License, Version 2.0

Funding

This work was funded by Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.

The training of the model was possible thanks to the compute time provided by the Galician Supercomputing Center CESGA (Centro de Supercomputación de Galicia).

Disclaimer


The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have biases and/or other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including those regarding the use of Artificial Intelligence.

In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
