I want to train a model to perform code-switching TTS / voice conversion for only two languages: Vietnamese and English. I assume the model should perform well with training data from one Vietnamese speaker and one English speaker, each with a decent amount of material (~15 hours). My reasoning is that, since there are only two speakers and a lot of data, the model should be able to learn the speaker embedding for each, even just by memorizing (overfitting), and the same goes for the language-dependent encoder. However, I have seen some of your comments saying it is better to include other languages in the training data, each with multiple speakers, even if you don't use them at inference. Why is that?
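To make the setup in the question concrete, here is a minimal sketch (PyTorch; the class, names, and dimensions are all hypothetical, not this repo's actual architecture) of a multilingual TTS model with a speaker embedding table and language-dependent encoders. It illustrates the issue the cited comments are presumably about: with exactly one speaker per language, `speaker_id` and `language_id` are perfectly correlated in the training data, so nothing forces the model to keep timbre out of the language encoder or language identity out of the speaker embedding, and code-switching at inference then requires a speaker/language combination the model has never seen.

```python
import torch
import torch.nn as nn

# Hypothetical toy model for illustration only; names and dimensions
# are invented and do not correspond to the repo's actual code.
class ToyMultilingualTTS(nn.Module):
    def __init__(self, n_symbols=100, n_speakers=2, n_languages=2, dim=64):
        super().__init__()
        self.symbol_emb = nn.Embedding(n_symbols, dim)
        # One encoder per language (the "language-dependent encoder").
        self.encoders = nn.ModuleList(
            [nn.GRU(dim, dim, batch_first=True) for _ in range(n_languages)]
        )
        self.speaker_emb = nn.Embedding(n_speakers, dim)
        self.decoder = nn.GRU(2 * dim, dim, batch_first=True)
        self.out = nn.Linear(dim, 80)  # e.g. 80-bin mel-spectrogram frames

    def forward(self, text_ids, speaker_id, language_id):
        x = self.symbol_emb(text_ids)                       # (B, T, dim)
        enc, _ = self.encoders[language_id](x)              # language-specific
        spk = self.speaker_emb(speaker_id)                  # (B, dim)
        spk = spk.unsqueeze(1).expand(-1, enc.size(1), -1)  # broadcast over T
        dec, _ = self.decoder(torch.cat([enc, spk], dim=-1))
        return self.out(dec)

model = ToyMultilingualTTS()
text = torch.randint(0, 100, (1, 20))

# Training pairs under the proposed setup: speaker 0 <-> Vietnamese (lang 0),
# speaker 1 <-> English (lang 1). speaker_id fully determines language_id,
# so speaker and language information can leak into each other's modules.
_ = model(text, torch.tensor([0]), language_id=0)
_ = model(text, torch.tensor([1]), language_id=1)

# Code-switching at inference is an *unseen* combination: speaker 0's
# embedding paired with the English encoder. The training data never
# required the model to make this factorization work.
_ = model(text, torch.tensor([0]), language_id=1)
```

Under this (assumed) framing, adding more speakers per language, even ones never used at inference, breaks the one-to-one speaker/language correlation and pushes the model to disentangle the two factors rather than memorize them jointly.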