Considerations on speaking style and speaker variability in speech synthesis

1991 ◽ Vol 89 (4B) ◽ pp. 1892-1893
Author(s): Lennart Nord, Björn Granström
2020 ◽ Vol 26 (4) ◽ pp. 434-453
Author(s): Milan Sečujski, Darko Pekar, Siniša Suzić, Anton Smirnov, Tijana Nosek

The paper presents a novel architecture and a method for training neural networks to produce synthesized speech in a particular voice and speaking style, based on a small quantity of target speaker/style training data. The method relies on neural network embedding, i.e. the mapping of discrete variables into continuous vectors in a low-dimensional space, which has proven to be a highly successful and widely applicable deep learning technique. In this particular case, different speaker/style combinations are mapped to different points in a low-dimensional space, which enables the network to capture the similarities and differences between speakers and speaking styles more efficiently. The initial model from which speaker/style adaptation was carried out was a multi-speaker/multi-style model trained on 8.5 hours of American English speech data covering 16 different speaker/style combinations. The experimental results show that both versions of the obtained system, one using 10 minutes and the other as little as 30 seconds of target data, outperform the state of the art in parametric speaker/style-dependent speech synthesis. This opens up a wide range of applications for speaker/style-dependent speech synthesis based on small quantities of training data, in domains ranging from customer interaction in call centers to robot-assisted medical therapy.
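The core idea of the embedding approach described above can be sketched as follows: each discrete (speaker, style) combination is assigned its own continuous low-dimensional vector, so that similarities between combinations become measurable as distances or angles in that space. This is only a minimal, hypothetical illustration; the speaker/style names, the embedding dimension, and the random initialization are assumptions, and in the actual system such vectors are learned jointly with the synthesis network rather than fixed at random.

```python
import random

random.seed(0)

EMB_DIM = 4  # low-dimensional embedding size (hypothetical)

# Hypothetical (speaker, style) combinations, each mapped to one
# continuous vector, mirroring the paper's embedding idea.
combinations = [("spk1", "neutral"), ("spk1", "happy"),
                ("spk2", "neutral"), ("spk2", "happy")]

# Randomly initialized placeholder vectors; a real system would train
# these jointly with the speech synthesis network.
embedding = {c: [random.uniform(-1.0, 1.0) for _ in range(EMB_DIM)]
             for c in combinations}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

# Adapting to a new speaker/style then amounts to estimating one new
# point in this space from a small amount of target data.
sim = cosine(embedding[("spk1", "neutral")],
             embedding[("spk2", "neutral")])
print(f"similarity: {sim:.3f}")
```

In a lookup table like this, comparing two combinations reduces to a vector operation, which is what lets the network exploit shared structure across speakers and styles.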


Author(s):  
Jennifer Williams

Preliminary experiments in this dissertation show that it is possible to factorize specific types of information from the speech signal into an abstract embedding space using machine learning. This information includes characteristics of the recording environment, speaking style, and speech quality. Based on these findings, a new technique is proposed for factorizing multiple types of information from the speech signal simultaneously, using a combination of state-of-the-art machine learning methods for speech processing. Successful speech signal factorization would lead to advances across many speech technologies, including improved speaker identification, detection of audio deepfakes, and controllable expression in speech synthesis.
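One simple way to picture the factorization described above is a single speech embedding partitioned into named sub-vectors, one per information type. This is a hypothetical sketch, not the dissertation's actual model: the factor names, sub-dimensions, and toy vector are all assumptions chosen only to make the idea concrete.

```python
# Assumed per-factor sub-dimensions for a toy 8-dimensional embedding
# (hypothetical; not taken from the dissertation).
FACTORS = {"environment": 3, "style": 3, "quality": 2}

def factorize(embedding):
    """Split a flat embedding vector into per-factor sub-vectors."""
    assert len(embedding) == sum(FACTORS.values())
    parts, start = {}, 0
    for name, dim in FACTORS.items():
        parts[name] = embedding[start:start + dim]
        start += dim
    return parts

vec = [0.1, -0.4, 0.2, 0.9, 0.0, -0.3, 0.5, 0.7]  # toy embedding
parts = factorize(vec)
print(parts["style"])  # → [0.9, 0.0, -0.3]
```

Once the factors occupy separate subspaces, a downstream system can read or manipulate one type of information (e.g. speaking style) without disturbing the others, which is what enables applications like controllable expression in synthesis.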


2009 ◽ 
Author(s): Robert E. Remez, Kathryn R. Dubowski, Morgana L. Davids, Emily F. Thomas, Nina Paddu, ...
