Synthesized Speech
Recently Published Documents


TOTAL DOCUMENTS: 197 (FIVE YEARS: 27)
H-INDEX: 17 (FIVE YEARS: 2)

2022 · Vol. 12 (2) · pp. 827
Author(s): Ki-Seung Lee

Moderate intelligibility and naturalness can be obtained with previously established silent speech interface (SSI) methods. Nevertheless, a common problem with SSI has been deficient estimation of spectral detail, which yields synthesized speech that sounds rough, harsh, and unclear. In this study, harmonic enhancement (HE) was applied during postprocessing to alleviate this problem by emphasizing the spectral fine structure of the speech signal. To improve the subjective quality of the synthesized speech, the difference between synthesized and actual speech was measured as a distance in the perceptual domain instead of with the conventional mean square error (MSE). Two deep neural networks (DNNs), connected in cascade, were employed to estimate the speech spectra and the HE filter coefficients, respectively. The DNNs were trained to incrementally and iteratively minimize both the MSE and the perceptual distance (PD). A feasibility test showed that the perceptual evaluation of speech quality (PESQ) and short-time objective intelligibility (STOI) scores improved by 17.8% and 2.9%, respectively, compared with previous methods. Subjective listening tests confirmed that the proposed method was perceptually preferred over the conventional MSE-based method.
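To make the training objective concrete, here is a minimal sketch in PyTorch of the cascaded two-network setup trained on a combined MSE and perceptual-distance loss. The class names, layer sizes, the log-spectral stand-in for the perceptual distance, and the weight `alpha` are all illustrative assumptions, not the authors' implementation.

```python
# Sketch only: cascaded spectrum-estimation and HE networks with a
# combined MSE + perceptual-distance loss. All dimensions are assumed.
import torch
import torch.nn as nn

class SpectrumNet(nn.Module):
    # Stage 1: map SSI input features to a speech magnitude spectrum.
    def __init__(self, in_dim=128, spec_dim=257):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, spec_dim))

    def forward(self, x):
        return self.net(x)

class HENet(nn.Module):
    # Stage 2: map the estimated spectrum to per-bin harmonic-enhancement gains.
    def __init__(self, spec_dim=257):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(spec_dim, 256), nn.ReLU(),
            nn.Linear(256, spec_dim))

    def forward(self, s):
        return 2.0 * torch.sigmoid(self.net(s))  # gains in (0, 2)

def perceptual_distance(est, ref):
    # Stand-in for the paper's perceptual-domain distance: compare
    # log-magnitude spectra, which loosely tracks loudness perception.
    return torch.mean((torch.log1p(est.abs()) - torch.log1p(ref.abs())) ** 2)

spec_net, he_net = SpectrumNet(), HENet()
opt = torch.optim.Adam(
    list(spec_net.parameters()) + list(he_net.parameters()), lr=1e-4)
alpha = 0.5                      # MSE vs. perceptual-distance trade-off (assumed)

x = torch.randn(8, 128)          # dummy batch of SSI feature vectors
ref = torch.randn(8, 257).abs()  # dummy reference magnitude spectra

est = spec_net(x)
enh = est * he_net(est)          # cascade: apply the estimated HE gains
loss = nn.functional.mse_loss(enh, ref) + alpha * perceptual_distance(enh, ref)
loss.backward()
opt.step()
```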


2021 · Vol. 11 (3) · pp. 1144
Author(s): Sung-Woo Byun, Seok-Pil Lee

Recently, researchers have developed text-to-speech models based on deep learning that outperform previous approaches. However, because those systems only mimic the generic speaking style of a reference audio clip, it is difficult to assign user-defined emotion types to the synthesized speech. This paper proposes an emotional speech synthesizer built by embedding not only speaking styles but also emotional styles. We extend the speaker embedding in Tacotron to a multi-condition embedding by adding an emotion embedding, so that the synthesizer can generate emotional speech. An evaluation showed that the proposed model surpasses a previous model in emotional expressiveness.
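As an illustration of the multi-condition idea, the sketch below concatenates a speaker embedding and an emotion embedding to each time step of a Tacotron-style encoder output. The module name, dimensions, and concatenation point are assumptions; the paper's actual conditioning details may differ.

```python
# Sketch only: joint speaker + emotion conditioning of encoder outputs.
import torch
import torch.nn as nn

class MultiConditionEmbedding(nn.Module):
    def __init__(self, n_speakers, n_emotions, spk_dim=64, emo_dim=16):
        super().__init__()
        self.spk = nn.Embedding(n_speakers, spk_dim)
        self.emo = nn.Embedding(n_emotions, emo_dim)

    def forward(self, enc_out, speaker_id, emotion_id):
        # enc_out: (batch, time, channels) Tacotron encoder outputs.
        cond = torch.cat([self.spk(speaker_id), self.emo(emotion_id)], dim=-1)
        cond = cond.unsqueeze(1).expand(-1, enc_out.size(1), -1)
        # Broadcast the condition vector over time and append it per step.
        return torch.cat([enc_out, cond], dim=-1)

# Usage with dummy tensors:
emb = MultiConditionEmbedding(n_speakers=10, n_emotions=5)
enc_out = torch.randn(2, 50, 256)                  # fake encoder output
out = emb(enc_out, torch.tensor([0, 3]), torch.tensor([1, 4]))
print(out.shape)                                   # torch.Size([2, 50, 336])
```

The conditioned encoder output would then feed the attention and decoder stages unchanged, which is what lets a single trained model switch emotion at inference time by swapping the emotion index.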


Author(s): G. Lan, A. S. Fadeev, A. N. Morgunov, ...

This article details the development of methods for synthesizing phonemes of the human voice based on an analytical description of individual formants. A technique for analyzing the spectra and spectrograms of original phonemes to obtain the main amplitude-frequency characteristics of the signal components is presented. An algorithm that reconstructs a speech signal from the obtained parameter sets is proposed, and a technique for assessing the quality of the synthesized speech elements is described.
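For intuition, a formant-based reconstruction of the kind the article describes can be sketched as a sum of pitch harmonics weighted by formant resonances. The function below is an illustrative toy, not the authors' algorithm; the (frequency, amplitude, bandwidth) triples and the textbook formant values for the vowel /a/ are assumed.

```python
# Sketch only: rebuild a vowel-like phoneme from formant parameter triples.
import numpy as np

def synth_phoneme(formants, f0=120.0, dur=0.3, sr=16000):
    """formants: list of (freq_hz, amplitude, bandwidth_hz) triples."""
    t = np.arange(int(dur * sr)) / sr
    signal = np.zeros_like(t)
    # Sum harmonics of the pitch f0, each weighted by a Lorentzian
    # resonance term for every formant it falls near.
    for freq, amp, bw in formants:
        for k in range(1, int(sr / 2 / f0)):
            h = k * f0
            weight = amp / (1.0 + ((h - freq) / bw) ** 2)
            signal += weight * np.sin(2 * np.pi * h * t)
    return signal / np.max(np.abs(signal))   # normalize to [-1, 1]

# Rough /a/ formants: F1 ~ 700 Hz, F2 ~ 1200 Hz, F3 ~ 2600 Hz.
wave = synth_phoneme([(700, 1.0, 110), (1200, 0.7, 120), (2600, 0.3, 160)])
print(wave.shape)                             # (4800,) samples at 16 kHz
```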

