Noise-robust voice conversion with domain adversarial training

2022
Author(s): Hongqiang Du, Lei Xie, Haizhou Li

2020, Vol 27, pp. 1769-1773
Author(s): Hyungjun Lim, Younggwan Kim, Hoirin Kim

2019, Vol 10 (1), pp. 151
Author(s): Xiaokong Miao, Meng Sun, Xiongwei Zhang, Yimin Wang

This paper presents a noise-robust voice conversion method with high-quefrency boosting via sub-band cepstrum conversion and fusion, based on bidirectional long short-term memory (BLSTM) neural networks that convert the vocal tract parameters of a source speaker into those of a target speaker. With state-of-the-art machine learning methods, voice conversion has achieved good performance given abundant clean training data. However, the quality and similarity of the converted voice are significantly degraded compared to those of a natural target voice due to factors such as limited training data and noisy input speech from the source speaker. To address the problem of noisy input speech, an architecture combining statistical filtering with sub-band cepstrum conversion and fusion is introduced. The impact of noise on the converted voice is reduced by accurate reconstruction of the sub-band cepstrum and subsequent statistical filtering. By normalizing the mean and variance of the converted cepstrum to those of the target cepstrum in the training phase, a cepstrum filter is constructed to further improve the quality of the converted voice. The experimental results showed that the proposed method significantly improved the naturalness and similarity of the converted voice compared to the baselines, even with noisy input speech from the source speakers.
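The statistical filtering step described in this abstract, normalizing the converted cepstrum toward the target speaker's statistics, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the per-dimension treatment of the cepstral coefficients, and the toy dimensions are assumptions made purely for illustration.

```python
import numpy as np

def mvn_filter(converted_cep, target_cep_mean, target_cep_std, eps=1e-8):
    """Normalize the mean and variance of a converted cepstrum sequence
    to match target-speaker statistics gathered during training.

    converted_cep: (frames, dims) cepstral coefficients from the conversion model
    target_cep_mean, target_cep_std: (dims,) target-speaker cepstrum statistics
    """
    # Per-dimension statistics of the converted sequence.
    src_mean = converted_cep.mean(axis=0)
    src_std = converted_cep.std(axis=0)

    # Shift and scale each cepstral dimension so its mean and variance
    # match those observed for the target speaker in the training phase.
    normalized = (converted_cep - src_mean) / (src_std + eps)
    return normalized * target_cep_std + target_cep_mean


# Toy usage: 200 frames of 24-dimensional cepstra.
rng = np.random.default_rng(0)
converted = rng.normal(size=(200, 24))
target_mean = rng.normal(size=24)
target_std = np.abs(rng.normal(size=24)) + 0.5
filtered = mvn_filter(converted, target_mean, target_std)
print(filtered.mean(axis=0)[:3], filtered.std(axis=0)[:3])
```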


2015, Vol 2015, pp. 1-9
Author(s): Trung-Nghia Phung, Huy-Khoi Do, Van-Tao Nguyen, Quang-Vinh Thai

The learning-based speech recovery approach using statistical spectral conversion has been applied to certain kinds of distorted speech, such as alaryngeal speech and body-conducted (bone-conducted) speech. This approach attempts to recover clean (undistorted) speech from noisy (distorted) speech by converting the statistical models of noisy speech into those of clean speech, without prior knowledge of the characteristics and distribution of the noise source. So far, the approach has not been widely adopted for general noisy speech enhancement because of two major problems: the difficulty of noise adaptation and the lack of noise-robust synthesizable features across different noisy environments. In this paper, we adopted methods from state-of-the-art voice conversion and from speaker adaptation in speech recognition for the proposed speech recovery approach, and applied it in different kinds of noisy environments, especially adverse environments requiring joint compensation of additive and convolutive noise. We proposed using decorrelated wavelet packet coefficients as a low-dimensional, noise-robust synthesizable feature. We also proposed a noise adaptation scheme for speech recovery based on an eigennoise, analogous to the eigenvoice in voice conversion. The experimental results showed that the proposed approach substantially outperformed traditional non-learning-based approaches.
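As a rough sketch of the decorrelated wavelet packet feature mentioned in this abstract, the code below decomposes speech frames with a wavelet packet tree and decorrelates the resulting log sub-band energies with a PCA-style transform. The use of PyWavelets, the db4 wavelet, the three-level tree, and log sub-band energies are all assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np
import pywt  # PyWavelets; assumed here for the wavelet packet decomposition


def wavelet_packet_features(frame, wavelet="db4", level=3):
    """Return log sub-band energies from a wavelet packet decomposition of one
    speech frame (a simplified, assumed feature, not the paper's exact one)."""
    wp = pywt.WaveletPacket(data=frame, wavelet=wavelet, mode="symmetric", maxlevel=level)
    # Leaf nodes at the deepest level, ordered by frequency band.
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(node.data ** 2) for node in nodes])
    return np.log(energies + 1e-10)


def decorrelate(features):
    """PCA-style decorrelation of a (frames, dims) feature matrix."""
    centered = features - features.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    # Eigenvectors of the covariance matrix give the decorrelating transform.
    _, eigvecs = np.linalg.eigh(cov)
    return centered @ eigvecs


# Toy usage: 100 frames of 256 samples each.
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 256))
feats = np.stack([wavelet_packet_features(f) for f in frames])
print(decorrelate(feats).shape)  # (100, 8) for a three-level tree
```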


2010, Vol E93-C (11), pp. 1583-1589
Author(s): Fumirou MATSUKI, Kazuyuki HASHIMOTO, Keiichi SANO, Fu-Yuan HSUEH, Ramesh KAKKAD, ...
