Application of the Speech Enhancement Technology in Flight Simulation System

2014 ◽  
Vol 556-562 ◽  
pp. 3408-3411
Author(s):  
Jing Xin Xiao ◽  
Lin Deng ◽  
Yun He ◽  
You Yi Li

The speech signal can be affected by external interference during flight simulation, which degrades the speech processing performance of the flight simulation system. To solve this problem, a speech enhancement module has been developed using the spectral subtraction algorithm, built on the flight simulation system developed by the research team. The module improves recognition rates and provides stronger anti-jamming capability for the speech processing of the flight simulation system. The feasibility and validity of the speech enhancement module are validated by processing interference-corrupted speech signals. The work in this paper lays a foundation for the application of speech enhancement technology in flight simulation systems.
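As a rough illustration of the spectral subtraction approach named in the abstract (not the authors' implementation), the sketch below estimates the noise magnitude spectrum from an assumed noise-only lead-in and subtracts it frame by frame; the frame length, over-subtraction factor, and spectral floor are illustrative assumptions.

```python
# Minimal spectral-subtraction sketch: estimate noise from an assumed
# noise-only lead-in, subtract it in the magnitude-spectrum domain.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_dur=0.25, over_sub=1.0, floor=0.02):
    """Enhance `noisy` speech, assuming its first `noise_dur` seconds are noise only."""
    f, t, X = stft(noisy, fs, nperseg=512)          # default hop = 256 samples
    mag, phase = np.abs(X), np.angle(X)
    # Average the noise magnitude over the assumed noise-only frames.
    n_frames = max(1, int(noise_dur * fs / 256))
    noise_mag = mag[:, :n_frames].mean(axis=1, keepdims=True)
    # Subtract the noise estimate and apply a spectral floor to limit musical noise.
    clean_mag = np.maximum(mag - over_sub * noise_mag, floor * noise_mag)
    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs, nperseg=512)
    return enhanced
```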

2011 ◽  
Vol 267 ◽  
pp. 762-767
Author(s):  
Ji Xiang Lu ◽  
Ping Wang ◽  
Hong Zhong Shi ◽  
Xin Wang

As a primary research area of Multimodal Human-Computer Interaction, Voice Interaction mainly involves the extraction and identification of the natural speech signal: extraction provides reliable signal sources, which are then analyzed in identification. This paper studies multichannel speech enhancement technology aimed at Voice Interaction. Simulation results show the effectiveness and superiority of the improved algorithm proposed in the paper.
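The abstract does not detail the improved multichannel algorithm; for reference only, the following sketch shows a delay-and-sum beamformer, the most basic form of multichannel speech enhancement, with per-microphone integer sample delays assumed to be known.

```python
# Delay-and-sum beamformer sketch: time-align each microphone toward the
# talker and average, attenuating off-axis interference and diffuse noise.
import numpy as np

def delay_and_sum(channels, delays_samples):
    """channels: (num_mics, num_samples) array; delays_samples: per-mic integer delays."""
    num_mics, n = channels.shape
    out = np.zeros(n)
    for m in range(num_mics):
        # Align channel m to the assumed steering direction, then accumulate.
        out += np.roll(channels[m], -delays_samples[m])
    return out / num_mics
```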


This paper introduces technology to improve sound quality, serving the needs of media and entertainment. A major challenge in speech processing applications such as mobile phones, hands-free phones, in-car communication, teleconference systems, hearing aids, voice coders, automatic speech recognition, and forensics is eliminating background noise. Speech enhancement algorithms are widely used in these applications to remove noise from speech degraded in noisy environments. However, conventional noise reduction methods introduce residual noise and speech distortion: noise reduction can improve speech quality, but it affects the intelligibility of the clean speech signal. In this paper, we introduce a new coherence-based noise reduction method for complex noise environments in which target speech coexists with surrounding coherent noise. Speech presence probability information is added to the coherence model to track noise variation more accurately, and the adaptive coherence-based method is adjusted during speech-present and speech-absent periods. The performance of the proposed method is evaluated under diffuse noise and real street noise, and it improves speech quality with less speech distortion and residual noise.
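As a hedged illustration of the general coherence idea only (not the authors' model, which additionally tracks speech presence probability), the sketch below uses the recursively smoothed magnitude-squared coherence between two microphones as a spectral gain; the smoothing constant and frame length are assumptions.

```python
# Two-microphone coherence-gain sketch: coherence is high for the directional
# target speech and low for diffuse noise, so it can act as a spectral gain.
import numpy as np
from scipy.signal import stft, istft

def coherence_gain_enhance(x1, x2, fs, nperseg=512, alpha=0.8):
    _, _, X1 = stft(x1, fs, nperseg=nperseg)
    _, _, X2 = stft(x2, fs, nperseg=nperseg)
    P11 = P22 = P12 = None
    out = np.zeros_like(X1)
    for k in range(X1.shape[1]):
        # Recursively smooth the auto- and cross-power spectra per frame.
        a1, a2, c = np.abs(X1[:, k])**2, np.abs(X2[:, k])**2, X1[:, k] * np.conj(X2[:, k])
        P11 = a1 if P11 is None else alpha * P11 + (1 - alpha) * a1
        P22 = a2 if P22 is None else alpha * P22 + (1 - alpha) * a2
        P12 = c if P12 is None else alpha * P12 + (1 - alpha) * c
        msc = np.abs(P12)**2 / (P11 * P22 + 1e-12)     # magnitude-squared coherence in [0, 1]
        out[:, k] = msc * 0.5 * (X1[:, k] + X2[:, k])  # apply coherence as the gain
    _, enhanced = istft(out, fs, nperseg=nperseg)
    return enhanced
```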


2021 ◽  
pp. 2250008
Author(s):  
N. Radha ◽  
R. B. Jananie ◽  
A. Anto Silviya

Speech processing is an important application area of digital signal processing that helps examine and analyze the speech signal. Within this processing, speech enhancement is essential because it improves signal quality and thereby helps resolve communication challenges. Various speech enhancement algorithms are used in the research field, but limited processing capability, large microphone distances, and voice-first I/O interfaces add computational complexity. In this paper, speech enhancement is performed in two stages. In the first stage, the spectral subtraction method is applied to the LJ Speech dataset: the noise spectrum is estimated during pauses and subtracted from the noisy speech signal to obtain the clean speech signal. However, spectral subtraction still introduces artificial and narrow-band noise into the spectrum. Hence, artificial bandwidth expansion with a deep shallow convolutional neural network (ABE-DSCNN) is implemented as the second stage. The developed system is compared with conventional enhancement approaches such as a deep neural network (DNN), neural beamforming (NB), and a generative adversarial network (GAN). The experimental results show that the ABE-DSCNN provides a 4% increase in PESQ and a 40% to 56% improvement in error rate relative to the other algorithms for 1000 speech samples. Hence, the paper concludes that the ABE-DSCNN approach effectively improves speech quality.
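The network architecture is not described in the abstract; the following PyTorch sketch is a hypothetical shallow 1-D CNN for artificial bandwidth expansion, included only to indicate what such a second stage might look like. The layer sizes, kernel widths, residual formulation, and L1 loss are assumptions, not the configuration reported in the paper.

```python
# Hypothetical shallow 1-D CNN for artificial bandwidth expansion (sketch only).
import torch
import torch.nn as nn

class ShallowABECNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, padding=4),   # local temporal features
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=9, padding=4),   # predict missing high-band detail
        )

    def forward(self, narrowband):
        # narrowband: (batch, 1, samples), already upsampled to the target rate;
        # the network predicts a residual that restores the missing bandwidth.
        return narrowband + self.net(narrowband)

# Example training step against wideband targets (assumed L1 loss):
#   loss = nn.functional.l1_loss(model(nb_batch), wb_batch)
```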


2021 ◽  
pp. 2150022
Author(s):  
Caio Cesar Enside de Abreu ◽  
Marco Aparecido Queiroz Duarte ◽  
Bruno Rodrigues de Oliveira ◽  
Jozue Vieira Filho ◽  
Francisco Villarreal

Speech processing systems are very important in applications involving speech and voice quality, such as automatic speech recognition, forensic phonetics and speech enhancement, among others. In most of them, acoustic environmental noise is added to the original signal, decreasing the signal-to-noise ratio (SNR) and, consequently, the speech quality. Therefore, noise estimation is one of the most important steps in speech processing, whether the goal is to reduce the noise before processing or to design robust algorithms. In this paper, a new approach to estimating noise from speech signals is presented and its effectiveness is tested in the speech enhancement context. For this purpose, partial least squares (PLS) regression is used to model the acoustic environment (AE), and a Wiener filter based on a priori SNR estimation is implemented to evaluate the proposed approach. Six noise types are used to create seven acoustically modeled noises. The basic idea is to use the AE model to identify the noise type and estimate its power for use in a speech processing system. Speech signals processed using the proposed method and classical noise estimators are evaluated through objective measures. Results show that the proposed method produces better speech quality than state-of-the-art noise estimators, enabling its use in real-time applications in the fields of robotics, telecommunications and acoustic analysis.
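A minimal sketch of a Wiener filter driven by a decision-directed a priori SNR estimate, the kind of filter the abstract describes: here the per-frequency noise power `noise_psd` is taken as given (in the paper it would come from the PLS acoustic-environment model), and the smoothing constant and SNR floor are common textbook choices rather than values from the paper.

```python
# Wiener enhancement with decision-directed a priori SNR (sketch).
import numpy as np
from scipy.signal import stft, istft

def wiener_enhance(noisy, fs, noise_psd, nperseg=512, beta=0.98, snr_floor=1e-3):
    """noise_psd: per-frequency noise power estimate, shape (nperseg // 2 + 1,)."""
    _, _, Y = stft(noisy, fs, nperseg=nperseg)
    prev_clean_power = np.zeros(Y.shape[0])
    out = np.zeros_like(Y)
    for k in range(Y.shape[1]):
        noisy_power = np.abs(Y[:, k])**2
        post_snr = noisy_power / (noise_psd + 1e-12)              # a posteriori SNR
        prio_snr = beta * prev_clean_power / (noise_psd + 1e-12) \
                   + (1 - beta) * np.maximum(post_snr - 1.0, 0.0)  # decision-directed a priori SNR
        prio_snr = np.maximum(prio_snr, snr_floor)
        gain = prio_snr / (1.0 + prio_snr)                         # Wiener gain
        out[:, k] = gain * Y[:, k]
        prev_clean_power = np.abs(out[:, k])**2
    _, enhanced = istft(out, fs, nperseg=nperseg)
    return enhanced
```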


Author(s):  
Siriporn Dachasilaruk ◽  
Niphat Jantharamin ◽  
Apichai Rungruang

Cochlear implant (CI) listeners encounter difficulties in communicating with other people in noisy listening environments, yet most CI research has been carried out on the English language. In this study, single-channel speech enhancement (SE) strategies used as a pre-processing stage for the CI system were investigated in terms of Thai speech intelligibility improvement. Two SE algorithms, namely the multi-band spectral subtraction (MBSS) and Wiener filter (WF) algorithms, were evaluated. Speech signals consisting of monosyllabic and bisyllabic Thai words were degraded by speech-shaped noise and babble noise at SNR levels of 0, 5, and 10 dB. The noisy words were then enhanced using the SE algorithms, fed into the CI system to synthesize vocoded speech, and presented to twenty normal-hearing listeners. The results indicated that speech intelligibility was marginally improved by the MBSS algorithm and significantly improved by the WF algorithm in some conditions. The enhanced bisyllabic words showed noticeably higher intelligibility improvement than the enhanced monosyllabic words in all conditions, particularly in speech-shaped noise. Such outcomes may be beneficial to Thai-speaking CI listeners.
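For context on the vocoded-speech step, the sketch below is a generic noise-excited channel vocoder of the kind commonly used to simulate CI processing for normal-hearing listeners; the number of bands, filter orders, and envelope cutoff are illustrative assumptions, not the study's settings.

```python
# Generic noise-excited channel vocoder (CI simulation sketch).
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def noise_vocoder(speech, fs, n_bands=8, env_cutoff=160.0):
    # Logarithmically spaced analysis bands between 100 Hz and ~0.45 * fs.
    edges = np.logspace(np.log10(100.0), np.log10(min(7000.0, 0.45 * fs)), n_bands + 1)
    env_sos = butter(2, env_cutoff / (fs / 2), btype='low', output='sos')
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='band', output='sos')
        band = sosfilt(band_sos, speech)
        envelope = sosfiltfilt(env_sos, np.abs(band))                   # low-pass the rectified band
        carrier = sosfilt(band_sos, rng.standard_normal(len(speech)))   # band-limited noise carrier
        out += np.maximum(envelope, 0.0) * carrier                      # modulate carrier by envelope
    return out
```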

