The application of bionic wavelet transform to speech signal processing in cochlear implants using neural network simulations

2002 ◽  
Vol 49 (11) ◽  
pp. 1299-1309 ◽  
Author(s):  
Jun Yao ◽  
Yuan-Ting Zhang
2012 ◽  
Vol 42 (2) ◽  
pp. 253-254
Author(s):  
Rolf Carlson ◽  
Björn Granström

Johan Liljencrants was a KTH old-timer. His interests focused early on speech analysis and synthesis, where in the 1960s he took a leading part in the development of analysis hardware, the OVE III speech synthesizer, and the introduction of computers in the Speech Transmission Laboratory. Later work shifted toward general speech signal processing, for instance in his thesis on the use of a reflection line synthesizer. His interests expanded to modelling the glottal system, parametrically as in the Liljencrants–Fant (LF) model of glottal waveshapes, as well as physically, including glottal aerodynamics and mechanics.


Author(s):  
M. Yasin Pir ◽  
Mohamad Idris Wani

Speech forms a significant means of communication, and the variation in pitch of a speech signal is commonly used to classify a speaker's gender as male or female. In this study, we propose a system for gender classification from speech that combines a hybrid model of the 1-D Stationary Wavelet Transform (SWT) with an artificial neural network. Features such as power spectral density, frequency, and amplitude of human voice samples were used to classify gender. We use the Daubechies wavelet at different levels for decomposition and reconstruction of the signal. The reconstructed signal is fed to a feed-forward artificial neural network for gender classification. This study uses 400 voice samples of both genders from the Michigan University database, sampled at 16000 Hz. The experimental results show that the proposed method achieves more than 94% classification accuracy on both training and testing datasets.
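The pipeline described above — stationary wavelet decomposition/reconstruction, spectral feature extraction, then a feed-forward classifier — can be illustrated with a minimal numpy-only sketch. This is not the authors' implementation: it substitutes a one-level undecimated Haar transform for the Daubechies SWT, synthetic sinusoidal "voiced frames" (with assumed male/female F0 ranges) for the Michigan database, and a single logistic neuron for the full feed-forward network.

```python
import numpy as np

def haar_swt(x):
    """One-level undecimated (stationary) Haar transform, circular boundary."""
    rolled = np.roll(x, -1)
    approx = (x + rolled) / 2.0   # low-pass (approximation) coefficients
    detail = (x - rolled) / 2.0   # high-pass (detail) coefficients
    return approx, detail

def haar_iswt(approx, detail):
    """Inverse of haar_swt: exact reconstruction for this scheme."""
    return approx + detail

def dominant_freq(signal, fs):
    """Frequency (Hz) of the largest non-DC spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]

# Synthetic voiced frames; F0 ranges are typical values, NOT from the paper.
fs = 16000
t = np.arange(2048) / fs
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(50):
    f_male = rng.uniform(85, 155)     # assumed male F0 range
    f_female = rng.uniform(175, 255)  # assumed female F0 range
    for f0, label in ((f_male, 0), (f_female, 1)):
        sig = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)
        approx, detail = haar_swt(sig)
        X.append(dominant_freq(haar_iswt(approx, detail), fs))
        y.append(label)
X, y = np.array(X), np.array(y)

# Single logistic neuron standing in for the feed-forward network.
xs = (X - X.mean()) / X.std()
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * xs + b)))
    w -= 0.1 * np.mean((p - y) * xs)
    b -= 0.1 * np.mean(p - y)
pred = (1.0 / (1.0 + np.exp(-(w * xs + b))) > 0.5).astype(int)
acc = np.mean(pred == y)
```

A real system would decompose to several Daubechies levels (e.g. via PyWavelets' `swt`/`iswt`) and feed multiple spectral features to a multi-layer network; this sketch only shows why pitch-derived features separate the two classes so cleanly.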


2021 ◽  
Vol 21 (1) ◽  
pp. 19
Author(s):  
Asri Rizki Yuliani ◽  
M. Faizal Amri ◽  
Endang Suryawati ◽  
Ade Ramdan ◽  
Hilman Ferdinandus Pardede

Speech enhancement, which aims to recover the clean speech from a corrupted signal, plays an important role in digital speech signal processing. Approaches to speech enhancement vary with the type of degradation and noise in the speech signal, so the research topic remains challenging in practice, specifically when dealing with highly non-stationary noise and reverberation. Recent advances in deep learning technologies have provided great support for progress in the speech enhancement research field. Deep learning has been shown to outperform the statistical models used in conventional speech enhancement, and hence deserves a dedicated survey. In this review, we describe the advantages and disadvantages of recent deep learning approaches, and discuss the challenges and trends of this field. From the reviewed works, we conclude that the trend in deep learning architectures has shifted from the standard deep neural network (DNN) to the convolutional neural network (CNN), which can efficiently learn temporal information from the speech signal, and the generative adversarial network (GAN), which trains two networks against each other.
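The core operation behind the CNN architectures mentioned above is a 1-D convolution slid along the time axis of the signal. As a minimal numpy sketch of why this captures temporal structure, the example below applies a "same"-padded 1-D convolution to a noisy frame; a fixed moving-average kernel stands in for a learned filter, and the signal and noise levels are illustrative assumptions, not from any surveyed system.

```python
import numpy as np

def conv1d(x, kernel):
    """'Same'-padded 1-D convolution of a signal frame with a filter kernel."""
    k = len(kernel)
    pad_left = k // 2
    xp = np.pad(x, (pad_left, k - 1 - pad_left))  # zero-pad so output length == input length
    return np.array([np.dot(xp[i:i + k], kernel[::-1]) for i in range(len(x))])

# Illustrative frame: a 200 Hz tone (clean speech stand-in) plus white noise.
fs = 16000
t = np.arange(1024) / fs
rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 200.0 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# A 9-tap moving average: a hand-picked low-pass filter standing in for
# the kernels a CNN would learn from data.
kernel = np.ones(9) / 9.0
enhanced = conv1d(noisy, kernel)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_enhanced = np.mean((enhanced - clean) ** 2)
```

A trained CNN stacks many such convolutions with learned, non-trivial kernels and nonlinearities; the point of the sketch is only that a short kernel sliding over time can suppress broadband noise while preserving the slowly varying speech component.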


2005 ◽  
Vol 15 (3-4) ◽  
pp. 217-222 ◽  
Author(s):  
D. Shi ◽  
F. Chen ◽  
G. S. Ng ◽  
J. Gao
