Building Health Monitoring Using Computational Auditory Scene Analysis

Author(s): Mitsuru Kawamoto, Takuji Hamamoto
2014, Vol 614, pp. 363-366

Author(s): Yi Jiang, Yuan Yuan Zu, Ying Ze Wang

A K-means based unsupervised approach to close-talk speech enhancement is proposed in this paper. Within the framework of computational auditory scene analysis (CASA), the dual-microphone energy difference (DMED) is used as the cue to classify time-frequency (T-F) units into noise-dominant and target-speech-dominant units. A ratio mask is then used to separate the target speech from the noise. Experimental results show that the proposed algorithm performs more robustly than the Wiener filtering algorithm.
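
To make the pipeline described in this abstract concrete, the following is a minimal sketch of a DMED-driven, two-class K-means classification of T-F units followed by ratio masking. The function name dmed_kmeans_enhance, the dB-scale DMED definition, the soft-mask form, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Sketch: unsupervised DMED + K-means T-F classification with ratio masking.
import numpy as np
from scipy.signal import stft, istft
from sklearn.cluster import KMeans

def dmed_kmeans_enhance(close_mic, far_mic, fs=16000, n_fft=512, hop=256):
    # STFT of both microphone channels -> T-F representations.
    _, _, X_close = stft(close_mic, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    _, _, X_far = stft(far_mic, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)

    # Dual-microphone energy difference (DMED) per T-F unit, in dB.
    eps = 1e-12
    e_close = np.abs(X_close) ** 2
    e_far = np.abs(X_far) ** 2
    dmed = 10.0 * np.log10((e_close + eps) / (e_far + eps))

    # Unsupervised 2-class K-means on the DMED cue: one cluster is assumed to
    # gather target-speech-dominant units, the other noise-dominant units.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        dmed.reshape(-1, 1)
    ).reshape(dmed.shape)

    # The cluster with the larger mean DMED is taken as the target cluster,
    # since close-talk speech is stronger at the close microphone.
    target_label = int(np.argmax([dmed[labels == k].mean() for k in (0, 1)]))
    target_units = labels == target_label

    # Soft ratio mask: relative close-mic energy on target units,
    # strong attenuation on noise-dominant units.
    mask = np.where(target_units, e_close / (e_close + e_far + eps), 0.1)

    # Apply the mask to the close-talk channel and reconstruct the waveform.
    _, enhanced = istft(mask * X_close, fs=fs, nperseg=n_fft,
                        noverlap=n_fft - hop)
    return enhanced
```

The two-cluster K-means stands in for the unsupervised labelling step; a binary mask could replace the soft ratio mask if a harder separation is preferred.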


2005, Vol 17 (9), pp. 1875-1902
Author(s): Simon Haykin, Zhe Chen

This review presents an overview of a challenging problem in auditory perception, the cocktail party phenomenon, the delineation of which goes back to a classic paper by Cherry in 1953. In this review, we address the following issues: (1) human auditory scene analysis, which is a general process carried out by the auditory system of a human listener; (2) insight into auditory perception, which is derived from Marr's vision theory; (3) computational auditory scene analysis, which focuses on specific approaches aimed at solving the machine cocktail party problem; (4) active audition, the proposal for which is motivated by analogy with active vision, and (5) discussion of brain theory and independent component analysis, on the one hand, and correlative neural firing, on the other.

