Missing Feature Theory
Recently Published Documents


TOTAL DOCUMENTS: 25 (FIVE YEARS: 0)
H-INDEX: 7 (FIVE YEARS: 0)

2017 ◽ Vol 29 (1) ◽ pp. 105-113
Author(s): Kazuhiro Nakadai ◽ Tomoaki Koiwa

[Figure: System architecture of AVSR based on missing feature theory and P-V grouping]

Audio-visual speech recognition (AVSR) is a promising approach to improving the noise robustness of speech recognition in the real world. In AVSR, the auditory and visual units are the phoneme and the viseme, respectively. However, these are often misclassified in real environments because of noisy input. To solve this problem, we propose two psychologically inspired approaches. One is audio-visual integration based on missing feature theory (MFT), which copes with missing or unreliable audio and visual features during recognition. The other is phoneme and viseme grouping based on coarse-to-fine recognition. Preliminary experiments show that both approaches are effective for AVSR. Integration based on MFT with an appropriate weight improves recognition performance even at −5 dB SNR, a noisy condition in which most speech recognition systems do not work properly. Phoneme and viseme grouping further improved AVSR performance, particularly at low signal-to-noise ratios.

This work is an extension of our publication: Tomoaki Koiwa et al., "Coarse speech recognition by audio-visual integration based on missing feature theory," IROS 2007, pp. 1751-1756, 2007.
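The MFT-based integration described above can be illustrated with a small sketch: each stream's per-frame log-likelihoods are discounted by a reliability mask (0 = missing, 1 = fully reliable) and the two streams are combined with a stream weight. This is a minimal sketch under assumed conventions; the function name mft_av_log_likelihood, the stream_weight parameter, and the toy data are illustrative, not the authors' implementation, which decodes with phoneme/viseme HMMs.

```python
import numpy as np

def mft_av_log_likelihood(audio_ll, visual_ll, audio_mask, visual_mask,
                          stream_weight=0.7):
    """Combine audio and visual evidence under missing feature theory.

    audio_ll, visual_ll : (T, N) per-frame log-likelihoods for N
        candidate units (phonemes/visemes mapped to a shared label set).
    audio_mask, visual_mask : (T,) reliability masks in [0, 1]; a value
        near 0 marks a missing/unreliable frame whose evidence is
        discounted rather than trusted.
    stream_weight : relative weight of the audio stream (illustrative).
    """
    a = stream_weight * audio_mask[:, None] * audio_ll
    v = (1.0 - stream_weight) * visual_mask[:, None] * visual_ll
    return (a + v).sum(axis=0)  # summed log-likelihood per candidate

# Toy usage: heavily corrupted audio (low mask values) lets the
# visual stream dominate the decision.
rng = np.random.default_rng(0)
T, N = 50, 40
audio_ll, visual_ll = rng.normal(size=(T, N)), rng.normal(size=(T, N))
audio_mask = rng.uniform(0.0, 0.3, size=T)
visual_mask = np.ones(T)
best_unit = int(np.argmax(mft_av_log_likelihood(
    audio_ll, visual_ll, audio_mask, visual_mask)))
```

Masking the log-likelihood contribution, rather than the feature value itself, is the standard "weighting" flavor of MFT; the stream weight then controls how much the audio stream is trusted relative to the visual one.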


2011 ◽ Vol 57 (3) ◽ pp. 1245-1250
Author(s): Shin-cheol Lim ◽ Sei-jin Jang ◽ Soek-pil Lee ◽ Moo Kim

2010 ◽ Vol 1 (1)
Author(s): Toru Takahashi ◽ Kazuhiro Nakadai ◽ Kazunori Komatani ◽ Tetsuya Ogata ◽ Hiroshi G. Okuno

Abstract: This paper describes an improvement to automatic speech recognition (ASR) for robot audition that introduces missing feature theory (MFT) with soft missing feature masks (MFMs) to enable natural human-robot interaction. In an everyday environment, a robot's microphones capture various sounds besides the user's utterances. Although sound-source separation is an effective way to enhance the user's utterances, it inevitably produces errors due to reflection and reverberation. MFT is able to cope with these errors. First, MFMs are generated based on the reliability of time-frequency components; ASR then weights the time-frequency components according to the MFMs. We propose a new method that automatically generates soft MFMs, whose values vary continuously from 0 to 1 according to a sigmoid function. The proposed MFM generation was implemented on HRP-2 using HARK, our open-source robot audition software. Preliminary results show that soft MFMs outperform hard (binary) MFMs in recognizing three simultaneous utterances. In a human-robot interaction task, the minimum angular interval between two adjacent loudspeakers was reduced from 60 degrees to 30 degrees by using soft MFMs.
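The soft-mask generation lends itself to a short sketch: a sigmoid squashes a per-component reliability score into (0, 1), and the decoder weights each spectral component's log-likelihood by that value. This is a minimal sketch under stated assumptions; soft_missing_feature_mask, the threshold/steepness parameters, and the treatment of reliability as a normalized local score are placeholders, not the HARK API.

```python
import numpy as np

def soft_missing_feature_mask(reliability, threshold=0.5, steepness=10.0):
    """Map time-frequency reliability scores to a soft mask in (0, 1).

    reliability : (T, F) array of per-component reliability estimates,
        assumed normalized to [0, 1] (e.g. derived from the separation
        stage). threshold and steepness set the sigmoid's midpoint and
        slope; both are illustrative values.
    """
    return 1.0 / (1.0 + np.exp(-steepness * (reliability - threshold)))

def masked_frame_log_likelihood(frame_ll, mask_t):
    """MFT decoding step for one frame: weight each spectral component's
    log-likelihood by its mask value before summing over components.

    frame_ll : (F, N) log-likelihoods of F components under N states.
    mask_t   : (F,) soft MFM row for this frame.
    """
    return (mask_t[:, None] * frame_ll).sum(axis=0)

# A hard (binary) MFM is the limiting case of infinite steepness,
# i.e. np.where(reliability > threshold, 1.0, 0.0).
```

A soft mask degrades gracefully: a component whose reliability sits near the threshold still contributes partial evidence instead of being switched entirely on or off, which is consistent with the reported gain over hard MFMs on simultaneous utterances.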

