Blind speech separation and recognition system for human robot interaction in reverberant environment

Author(s):  
Janghoon Cho ◽  
Hyunsin Park ◽  
Chang D. Yoo

Robotica ◽  
2014 ◽  
Vol 33 (1) ◽  
pp. 1-18 ◽  
Author(s):  
Alberto Poncela ◽  
Leticia Gallardo-Estrella

SUMMARY: Verbal communication is the most natural way of human–robot interaction. Such an interaction is usually achieved by means of a human–robot interface (HRI). In this paper, an HRI is presented to teleoperate a robotic platform via the user's voice. Hence, a speech recognition system is necessary. In this work, a user-dependent acoustic model for Spanish speakers has been developed to teleoperate a robot with a set of commands. Experimental results have been successful, both in terms of a high recognition rate and in the navigation of the robot under the control of the user's voice.
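The command-driven teleoperation loop described in this abstract can be sketched as a mapping from recognized words to robot velocities. The command names, velocity values, and confidence threshold below are illustrative assumptions, not the paper's actual Spanish vocabulary or robot API:

```python
# Hypothetical sketch of a voice-command teleoperation loop.
# Command names and velocity values are illustrative only.

COMMAND_MAP = {
    "avanza":    (0.3, 0.0),   # forward: (linear m/s, angular rad/s)
    "retrocede": (-0.2, 0.0),  # backward
    "izquierda": (0.0, 0.5),   # turn left
    "derecha":   (0.0, -0.5),  # turn right
    "para":      (0.0, 0.0),   # stop
}

def command_to_velocity(recognized_word, confidence, threshold=0.7):
    """Map a recognized command to a (linear, angular) velocity pair.

    Low-confidence hypotheses are rejected so the robot stops
    rather than executing a misrecognized command.
    """
    if confidence < threshold or recognized_word not in COMMAND_MAP:
        return (0.0, 0.0)  # safe default: stop
    return COMMAND_MAP[recognized_word]
```

Rejecting low-confidence recognitions in favor of a stop command is one simple way a teleoperation interface can fail safe when the recognizer is unsure.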


2014 ◽  
Vol 11 (04) ◽  
pp. 1442008 ◽  
Author(s):  
Kwang-Eun Ko ◽  
Kwee-Bo Sim

This paper is concerned with an imitative neural mechanism for recognizing behavior intention in a human–robot interaction system. The intention recognition process is inspired by the neural mechanism of the mirror neurons in the macaque monkey brain. We adapt a standard neural network with parametric biases as a reference model to imitate sensory–motor data pairs. The imitation process is primarily directed toward reproducing the goals of observed actions rather than the exact action trajectories. Several experiments and their results show that the proposed model enables useful robotic applications for human–robot interaction systems.
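The reference model named above belongs to the family of recurrent networks with parametric biases, where a small bias vector selects which learned behavior the same network generates. A minimal forward-pass sketch follows; the layer sizes, random weights, and use of NumPy are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: sensory-motor frame, hidden state, parametric bias.
IN, HID, PB = 4, 8, 2

W_in  = rng.standard_normal((HID, IN))  * 0.1
W_rec = rng.standard_normal((HID, HID)) * 0.1
W_pb  = rng.standard_normal((HID, PB))  * 0.1
W_out = rng.standard_normal((IN, HID))  * 0.1

def rollout(x0, pb, steps=5):
    """Generate a sensory-motor sequence conditioned on a PB vector.

    Different PB vectors select different behaviors from the same
    weights; during recognition the PB vector (not the weights) would
    be adapted to minimize prediction error on an observed action.
    """
    h = np.zeros(HID)
    x = x0
    seq = []
    for _ in range(steps):
        h = np.tanh(W_in @ x + W_rec @ h + W_pb @ pb)
        x = np.tanh(W_out @ h)  # predicted next sensory-motor frame
        seq.append(x)
    return np.stack(seq)
```

Because only the low-dimensional PB vector changes between behaviors, recognition amounts to searching that small space for the bias that best reproduces the observed action's goal, which matches the abstract's emphasis on goals over exact trajectories.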


2013 ◽  
Vol 10 (01) ◽  
pp. 1350010 ◽  
Author(s):  
Ginevra Castellano ◽  
Iolanda Leite ◽  
André Pereira ◽  
Carlos Martinho ◽  
Ana Paiva ◽  
...  

Affect recognition for socially perceptive robots relies on representative data. While many of the existing affective corpora and databases contain posed and decontextualized affective expressions, affect resources for designing an affect recognition system in naturalistic human–robot interaction (HRI) must include context-rich expressions that emerge in the same scenario as the final application. In this paper, we propose a context-based approach to the collection and modeling of representative data for building an affect-sensitive robotic game companion. To illustrate our approach we present the key features of the Inter-ACT (INTEracting with Robots–Affect Context Task) corpus, an affective and contextually rich multimodal video corpus containing affective expressions of children playing chess with an iCat robot. We show how this corpus can be successfully used to train a context-sensitive affect recognition system (a valence detector) for a robotic game companion. Finally, we demonstrate how the integration of the affect recognition system in a modular platform for adaptive HRI makes the interaction with the robot more engaging.
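A context-sensitive valence detector of the kind described can be sketched as a binary classifier over concatenated expressive and contextual features. The synthetic data, feature dimensions, and plain logistic-regression learner below are illustrative assumptions, not the Inter-ACT pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature vector per sample: expressive cues (e.g., smile,
# gaze) concatenated with contextual cues (e.g., game state, last move).
X = rng.standard_normal((200, 6))
true_w = np.array([1.5, -1.0, 0.5, 2.0, -0.5, 1.0])
# Synthetic valence labels: 1 = positive, 0 = negative.
y = (X @ true_w + 0.1 * rng.standard_normal(200) > 0).astype(float)

def train_logreg(X, y, lr=0.5, epochs=300):
    """Logistic regression by plain gradient descent (illustrative learner)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted P(valence = 1)
        w -= lr * X.T @ (p - y) / len(y)     # mean log-loss gradient
    return w

w = train_logreg(X, y)
pred = (X @ w > 0).astype(float)
accuracy = (pred == y).mean()
```

The point of the sketch is the input representation: because contextual features sit alongside expressive ones in the same vector, the learned weights can exploit context (here, the synthetic "game state" dimensions) exactly as the paper argues a naturalistic HRI affect detector must.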

