Design and Implementation of the Voice Command Recognition and the Sound Source Localization System for Human–Robot Interaction

Robotica ◽  
2021 ◽  
pp. 1-12
Author(s):  
M. H. Korayem ◽  
S. Azargoshasb ◽  
A. H. Korayem ◽  
Sh. Tabibian

SUMMARY Human–robot interaction (HRI) is becoming increasingly important. In this paper, a low-cost communication system for HRI is designed and implemented on the Scout robot and a robotic face. A hidden Markov model-based voice command detection system is proposed, and a non-native database containing 10 desired English commands has been collected from Persian speakers. The experimental results confirm that the proposed system is capable of recognizing the voice commands and of properly performing the requested task or giving the appropriate answer. Compared with a system trained on the native Julius database, the proposed system achieves a higher true-detection rate (by about 10%).
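A minimal sketch of an isolated-word, HMM-based command recognizer in the spirit of the system described above (not the authors' exact pipeline): one Gaussian HMM is trained per command on MFCC features, and classification picks the model with the highest log-likelihood. The command labels, file layout, and model sizes are illustrative assumptions.

```python
# Sketch only: isolated-word HMM recognizer (one model per command).
# Assumes hmmlearn and librosa are installed; command set is hypothetical.
import numpy as np
import librosa
from hmmlearn.hmm import GaussianHMM

def mfcc_features(wav_path, sr=16000, n_mfcc=13):
    """Load audio and return frame-level MFCCs (frames x coefficients)."""
    y, _ = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_models(training_files):
    """training_files: dict mapping command label -> list of wav paths."""
    models = {}
    for cmd, paths in training_files.items():
        feats = [mfcc_features(p) for p in paths]
        X = np.vstack(feats)
        lengths = [f.shape[0] for f in feats]
        m = GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)            # Baum-Welch training per command
        models[cmd] = m
    return models

def recognize(models, wav_path):
    """Return the command whose HMM gives the highest log-likelihood."""
    feats = mfcc_features(wav_path)
    return max(models, key=lambda cmd: models[cmd].score(feats))
```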

Generally, in hospitals the dental chair is moved forward/backward or upward/downward by a human operator according to the treatment required by the patient. Sometimes the chair does not function properly because of piston rust or an overweight patient, and the dentist may develop leg pain from continuously operating the chair. To overcome these issues, we plan to design a voice-recognition dental chair for doctors in hospitals. This project describes the design of a smart, motorized, voice-controlled dental chair. The voice command is given by the dentist; a sensor captures the voice and sends the command to the Arduino, where it is converted to a string that drives the movement of the chair. The intelligent dental chair is designed so that it can be controlled easily by the doctor, and it has the advantage of a low-cost design. This system was designed and developed to avoid wasting the doctor's energy and time.
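A hedged sketch of the host-side logic implied above: a recognized voice command string is mapped to a motion code and sent to the Arduino over a serial link. The port name, baud rate, and command codes are illustrative assumptions, not details taken from the paper.

```python
# Sketch only: map a recognized phrase to a chair-motion code and send it
# to the Arduino over serial (pyserial). All codes/ports are assumptions.
import serial

COMMAND_MAP = {            # spoken phrase -> single-character motion code
    "forward": b"F",
    "backward": b"B",
    "up": b"U",
    "down": b"D",
    "stop": b"S",
}

def send_chair_command(phrase, port="/dev/ttyACM0", baud=9600):
    """Translate a recognized phrase into a motion code for the chair MCU."""
    code = COMMAND_MAP.get(phrase.strip().lower())
    if code is None:
        return False                      # unrecognized phrase: do nothing
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(code)                  # Arduino firmware decodes the byte
    return True
```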


2017 ◽  
Vol 2017 ◽  
pp. 1-16
Author(s):  
Enrique Fernández-Rodicio ◽  
Víctor González-Pacheco ◽  
José Carlos Castillo ◽  
Álvaro Castro-González ◽  
María Malfaz ◽  
...  

Projectors have become a widespread tool for sharing information with large groups of people in a comfortable way in Human-Robot Interaction. Finding a suitable vertical surface becomes a problem when the projector changes position, as when a mobile robot searches for suitable surfaces on which to project. Two problems must be addressed to achieve a correct, undistorted image: (i) finding the largest suitable surface free from obstacles and (ii) adapting the output image to correct the distortion due to the angle between the robot and a nonorthogonal surface. We propose a RANSAC-based method that detects a vertical plane inside a point cloud. Then, inside this plane, we apply a rectangle-fitting algorithm over the region in which the projector can work. Finally, the algorithm checks the surface for imperfections and occlusions and transforms the original image using a homography matrix to display it over the detected area. The proposed solution can detect projection areas in real time using a single Kinect camera, which makes it suitable for applications in which a robot interacts with other people in unknown environments. Our Projection Surfaces Detector and the Image Correction module allow a mobile robot to find the right surface and display images without deformation, improving its ability to interact with people.
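A minimal sketch (not the authors' implementation) of the two core steps described above: RANSAC plane fitting on a point cloud, followed by a homography that pre-warps the output image onto a detected quadrilateral. Thresholds, iteration counts, and corner ordering are assumptions.

```python
# Sketch only: RANSAC plane detection + perspective pre-warp for projection.
import numpy as np
import cv2

def ransac_plane(points, n_iters=500, dist_thresh=0.02, rng=None):
    """points: (N, 3) array. Returns (normal, d, inlier_mask) for n.x + d = 0."""
    if rng is None:
        rng = np.random.default_rng()
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        d = -n @ p[0]
        inliers = np.abs(points @ n + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers

def warp_to_surface(image, corners_px):
    """Pre-distort `image` so it appears rectangular on the projection
    quadrilateral whose corners (in projector pixels) are `corners_px`."""
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, np.float32(corners_px))
    return cv2.warpPerspective(image, H, (w, h))
```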


Sensor Review ◽  
2015 ◽  
Vol 35 (3) ◽  
pp. 244-250 ◽  
Author(s):  
Pedro Neto ◽  
Nuno Mendes ◽  
A. Paulo Moreira

Purpose – The purpose of this paper is to achieve reliable estimation of yaw angles by fusing data from low-cost inertial and magnetic sensing. Design/methodology/approach – In this paper, the yaw angle is estimated by fusing inertial and magnetic sensing from a gyroscope and a digital compass, respectively. A Kalman filter estimates the error produced by the gyroscope. Findings – The drift produced by the gyroscope is significantly reduced and, at the same time, the system reacts quickly to orientation changes. The system combines the best of each sensor: the stability of the magnetic sensor and the fast response of the inertial sensor. Research limitations/implications – The system does not behave stably in the presence of large vibrations, and considerable calibration effort is needed. Practical implications – Today, most human–robot interaction technologies need the ability to estimate orientation, especially the yaw angle, from small, low-cost sensors. Originality/value – Existing methods for inertial and magnetic sensor fusion are combined to achieve reliable estimation of the yaw angle. Experimental tests in a human–robot interaction scenario show the performance of the system.
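A minimal sketch of a scalar Kalman filter in the spirit of the fusion described above: the gyroscope rate drives the prediction step and the compass heading is the measurement, so gyro drift is corrected while fast orientation changes are preserved. The noise variances below are illustrative assumptions, not values from the paper.

```python
# Sketch only: gyro + compass yaw fusion with a one-state Kalman filter.
class YawKalman:
    def __init__(self, q=0.01, r=4.0):
        self.yaw = 0.0      # estimated yaw, degrees
        self.p = 10.0       # estimate variance
        self.q = q          # process noise (gyro drift / integration error)
        self.r = r          # measurement noise (compass jitter)

    @staticmethod
    def _wrap(angle):
        """Wrap an angle difference to [-180, 180) degrees."""
        return (angle + 180.0) % 360.0 - 180.0

    def update(self, gyro_rate_dps, compass_deg, dt):
        # Predict: integrate the gyroscope rate.
        self.yaw += gyro_rate_dps * dt
        self.p += self.q
        # Correct: pull the estimate toward the compass heading.
        k = self.p / (self.p + self.r)
        self.yaw += k * self._wrap(compass_deg - self.yaw)
        self.p *= (1.0 - k)
        return self._wrap(self.yaw)
```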


2019 ◽  
Vol 37 (2) ◽  
pp. 26-42
Author(s):  
B. Kommey ◽  
E. O. Addo ◽  
K. A. Adjei

Locating appropriate seats in the seating areas of theaters remains a significant challenge for patrons of these venues. There is therefore a need for a seat occupancy monitoring system that provides readily accessible seat occupancy information to clients and to the management of these halls. This paper presents the design and implementation of a low-cost seat occupancy detection and display system capable of efficiently monitoring seat occupancy in halls. The system uses capacitive seat sensors designed based on the loading-mode technique; each sensor detects the presence of a human occupant using a single electrode. Occupancy data are relayed to a WiFi-enabled microcontroller unit, which processes the data and wirelessly transfers them to a central base station over a local area network for graphical and numerical display. Commands are also transferred from the base station to the microcontroller units when needed. Theoretical and empirical results show that the system achieves seat occupancy monitoring accurately, neatly, and cost-effectively.
Keywords: Capacitive sensing, seat occupancy, sensor cluster, microstrip transmission line, Wi-Fi
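A hedged sketch of the base-station side of such a system: seat nodes send small "seat_id,state" datagrams over the local network, and the station keeps a table of occupancy for display. The message format, port, and packet size are assumptions for illustration; the paper does not specify them.

```python
# Sketch only: base station collecting seat occupancy over UDP on a LAN.
import socket

def run_base_station(host="0.0.0.0", port=5005):
    occupancy = {}                                   # seat_id -> bool
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _addr = sock.recvfrom(64)
        try:
            seat_id, state = data.decode().strip().split(",")
        except ValueError:
            continue                                 # ignore malformed packets
        occupancy[seat_id] = state == "1"
        occupied = sum(occupancy.values())
        print(f"{occupied}/{len(occupancy)} seats occupied: {occupancy}")
```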


2021 ◽  
Vol 8 ◽  
Author(s):  
Hua Minh Tuan ◽  
Filippo Sanfilippo ◽  
Nguyen Vinh Hao

Collaborative robots (or cobots) are robots that can safely work with, or interact with, humans in a shared space, and they are becoming increasingly common. Compliant actuators are very relevant to the design of cobots: this type of actuation scheme mitigates the damage caused by unexpected collisions. Therefore, elastic joints are considered to outperform rigid joints when operating in a dynamic environment. However, most available elastic robots are relatively costly or difficult to construct. To give researchers a solution that is inexpensive, easily customisable, and fast to fabricate, a newly designed low-cost, open-source elastic joint is presented in this work. Based on this elastic joint, a highly compliant, multi-purpose 2-DOF robot arm for safe human-robot interaction is also introduced. The mechanical design of the robot and a position control algorithm are presented, and the mechanical prototype is 3D-printed. The control algorithm is a two-loop scheme: the inner loop is a model reference adaptive controller (MRAC) that deals with uncertainties in the system parameters, while the outer loop uses a fuzzy proportional-integral controller to reduce the effect of external disturbances on the load. The control algorithm is first validated in simulation, and the effectiveness of the controller is then demonstrated by experiments on the mechanical prototype.
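A minimal, self-contained sketch of the two-loop structure described above (not the authors' controller): an inner MIT-rule MRAC adapts a feedforward gain so the joint follows a first-order reference model, while an outer PI loop with a crude gain schedule standing in for the fuzzy logic shapes the load position command. All gains, the plant model, and the schedule breakpoints are illustrative assumptions.

```python
# Sketch only: inner MRAC (MIT rule) + outer gain-scheduled PI loop.
def simulate(t_end=10.0, dt=0.001, setpoint=1.0):
    am, bm = 2.0, 2.0            # reference model: ym' = -am*ym + bm*r
    a, b = 2.0, 0.5              # assumed plant (gain unknown to controller)
    gamma = 5.0                  # MRAC adaptation gain
    theta, y, ym, load, integ = 0.0, 0.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        # Outer loop: PI on the load position error with a coarse schedule.
        e_out = setpoint - load
        kp = 2.0 if abs(e_out) > 0.5 else 1.0   # stand-in for fuzzy rules
        integ += e_out * dt
        r = kp * e_out + 0.5 * integ            # command to the inner loop
        # Inner loop: MRAC drives the joint to follow the reference model.
        ym += (-am * ym + bm * r) * dt
        u = theta * r
        y += (-a * y + b * u) * dt
        e_in = y - ym
        theta += -gamma * e_in * ym * dt        # MIT adaptation rule
        # Load dynamics: elastic coupling lumped into a simple lag (assumed).
        load += (y - load) * 5.0 * dt
    return load

print(f"final load position: {simulate():.3f}")
```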


2019 ◽  
Vol 374 (1771) ◽  
pp. 20180036 ◽  
Author(s):  
Cesco Willemse ◽  
Agnieszka Wykowska

Initiating joint attention by leading someone's gaze is a rewarding experience which facilitates social interaction. Here, we investigate this experience of leading an agent's gaze while applying a more realistic paradigm than traditional screen-based experiments. We used an embodied robot as our main stimulus and recorded participants' eye movements. Participants sat opposite a robot that had either of two ‘identities’—‘Jimmy’ or ‘Dylan’. Participants were asked to look at either of two objects presented on screens to the left and the right of the robot. Jimmy then looked at the same object in 80% of the trials and at the other object in the remaining 20%. For Dylan, this proportion was reversed. Upon fixating on the object of choice, participants were asked to look back at the robot's face. We found that return-to-face saccades were conducted earlier towards Jimmy when he followed the gaze compared with when he did not. For Dylan, there was no such effect. Additional measures indicated that our participants also preferred Jimmy and liked him better. This study demonstrates (a) the potential of technological advances to examine joint attention where ecological validity meets experimental control, and (b) that social reorienting is enhanced when we initiate joint attention. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
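A hedged sketch of how the trial contingencies described above could be scheduled: for "Jimmy" the robot follows the participant's gaze on 80% of trials, for "Dylan" on 20%. The trial count and function name are illustrative, not taken from the study.

```python
# Sketch only: generate a shuffled follow/avert trial schedule per identity.
import random

def build_schedule(identity, n_trials=100, seed=None):
    follow_rate = 0.8 if identity == "Jimmy" else 0.2
    n_follow = round(n_trials * follow_rate)
    trials = ["follow"] * n_follow + ["avert"] * (n_trials - n_follow)
    random.Random(seed).shuffle(trials)
    return trials

print(build_schedule("Jimmy", n_trials=10, seed=1))
```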


Author(s):  
Xiaoran Fan ◽  
Daewon Lee ◽  
Lawrence Jackel ◽  
Richard Howard ◽  
Daniel Lee ◽  
...  
