Voice Control Intelligent Wheelchair Movement Using CNNs

Author(s): Mohammad Shahrul Izham Sharifuddin, Sharifalillah Nordin, Azliza Mohd Ali

2015 ◽ Vol 733 ◽ pp. 740-744
Author(s): Yi Zhang, Shi Chuan Xu

Compared with the traditional electric-powered wheelchair, the intelligent wheelchair is attracting increasing attention. Traditional intelligent wheelchairs rely on separately designed control systems, which makes them poorly suited to general use. ROS (Robot Operating System) addresses this by providing an easy-to-use framework for rapid system development: researchers can build software packages to meet their own needs and invoke one another's packages without worrying about compatibility. In this paper, we present a ROS-based intelligent wheelchair with voice-controlled navigation. Compared with traditional navigation, voice-controlled navigation offers a more natural, human-oriented interface. ROS also increases the versatility of the system and reduces its cost. Experimental results are reported to demonstrate the feasibility and advantages of the developed system.
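The abstract gives no implementation details, but the voice-to-motion path it describes maps naturally onto standard ROS interfaces. The sketch below is illustrative only: it assumes a hypothetical /voice_cmd String topic published by a separate speech-recognition package and the common /cmd_vel geometry_msgs/Twist interface used by most ROS mobile-base drivers; the topic names and speeds are not taken from the paper.

```python
# Minimal sketch of a ROS voice-command node (rospy).
# Assumptions: a "/voice_cmd" std_msgs/String topic carrying recognized words,
# and a base driver listening on "/cmd_vel". Speeds are placeholder values.
import rospy
from std_msgs.msg import String
from geometry_msgs.msg import Twist

COMMANDS = {
    "go":    (0.3,  0.0),   # (linear m/s, angular rad/s)
    "left":  (0.0,  0.5),
    "right": (0.0, -0.5),
    "stop":  (0.0,  0.0),
}

class VoiceTeleop:
    def __init__(self):
        self.pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/voice_cmd", String, self.on_command)

    def on_command(self, msg):
        # Ignore phrases outside the command vocabulary.
        if msg.data not in COMMANDS:
            return
        linear, angular = COMMANDS[msg.data]
        twist = Twist()
        twist.linear.x = linear
        twist.angular.z = angular
        self.pub.publish(twist)

if __name__ == "__main__":
    rospy.init_node("voice_teleop")
    VoiceTeleop()
    rospy.spin()
```

Keeping the speech recognizer in its own package and exchanging only a String topic is exactly the kind of decoupling the abstract credits ROS with: either side can be replaced without touching the other.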


Author(s): Mohammad Shahrul Izham Sharifuddin, Sharifalillah Nordin, Azliza Mohd Ali

In this paper, we develop an intelligent wheelchair using CNN and SVM voice recognition methods. The data are collected from Google, and some samples are self-recorded. Four commands are recognized: go, left, right, and stop. Voice data are represented using the MFCC feature extraction technique, and a CNN and an SVM are then used to classify and recognize the commands. A motor driver connected to a Raspberry Pi 3B+ controls the movement of the wheelchair prototype. The CNN achieved higher accuracy (95.30%) than the SVM (72.39%). On the other hand, the SVM took only 8.21 seconds to execute, whereas the CNN took 250.03 seconds. The CNN therefore produces better results because noise is filtered in the feature extraction layers before classification in the classification layer, but it takes longer due to the complexity of the network, while the simpler SVM implementation gives a shorter processing time.
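As a rough illustration of the pipeline described above (MFCC features fed to a small CNN), the following sketch uses librosa and Keras. The paper does not publish its architecture or hyper-parameters, so the sampling rate, frame count, layer sizes, and file handling here are assumptions, not the authors' configuration.

```python
# Illustrative MFCC + CNN sketch for the four-command vocabulary.
# All sizes and paths are assumptions for demonstration purposes.
import numpy as np
import librosa
from tensorflow import keras

CLASSES = ["go", "left", "right", "stop"]

def mfcc_features(wav_path, sr=16000, n_mfcc=13, frames=32):
    """Load a ~1 s clip and return a fixed-size (n_mfcc, frames, 1) MFCC 'image'."""
    audio, _ = librosa.load(wav_path, sr=sr, duration=1.0)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    mfcc = librosa.util.fix_length(mfcc, size=frames, axis=1)  # pad/trim in time
    return mfcc[..., np.newaxis]

def build_cnn(input_shape=(13, 32, 1), n_classes=len(CLASSES)):
    """Small 2-D CNN over the MFCC representation; layer sizes are illustrative."""
    return keras.Sequential([
        keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        keras.layers.MaxPooling2D((2, 2)),
        keras.layers.Conv2D(64, (3, 3), activation="relu"),
        keras.layers.MaxPooling2D((2, 2)),
        keras.layers.Flatten(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Typical usage (wav_paths and labels are placeholders):
#   X = np.stack([mfcc_features(p) for p in wav_paths])
#   y = keras.utils.to_categorical(labels, num_classes=len(CLASSES))
#   model = build_cnn()
#   model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
#   model.fit(X, y, epochs=20, validation_split=0.2)
```

The same MFCC matrices, flattened to vectors, could be fed to an SVM classifier (e.g. scikit-learn's SVC) for the comparison the abstract reports.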


Author(s):  
I S Balabanova ◽  
S S Kostadinova ◽  
V I Markova ◽  
S M Sadinov ◽  
G I Georgiev

2021 ◽  
Vol 1797 (1) ◽  
pp. 012019
Author(s):  
Sudarshan Nath ◽  
Sourav Debnath ◽  
Nilay Mukherjee ◽  
Swagata Mukherjee ◽  
Amaresh Chakraborty ◽  
...  

Robotica ◽ 2007 ◽ Vol 25 (5) ◽ pp. 521-527
Author(s): Harsha Medicherla, Ali Sekmen

An understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial in creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to be able to learn the preferences and capabilities of the people with whom it interacts so that it can adapt its behaviors for more efficient and friendly interaction. Advances in human–computer interaction technologies have been widely used in improving human–robot interaction (HRI). It is now possible to interact with robots via natural communication means such as speech. In this paper, an innovative approach to HRI via voice-controllable intelligent user interfaces is described. The design and implementation of such interfaces are presented, the traditional approaches to human–robot user interface design are explained, and the advantages of the proposed approach are discussed. The designed intelligent user interface, which learns user preferences and capabilities over time, can be controlled with voice. The system was successfully implemented and tested on a Pioneer 3-AT mobile robot. Twenty participants, who were assessed on spatial reasoning ability, directed the robot in spatial navigation tasks to evaluate the effectiveness of voice control in HRI. Time to complete the task, number of steps, and errors were collected. Results indicated that spatial reasoning ability and voice control were reliable predictors of the efficiency of robot teleoperation. 75% of the subjects with high spatial reasoning ability preferred voice control over manual control. The effect of spatial reasoning ability on teleoperation was lower with voice control than with manual control.
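The abstract does not give the interface's command grammar, so the following sketch only illustrates the general idea of turning recognized spatial-navigation phrases into structured teleoperation commands; the phrase patterns and the NavCommand structure are assumptions, not the authors' design.

```python
# Hedged illustration: parse spoken navigation phrases into structured commands.
# The grammar below is hypothetical and covers only two phrase shapes.
import re
from dataclasses import dataclass

@dataclass
class NavCommand:
    action: str      # "move" or "turn"
    amount: float    # metres for "move"; signed degrees for "turn" (+left / -right)

_MOVE = re.compile(r"(?:go|move) (?:forward|ahead) (\d+(?:\.\d+)?) (?:meters?|metres?)")
_TURN = re.compile(r"turn (left|right) (\d+(?:\.\d+)?) degrees")

def parse_utterance(text):
    """Map a recognized phrase to a structured command, or None if it is not understood."""
    text = text.lower().strip()
    m = _MOVE.match(text)
    if m:
        return NavCommand("move", float(m.group(1)))
    m = _TURN.match(text)
    if m:
        sign = 1.0 if m.group(1) == "left" else -1.0
        return NavCommand("turn", sign * float(m.group(2)))
    return None

# e.g. parse_utterance("move forward 2 meters") -> NavCommand(action="move", amount=2.0)
#      parse_utterance("turn left 90 degrees")  -> NavCommand(action="turn", amount=90.0)
```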


2016 ◽ Vol 25 (2) ◽ pp. 107-121
Author(s): Malek Njah, Mohamed Jallouli

The electric wheelchair gives more autonomy and facilitates movement for handicapped persons at home or in a hospital. Among the problems these persons face are collisions with obstacles, passing through doorways, navigating hallways, and reaching the desired place. These problems stem from the difficulty of manipulating an electric wheelchair, especially for persons with severe disabilities. Hence, we sought to add functionality to the standard wheelchair in order to increase movement range, security, environment access, and comfort. In this context, we have developed an automatic control method for indoor navigation. The proposed control system is mounted on an electric wheelchair for the handicapped developed in the research laboratory CEMLab (Control and Energy Management Laboratory, Tunisia). The proposed method is based on two fuzzy controllers that ensure target reaching and obstacle avoidance. Furthermore, an extended Kalman filter is used to provide precise measurements and more effective data-fusion localization. In this paper, we present simulation and experimental results for the wheelchair navigation system.
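The abstract does not specify the filter's motion or measurement models, so the extended Kalman filter sketch below is a generic illustration for a differential-drive pose estimate (x, y, heading) corrected by a position-only measurement; the noise values and the measurement model are assumptions.

```python
# Minimal EKF sketch for a differential-drive wheelchair pose.
# Unicycle prediction model; position-only (x, y) correction; values illustrative.
import numpy as np

class PoseEKF:
    def __init__(self, q=0.01, r=0.05):
        self.x = np.zeros(3)          # state: [x, y, theta]
        self.P = np.eye(3) * 0.1      # state covariance
        self.Q = np.eye(3) * q        # process noise
        self.R = np.eye(2) * r        # measurement noise

    def predict(self, v, w, dt):
        """Propagate the pose given linear velocity v and angular velocity w."""
        x, y, th = self.x
        self.x = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           th + w * dt])
        # Jacobian of the motion model with respect to the state.
        F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                      [0.0, 1.0,  v * dt * np.cos(th)],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        """Correct the pose with an absolute (x, y) position measurement z."""
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
        innov = np.asarray(z, dtype=float) - H @ self.x
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innov
        self.P = (np.eye(3) - K @ H) @ self.P
```

The corrected pose estimate would then feed the two fuzzy controllers, which the abstract says handle target reaching and obstacle avoidance.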

