Computer control in human-machine interaction systems by hand movements

Author(s):  
Hoa Tat Thang

Computers have become ubiquitous, and the forms of human-computer interaction are increasingly diverse. In many cases, the computer is controlled not only through the mouse and keyboard but also through body language and gestures. For some people with physical disabilities, controlling the computer through hand movements is essential to interacting with it, and the field of simulation also needs such interactive applications. This paper studies a solution for building a hand tracking and gesture recognition system that allows cursor movement and the corresponding mouse and keyboard actions. Through implementation and evaluation, the research team confirms that the system works stably and accurately and can control the computer in place of a conventional mouse and keyboard.
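The abstract does not name an implementation stack, so the following is only a minimal sketch of the idea, assuming MediaPipe Hands for landmark detection and PyAutoGUI for cursor control; the fingertip landmark, smoothing factor, and exit key are illustrative choices rather than the authors' method.

```python
# Hedged sketch: webcam hand tracking driving the mouse cursor.
# Assumes opencv-python, mediapipe, and pyautogui (none named by the paper).
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)
prev = None   # last cursor position, for exponential smoothing
ALPHA = 0.3   # smoothing factor: higher = more responsive but jitterier

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror the image so motion feels natural
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip in MediaPipe's hand model.
        tip = results.multi_hand_landmarks[0].landmark[8]
        tx, ty = tip.x * screen_w, tip.y * screen_h
        if prev is None:
            prev = (tx, ty)
        # Clamp away from the corners to avoid PyAutoGUI's fail-safe.
        x = min(max(ALPHA * tx + (1 - ALPHA) * prev[0], 1), screen_w - 2)
        y = min(max(ALPHA * ty + (1 - ALPHA) * prev[1], 1), screen_h - 2)
        pyautogui.moveTo(x, y)
        prev = (x, y)
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```

Clicks and keyboard actions could be mapped to further gestures (e.g., a pinch) in the same loop; the smoothing step is what keeps the cursor usable despite frame-to-frame landmark jitter.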

Author(s):  
Padmapriya K.C. ◽  
Leelavathy V. ◽  
Angelin Gladston

Human facial expressions convey a great deal of information visually. Facial expression recognition plays a crucial role in the area of human-machine interaction. Automatic facial expression recognition systems have many applications in human behavior understanding, detection of mental disorders, and synthetic human expressions. Recognition of facial expressions by computer with a high recognition rate is still a challenging task. Most of the methods utilized in the literature for automatic facial expression recognition are based on geometry and appearance. Facial expression recognition is usually performed in four stages: pre-processing, face detection, feature extraction, and expression classification. In this paper we applied various deep learning methods to classify the seven key human emotions: anger, disgust, fear, happiness, sadness, surprise, and neutrality. The facial expression recognition system developed is experimentally evaluated on the FER dataset and achieves good accuracy.
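The abstract reports applying "various deep learning methods" without fixing one architecture, so the following is a hedged baseline sketch of a small Keras CNN for FER-style 48x48 grayscale crops over the seven classes; the layer sizes are illustrative assumptions, not the authors' model.

```python
# Hedged sketch: a baseline CNN for seven-class facial expression
# recognition on FER-style 48x48 grayscale images.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 7  # anger, disgust, fear, happiness, sadness, surprise, neutral

model = tf.keras.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),  # regularization; FER models overfit easily
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=30)
```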


Author(s):  
Muhammad Aminur Rahaman ◽  
Md Jahidul Islam ◽  
Sumaiya Kabir ◽  
Ayesha Khatun

Currently, thousands of people suffer from paralysis and have difficulty speaking and walking. We have therefore developed a new kind of robot that can help people who cannot walk or speak. By using this robot with its gesture-based controllers (hand gloves or a wheelchair handle), people with physical disabilities can improve their quality of life. The proposed system has two components: a motion controller and a Robotic Wheelchair (RW), with which the user interacts through sensor-based hand gestures. Through this human-robot interaction, a patient can quite easily control the robot and move freely. In addition, patients may use gestures (hand gloves or wheelchair handle) to express their needs. Furthermore, this device reduces the effort needed to operate the RW by hand, which is otherwise very difficult for people with motor or speech disabilities. Our device runs with approximately 94% accuracy and very minimal delay.

GUB Journal of Science and Engineering, Vol 7, Dec 2020, pp. 85-93
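The abstract does not detail the glove sensors or the gesture-to-command mapping, so the sketch below is a speculative illustration of one such motion controller, assuming accelerometer tilt readings; the axes, thresholds, and command names are all hypothetical.

```python
# Hedged sketch: mapping glove accelerometer tilt to wheelchair commands.
from dataclasses import dataclass

@dataclass
class TiltReading:
    pitch: float  # degrees; forward/backward tilt of the hand
    roll: float   # degrees; left/right tilt of the hand

THRESHOLD = 20.0  # dead zone so a resting hand issues no command

def command_for(reading: TiltReading) -> str:
    """Translate a tilt reading into a discrete wheelchair command."""
    if reading.pitch > THRESHOLD:
        return "FORWARD"
    if reading.pitch < -THRESHOLD:
        return "BACKWARD"
    if reading.roll > THRESHOLD:
        return "RIGHT"
    if reading.roll < -THRESHOLD:
        return "LEFT"
    return "STOP"

# Example: a hand tilted clearly forward drives the chair forward.
print(command_for(TiltReading(pitch=35.0, roll=8.0)))  # -> FORWARD
```

The dead zone is the key design choice in such controllers: without it, sensor noise around the neutral hand position would make the chair twitch constantly.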


2019 ◽  
Vol 8 (2) ◽  
pp. 1768-1772

The computer is nowadays a must-have tool for most people. However, it is difficult to use for people with physical disabilities, especially those missing one or both arms. The goal of this paper is to introduce a system that helps computer users perform tasks and make use of computer features and functions despite their physical limitations, through a Speech Recognition System (SRS) in the English language. This aims to provide users with an alternative way of interacting with the computer and navigating its functions using the SRS in place of peripheral devices. It can be used to navigate through menus, open and manage applications, open certain websites, browse the internet, and type words, letters, numbers, and symbols using the dictation mode. For the testing phase, the following test cases were used: functionality testing, stress testing, and compatibility testing. The testing phase yielded a result of 94.79% for functionality, 100% for stress, and 100% for compatibility, effectively ensuring that the software works as intended. The evaluation, conforming to the standards of ISO/IEC 9126-1, yielded a mean of 3.57 with a standard deviation of 0.52, interpreted as 'Highly Acceptable', which means that the software can be used as an effective alternative to peripheral devices and can even complement their usage.
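The abstract does not name the underlying speech engine, so the following is a hedged sketch of the command-and-dictation idea using the SpeechRecognition and PyAutoGUI packages; the command phrases are hypothetical examples, not the system's actual vocabulary.

```python
# Hedged sketch: voice commands driving basic computer actions.
import webbrowser
import speech_recognition as sr
import pyautogui

recognizer = sr.Recognizer()

def handle(phrase: str) -> None:
    """Dispatch a recognized English phrase to a computer action."""
    phrase = phrase.lower().strip()
    if phrase == "open browser":
        webbrowser.open("https://www.example.com")
    elif phrase.startswith("type "):
        pyautogui.typewrite(phrase[len("type "):])  # dictation mode
    elif phrase == "press enter":
        pyautogui.press("enter")
    else:
        print(f"Unrecognized command: {phrase!r}")

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Listening...")
    audio = recognizer.listen(source)
try:
    handle(recognizer.recognize_google(audio))  # free Google web API, English
except sr.UnknownValueError:
    print("Could not understand the audio.")
```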


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5182
Author(s):  
Carmen López-Casado ◽  
Enrique Bauzano ◽  
Irene Rivas-Blanco ◽  
Carlos J. Pérez-del-Pulgar ◽  
Víctor F. Muñoz

Minimally invasive surgery (MIS) techniques are growing in quantity and complexity to cover a wider range of interventions. More specifically, hand-assisted laparoscopic surgery (HALS) involves the use of one of the surgeon's hands inside the patient while the other manages a single laparoscopic tool. In this scenario, surgical procedures performed with an additional tool require the aid of an assistant. Furthermore, in the case of a human–robot assistant pairing, fluid communication is mandatory. This human–machine interaction must combine both explicit orders and implicit information from the surgical gestures. In this context, this paper focuses on the development of a hand gesture recognition system for HALS. The recognition is based on a hidden Markov model (HMM) algorithm with an improved automated training step, which can also learn during the online surgical procedure by means of a reinforcement learning process.
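The abstract names a hidden Markov model classifier but not its implementation. A common formulation, sketched below with the hmmlearn package, trains one Gaussian HMM per gesture and labels a new observation sequence by the highest log-likelihood; the feature dimensionality, state count, and gesture names are illustrative assumptions, and the paper's automated training and reinforcement learning steps are not reproduced.

```python
# Hedged sketch: per-gesture HMMs scored by log-likelihood.
import numpy as np
from hmmlearn import hmm

def train_gesture_model(sequences: list[np.ndarray]) -> hmm.GaussianHMM:
    """Fit one HMM on all training sequences of a single gesture."""
    model = hmm.GaussianHMM(n_components=5, covariance_type="diag",
                            n_iter=100)
    X = np.concatenate(sequences)          # stacked (T_i, n_features) frames
    lengths = [len(s) for s in sequences]  # per-sequence lengths for fit()
    model.fit(X, lengths)
    return model

def classify(observation: np.ndarray,
             models: dict[str, hmm.GaussianHMM]) -> str:
    """Label a sequence with the gesture whose HMM scores it highest."""
    return max(models, key=lambda g: models[g].score(observation))

# Example with synthetic 3-D hand-motion features.
rng = np.random.default_rng(0)
models = {
    "grasp": train_gesture_model([rng.normal(0, 1, (40, 3)) for _ in range(10)]),
    "point": train_gesture_model([rng.normal(2, 1, (40, 3)) for _ in range(10)]),
}
print(classify(rng.normal(2, 1, (40, 3)), models))  # likely "point"
```

Keeping one model per gesture also leaves room for online updating: a single gesture's HMM can be refit with newly collected sequences without disturbing the others.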


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8202
Author(s):  
Alberto Tellaeche Iglesias ◽  
Ignacio Fidalgo Astorquia ◽  
Juan Ignacio Vázquez Gómez ◽  
Surajit Saikia

The use of gestures is one of the main forms of human-machine interaction (HMI) in many fields, from advanced industrial robotics setups to multimedia devices at home. Almost every gesture detection system uses computer vision as its fundamental technology, with the well-known problems of image processing: changes in lighting conditions, partial occlusions, and variations in color, among others. Deep learning techniques have proven very effective at solving these issues. This research proposes a hand gesture recognition system based on convolutional neural networks and color images that is robust against environmental variations, achieves real-time performance on embedded systems, and addresses the problems listed above. A new CNN has been specifically designed with a small architecture, in terms of the number of layers and total number of neurons, for use on computationally limited devices. The results achieve an average success rate of 96.92%, a better score than those obtained by previous algorithms discussed in the state of the art.
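The exact architecture is not quoted in the abstract, so the sketch below only illustrates the design direction: a Keras CNN kept deliberately small in layers and neurons, then converted to TensorFlow Lite for a computationally limited device. The input size and class count are assumptions.

```python
# Hedged sketch: a deliberately small CNN for color hand-gesture images,
# exported to TensorFlow Lite for embedded deployment.
import tensorflow as tf
from tensorflow.keras import layers

NUM_GESTURES = 10  # assumption; the paper's class count is not quoted here

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),      # small RGB input
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),      # far cheaper than Flatten + Dense
    layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # few layers and few parameters by design

# After training, shrink the model for a computationally limited device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("gesture_model.tflite", "wb").write(converter.convert())
```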

