Wheeled Robot Control with Hand Gesture based on Image Processing

Author(s): Theodore Bismo Waskito ◽ Sony Sumaryo ◽ Casi Setianingsih

2020 ◽ Vol 6 (8) ◽ pp. 73 ◽ Author(s): Munir Oudah ◽ Ali Al-Naji ◽ Javaan Chahl

Hand gestures are a form of nonverbal communication used in several fields, such as communication with deaf-mute people, robot control, human–computer interaction (HCI), home automation and medical applications. Research on hand gestures has adopted many different techniques, including those based on instrumented sensor technology and computer vision. Hand signs can likewise be classified under several headings, such as posture versus gesture, static versus dynamic, or a hybrid of the two. This paper reviews the literature on hand gesture techniques and introduces their merits and limitations under different circumstances. In addition, it tabulates the performance of these methods, focusing on computer vision techniques, and covers their similarities and differences, the hand segmentation technique used, the classification algorithms and their drawbacks, the number and types of gestures, the dataset used, the detection range (distance) and the type of camera used. The paper is a thorough general overview of hand gesture methods with a brief discussion of some possible applications.
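Many of the computer vision pipelines this kind of review surveys begin with skin-colour-based hand segmentation. The following is a minimal illustrative sketch, not code from the paper: it assumes MATLAB with the Image Processing Toolbox, an input frame 'hand.jpg', and commonly quoted YCbCr skin-tone thresholds.

% Minimal sketch: skin-colour hand segmentation in YCbCr space.
% Threshold values are common rules of thumb, not values from the review.
rgb = imread('hand.jpg');                 % hypothetical input frame
ycbcr = rgb2ycbcr(rgb);                   % separate luma from chroma
cb = ycbcr(:,:,2);
cr = ycbcr(:,:,3);
mask = cb >= 77 & cb <= 127 & cr >= 133 & cr <= 173;  % typical skin range
mask = imclose(mask, strel('disk', 5));   % close small gaps in the mask
mask = bwareafilt(mask, 1);               % keep the largest blob (the hand)
imshow(labeloverlay(rgb, mask));          % visualise the segmented hand

From a mask like this, features such as contours or histograms can be fed to a classifier, which is the stage the review compares across papers.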


2016 ◽ Vol 836 ◽ pp. 37-41 ◽ Author(s): Adlina Taufik Syamlan ◽ Bambang Pramujati ◽ Hendro Nurhadi

Robotics is widely used in industry and has developed considerably since the industrial revolution, owing to its high precision and accuracy. This paper demonstrates those qualities in the form of a writing robot. The aim of this study is to construct the system based on the data gathered and to develop the control system based on the model. Four aspects are studied in this project, namely image processing, character recognition, image property extraction and inverse kinematics. The paper discusses the modelling of the robotic arm used for the writing robot and the generation of the joint angles (theta) that place the end effector. Training data are generated through a meshgrid, which is then fed through ANFIS.
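The pipeline the abstract describes, generating training data on a meshgrid and feeding it through ANFIS, can be sketched for a simple two-link planar arm. The link lengths, grid ranges, query point, and the use of one ANFIS network per joint angle are illustrative assumptions, not details from the paper; anfis and evalfis require MATLAB's Fuzzy Logic Toolbox.

% Minimal sketch: meshgrid-generated training data for ANFIS-based
% inverse kinematics of a 2-link planar arm (link lengths assumed).
l1 = 10; l2 = 7;                           % assumed link lengths
[t1, t2] = meshgrid(0:0.1:pi/2, 0:0.1:pi); % grid over joint angles
x = l1*cos(t1) + l2*cos(t1 + t2);          % forward kinematics gives
y = l1*sin(t1) + l2*sin(t1 + t2);          % the end-effector position
% ANFIS learns the inverse map (x, y) -> theta, one network per joint.
data1 = [x(:) y(:) t1(:)];
data2 = [x(:) y(:) t2(:)];
fis1 = anfis(data1);                       % predicts theta1 from (x, y)
fis2 = anfis(data2);                       % predicts theta2 from (x, y)
theta = [evalfis(fis1, [12 5]); evalfis(fis2, [12 5])]; % query one point

Generating samples by sweeping the joint angles and computing positions forward avoids solving the inverse kinematics analytically; the network interpolates the inverse map from the grid.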


2020 ◽ Vol 173 ◽ pp. 181-190 ◽ Author(s): Ashish Sharma ◽ Anmol Mittal ◽ Savitoj Singh ◽ Vasudev Awatramani

The growth of technology has influenced development in various fields and has helped people achieve their goals over the years. One such field is aiding people with hearing and speech impairments. The barrier between hearing individuals and individuals with hearing and speech disabilities can be reduced by using current technology to develop an environment in which the two groups communicate easily with one another. The ASL Interpreter aims to facilitate communication with hearing- and speech-impaired individuals. This project focuses on the development of software that converts American Sign Language to communicative English and vice versa, accomplished via image processing: a set of operations performed on an image to obtain an enhanced image or to extract useful information from it. Image processing in this project is done using MATLAB, software by MathWorks, programmed to capture a live image of the hand gesture. Captured gestures are highlighted by being distinctively coloured against a black background. The contrasted hand gesture is stored in the database as a binary equivalent of the location of each pixel, and the interpreter links that binary value to its equivalent translation held in the database, which is integrated into the main image processing interface. The Image Processing Toolbox, an inbuilt toolkit provided by MATLAB, is used in the development of the software: histogram equivalents of the database images are stored, the extracted image is converted to a histogram using the 'imhist()' function, and the two are compared. The concluding phase of the project, translation of speech to sign language, is designed by matching the letter equivalent to the hand gesture in the database and displaying the result as images. The software uses a webcam to capture the hand gesture made by the user. This venture aims to ease the process of learning sign language and to support hearing-impaired people in conversing without difficulty.
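The recognition step described above, comparing the imhist() histogram of a captured gesture against stored database histograms, can be sketched as a nearest-neighbour match. The database variables (dbHists, dbLetters), the input file name, and the sum-of-absolute-differences distance are illustrative assumptions rather than the project's actual implementation.

% Minimal sketch: match a captured gesture to a database by comparing
% imhist() histograms (distance metric and variable names assumed).
img = imread('capture.png');           % hypothetical captured frame
gray = rgb2gray(img);
h = imhist(gray);                      % 256-bin intensity histogram
h = h / sum(h);                        % normalise so image size cancels
best = 1; bestDist = inf;
for k = 1:numel(dbHists)               % dbHists: assumed cell array of
    d = sum(abs(h - dbHists{k}));      % stored, normalised histograms
    if d < bestDist
        bestDist = d; best = k;
    end
end
letter = dbLetters(best);              % dbLetters: assumed letter labels

The smallest histogram distance selects the database gesture, whose stored letter is then emitted as the translation.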

