Computer Control with Hand Gestures Using Computer Vision

2020 ◽  
Author(s):  
Nirmala J S ◽  
Ajeet Kumar ◽  
Adith Jose E A ◽  
Kapil Kumar ◽  
Abhishek R Malvadkar


2020 ◽  
Vol 67 (1) ◽  
pp. 133-141
Author(s):  
Dmitriy O. Khort ◽  
Aleksei I. Kutyrev ◽  
Igor G. Smirnov ◽  
Rostislav A. Filippov ◽  
Roman V. Vershinin

Technological capabilities of agricultural units cannot be optimally used without extensive automation of production processes and the use of advanced computer control systems. (Research purpose) To develop an algorithm for recognizing the coordinates and ripeness of garden strawberries under different lighting conditions, and to describe the technological process of harvesting them in field conditions using a robotic actuator mounted on a self-propelled platform. (Materials and methods) The authors developed a self-propelled platform with an automatic actuator for harvesting garden strawberries, which includes an actuator with six degrees of freedom, a coaxial gripper, MG966R servos, a PCA9685 controller, a Logitech HD C270 computer vision camera, a single-board Raspberry Pi 3 Model B+ computer, VL53L0X laser sensors, an SZBK07 300 W voltage regulator, and a Hubsan X4 Pro H109S Li-polymer battery. (Results and discussion) Using the Python 3.7.2 programming language, the authors developed a control algorithm for the automatic actuator, including operations to determine the X and Y coordinates of berries and their degree of maturity, as well as to calculate the distance to the berries. It was found that the effectiveness of detecting berries, their area and boundaries with a camera and the OpenCV library at an illumination of 300 lux reaches 94.6 percent. With an increase in the robotic platform speed to 1.5 kilometers per hour, the average area of the recognized berries, compared to the real area of the berries, decreased by 9 percent to 95.1 square centimeters at an illumination of 300 lux, by 17.8 percent to 88 square centimeters at 200 lux, and by 36.4 percent to 76 square centimeters at 100 lux.
(Conclusions) The authors have provided a rationale for the technological process and developed an algorithm for harvesting garden strawberries using a robotic actuator mounted on a self-propelled platform. It has been shown that lighting conditions have a significant impact on the determination of the area, boundaries and ripeness of berries using a computer vision camera.
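The abstract does not include code, but the color-based ripeness and coordinate step can be sketched in pure Python. The hue window, saturation threshold, and the tiny 3×3 test frame below are illustrative assumptions, not the authors' calibrated values (their system processed camera frames with OpenCV):

```python
import colorsys

# Illustrative hue window for "ripe" strawberry red (assumption, not the
# paper's values). colorsys hue is in [0, 1); red wraps around 0, so we
# accept hues near 0 or near 1, and require some saturation.
RIPE_HUE_MAX = 0.05
RIPE_HUE_MIN = 0.95
MIN_SATURATION = 0.4

def is_ripe_pixel(r, g, b):
    """Classify one RGB pixel (0-255 channels) as ripe-strawberry red."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return s >= MIN_SATURATION and (h <= RIPE_HUE_MAX or h >= RIPE_HUE_MIN)

def berry_centroid(image):
    """Return the (X, Y) centroid of ripe pixels in a row-major RGB image,
    or None if no ripe pixels are found."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if is_ripe_pixel(r, g, b):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Tiny synthetic frame: a red "berry" on the right, green foliage elsewhere.
frame = [
    [(30, 120, 40), (30, 120, 40), (200, 20, 25)],
    [(30, 120, 40), (200, 20, 25), (200, 20, 25)],
    [(30, 120, 40), (30, 120, 40), (200, 20, 25)],
]
print(berry_centroid(frame))  # → (1.75, 1.0)
```

In the actual system, the resulting X and Y coordinates would be combined with the VL53L0X range reading to position the gripper.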


Author(s):  
Jonas Robin ◽  
Mehul Rajesh Soni ◽  
Rishabh Rajkumar Dubey ◽  
Nimish Arvind Datkhile ◽  
Jyoti Kolap

2020 ◽  
Vol 6 (8) ◽  
pp. 73 ◽  
Author(s):  
Munir Oudah ◽  
Ali Al-Naji ◽  
Javaan Chahl

Hand gestures are a form of nonverbal communication that can be used in several fields such as communication between deaf-mute people, robot control, human–computer interaction (HCI), home automation and medical applications. Research papers based on hand gestures have adopted many different techniques, including those based on instrumented sensor technology and computer vision. In other words, the hand sign can be classified under many headings, such as posture and gesture, as well as dynamic and static, or a hybrid of the two. This paper focuses on a review of the literature on hand gesture techniques and introduces their merits and limitations under different circumstances. In addition, it tabulates the performance of these methods, focusing on computer vision techniques that deal with the similarity and difference points, technique of hand segmentation used, classification algorithms and drawbacks, number and types of gestures, dataset used, detection range (distance) and type of camera used. This paper is a thorough general overview of hand gesture methods with a brief discussion of some possible applications.


2016 ◽  
Vol 11 (1) ◽  
pp. 30-35
Author(s):  
Manoj Acharya ◽  
Dibakar Raj Pant

This paper proposes a method to recognize static hand gestures in an image or video where a person is performing Nepali Sign Language (NSL), and to translate them into words and sentences. Classification is carried out using a neural network, with the contour of the hand used as the feature. The work was verified successfully for NSL recognition using signer dependency analysis. Journal of the Institute of Engineering, 2015, 11(1): 30-35
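As a rough illustration of the contour-as-feature idea (not the authors' implementation, which trains a neural network on NSL images), the sketch below extracts the boundary of a binary hand mask and matches it by nearest neighbor on a crude perimeter-to-area descriptor; the gesture labels and template values are made-up toy numbers:

```python
def contour_pixels(mask):
    """Boundary pixels of a binary mask: foreground cells with at least
    one 4-connected background (or out-of-bounds) neighbour."""
    h, w = len(mask), len(mask[0])
    contour = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    contour.append((x, y))
                    break
    return contour

def compactness(mask):
    """Perimeter-to-area ratio: a crude, scale-sensitive shape feature."""
    area = sum(sum(row) for row in mask)
    return len(contour_pixels(mask)) / area

def classify(mask, templates):
    """Nearest-neighbour match of the mask's feature to labelled templates."""
    f = compactness(mask)
    return min(templates, key=lambda label: abs(templates[label] - f))

# Toy templates: hypothetical feature values for two hand shapes.
templates = {"open_palm": 0.6, "fist": 1.0}

fist = [[1, 1], [1, 1]]  # every pixel of this tiny blob is on the boundary
print(classify(fist, templates))  # → fist
```

A real contour feature would be far richer (e.g., a sampled boundary curve fed to the network), but the extract-descriptor-then-match pipeline is the same shape.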


2014 ◽  
Vol 14 (01n02) ◽  
pp. 1450006 ◽  
Author(s):  
Mahmood Jasim ◽  
Tao Zhang ◽  
Md. Hasanuzzaman

This paper presents a novel method for computer vision-based static and dynamic hand gesture recognition. A Haar-like feature-based cascaded classifier is used for hand area segmentation. Static hand gestures are recognized using linear discriminant analysis (LDA) and local binary pattern (LBP)-based feature extraction methods, and classified using the nearest neighbor (NN) algorithm. Dynamic hand gestures are recognized using novel text-based principal directional features (PDFs), which are generated from the segmented image sequences, and classified using the longest common subsequence (LCS) algorithm. For testing, a Chinese numeral gesture dataset containing static hand poses and a directional gesture dataset containing complex dynamic gestures were prepared. The mean accuracy of LDA-based static hand gesture recognition on the Chinese numeral gesture dataset is 92.42%. The mean accuracy of LBP-based static hand gesture recognition on the same dataset is 87.23%. The mean accuracy of the novel dynamic hand gesture recognition method using PDFs on the directional gesture dataset is 94%.
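The LCS matching step for dynamic gestures can be illustrated with a short pure-Python sketch. The U/D/L/R direction alphabet and the two templates are illustrative stand-ins for the paper's text-based principal directional features:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence, via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            if ca == cb:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def classify_gesture(observed, templates):
    """Pick the template whose LCS with the observed direction string,
    normalised by template length, is largest."""
    return max(templates,
               key=lambda name: lcs_length(observed, templates[name])
                                / len(templates[name]))

# Hypothetical direction templates (U/D/L/R codes) for two gestures.
templates = {"swipe_right": "RRRR", "circle": "RDLU"}
print(classify_gesture("RRDRR", templates))  # a noisy rightward swipe → swipe_right
```

Because LCS tolerates insertions, a template still scores well when the observed sequence contains spurious directions, which is why it suits noisy gesture trajectories.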


Author(s):  
Seema Rawat ◽  
Praveen Kumar ◽  
Ishita Singh ◽  
Shourya Banerjee ◽  
Shabana Urooj ◽  
...  

Human-Computer Interaction (HCI) interfaces need unambiguous instructions from the user in the form of mouse clicks or keyboard taps, and thus become complex. To simplify this monotonous task, a real-time hand gesture recognition method using computer vision, image, and video processing techniques has been proposed. Controlling infections has become a major concern in the healthcare environment. Input devices such as keyboards, mice, and touch screens can be breeding grounds for various pathogens and bacteria. Direct use of the hands as an input device is an innovative method for providing natural HCI, ensuring minimal physical contact with devices, i.e., less transmission of bacteria, and thus can prevent cross-infections. A Convolutional Neural Network (CNN) has been used for object detection and classification. A CNN architecture for 3D object recognition has been proposed which consists of two models: 1) a detector, a CNN architecture for detection of gestures; and 2) a classifier, a CNN for classification of the detected gestures. By using dynamic hand gesture recognition to interact with the system, interactions can be enriched through the multidimensional use of hand gestures compared to other input methods. The dynamic hand gesture recognition method aims to replace the mouse for interaction with virtual objects. This work centralises the effort of implementing a method that employs computer vision algorithms and gesture recognition techniques to develop a low-cost interface device for interacting with objects in a virtual environment, such as screens, using hand gestures.
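The two-model design (a detector CNN feeding a classifier CNN) can be sketched structurally in plain Python. The stub functions below stand in for trained networks, and every name, box, and label here is an illustrative assumption rather than the authors' implementation:

```python
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height of a detected hand region

def run_pipeline(frame,
                 detector: Callable[[object], List[Box]],
                 classifier: Callable[[object, Box], str]) -> List[Tuple[Box, str]]:
    """Two-stage pipeline: the detector proposes hand regions in the frame,
    then the classifier labels each detected region with a gesture class."""
    return [(box, classifier(frame, box)) for box in detector(frame)]

# Stand-in models; a real system would run trained CNNs here.
def toy_detector(frame) -> List[Box]:
    return [(10, 20, 64, 64)]   # pretend one hand was found

def toy_classifier(frame, box: Box) -> str:
    return "swipe_left"         # pretend classification result

results = run_pipeline(frame=None, detector=toy_detector, classifier=toy_classifier)
print(results)  # → [((10, 20, 64, 64), 'swipe_left')]
```

Splitting detection from classification lets each network stay small and be retrained independently, which is the usual motivation for this two-stage layout.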


Author(s):  
K M Bilvika ◽  
Sneha B K ◽  
Sahana K M ◽  
Tejaswini S M Patil

In human-computer interaction and sign language interpretation, recognizing hand gestures and detecting faces have become predominant topics in computer vision research. The primary goal of the proposed system is to identify hand gestures and detect faces in order to convey information for controlling a media player. For people who are deaf and mute, sign language is a common, efficient, and alternative way of communicating; by using hand and facial gestures we can easily understand them. Here, the hand and face are used directly as input to the device for effective communication; for gesture identification, no intermediate medium is needed.


Sign language recognition is important for natural and convenient communication between the deaf community and the hearing majority. Hand gestures are a form of nonverbal communication that makes up the bulk of communication between mute individuals, as sign language consists largely of hand gestures. Research based on hand gestures has adopted many different techniques, including those based on instrumented sensor technology and computer vision. In other words, the hand sign can be classified under many headings, such as posture and gesture, as well as dynamic and static, or a hybrid of the two. This paper focuses on a review of the literature on computer-based sign language recognition approaches, their motivations, techniques, observed limitations, and suggestions for improvement.


2015 ◽  
Author(s):  
Intidhar Jemel ◽  
Ridha Ejbali ◽  
Mourad Zaied
