A Vision-Based Framework for Spotting and Segmentation of Gesture-Based Assamese Characters Written in the Air

2021 ◽  
Vol 14 (1) ◽  
pp. 70-91
Author(s):  
Ananya Choudhury ◽  
Kandarpa Kumar Sarma

Automatically spotting and segmenting meaningful gesture patterns within continuous gesture-based character sequences is a challenging task. This paper proposes a vision-based automatic method that simultaneously handles hand gesture spotting and segmentation of gestural characters embedded in a continuous character stream, by employing a hybrid geometrical and statistical feature set. This framework forms an important constituent of gesture-based character recognition (GBCR) systems, which have lately seen tremendous demand as assistive aids for overcoming the restraints faced by people with physical impairments. The performance of the proposed system is validated on the vowels and numerals of the Assamese vocabulary. Another feature of the proposed system is an effective hand segmentation module, which enables it to handle complex background settings.
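To make the idea of a hybrid geometrical and statistical feature set concrete, the sketch below extracts both kinds of features from a 2D air-writing trajectory. The specific features (a stroke-direction histogram, segment lengths, centroid and spread) are illustrative assumptions, not the authors' actual feature set.

```python
import numpy as np

def trajectory_features(points):
    """Hybrid feature vector for an air-written character given as an
    (N, 2) array of tracked (x, y) fingertip positions."""
    pts = np.asarray(points, dtype=float)
    deltas = np.diff(pts, axis=0)                     # successive displacements
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])   # geometric: stroke directions
    lengths = np.linalg.norm(deltas, axis=1)          # geometric: segment lengths
    centroid = pts.mean(axis=0)                       # statistical: mean location
    spread = pts.std(axis=0)                          # statistical: dispersion
    direction_hist = np.histogram(
        angles, bins=8, range=(-np.pi, np.pi))[0] / len(angles)
    return np.concatenate([
        direction_hist,                 # 8 geometric features
        [lengths.sum(), lengths.mean()],  # 2 geometric features
        centroid, spread,               # 4 statistical features
    ])
```

A feature vector like this could then feed a classifier that decides whether a trajectory segment is a meaningful gesture or a transitional movement.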

Author(s):  
Ananya Choudhury ◽  
Kandarpa Kumar Sarma

In the present scenario, around 15% of the world's population experiences some form of disability. Consequently, there has been an enormous increase in the demand for assistive techniques for overcoming the restraints faced by people with physical impairments. More recently, gesture-based character recognition (GBCR) has emerged as an assistive tool of immense importance, especially for facilitating the needs of persons with special necessities. Such GBCR systems serve as a powerful mediator for communication among people with hearing and speech impairments. They can also serve as a rehabilitative aid for people with motor disabilities who cannot write with pen on paper, or who face difficulty in using common human–machine interface (HMI) devices. This chapter provides a glimpse of disability prevalence around the globe and particularly in India, emphasizes the importance of learning-based GBCR systems in the practical education of differently-abled children, and highlights the novel research contributions made in this field.


2020 ◽  
Vol 6 (8) ◽  
pp. 73 ◽  
Author(s):  
Munir Oudah ◽  
Ali Al-Naji ◽  
Javaan Chahl

Hand gestures are a form of nonverbal communication that can be used in several fields such as communication between deaf-mute people, robot control, human–computer interaction (HCI), home automation and medical applications. Research papers based on hand gestures have adopted many different techniques, including those based on instrumented sensor technology and computer vision. Broadly, hand signs can be classified under several headings, such as posture versus gesture, dynamic versus static, or a hybrid of the two. This paper reviews the literature on hand gesture techniques and introduces their merits and limitations under different circumstances. In addition, it tabulates the performance of these methods, focusing on computer vision techniques, in terms of their similarities and differences, hand segmentation technique used, classification algorithms and drawbacks, number and types of gestures, dataset used, detection range (distance) and type of camera used. This paper is a thorough general overview of hand gesture methods with a brief discussion of some possible applications.


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3226
Author(s):  
Radu Mirsu ◽  
Georgiana Simion ◽  
Catalin Daniel Caleanu ◽  
Ioana Monica Pop-Calimanu

Gesture recognition is an intensively researched area for several reasons. One of the most important is this technology's numerous applications in various domains (e.g., robotics, games, medicine, automotive). Additionally, the introduction of three-dimensional (3D) image acquisition techniques (e.g., stereovision, projected light, time-of-flight) overcomes the limitations of traditional two-dimensional (2D) approaches. Combined with the wider availability of 3D sensors (e.g., Microsoft Kinect, Intel RealSense, photonic mixer device (PMD) CamCube), these advances have sparked recent interest in the domain. Moreover, in many computer vision tasks, traditional statistical approaches have been outperformed by deep neural network-based solutions. In view of these considerations, we propose a deep neural network solution employing the PointNet architecture for hand gesture recognition using depth data produced by a time-of-flight (ToF) sensor. We created a custom hand gesture dataset and propose a multistage hand segmentation comprising filtering, clustering, locating the hand in a volume of interest, and hand-forearm segmentation. For comparison purposes, two equivalent datasets were tested: a 3D point cloud dataset and a 2D image dataset, both obtained from the same stream. Beyond the inherent advantages of 3D technology, the accuracy of the 3D method using PointNet is shown to outperform the 2D method in all circumstances, even when the 2D method employs a deep neural network.
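The multistage segmentation described above can be illustrated with a minimal volume-of-interest filter on a ToF point cloud. The assumption that the hand is the surface closest to the sensor, and the 0.15 m depth band, are hypothetical simplifications for illustration; the paper's actual pipeline also involves clustering and hand-forearm separation.

```python
import numpy as np

def segment_hand(points, depth_band=0.15):
    """Crude volume-of-interest hand segmentation for a ToF point cloud.

    points: (N, 3) array of (x, y, z) in metres, z = distance from sensor.
    Keeps points within a depth band behind the nearest surface, then
    centres the cloud, as PointNet-style networks expect normalised input.
    """
    pts = np.asarray(points, dtype=float)
    z_min = pts[:, 2].min()                 # nearest surface, assumed to be the hand
    mask = pts[:, 2] <= z_min + depth_band  # thin slab containing the hand
    hand = pts[mask]
    return hand - hand.mean(axis=0)         # zero-centre for the network
```

In a full pipeline, the retained slab would still be clustered to reject nearby clutter before the forearm is trimmed away.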


2020 ◽  
Vol 17 (4) ◽  
pp. 1764-1769
Author(s):  
S. Gobhinath ◽  
T. Vignesh ◽  
R. Pavankumar ◽  
R. Kishore ◽  
K. S. Koushik

This paper presents an overview of several segmentation techniques for hand gesture recognition. Hand gesture recognition has evolved tremendously in recent years because of its ability to support interaction with machines. Human gestures have been incorporated into modern technologies such as touch-screen interaction, virtual reality gaming and sign language prediction. This research employs hand gesture recognition for sign language interpretation as a human-computer interaction application. Sign language conveys meaning through hand shapes, orientation and movements, allowing users to express their thoughts fluently to others; it is normally used by physically challenged people who cannot speak or hear. Automatic sign language interpretation requires robust and accurate techniques for identifying hand signs, or a sequence of produced gestures, to help interpret their correct meaning. The hand segmentation algorithms surveyed combine different hand detection schemes with the required morphological processing. Many methods can be used to obtain the respective results, each with its own advantages.
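As an illustration of the morphological processing these segmentation pipelines rely on, the sketch below implements a morphological opening (erosion followed by dilation) on a binary hand mask using NumPy only; in practice a library routine (e.g. from OpenCV or SciPy) would be used, and the 3×3 structuring element here is an illustrative choice.

```python
import numpy as np

def binary_open(mask, k=3):
    """Morphological opening of a binary hand mask with a k x k square
    structuring element; removes speckle noise smaller than the element."""
    pad = k // 2

    def erode(m):
        p = np.pad(m, pad, constant_values=False)
        out = np.ones_like(m, dtype=bool)
        for dy in range(k):          # pixel survives only if its whole
            for dx in range(k):      # k x k neighbourhood is foreground
                out &= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        return out

    def dilate(m):
        p = np.pad(m, pad, constant_values=False)
        out = np.zeros_like(m, dtype=bool)
        for dy in range(k):          # pixel turns on if any neighbour
            for dx in range(k):      # in the k x k window is foreground
                out |= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        return out

    return dilate(erode(mask.astype(bool)))
```

Opening removes isolated noise pixels left over from skin-colour thresholding while restoring the hand blob to roughly its original extent.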


Author(s):  
DHARANI MAZUMDAR ◽  
ANJAN KUMAR TALUKDAR ◽  
Kandarpa Kumar Sarma

Hand gesture recognition systems can be used for human-computer interaction (HCI). Proper segmentation of the hand from the background and other body parts in the video is the primary requirement for designing a hand-gesture-based application. The video frames can be captured from a low-cost webcam for use in a vision-based gesture recognition technique. This paper discusses continuous hand gesture recognition. Its aim is to report a robust and efficient hand segmentation algorithm based on a new method in which a glove is worn on the hand. Building on this, a new idea called "Finger-Pen" is developed by segmenting only one finger from the hand for proper tracking. In this technique, only a fingertip is segmented instead of the full hand, so the hand (except the segmented fingertip) can move freely during tracking. Problems such as skin colour detection, complexity from large numbers of people in front of the camera, complex background removal and variable lighting conditions are found to be handled efficiently by the system. Noise present in the segmented image due to a dynamic background can be removed with the help of this adaptive technique, which is found to be effective for the conceived application.
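The "Finger-Pen" idea can be sketched as a simple per-channel colour threshold that isolates a brightly coloured glove fingertip and reports its centroid as the tracked pen position. The green colour range below is a hypothetical calibration, not the authors' values, and a real system would add the adaptive noise removal the paper describes.

```python
import numpy as np

def fingertip_mask(frame_rgb, lo=(0, 120, 0), hi=(80, 255, 80)):
    """Segment a coloured glove fingertip from an RGB frame by
    per-channel thresholding and return (mask, centroid).

    lo/hi: inclusive per-channel bounds; the defaults select a
    saturated green patch (illustrative assumption).
    """
    f = np.asarray(frame_rgb)
    lo_a, hi_a = np.array(lo), np.array(hi)
    mask = np.all((f >= lo_a) & (f <= hi_a), axis=-1)
    if mask.any():
        ys, xs = np.nonzero(mask)
        centroid = (int(xs.mean()), int(ys.mean()))  # tracked pen position
    else:
        centroid = None                              # fingertip not visible
    return mask, centroid
```

Tracking only the fingertip centroid, rather than the whole hand silhouette, is what lets the rest of the hand move freely without disturbing the trajectory.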

