Smart hand gestures recognition using K-NN based algorithm for video annotation purposes

Author(s):  
Malek Zakarya Alksasbeh ◽  
Ahmad H AL-Omari ◽  
Bassam A. Y. Alqaralleh ◽  
Tamer Abukhalil ◽  
Anas Abukarki ◽  
...  

Sign languages are among the most basic and natural forms of language, used even before the evolution of spoken languages. These sign languages were developed using various sign "gestures" made with the palm of the hand; such gestures are called "hand gestures". Hand gestures are widely used as an international assistive communication method for deaf people and in many aspects of life, such as sports, traffic control, and religious acts. However, the meanings of hand gestures vary across cultures. Because of the importance of understanding the meanings of hand gestures, this study presents a procedure which can translate such gestures into an annotated explanation. The proposed system builds on image and video processing, technologies recently recognized as among the most important. The system initially analyzes a classroom video as input and then extracts a vocabulary of twenty gestures. Various methods are applied sequentially, namely: motion detection, RGB-to-HSV conversion, and noise removal using labeling algorithms. The extracted hand parameters are passed to a K-NN algorithm to determine the hand gesture and hence its meaning. To estimate the performance of the proposed method, an experiment using a hand gesture database was performed. The results showed that the suggested method has an average recognition rate of 97%.
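The abstract's pipeline (motion detection, RGB-to-HSV conversion, labeling-based noise removal, K-NN classification) can be sketched briefly. The sketch below is a minimal OpenCV/scikit-learn rendering under stated assumptions: frame differencing stands in for the motion detector, a generic HSV skin range and Hu moments stand in for the unspecified segmentation thresholds and "hand parameters", and the training data are random placeholders, not the paper's twenty-gesture database.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def segment_hand(frame_bgr, prev_gray):
    """Return a binary mask of the moving, skin-coloured region (or None)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # 1) Motion detection by frame differencing against the previous frame.
    _, motion = cv2.threshold(cv2.absdiff(gray, prev_gray), 25, 255,
                              cv2.THRESH_BINARY)
    # 2) RGB-to-HSV conversion with a rough skin-colour range.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))
    mask = cv2.bitwise_and(motion, skin)
    # 3) Noise removal via connected-component labeling: keep the largest blob.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return (labels == largest).astype(np.uint8) * 255

def hand_parameters(mask):
    """Shape descriptors (Hu moments) used here as the K-NN feature vector."""
    return cv2.HuMoments(cv2.moments(mask)).flatten()

# 4) K-NN over a labeled gesture vocabulary; random placeholders stand in
#    for features extracted from the paper's twenty-gesture training set.
rng = np.random.default_rng(0)
train_X = rng.normal(size=(200, 7))      # 7 Hu moments per sample
train_y = rng.integers(0, 20, size=200)  # 20 gesture classes
knn = KNeighborsClassifier(n_neighbors=3).fit(train_X, train_y)
print(knn.predict(rng.normal(size=(1, 7))))
```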

Author(s):  
Seema Rawat ◽  
Praveen Kumar ◽  
Ishita Singh ◽  
Shourya Banerjee ◽  
Shabana Urooj ◽  
...  

Human-Computer Interaction (HCI) interfaces need unambiguous instructions in the form of mouse clicks or keyboard taps from the user and thus become cumbersome. To simplify this monotonous task, a real-time hand gesture recognition method using computer vision and image- and video-processing techniques has been proposed. Controlling infections has become a major concern in healthcare environments. Input devices such as keyboards, mice, and touch screens can be breeding grounds for micro-pathogens and bacteria. Using the hands directly as an input device is an innovative method for natural HCI that ensures minimal physical contact with devices, i.e., less transmission of bacteria, and can thus prevent cross-infections. A Convolutional Neural Network (CNN) has been used for object detection and classification. A CNN architecture for 3D object recognition has been proposed that consists of two models: 1) a detector, a CNN architecture for detecting gestures; and 2) a classifier, a CNN for classifying the detected gestures. By using dynamic hand gesture recognition to interact with the system, interactions can be enriched through the multidimensional use of hand gestures compared with other input methods. The dynamic hand gesture recognition method aims to replace the mouse for interaction with virtual objects. This work centres on implementing a method that employs computer vision algorithms and gesture recognition techniques to develop a low-cost interface device for interacting with objects in virtual environments, such as screens, using hand gestures.
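The two-model arrangement (a detector CNN gating a classifier CNN) can be sketched as follows. This is a hypothetical PyTorch outline, not the paper's architecture: the layer sizes, the 3D (video-clip) input shape, and the ten-class head are all assumptions.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy 3D-CNN backbone: two conv blocks and a linear head."""
    def __init__(self, out_dim):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, out_dim)

    def forward(self, clip):  # clip: (batch, 3, frames, height, width)
        return self.head(self.features(clip).flatten(1))

detector = SmallCNN(out_dim=2)     # gesture present vs. absent
classifier = SmallCNN(out_dim=10)  # assumed 10 gesture classes

def recognize(clip):
    """Run the (costlier) classifier only on clips the detector flags."""
    if detector(clip).argmax(dim=1).item() == 1:
        return classifier(clip).argmax(dim=1).item()
    return None

print(recognize(torch.rand(1, 3, 8, 64, 64)))  # one 8-frame test clip
```

Gating the classifier behind a lightweight detector keeps such a system responsive, since most frames in a live feed contain no gesture at all.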


Author(s):  
Kudirat Oyewumi Jimoh ◽  
Temilola Morufat Adepoju ◽  
Aladejobi A. Sobowale ◽  
Oluwatobi A. Ayilara

Aims: The study aimed to determine the specific features responsible for the recognition of gestures, to design a computational model for the process, and to implement the model and evaluate its performance. Place and Duration of Study: Department of Computer Engineering, Federal Polytechnic, Ede, between August 2017 and February 2018. Methodology: Samples of hand gestures were collected from a deaf school. In total, 40 samples, containing 4 gestures for each numeral, were collected and processed. The collected samples were pre-processed and rescaled from 340 × 512 pixels to 256 × 256 pixels. The samples were examined for the specific characteristics responsible for the recognition of gestures, using edge detection and the histogram of oriented gradients as feature extraction techniques. The model was implemented in MATLAB using a Support Vector Machine (SVM) as its classifier. The performance of the system was evaluated using precision, recall, and accuracy as metrics. Results: The system showed a high classification rate for the considered hand gestures. Numerals 1, 3, 5, and 7 recorded 100% accuracy; numerals 2 and 9 had 90% accuracy; numeral 4 had 85.67%; numeral 6 had 93.56%; numeral 8 had 88%; and numeral 10 recorded 90.72% accuracy. An average recognition rate of 95% on test data was recorded over a dataset of 40 hand gestures. Conclusion: The study successfully classified hand gestures for Yorùbá Sign Language (YSL), confirming that YSL could be incorporated into the deaf educational system. The developed system will enhance communication between hearing and hearing-impaired people.
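Under the stated pipeline (rescale to 256 × 256, HOG features, SVM classifier), a compact sketch looks like the following. It is written in Python with scikit-image and scikit-learn rather than the paper's MATLAB, the images and labels are random placeholders standing in for the 40 collected samples, and the HOG cell sizes are assumptions.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def extract_features(image_gray):
    """Rescale to 256x256, then compute HOG descriptors."""
    image = resize(image_gray, (256, 256))
    return hog(image, orientations=9, pixels_per_cell=(32, 32),
               cells_per_block=(2, 2))

# Placeholder data standing in for the 40 samples (4 per numeral, 10 numerals).
rng = np.random.default_rng(1)
images = rng.random((40, 340, 512))      # original 340 x 512 captures
labels = np.repeat(np.arange(1, 11), 4)  # numerals 1..10, 4 samples each

X = np.array([extract_features(im) for im in images])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X[:4]))                # sanity check on training samples
```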


2013 ◽  
Vol 09 (01) ◽  
pp. 1350007 ◽  
Author(s):  
SIDDHARTH S. RAUTARAY ◽  
ANUPAM AGRAWAL

With the increasing role of computing devices, facilitating natural human-computer interaction (HCI) will have a positive impact on their usage and acceptance as a whole. For a long time, research on HCI was restricted to techniques based on the keyboard, mouse, and similar devices. Recently, this paradigm has changed: techniques such as vision, sound, and speech recognition allow a much richer form of interaction between the user and machine, with the emphasis on providing a natural interface. Gestures are one of the natural forms of interaction between humans, and because gesture commands feel natural, the development of gesture control systems for controlling devices has become a popular research topic in recent years. Researchers have proposed various gesture recognition systems that act as interfaces for controlling applications. One drawback of present systems is application dependence, which makes it difficult to transfer a gesture control interface to a different application. This paper focuses on designing a vision-based hand gesture recognition system that is adaptive to different applications. The designed system comprises processing steps such as detection, segmentation, tracking, and recognition. To make the system application-adaptive, different quantitative and qualitative parameters have been taken into consideration. The quantitative parameters include the gesture recognition rate, the features extracted, and the root mean square error of the system, while the qualitative parameters include intuitiveness, accuracy, stress/comfort, computational efficiency, user tolerance, and real-time performance. These parameters have a vital impact on the performance of the proposed application-adaptive hand gesture recognition system.
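One way to read "application-adaptive" concretely is a pipeline whose stages share a fixed interface so any stage can be swapped per application. The skeleton below is an illustrative Python/OpenCV sketch, not the paper's implementation; the skin-colour range and the aspect-ratio rule are stand-ins.

```python
import cv2

class GesturePipeline:
    """Detection -> segmentation -> tracking -> recognition, with each stage
    replaceable so the same interface can serve different applications."""

    def detect(self, frame_bgr):
        # Stand-in detector: a rough HSV skin-colour mask.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        return cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))

    def segment(self, mask):
        # Keep the largest contour as the hand region.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea) if contours else None

    def track(self, contour):
        return cv2.boundingRect(contour)  # (x, y, w, h) of the hand

    def recognize(self, contour):
        # Stand-in rule keyed on bounding-box aspect ratio.
        x, y, w, h = cv2.boundingRect(contour)
        return "open/vertical" if h > w else "flat/horizontal"
```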


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7914
Author(s):  
Ashutosh Mishra ◽  
Jinhyuk Kim ◽  
Jaekwang Cha ◽  
Dohyun Kim ◽  
Shiho Kim

An authorized traffic controller (ATC) has the highest priority in directing road traffic: in irregular situations, the ATC supersedes other traffic controls. Human drivers intuitively understand such situations and tend to follow the ATC; an autonomous vehicle (AV), however, can become confused in such circumstances. Autonomous driving (AD) therefore crucially requires a human-level understanding of situation-aware traffic gesture recognition. In AVs, vision-based recognition is particularly desirable because of its suitability; however, such recognition systems face various bottlenecks, such as distinguishing the ATC from other humans on the road, handling the variety of ATC appearances, and coping with gloves on the ATC's hands. We propose a situation-aware traffic control hand-gesture recognition system, which includes ATC detection and gesture recognition. Three-dimensional (3D) hand-model-based gesture recognition is used to mitigate the problem associated with gloves. Our database contains separate training and test videos of approximately 60 minutes in length, captured at a frame rate of 24 frames per second, with 35,291 distinct frames belonging to traffic control hand gestures. Our approach correctly recognized traffic control hand gestures; the proposed system can therefore be considered an extension of the AV's operational domain.
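The abstract does not specify how the 3D hand model is obtained. As one hedged illustration, a landmark-based hand-pose estimator such as MediaPipe Hands yields 21 3D keypoints per hand, and classifying the gesture from keypoint geometry rather than raw pixels is one way to reduce sensitivity to gloves; this is a stand-in, not the authors' model.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)

def hand_landmarks(frame_bgr):
    """Return 21 (x, y, z) landmarks per detected hand.
    x, y are normalized image coordinates; z is depth relative to the wrist."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return []
    return [[(p.x, p.y, p.z) for p in h.landmark]
            for h in results.multi_hand_landmarks]

# A downstream gesture classifier would consume these landmark vectors,
# leaving glove colour and texture out of the feature space entirely.
```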


Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not known to most hearing people, which creates a communication barrier. In this paper, we present our solution, which captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect captures 3D images, making classification more accurate. Result: The Kinect produces different images for the hand gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used hand gestures for Indian Sign Language, and our dataset had 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All these results were obtained on a PYNQ-Z2 board.
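A minimal sketch of the depth-image CNN with the stated 80/20 split is given below (PyTorch). The tensors are small random stand-ins for the 46,339 depth images, and the architecture is illustrative; only the 36-class output and the split ratio come from the abstract.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, random_split

# Placeholder single-channel depth images (the real dataset has 46,339).
depth = torch.rand(1000, 1, 64, 64)
labels = torch.randint(0, 36, (1000,))   # 36 classes: A-Z plus 0-9

dataset = TensorDataset(depth, labels)
n_train = int(0.8 * len(dataset))        # 80/20 train-test split
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 36),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for x, y in DataLoader(train_set, batch_size=64, shuffle=True):  # one epoch
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```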


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Shahzad Ahmed ◽  
Dingyang Wang ◽  
Junyoung Park ◽  
Sung Ho Cho

In the past few decades, deep learning algorithms have become more prevalent for signal detection and classification. To design machine learning algorithms, however, an adequate dataset is required. Motivated by the existence of several open-source camera-based hand gesture datasets, this descriptor presents UWB-Gestures, the first public dataset of twelve dynamic hand gestures acquired with ultra-wideband (UWB) impulse radars. The dataset contains a total of 9,600 samples gathered from eight different human volunteers. UWB-Gestures eliminates the need to employ UWB radar hardware to train and test the algorithm. Additionally, the dataset can provide a competitive environment for the research community to compare the accuracy of different hand gesture recognition (HGR) algorithms, enabling the provision of reproducible research results in the field of HGR through UWB radars. Three radars were placed at three different locations to acquire the data, and the respective data were saved independently for flexibility.
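Because the file layout of UWB-Gestures is not described in the abstract, the loader below is hypothetical: the path, array names, and shapes are assumptions meant only to show how radar samples and labels would typically be paired for training.

```python
import numpy as np

def load_radar_samples(path="uwb_gestures_radar1.npz"):
    """Hypothetical loader for one radar's recordings."""
    data = np.load(path)
    signals = data["signals"]  # assumed shape: (n_samples, slow_time, fast_time)
    labels = data["labels"]    # one of the 12 gesture classes per sample
    return signals, labels
```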


2018 ◽  
Vol 14 (7) ◽  
pp. 155014771879075 ◽  
Author(s):  
Kiwon Rhee ◽  
Hyun-Chool Shin

In electromyogram-based hand gesture recognition, recognition accuracy may degrade in practical applications for various reasons, such as electrode-positioning bias and differences between subjects. In addition, the change in electromyogram signals caused by different arm postures, even for identical hand gestures, is an important issue. We propose an electromyogram-based hand gesture recognition technique that is robust to diverse arm postures. The proposed method uses the signals of the accelerometer and electromyogram simultaneously to recognize hand gestures correctly across various arm postures. For recognition, the electromyogram signals are statistically modeled with the arm posture taken into account. In our experiments, we compared recognition that accounted for arm posture with recognition that disregarded it. When varied arm postures were disregarded, the recognition accuracy for hand gestures was 54.1%, whereas the proposed method achieved an average recognition accuracy of 85.7%, an improvement of 31.6 percentage points. Using the accelerometer and electromyogram signals simultaneously compensated for the effect of arm posture on the electromyogram signals and therefore improved the recognition accuracy of hand gestures.
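The paper models EMG statistically per arm posture; the sketch below captures that structure in a hedged way. The accelerometer's dominant gravity axis selects one of several posture-specific classifiers, and the EMG features, the LDA classifier, and the posture rule are all placeholder assumptions rather than the authors' statistical model.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_postures, n_gestures = 3, 4

# One classifier per arm posture, trained on placeholder EMG feature vectors.
models = []
for _ in range(n_postures):
    X = rng.normal(size=(120, 8))         # 8 EMG features (e.g., MAV per channel)
    y = rng.integers(0, n_gestures, 120)
    models.append(LinearDiscriminantAnalysis().fit(X, y))

def classify(emg_features, accel_xyz):
    """Pick the posture from the dominant accelerometer axis, then classify."""
    posture = int(np.argmax(np.abs(accel_xyz)))
    return models[posture].predict(emg_features.reshape(1, -1))[0]

print(classify(rng.normal(size=8), np.array([0.1, 0.9, 0.2])))
```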


2020 ◽  
Vol 2 (1) ◽  
pp. 60-73
Author(s):  
Rahmiy Kurniasary ◽  
Ismail Sukardi ◽  
Ahmad Syarifuddin

The hand-gesture method requires a high memorization ability, and some students are not active and focused in synchronizing the pronunciation of the lafadz of verses with the corresponding hand movements when learning to memorize and interpret the Qur'an. The purpose of this study was to examine the application of the hand-gesture method in learning to memorize and interpret the Qur'an among grade X students at Madrasah Aliyah Negeri 1 Prabumulih. The research method was descriptive qualitative analysis of the application of the hand-gesture method among these students, with data collected through observation, interviews, documentation, and triangulation, and analyzed qualitatively in three stages: data reduction, data presentation, and conclusion drawing. The results are threefold. First, in applying the hand-sign method in class X.IPA3, the Al-Qur'an Hadith teacher explains the material and gives examples of verses to be memorized and interpreted using hand gestures in instructional videos shown on the projector; the students then apply the method to the verses that have been taught. Second, the supporting factors are internal (the students' willingness and ability to memorize) and external (the use of media, the teacher's skill, and a pleasant learning atmosphere). Third, the inhibiting factors are the time each student requires, the student's level of willingness, skill in making the hand gestures, and synchronization between the pronunciation of the lafadz and the hand movements.


2020 ◽  
pp. 1-15
Author(s):  
Anna Bishop ◽  
Erica A. Cartmill

Classic Maya (A.D. 250–900) art is filled with expressive figures in a variety of highly stylized poses and postures. These poses are so specific that they appear to be intentionally communicative, yet their meanings remain elusive. A few studies have scratched the surface of this issue, suggesting that a correlation exists between body language and social roles in Maya art. The present study examines whether one type of body language (hand gestures) in Classic Maya art represents and reflects elements of social structure. This analysis uses a coding approach derived from studies of hand gesture in conversation to apply an interactional approach to a static medium, thereby broadening the methods used to analyze gesture in ancient art. Statistics are used to evaluate patterns of gesture use in palace scenes across 289 figures on 94 different vases, with results indicating that the form and angling of gestures are related to social hierarchy. Furthermore, this study considers not just the individual status of each figure, but the interaction between figures. The results not only shed light on how gesture was depicted in Maya art, but also demonstrate how figural representation reflects social structure.

