dynamic gestures
Recently Published Documents


TOTAL DOCUMENTS

63
(FIVE YEARS 33)

H-INDEX

8
(FIVE YEARS 2)

Author(s):  
Abhishek Sharma ◽  
Shubham Sharma

Hand gestures form a language through which hearing people can communicate with deaf and mute people. Hand gesture recognition detects the hand pose and converts it into the corresponding letter or sentence. In recent years it has received great attention because of its applications, which rely on machine learning algorithms. Hand gesture recognition is an important application of human-computer interaction, an emerging research field based on human-centered computing that aims to understand human gestures and integrate users and their social context with computer systems. One of the unique and challenging tasks in this framework is to collect information about dynamic human gestures. Keywords: Covid-19, SIRD model, Linear Regression, XGBoost, Random Forest Regression, SVR, LightGBM, Machine learning, Intervention.


Author(s):  
Priyanshi Gupta ◽  
Amita Goel ◽  
Nidhi Sengar ◽  
Vashudha Bahl

Hand gestures form a language through which hearing people can communicate with deaf and mute people. Hand gesture recognition detects the hand pose and converts it into the corresponding letter or sentence. In recent years it has received great attention because of its applications, which rely on machine learning algorithms. Hand gesture recognition is an important application of human-computer interaction, an emerging research field based on human-centered computing that aims to understand human gestures and integrate users and their social context with computer systems. One of the unique and challenging tasks in this framework is to collect information about dynamic human gestures. Keywords: TensorFlow, Machine learning, React.js, handmark model, MediaPipe
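The pose-to-letter step this abstract describes can be sketched as a nearest-centroid classifier over normalized hand landmarks. This is an illustrative sketch, not the authors' model: it assumes MediaPipe-style (x, y) landmarks with the wrist at index 0 (real hand models use 21 points; the toy templates below use 3), and the template letters are invented for demonstration.

```python
import math

def normalize(landmarks):
    """Translate landmarks so the wrist (point 0) is the origin,
    then scale so the farthest point lies at distance 1.
    `landmarks` is a list of (x, y) tuples."""
    wx, wy = landmarks[0]
    shifted = [(x - wx, y - wy) for x, y in landmarks]
    scale = max(math.hypot(x, y) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

def classify(landmarks, templates):
    """Nearest-centroid classification: return the label whose
    template landmarks are closest in summed Euclidean distance."""
    pts = normalize(landmarks)
    def dist(tpl):
        return sum(math.hypot(px - tx, py - ty)
                   for (px, py), (tx, ty) in zip(pts, tpl))
    return min(templates, key=lambda label: dist(templates[label]))
```

The normalization makes recognition invariant to hand position and size, which is why such systems convert poses rather than raw pixel coordinates into letters.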


Author(s):  
Dmitry Ryumin ◽  
Ildar Kagirov ◽  
Alexander Axyonov ◽  
Alexey Karpov

Introduction: Currently, the recognition of gestures and sign languages is one of the most intensively developing areas in computer vision and applied linguistics. The results of current investigations are applied in a wide range of areas, from sign language translation to gesture-based interfaces. In that regard, various systems and methods for the analysis of gestural data are being developed. Purpose: A detailed review of methods and a comparative analysis of current approaches in automatic recognition of gestures and sign languages. Results: The main gesture recognition problems are the following: detection of articulators (mainly hands), pose estimation, and segmentation of gestures in the flow of speech. The authors conclude that the use of two-stream convolutional and recurrent neural network architectures is generally promising for efficient extraction and processing of spatial and temporal features, thus solving the problem of dynamic gestures and coarticulation. This solution, however, depends heavily on the quality and availability of datasets. Practical relevance: This review can be considered a contribution to the study of rapidly developing sign language recognition, irrespective of particular natural sign languages. The results of the work can be used in the development of software systems for automatic gesture and sign language recognition.
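One common baseline for the segmentation problem the review names (finding gesture boundaries in a continuous signing stream) is thresholding frame-to-frame motion energy. The sketch below is an illustration of that idea under assumed inputs (per-frame feature vectors and a hand-picked threshold), not a method from any of the reviewed systems:

```python
def segment_gestures(frames, threshold=0.5):
    """Split a stream of per-frame feature vectors into gesture
    segments: consecutive frames whose motion energy (mean absolute
    frame-to-frame difference) stays above `threshold` form one
    segment; low-motion frames are treated as rest position.
    Returns a list of (start, end) index pairs, end exclusive."""
    segments, start = [], None
    for i in range(1, len(frames)):
        energy = sum(abs(a - b) for a, b in
                     zip(frames[i], frames[i - 1])) / len(frames[i])
        if energy > threshold and start is None:
            start = i - 1                # motion begins
        elif energy <= threshold and start is not None:
            segments.append((start, i))  # motion ends
            start = None
    if start is not None:
        segments.append((start, len(frames)))
    return segments
```

Two-stream architectures would then classify each segment from its spatial and temporal features; this heuristic only delimits where those segments begin and end.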


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xianmin Ma ◽  
Xiaofeng Li

Current dynamic gesture contour feature extraction methods suffer from a low recognition rate for contour features, low recognition accuracy for gesture types, long recognition times, and poor comprehensive performance. We therefore propose a dynamic gesture contour feature extraction method using residual network transfer learning. Sensors are used to integrate dynamic gesture information. The distance between the dynamic gesture and the acquisition device is detected by transfer learning, the dynamic gesture image is segmented, and the characteristic contour image is initialized. The residual network method is used to accurately identify the contour and texture features of dynamic gestures. Fusion processing weights are used to trace the contour features of dynamic gestures frame by frame, and the contour area of the dynamic gestures is processed by grayscale conversion and binarization to extract the contour features. The results show that the dynamic gesture contour feature recognition rate of the proposed method is 91%, the recognition time is 11.6 s, and the gesture type recognition accuracy is 92%, with an F value of 0.92. This method can therefore effectively improve the recognition rate and type recognition accuracy of dynamic gesture contour features, shorten recognition time, and achieve good comprehensive performance.
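The grayscale-and-binarization step mentioned in the abstract can be sketched as follows. The fixed threshold and the 4-neighbour boundary test are illustrative choices; the paper's residual-network and transfer-learning stages are not reproduced here.

```python
def binarize(gray, threshold=128):
    """Threshold a grayscale image (2D list of 0-255 values)
    into a binary mask: 1 for foreground, 0 for background."""
    return [[1 if v >= threshold else 0 for v in row] for row in gray]

def contour_pixels(mask):
    """Return coordinates of foreground pixels that touch the
    background (or the image border) via a 4-neighbour,
    i.e. the pixels forming the gesture contour."""
    h, w = len(mask), len(mask[0])
    contour = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    contour.append((y, x))
                    break
    return contour
```

Binarization isolates the hand region from the background so that only its outline, rather than the full texture, needs to be traced frame by frame.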


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yuting Liu ◽  
Du Jiang ◽  
Haojie Duan ◽  
Ying Sun ◽  
Gongfa Li ◽  
...  

Gesture recognition is one of the important modes of human-computer interaction and is mainly performed with visual techniques: temporal and spatial features are extracted by convolving the video containing the gesture. However, compared with the convolution of a single image, the multiframe images of dynamic gestures require more computation, more complex feature extraction, and more network parameters, which affects the recognition efficiency and real-time performance of the model. To solve these problems, a dynamic gesture recognition model based on CBAM-C3D is proposed. Key frame extraction, multimodal joint training, and network optimization with BN layers are used to improve network performance. Experiments show that the recognition accuracy of the proposed 3D convolutional neural network combined with an attention mechanism reaches 72.4% on the EgoGesture dataset, a large improvement over current mainstream dynamic gesture recognition methods, which verifies the effectiveness of the proposed algorithm.
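Key frame extraction, the first of the three techniques listed, can be approximated by keeping the frames that change most relative to their predecessors, which is what lets a 3D CNN see fewer frames without losing the motion. The difference metric and frame budget below are illustrative assumptions, not the paper's procedure:

```python
def key_frames(frames, k):
    """Keep the first frame plus the k-1 frames whose mean absolute
    difference from their predecessor is largest, preserving temporal
    order. `frames` is a list of equal-length pixel/feature vectors;
    returns the kept frame indices."""
    if k >= len(frames):
        return list(range(len(frames)))
    diffs = [(sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1]))
              / len(frames[i]), i)
             for i in range(1, len(frames))]
    keep = {i for _, i in sorted(diffs, reverse=True)[:k - 1]}
    keep.add(0)  # always anchor on the first frame
    return sorted(keep)
```

Reducing a clip to its key frames directly cuts the convolution cost that the abstract identifies as the bottleneck for multiframe input.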


2021 ◽  
Author(s):  
Daniel Chen ◽  
Ming C. Leu ◽  
Zhaozheng Yin ◽  
Wenjin Tao

Author(s):  
Ajabe Harshada

Communication is the medium through which we share our thoughts and convey messages to other people. Nowadays we can issue commands using voice recognition, but what if a person cannot hear and consequently cannot speak? Sign language is then the main communication tool for hearing-impaired and mute people, and to ensure an independent life for them, the automatic interpretation of sign language is an extensive research area. Sign language recognition (SLR) aims to interpret sign languages automatically by an application in order to help deaf people communicate with the hearing society conveniently. Our aim is to design a system that helps deaf and mute people communicate with the rest of the world using sign language. With the use of image processing and artificial intelligence, many techniques and algorithms have been developed in this area. Every sign language recognition system is trained to recognize the signs and convert them into the required pattern. The proposed system aims to give speech to the speechless: in this paper we introduce sign language recognition using a CNN for dynamic gestures to achieve faster results with high accuracy.
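When a per-frame CNN classifies dynamic gestures, a common post-processing step is to smooth the stream of frame-level predictions so a single misclassified frame does not corrupt the recognized sign. This majority-vote sketch is a generic illustration of that step, not the method of this paper; the window size is an assumed parameter.

```python
from collections import Counter

def smooth_predictions(labels, window=3):
    """Replace each per-frame label with the majority label in a
    centered window, stabilizing flickery frame-level CNN output.
    `labels` is the list of predicted sign labels, one per frame."""
    half = window // 2
    out = []
    for i in range(len(labels)):
        votes = labels[max(0, i - half): i + half + 1]
        out.append(Counter(votes).most_common(1)[0][0])
    return out
```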


Author(s):  
K. Mary Sudha Rani

Gesture-based interaction is the mathematical interpretation of human motion by a computing device. Contactless gesture-based interaction aims to offer new possibilities for interacting with machines, enabling the design and development of far more natural and intuitive interactions with computing machines. The system makes use of static and dynamic gestures in order to perform operations on a system. This paper provides a detailed review of contactless systems, which facilitate a better means of interaction between humans and machines.


Author(s):  
Iurii Krak ◽  
Ruslan Bahrii ◽  
Olexander Barmak

The article describes an information technology for alternative communication implemented by non-contact text entry using a limited number of simple dynamic gestures. Non-contact text entry technologies and motion tracking devices are analysed. A model of the human hand is proposed which provides information on the position of the hand at each moment in time. Parameters sufficient for recognizing static and dynamic gestures are identified. The process of calculating the features of the various components of movement that occur when showing dynamic hand gestures is considered. Common methods for selecting letters in non-contact text entry are analysed. To implement the user interaction interface, it is proposed to use a radial virtual keyboard whose keys contain grouped alphabet letters. A functional model and a human-computer interaction model of non-contact text entry have been developed. This made it possible to build an easy-to-use software system for alternative communication, implemented as non-contact text entry using hand gestures. The developed software system provides a communication mechanism for people with disabilities.
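The core of a radial virtual keyboard is mapping a hand-movement direction to one of the grouped-letter keys laid out around a circle. The sketch below shows that mapping under assumed conventions (equal sectors, sector 0 centered on the positive x axis, counter-clockwise order); the letter groupings and geometry of the authors' keyboard are not specified in the abstract.

```python
import math

def select_key(dx, dy, groups):
    """Map a hand-movement direction (dx, dy) to one of len(groups)
    letter groups laid out as equal sectors of a radial keyboard.
    Sector 0 is centered on the positive x axis, counting
    counter-clockwise."""
    n = len(groups)
    angle = math.atan2(dy, dx) % (2 * math.pi)
    # Shift by half a sector so each key is centered on its direction.
    sector = int((angle + math.pi / n) // (2 * math.pi / n)) % n
    return groups[sector]
```

A second gesture (or dwell time) would then pick the individual letter inside the selected group, which is how grouped keys keep the number of distinct gestures small.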


Kinesic Humor ◽  
2021 ◽  
pp. 25-50
Author(s):  
Guillemette Bolens

John Milton plays with his readers’ embodied cognition. Reading Paradise Lost triggers complex perceptual simulations that are fascinatingly conflicting at the level of sensorimotricity. The carefully crafted effects thus elicited lead to a possible experience of humor. Kinesic incongruities in Paradise Lost are studied in this chapter to show how critics and expert readers respond to them, and to suggest that such effects are correlated with Milton’s investment in the notion of free will. The fact that Milton was able to create suspense in a plot known by all is addressed in relation to surprisingly dynamic gestures and the impact they may have on the ways in which readers conceive of the Fall of humankind.

