hand action
Recently Published Documents

TOTAL DOCUMENTS: 78 (five years: 17)
H-INDEX: 18 (five years: 1)

2021 ◽  
Author(s):  
Rameez Shamalik ◽  
Sanjay Koli

Gestures are a universal means of communication without any language barrier. Detecting gestures and recognizing their meaning are key steps for researchers in computer vision. The majority of the work to date has been done in sign language. Sign language datasets are compared with respect to their usability and diversity in terms of various signs. This paper highlights the available datasets, from three-dimensional body scans to hand action gestures. Their usability and the strategies used to achieve the desired results are also discussed. Major neural networks are evaluated in terms of varied parameters and features. A methodology for effective gesture recognition in real time is proposed. Lastly, results achieved through an OpenCV and scikit-learn based technique for gesture recognition are presented and analyzed in terms of efficacy and efficiency.
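The feature-then-classifier pipeline described in this abstract can be sketched in a few lines. The snippet below is a minimal stdlib-only stand-in for the classification step: a nearest-centroid classifier (the same idea as scikit-learn's NearestCentroid) over hypothetical per-frame hand features such as normalized contour area and convexity-defect count. The feature names and values are illustrative assumptions, not taken from the paper; in practice OpenCV would compute the features and scikit-learn would supply the classifier.

```python
# Minimal gesture-classification sketch: nearest-centroid over hand features.
# Feature vectors here are hypothetical [normalized contour area,
# convexity-defect count] values that OpenCV-style processing might yield.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(col) for col in zip(*rows)]

def train(samples):
    """samples: {gesture_label: [feature_vector, ...]} -> centroid per label."""
    return {label: centroid(rows) for label, rows in samples.items()}

def predict(model, x):
    """Return the label whose centroid is nearest (squared Euclidean)."""
    return min(model, key=lambda label:
               sum((a - b) ** 2 for a, b in zip(model[label], x)))

# Illustrative training frames for two gestures.
samples = {
    "fist": [[0.20, 0.0], [0.25, 1.0]],
    "open_palm": [[0.60, 4.0], [0.65, 5.0]],
}
model = train(samples)
print(predict(model, [0.22, 0.0]))   # fist-like frame -> "fist"
print(predict(model, [0.63, 4.0]))   # palm-like frame -> "open_palm"
```

For real-time use, the same predict call would simply run once per captured frame on freshly extracted features.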


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Jarkko Keränen

Abstract Iconic strategies—methods of making iconic forms—have been mostly considered in terms of concrete semantic fields such as actions and objects. In this article, I investigate iconic strategies in lexical sensory signs—signs that semantically relate to the five senses (sight, touch, smell, sound, and taste) and to emotions (e.g., anger)—in Finnish Sign Language. The iconic strategy types I discuss are hand-action, entity, drawing, and locating. I also discuss the indexical strategy type (e.g., finger pointing). To gain as rich and broad a view as possible, the mixed methods in the research consist of three components: intuition-based, intersubjective, and statistical analyses. The main findings are (1) that, in order from most preferred to least preferred strategy, the hand-action, the entity, the indexical, and the drawing were found in lexical sensory signs; the locating strategy was not found at all, and (2) that the interpretation of iconic strategies is not always unambiguous and absolute. In conclusion, I reflect on methodological issues, and suggest that the concept of cross-modal iconicity and indexicality should be further studied in sign language linguistics.


2021 ◽  
Author(s):  
Mariagrazia Ranzini ◽  
Carlo Semenza ◽  
Marco Zorzi ◽  
Simone Cutini

Embodied and grounded cognition theories suggest that cognitive processes are built upon sensorimotor systems. In the context of studies on numerical cognition, interactions between number processing and the hand actions of reaching and grasping have been documented in skilled adults, thereby supporting embodied and grounded cognition accounts. The present study made use of the neurophysiological principle of neural adaptation applied to repetitive hand actions to test the hypothesis of a functional overlap between neurocognitive mechanisms of hand action and number processing. Participants performed repetitive grasping of an object, repetitive pointing, repetitive tapping, or passive viewing. Subsequently, they performed a symbolic number comparison task. Importantly, hand action and number comparison were functionally and temporally dissociated, thereby minimizing context-based effects. Results showed that executing the action of pointing slowed down the responses in number comparison. Moreover, the typical distance effect (faster responses for numbers far from the reference as compared to close ones) was not observed for small numbers after pointing, while it was enhanced by grasping. These findings confirm the functional link between hand action and number processing, and suggest new hypotheses on the role of pointing as a meaningful gesture in the development and embodiment of numerical skills.


2021 ◽  
pp. 1-15
Author(s):  
S. Rubin Bose ◽  
V. Sathiesh Kumar

The real-time perception of hand gestures in a deprived environment is a demanding machine vision task. Hand recognition is more strenuous under different illumination conditions and varying backgrounds. Robust recognition and classification are the vital steps in supporting effective human-machine interaction (HMI), virtual reality, etc. In this paper, real-time hand action recognition is performed using an optimized deep residual network model. It incorporates a RetinaNet model for hand detection and a depthwise separable convolution (DSC) layer for precise hand gesture recognition. The proposed model overcomes the class imbalance problems encountered by conventional single-stage hand action recognition algorithms. The integrated DSC layer reduces the computational parameters and enhances the recognition speed. The model utilizes a ResNet-101 CNN architecture as a feature extractor. The model is trained and evaluated on the MITI-HD dataset and compared with the benchmark datasets (NUSHP-II, Senz-3D). The network achieved higher precision and recall values for an IoU value of 0.5. The RetinaNet-DSC model with a ResNet-101 backbone network obtained higher precision (99.21% for AP0.5, 96.80% for AP0.75) on the MITI-HD dataset. Higher performance metrics are obtained for γ = 2 and α = 0.25. SGD with momentum outperformed the other optimizers (Adam, RMSprop) for the datasets considered in the studies. The prediction time of the optimized deep residual network is 82 ms.


2021 ◽  
Author(s):  
Viet-Duc Le ◽  
Van-Nam Hoang ◽  
Tien-Thanh Nguyen ◽  
Van-Hung Le ◽  
Thanh-Hai Tran ◽  
...  

2021 ◽  
Vol 15 (3) ◽  
pp. 177-186
Author(s):  
Radovan Gregor ◽  
Andrej Babinec ◽  
František Duchoň ◽  
Michal Dobiš

Abstract The research behind this paper arose from the need for an open-source system that enables hand guiding of a robot effector using a force sensor. The paper reviews some existing solutions, including one based on the open-source Robot Operating System (ROS) framework and its built-in motion planner, MoveIt. The proposed hand-guiding system uses the output of a force–torque sensor mounted on the robot effector to obtain the desired motion, which is then used to plan the subsequent motion trajectories. Advantages and disadvantages of the built-in planner are discussed, and a custom motion planning solution is proposed to overcome the identified drawbacks. Our planning algorithm uses polynomial interpolation and is suitable for continuous replanning of the subsequent motion trajectories, which is necessary because the sensor output changes with the hand action during robot motion. The resulting system is verified using a virtual robot in the ROS environment driven by the real Optoforce HEX-70-CE-2000N force–torque sensor. Furthermore, the workspace and the motion of the robot are further restricted to achieve a more realistic simulation.
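Polynomial interpolation for continuous replanning, as used by the custom planner above, can be illustrated with a single-joint cubic segment that matches position and velocity at both endpoints, so each replanned segment takes over from the current state without a velocity jump. The boundary values below are illustrative assumptions; the actual planner's polynomial order and constraints may differ.

```python
# Cubic trajectory segment q(t) = a0 + a1*t + a2*t^2 + a3*t^3 matching
# boundary position/velocity (q0, v0) at t = 0 and (q1, v1) at t = T.
# Matching velocities lets a replanned segment blend in smoothly when the
# force-torque reading changes mid-motion.

def cubic_coeffs(q0, v0, q1, v1, T):
    a0, a1 = q0, v0
    a2 = (3 * (q1 - q0) - (2 * v0 + v1) * T) / T ** 2
    a3 = (2 * (q0 - q1) + (v0 + v1) * T) / T ** 3
    return a0, a1, a2, a3

def evaluate(coeffs, t):
    """Return (position, velocity) at time t."""
    a0, a1, a2, a3 = coeffs
    q = a0 + a1 * t + a2 * t ** 2 + a3 * t ** 3
    v = a1 + 2 * a2 * t + 3 * a3 * t ** 2
    return q, v

# Move a joint from 0 to 1 rad in 2 s, starting and ending at rest.
c = cubic_coeffs(0.0, 0.0, 1.0, 0.0, 2.0)
print(evaluate(c, 0.0))   # (0.0, 0.0)
print(evaluate(c, 2.0))   # (1.0, 0.0)
print(evaluate(c, 1.0))   # midpoint: (0.5, 0.75)
```

On each sensor update, the planner would recompute the coefficients with the current (q, v) as the new start state and the freshly inferred target as the new endpoint.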


Author(s):  
Kristína Czekóová ◽  
Daniel Joel Shaw ◽  
Martin Lamoš ◽  
Beáta Špiláková ◽  
Miguel Salazar ◽  
...  

Abstract During social interactions, humans tend to imitate one another involuntarily. To investigate the neurocognitive mechanisms driving this tendency, researchers often employ stimulus-response compatibility (SRC) tasks to assess the influence that action observation has on action execution. This is referred to as automatic imitation (AI). The stimuli used frequently in SRC procedures to elicit AI often confound action-related influences on behaviour with other, nonsocial ones; in response to the increasingly employed rotated hand-action stimuli, AI partly reflects unspecific up-right/down-left biases in stimulus-response mapping. Despite an emerging awareness of this confounding orthogonal spatial-compatibility effect, psychological and neuroscientific research into social behaviour continues to employ these stimuli to investigate AI. To increase recognition of this methodological issue, the present study measured the systematic influence of orthogonal spatial effects on behavioural and neurophysiological measures of AI acquired with rotated hand-action stimuli in SRC tasks. In Experiment 1, behavioural data from a large sample revealed that complex orthogonal spatial effects influence AI over and above any topographical similarity between observed and executed actions. Experiment 2 reproduced this finding in a more systematic, within-subject design, and high-density electroencephalography revealed that electrocortical expressions of AI are also modulated by orthogonal spatial compatibility. Finally, source localisations identified a collection of cortical areas sensitive to this spatial confound, including nodes of the multiple-demand and semantic-control networks. These results indicate that AI measured in SRC procedures with the commonly used rotated hand stimuli might reflect neurocognitive mechanisms associated with spatial associations rather than imitative tendencies.


Author(s):  
Alberto Sabater ◽  
Inigo Alonso ◽  
Luis Montesano ◽  
Ana Cristina Murillo

Author(s):  
Rui Li ◽  
Hongyu Wang ◽  
Zhenyu Liu ◽  
Na Cheng ◽  
Hongye Xie
