gestural interaction
Recently Published Documents


TOTAL DOCUMENTS

138
(FIVE YEARS 26)

H-INDEX

16
(FIVE YEARS 2)

Author(s):  
Pablo Torres-Carrion ◽  
Carina González-González ◽  
César Bernal Bravo ◽  
Alfonso Infante-Moro

Abstract: People with Down syndrome present cognitive difficulties that affect their reading skills. In this study, we present results on using gestural interaction with the Kinect sensor to improve the reading skills of students with Down syndrome. We found improvements in visual association, visual comprehension, sequential memory, and visual integration after this stimulation in the experimental group compared to the control group. We also found that the number of errors and the delay time in the interaction decreased between sessions in the experimental group.


2021 ◽  
Author(s):  
Weihan Huang ◽  
Stephanie Bourgeois ◽  
Yun Suen Pai ◽  
Kouta Minamizawa

Symmetry ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 1926
Author(s):  
Yiqi Xiao ◽  
Ke Miao ◽  
Chenhan Jiang

A stroke is a basic limb movement that both humans and animals perform naturally and repetitively. Since their introduction into gestural interaction, mid-air stroke gestures have seen a wide range of applications and fairly intuitive use. In this paper, we present an approach to building command-to-gesture mappings that exploits the semantic association between interactive commands and the directions of mid-air unistroke gestures. Directional unistroke gestures make use of the symmetry in the semantics of commands, which yields a more systematic gesture set for users' cognition and reduces the number of gestures users need to learn. However, the learnability of directional unistroke gestures varies across commands. Through a user elicitation study, a set of eight directional mid-air unistroke gestures was selected based on subjective ratings of how strongly each direction is associated with its corresponding command. We evaluated this gesture set in a follow-up study to investigate learnability, comparing the directional mid-air unistroke gestures with user-preferred freehand gestures. Our findings offer preliminary evidence that “return”, “save”, “turn-off” and “mute” are the interaction commands best suited to directional mid-air unistrokes, which may have implications for the design of mid-air gestures in human–computer interaction.
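The core idea of a direction-to-command mapping can be sketched as follows: snap a stroke's overall displacement angle to the nearest of eight 45° sectors, each bound to a command. This is a minimal illustration, not the authors' method; only "return", "save", "turn-off", and "mute" come from the abstract, and the remaining command names (and the specific angle assignments) are hypothetical placeholders.

```python
import math

# Hypothetical mapping of eight stroke directions (degrees, counterclockwise
# from +x) to commands. "return", "save", "turn-off", "mute" appear in the
# abstract; the other four are illustrative placeholders.
DIRECTION_COMMANDS = {
    0: "forward", 45: "save", 90: "turn-on", 135: "open",
    180: "return", 225: "discard", 270: "turn-off", 315: "mute",
}

def classify_unistroke(points):
    """Classify a mid-air unistroke by the direction of its overall
    displacement (first point to last), snapped to the nearest sector."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360
    # Pick the sector whose center is angularly closest, wrapping at 360.
    nearest = min(DIRECTION_COMMANDS,
                  key=lambda d: min(abs(angle - d), 360 - abs(angle - d)))
    return DIRECTION_COMMANDS[nearest]

# A roughly leftward stroke (180 degrees) maps to "return".
print(classify_unistroke([(0.9, 0.5), (0.6, 0.52), (0.1, 0.5)]))  # → return
```

Using only the endpoints keeps the classifier tolerant of mid-stroke jitter, which matters for freehand mid-air input where trajectories are rarely straight.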


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 3997
Author(s):  
Omri Alon ◽  
Sharon Rabinovich ◽  
Chana Fyodorov ◽  
Jessica Cauchard

We are witnessing a rise in the use of ground and aerial robots in first response missions. These robots provide novel opportunities to support first responders and lower the risk to people’s lives. As these robots become increasingly autonomous, researchers are seeking ways to enable natural communication strategies between robots and first responders, such as gestural interaction. First response work often takes place in harsh environments, which pose unique challenges for gesture sensing and recognition, including low visibility, making gestural interaction non-trivial. As such, an adequate choice of sensors and algorithms is needed to support gesture recognition in harsh environments. In this work, we compare the performance of three common types of remote sensors, namely RGB, depth, and thermal cameras, using various algorithms, in simulated harsh environments. Our results show 90 to 96% recognition accuracy (with and without smoke, respectively) with the use of protective equipment. This work provides future researchers with clear data points to support their choice of sensors and algorithms for gestural interaction with robots in harsh environments.
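A sensor comparison of the kind this abstract describes boils down to scoring each sensor's recognition pipeline against a common ground truth. Below is a minimal sketch of that evaluation step; the gesture labels and prediction logs are made-up illustrative data, not results from the paper.

```python
def accuracy(pairs):
    """Fraction of predictions matching the ground-truth label."""
    correct = sum(1 for truth, pred in pairs if truth == pred)
    return correct / len(pairs)

# Hypothetical per-sensor prediction logs: (true_gesture, predicted_gesture).
# Sensor names follow the abstract (RGB, depth, thermal); the entries are
# fabricated for illustration only.
logs = {
    "rgb":     [("stop", "stop"), ("land", "land"), ("stop", "land"), ("up", "up")],
    "depth":   [("stop", "stop"), ("land", "land"), ("stop", "stop"), ("up", "up")],
    "thermal": [("stop", "stop"), ("land", "up"),   ("stop", "stop"), ("up", "up")],
}

for sensor, pairs in logs.items():
    print(f"{sensor}: {accuracy(pairs):.0%}")
```

In practice the same harness would be run per condition (e.g. with and without smoke, with and without protective equipment) so that each sensor/algorithm pair yields one accuracy figure per condition, which is what makes the reported 90–96% range comparable across setups.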


2021 ◽  
pp. 1-18
Author(s):  
Rúbia Eliza de Oliveira Schultz Ascari ◽  
Luciano Silva ◽  
Roberto Pereira

BACKGROUND: The use of computers as a communication tool by people with disabilities can serve as an effective alternative to promote social interaction and more inclusive, active participation in society. OBJECTIVE: This paper presents a systematic mapping of the literature that surveys scientific contributions in which Computer Vision is applied to enable users with motor and speech impairments to access computers easily, allowing them to exercise their communicative abilities. METHODS: The mapping was conducted using searches that identified 221 potentially eligible scientific articles published between 2009 and 2019, indexed in the ACM, IEEE, Science Direct, and Springer databases. RESULTS: Of the retrieved papers, 33 were selected and categorized into the themes of interest to this research: Human-Computer Interaction, Human-Machine Interaction, Human-Robot Interaction, Recreation, and surveys. Most of the selected studies use sets of predefined gestures, low-cost cameras, and tracking of a specific body region for gestural interaction. CONCLUSION: The results offer an overview of the Computer Vision techniques used in applied research on Assistive Technology for people with motor and speech disabilities, pointing out opportunities and challenges in this research domain.


2021 ◽  
pp. 451-455
Author(s):  
Salvatore Andolina ◽  
Paolo Ariano ◽  
Davide Brunetti ◽  
Nicolò Celadon ◽  
Guido Coppo ◽  
...  

2020 ◽  
Vol 144 ◽  
pp. 102497
Author(s):  
Vito Gentile ◽  
Mohamed Khamis ◽  
Fabrizio Milazzo ◽  
Salvatore Sorce ◽  
Alessio Malizia ◽  
...  
