gestural input
Recently Published Documents

TOTAL DOCUMENTS: 40 (five years: 3)
H-INDEX: 8 (five years: 1)

2021
Author(s): Zhitong Cui, Hebo Gong, Yanan Wang, Chengyi Shen, Wenyin Zou, ...


Cognition, 2021, Vol. 211, pp. 104608
Author(s): Molly Flaherty, Dea Hunsicker, Susan Goldin-Meadow


Author(s): Sławomir Konrad Tadeja, Yupu Lu, Maciej Rydlewicz, Wojciech Rydlewicz, Tomasz Bubas, ...

Photogrammetry is a promising set of methods for generating photorealistic 3D models of physical objects and structures. Such methods may rely solely on camera-captured photographs or incorporate additional sensor data. Digital twins are digital replicas of physical objects and structures, and photogrammetry is an opportune approach for generating the 3D models from which digital twins are prepared. At a sufficiently high level of quality, digital twins provide effective archival representations of physical objects and structures and become effective substitutes for engineering inspection and surveying. While photogrammetric techniques are well established, effective methods for interacting with such models in virtual reality remain underexplored. We report the results of a qualitative engineering case study in which we asked six domain experts to carry out engineering measurement tasks in an immersive environment using bimanual gestural input coupled with gaze tracking. The case study revealed that gaze-supported bimanual interaction with photogrammetric 3D models is a promising modality for domain experts: it allows them to efficiently manipulate and measure elements of the 3D model. To help designers support this modality, we report design implications distilled from the experts' feedback.
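As a concrete illustration of the measurement step such a system must implement, here is a minimal Python sketch that converts two bimanually pinched points on the model into a physical length. The point coordinates, the `model_scale` factor, and the assumption of a single uniform scale for the whole reconstruction are illustrative; the paper does not describe its implementation.

```python
import math

def model_distance(p1, p2):
    """Euclidean distance between two picked points in model coordinates."""
    return math.dist(p1, p2)

def real_world_length(p1, p2, model_scale):
    """Convert a model-space measurement to physical units.

    `model_scale` is the metres-per-model-unit factor recovered during
    photogrammetric reconstruction (illustrative assumption: one uniform
    scale is known for the whole model).
    """
    return model_distance(p1, p2) * model_scale

# Example: two points pinched with the left and right hand on a structure,
# with the model reconstructed at 0.01 m per model unit (hypothetical).
left_pinch = (1.20, 0.85, 2.40)   # picked point, left hand
right_pinch = (1.20, 0.85, 5.10)  # picked point, right hand
print(f"{real_world_length(left_pinch, right_pinch, 0.01):.3f} m")  # 0.027 m
```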



Author(s):  
Katherina A. Jurewicz ◽  
David M. Neyens

3D gestural input technology can expand human-computer interaction (HCI) beyond traditional input modalities. Context and domain expertise are known to influence gesture development, but little is known about other individual factors such as workload and exposure. The objective of this work is therefore to explore the effects of workload and exposure on intuitive gesture choice and reaction time in a general HCI context. A longitudinal study was conducted to investigate differences in intuitive mappings between high and low workload conditions and across three separate experimental sessions. There were no differences in the intuitive mappings for either workload condition or across the experimental sessions. However, reaction times differed between all experimental sessions, indicating a learning effect from the first to the last session: participants became faster at generating intuitive mappings.
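The abstract does not say how agreement between participants' intuitive mappings was quantified, but gesture elicitation studies commonly use the agreement rate of Vatavu and Wobbrock (2015). A minimal sketch of that metric, with hypothetical gesture proposals, follows:

```python
from collections import Counter

def agreement_rate(proposals):
    """Vatavu & Wobbrock (2015) agreement rate for one referent.

    `proposals` is the list of gestures participants proposed for a single
    function; identical labels are treated as the same gesture.
    """
    n = len(proposals)
    if n < 2:
        return 0.0
    counts = Counter(proposals)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Hypothetical proposals for a "zoom in" function from six participants.
zoom_in = ["spread", "spread", "spread", "pull-apart", "spread", "pull-apart"]
print(f"AR(zoom in) = {agreement_rate(zoom_in):.2f}")  # AR(zoom in) = 0.47
```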



2020, Vol. 1 (2), pp. 63-74
Author(s): Ma'rifah Nurmala

Children use gesture to refer to objects before they produce labels for those objects, and to convey semantic relations between objects before conveying sentences in speech. The gestural input that children receive from their parents or teachers shows that adults provide models of which types of gestures to produce, and do so by modifying their gestures to meet the communicative needs of their children. This article discusses what we know about the impact of gestures on the memorization of words, and describes, with forms and examples, why using gesture helps educators and parents support children's language development. More importantly, the gestures that parents and teachers produce, in addition to providing models, help children learn labels for referents and semantic relations between those referents, and even predict the extent of children's vocabularies several years later. The existing research highlights the important role that parental and teacher gestures play in shaping children's language learning.





Author(s): Te-Yen Wu, Shutong Qi, Junchi Chen, MuJie Shang, Jun Gong, ...


Author(s): Patrik T. Schuler, Katherina A. Jurewicz, David M. Neyens

Gestures are a natural input method for human communication and may be effective for drivers interacting with in-vehicle infotainment systems (IVIS). Most of the existing work on gesture-based human-computer interaction (HCI), inside and outside the vehicle, focuses on how reliably computer systems can distinguish gestures. The purpose of this study was to identify gesture sets used for IVIS tasks and to compare task times across the different functions for gesturing and touchscreens. Task times for user-defined gestures were quicker than for a novel touchscreen. Several functions yielded relatively intuitive gesture mappings (e.g., zooming in and zooming out on a map), while others did not have strong mappings across participants (e.g., decreasing volume and playing the next song). The findings suggest that user-centric gestures can be used to interact with IVIS instead of touchscreens, and future work should evaluate how to account for variability in intuitive gestures. Understanding gesture variability among end users can support the development of an in-vehicle gestural input system that is intuitive for all users.
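One way to act on such user-defined mappings is a per-user dispatch table that routes recognized gesture labels to IVIS functions, leaving low-agreement functions open to per-user customization. The gesture labels and functions below are hypothetical, not taken from the study:

```python
from typing import Callable, Dict

# Illustrative IVIS actions; a real system would call into the head unit.
def zoom_in_map() -> None:
    print("map: zoom in")

def zoom_out_map() -> None:
    print("map: zoom out")

def next_song() -> None:
    print("media: next song")

# Default mapping seeded from high-agreement gestures; low-agreement
# functions (e.g., decreasing volume) are left for users to bind themselves.
DEFAULT_MAPPING: Dict[str, Callable[[], None]] = {
    "spread": zoom_in_map,
    "pinch": zoom_out_map,
    "swipe-left": next_song,
}

def dispatch(gesture: str,
             mapping: Dict[str, Callable[[], None]] = DEFAULT_MAPPING) -> None:
    """Route a recognized gesture label to its IVIS function; ignore unmapped ones."""
    action = mapping.get(gesture)
    if action:
        action()

dispatch("spread")      # map: zoom in
dispatch("shake-fist")  # unmapped: no-op
```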



Author(s): Katherina A. Jurewicz, David M. Neyens, Ken Catchpole, Scott T. Reeves

Objective: The purpose of this research was to compare gesture-function mappings for experts and novices using a 3D, vision-based gestural input system when exposed to the same context of anesthesia tasks in the operating room (OR). Background: 3D, vision-based gestural input systems offer a natural way to interact with computers and are potentially useful in sterile environments (e.g., ORs) to limit the spread of bacteria. Anesthesia providers' hands have been linked to bacterial transfer in the OR, but a gestural input system for anesthetic tasks has not been investigated. Methods: A repeated-measures study was conducted with two cohorts: anesthesia providers (i.e., experts; N = 16) and students (i.e., novices; N = 30). Participants chose gestures for 10 anesthetic functions across three blocks to determine intuitive gesture-function mappings. Reaction time was collected as a complementary measure for understanding the mappings. Results: The two sets of gesture-function mappings showed both similarities and differences. The anesthesia providers' gesture mappings showed a relationship to physical components of the anesthesia environment that was not seen in the students' gestures. The students also exhibited longer reaction times than the anesthesia providers. Conclusion: Domain expertise is influential when creating gesture-function mappings. However, both experts and novices should be able to use a gesture system intuitively, so development methods need to be refined to consider the needs of different user groups. Application: The development of a touchless interface for perioperative anesthesia may reduce bacterial contamination and eventually reduce the risk of infection to patients.
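The abstract reports longer reaction times for novices without naming a statistical test; a Welch t-test is one standard way to compare two cohorts of unequal size. A sketch with hypothetical reaction times (the study's data are not reproduced here):

```python
from scipy import stats

# Hypothetical per-participant mean reaction times in seconds; the study's
# actual data and choice of test are not reported in the abstract.
expert_rt = [1.8, 2.1, 1.6, 2.4, 1.9]
novice_rt = [2.9, 3.4, 2.7, 3.8, 3.1]

# Welch's t-test (equal_var=False) tolerates unequal group sizes/variances.
t, p = stats.ttest_ind(expert_rt, novice_rt, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```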


