Computer Interaction
Recently Published Documents





2021 ◽  
Vol 28 (5) ◽  
pp. 1-29
Amanda Lazar ◽  
Ben Jelen ◽  
Alisha Pradhan ◽  
Katie A. Siek

Researchers in Human–Computer Interaction (HCI) have long developed technologies for older adults. Recently, researchers have begun engaging in critical reflection on these approaches. IoT for aging in place is one area around which these conflicting discourses have converged, likely driven in part by government and industry interest. This article introduces diffractive analysis as an approach that examines difference to yield new empirical understandings about our methods and the topics we study. We constructed three analyses of a dataset collected at an IoT design workshop and then conducted a diffractive analysis. We present themes from this analysis regarding the ways that participants are inscribed in our research, considerations related to transferability and novelty between work centered on older adults and other work, and insights about methodologies. Our discussion contributes implications for how researchers form teams and account for their roles in research, as well as recommendations for how diffractive analysis can support other research agendas.

2021 ◽  
Vol 96 ◽  
pp. 107475
Aldosary Saad ◽  
Abdallah A. Mohamed

2021 ◽  
pp. 1-32
Simone Dornelas Costa ◽  
Monalessa Perini Barcellos ◽  
Ricardo de Almeida Falbo

Human–Computer Interaction (HCI) is a multidisciplinary area that involves a diverse body of knowledge and a complex landscape of concepts, which can lead to semantic problems that hamper communication and knowledge transfer. Ontologies have been successfully used to solve semantic and knowledge-related problems in several domains. This paper presents a systematic literature review that investigated the use of ontologies in the HCI domain. The main goal was to find out how HCI ontologies have been used and developed. Thirty-five ontologies were identified. They cover different HCI aspects, such as user interfaces, interaction phenomena, pervasive computing, user modeling/profiling, HCI design, interaction experience, and adaptive interactive systems. Although there are overlaps, we did not identify reuse among the 35 analyzed ontologies. The ontologies have been used mainly to support knowledge representation and reasoning. Although ontologies have been used in HCI for more than 25 years, their use has become more frequent in the last decade, in which ontologies address a larger number of HCI aspects and are represented as both conceptual and computational models. Concerning how the ontologies have been developed, we noticed that some good practices of ontology engineering have not been followed. Considering that the quality of an ontology directly influences the quality of solutions built on it, we believe there is an opportunity for HCI and ontology engineering professionals to work more closely together to build better and more effective ontologies, as well as ontology-based solutions.

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-12
Xiangkun Li ◽  
Guoqing Sun ◽  
Yifei Li

With the development of science and technology, the introduction of virtual reality has pushed human-computer interaction technology to a new height. The combination of virtual reality and human-computer interaction technology is increasingly applied in military simulation, medical rehabilitation, game creation, and other fields. Action is the basis of human behavior, and the analysis of human behavior and action is an important research direction. Recognition based on behavior and action is convenient, intuitive, strongly interactive, and rich in expressive information, and it has become the first choice of many researchers for human behavior analysis. However, human motion and motion images are complex objects with many ambiguous factors, which makes them difficult to represent and process. Traditional motion recognition is usually based on two-dimensional color images, yet two-dimensional RGB images are vulnerable to background disturbance, lighting, environment, and other factors that interfere with human target detection. In recent years, more and more researchers have begun to use fuzzy mathematics theory to recognize human behaviors. In this work, plantar pressure data under different motion modes were collected through experiments, and the current gait information was analyzed. Key gait events, including toe-off and heel strike, were identified by dynamic baseline monitoring. To handle erroneous detections of key gait events, a screening window filters out repeated recognition events within a certain period of time, which greatly improves recognition accuracy and provides important gait information for motion pattern recognition. Similarity matching is performed on each template; the correct rate of motion feature extraction is 90.2%, and the correct rate of motion pattern recognition is 96.3%, which verifies the feasibility and effectiveness of human motion recognition based on fuzzy theory. This work is intended to provide processing techniques and application examples for artificial intelligence recognition applications.
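The screening-window idea described in the abstract — detecting gait events against a baseline and suppressing duplicate detections within a short time window — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the fixed baseline in the demo, and the `refractory` parameter are assumptions, since the abstract does not specify the dynamic-baseline algorithm.

```python
import numpy as np

def detect_gait_events(pressure, baseline, refractory=30):
    """Detect gait events (e.g., heel strikes) as upward crossings of the
    plantar-pressure signal over a baseline, suppressing repeated
    detections that fall within a refractory window (in samples)."""
    events = []
    last = -refractory
    for i in range(1, len(pressure)):
        crossed = pressure[i - 1] <= baseline[i - 1] and pressure[i] > baseline[i]
        if crossed and i - last >= refractory:
            events.append(i)
            last = i
    return events

# Synthetic demo: two pressure pulses, the first containing a brief dip
# that would otherwise trigger a duplicate detection at i = 53.
pressure = np.zeros(200)
pressure[50:56] = 1.0
pressure[52] = 0.4
pressure[120:126] = 1.0
baseline = np.full(200, 0.5)
events = detect_gait_events(pressure, baseline, refractory=30)
```

Without the refractory check, the dip at sample 52 would produce a second, spurious event; with it, only the two true pulses are reported.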

Symmetry ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 1926
Yiqi Xiao ◽  
Ke Miao ◽  
Chenhan Jiang

A stroke is a basic limb movement that both humans and animals naturally and repeatedly perform. Since being introduced into gestural interaction, mid-air stroke gestures have seen a wide range of applications and quite intuitive use. In this paper, we present an approach for building a command-to-gesture mapping that exploits the semantic association between interactive commands and the directions of mid-air unistroke gestures. Directional unistroke gestures make use of the symmetry of command semantics, which yields a more systematic gesture set for users’ cognition and reduces the number of gestures users need to learn. However, the learnability of directional unistroke gestures varies across commands. Through a user elicitation study, a gesture set containing eight directional mid-air unistroke gestures was selected by subjective ratings of each direction’s degree of association with the corresponding command. We evaluated this gesture set in a follow-up study to investigate the learnability issue, comparing the directional mid-air unistroke gestures with user-preferred freehand gestures. Our findings offer preliminary evidence that “return”, “save”, “turn-off”, and “mute” are the interaction commands most applicable to directional mid-air unistrokes, which may have implications for the design of mid-air gestures in human–computer interaction.

2021 ◽  
Vinícius Paes de Camargo ◽  
Renato Balancieri ◽  
Heloise Manica Paris Teixeira ◽  
Guilherme Corredato Guerino

2021 ◽  
Vol 5 (CHI PLAY) ◽  
pp. 1-2
Kathrin Gerling ◽  
Elisa Mekler ◽  
Regan L. Mandryk

Since its inaugural edition in 2014, the ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play (CHI PLAY) has grown to become the premier ACM SIGCHI venue for player-computer interaction, bringing together researchers and professionals across all areas of play, games, and human-computer interaction. This year, CHI PLAY has moved its publications to a journal-based model, and we are pleased to present the first issue of the Proceedings of the ACM on Human-Computer Interaction that contains full paper contributions from the CHI PLAY community. This issue has 64 papers that were accepted in the 2021 cycle of the CHI PLAY conference. Over two rounds, a total of 250 papers were submitted for review and our acceptance rate is 25.6%. The work published in this volume represents the contributions from the 2021 program committee, including external reviewers, associate chairs, and editors. Together, we have engaged in a revised reviewing process that saw several major changes. First, we moved to a revise and resubmit process to address existing inequities in submission and review, improve the quality of the review process, and increase the reach of our community's research. Second, we made major changes to our review form to improve the review process, while also easing the burden of review, along with explicitly welcoming different contribution types and managing the complexities of interdisciplinary evaluation. We would like to acknowledge the efforts that our community has made in adapting to this new process, ensuring rigorous review during a global pandemic, and working together with the submitting authors to achieve high-quality scholarship. In this issue, the majority of contributions are empirical in nature, with fifteen papers classified by the authors as using qualitative methods, fifteen using quantitative methods, and nine using mixed methods. We also publish seven papers presenting design artefacts and three presenting technical artefacts. Finally, we include four papers employing meta-research methods, two papers that present new methodological approaches, and nine papers that contribute to the development and validation of theory.

Hailiang Wang ◽  
Da Tao ◽  
Jian Cai ◽  
Xingda Qu

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Kang Wang

With the development of science and technology, human-computer interaction technology has also seen wider application. This article aims to use Internet of Things (IoT) technology to apply human-computer interaction technology to smart vehicle-mounted products, improving the realism and immersion of the user's human-computer interaction experience. The paper studies the concept and frame composition of IoT technology in depth and analyzes the strengths and weaknesses of intelligent vehicle-mounted product development. Then, from the perspective of human-computer interaction design, the training response and training learning situation of human-computer interaction are proposed, and a human-computer interaction system for intelligent vehicle-mounted products based on the Internet of Things is constructed. The user experience is improved from this perspective, and the breadth of applications is increased. The article first analyzes and predicts the market size of smart vehicle-mounted products and then analyzes the scene elements of vehicle products; when designing vehicle products, the driver's range of control should be fully considered. Finally, the user's human-computer interaction experience with smart vehicle-mounted products is analyzed. In the execution of navigation and telephone tasks, there is no significant difference in user satisfaction between tasks, with P values all greater than 0.05.

2021 ◽  
Eric James McDermott ◽  
Thimm Zwiener ◽  
Ulf Ziemann ◽  
Christoph Zrenner

The search for optimized forms of human-computer interaction (HCI) has intensified alongside the growing potential for combining biosignals with virtual reality (VR) and augmented reality (AR) to enable the next generation of personal computing. At its core, this requires decoding the user's biosignals into digital commands. Electromyography (EMG) is a biosensor of particular interest due to its ease of data collection, relatively high signal-to-noise ratio, non-invasiveness, and the ability to interpret the signal as being generated by (intentional) muscle activity. Here, we investigate the potential of using data taken from a simple 2-channel EMG setup to differentiate 5 distinct movements. In particular, EMG was recorded from two bipolar sensors over forearm muscles controlling the fingers (extensor digitorum, flexor digitorum profundus) while a subject performed 50 trials of dorsal extension and return for each of the five digits. The maximum and mean data values across the trial were determined for each channel and used as features. A k-nearest neighbors (kNN) classification was performed: overall 5-class classification accuracy reached 94% when using the full trial's time window, while simulated real-time classification reached 90.4% accuracy when using the constructed kNN model (k = 3) with a 280 ms sliding window. Additionally, unsupervised learning was performed and a homogeneity of 85% was achieved. This study demonstrates that reliable decoding of different natural movements is possible with fewer than one channel per class, even without taking temporal features of the signal into account. The technical feasibility of this approach in a real-time setting was validated by sending real-time EMG data through a Lab Streaming Layer to a custom Unity3D VR application to control a user interface. Further use cases in gamification and rehabilitation were also examined, alongside integration of eye tracking and gesture recognition for a sensor-fusion approach to HCI and user intent.
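The feature-extraction and kNN-classification pipeline described in the abstract (per-channel max and mean as features, majority vote among the k nearest neighbors) can be sketched as follows on synthetic data. The function names and the toy two-class setup are illustrative assumptions, not the authors' implementation or their recorded dataset.

```python
import numpy as np

def extract_features(trial):
    """Map a (samples, channels) EMG trial to per-channel max and mean."""
    return np.concatenate([trial.max(axis=0), trial.mean(axis=0)])

def knn_predict(X_train, y_train, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training samples (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Synthetic 2-channel "EMG": class 0 = weak activity, class 1 = strong activity.
rng = np.random.default_rng(0)
trials = [rng.normal(0, 0.1, (500, 2)) for _ in range(10)] + \
         [rng.normal(0, 1.0, (500, 2)) for _ in range(10)]
X_train = np.array([extract_features(t) for t in trials])
y_train = np.array([0] * 10 + [1] * 10)

# Classify a new high-activity trial.
pred = knn_predict(X_train, y_train, extract_features(rng.normal(0, 1.0, (500, 2))))
```

Because the mean of zero-centered activity is near zero for both classes, the per-channel maxima carry most of the discriminative information here, mirroring the abstract's point that simple amplitude features can suffice without temporal modeling.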
