Interacting with Mobile and Pervasive Computer Systems

Author(s):  
Vassilis Kostakos ◽  
Eamonn O’Neill

In this chapter, we present existing and ongoing research within the Human-Computer Interaction group at the University of Bath into the development of novel interaction techniques. With our research, we aim to improve the way in which users interact with mobile and pervasive systems. More specifically, we present work in three broad categories of interaction: stroke interaction, kinaesthetic interaction, and text entry. Finally, we describe some of our ongoing work as well as planned future work.

Author(s):  
Patrik T. Schuler ◽  
Katherina A. Jurewicz ◽  
David M. Neyens

Gestures are a natural input method in human communication and may be an effective way for drivers to interact with in-vehicle infotainment systems (IVIS). Most existing work on gesture-based human-computer interaction (HCI) inside and outside the vehicle focuses on the distinguishability of gestures by computer systems. The purpose of this study was to identify gesture sets used for IVIS tasks and to compare task times across the different functions for gesturing and touchscreens. Task times were shorter for user-defined gestures than for a novel touchscreen. Several functions resulted in relatively intuitive gesture mappings (e.g., zooming in and zooming out on a map), while others did not have strong mappings across participants (e.g., decreasing volume and playing the next song). The findings suggest that user-centric gestures can be used instead of touchscreens to interact with IVIS, and future work should evaluate how to account for variability in intuitive gestures. Understanding gesture variability among end users can support the development of an in-vehicle gestural input system that is intuitive for all users.
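
As a concrete illustration of the kind of mapping this study points toward, the following is a minimal sketch of dispatching recognized user-defined gestures to IVIS functions; the gesture labels and function names are hypothetical assumptions, not the gesture set elicited in the study.

```python
# Illustrative sketch: mapping recognized gesture labels to IVIS functions.
# Gesture names and functions below are made up for illustration only.
from typing import Callable


def zoom_in_map() -> str: return "map zoomed in"
def zoom_out_map() -> str: return "map zoomed out"
def next_song() -> str: return "skipped to next song"
def volume_down() -> str: return "volume decreased"


# A per-user mapping could be swapped in here to absorb the cross-participant
# variability in intuitive gestures that the study reports.
GESTURE_TO_FUNCTION: dict[str, Callable[[], str]] = {
    "pinch_out": zoom_in_map,
    "pinch_in": zoom_out_map,
    "swipe_left": next_song,
    "swipe_down": volume_down,
}


def handle_gesture(label: str) -> str:
    action = GESTURE_TO_FUNCTION.get(label)
    return action() if action else f"unrecognized gesture: {label}"


if __name__ == "__main__":
    print(handle_gesture("pinch_out"))   # map zoomed in
    print(handle_gesture("circle_ccw"))  # unrecognized gesture: circle_ccw
```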


interactions ◽  
2013 ◽  
Vol 20 (5) ◽  
pp. 50-57 ◽  
Author(s):  
Ben Shneiderman ◽  
Kent Norman ◽  
Catherine Plaisant ◽  
Benjamin B. Bederson ◽  
Allison Druin ◽  
...  

2021 ◽  
Vol 28 (6) ◽  
pp. 1-12
Author(s):  
Audrey Desjardins ◽  
Oscar Tomico ◽  
Andrés Lucero ◽  
Marta E. Cecchinato ◽  
Carman Neustaedter

In this introduction to the special issue on First-Person Methods in Human-Computer Interaction (HCI), we present a brief overview of first-person methods, their origin, and their use in HCI. We also detail the differences between first-person, second-person, and third-person methods, as a way to guide the reader when engaging with the special issue articles. We articulate our motivation for putting together this special issue: we wanted a collection of works that would allow HCI researchers to further develop, define, and outline the practices, techniques, and implications of first-person methods. We trace links between the articles in this special issue and conclude with questions and directions for future work in this methodological space: working with boundaries, risk, and accountability.


Author(s):  
Carl Smith

The contribution of this research is to argue that truly creative patterns for interaction within cultural heritage contexts must create situations and concepts that could not have been realised without the intervention of those interaction patterns. New forms of human-computer interaction, and therefore new tools for navigation, must be designed that unite the strengths, features, and possibilities of both physical and virtual space. The human-computer interaction techniques and mixed reality methodologies formulated during this research are intended to enhance spatial cognition while implicitly improving pattern recognition. This research reports on the current state of location-based technology, including Mobile Augmented Reality (MAR) and GPS, with a focus on its application within cultural heritage as an educational and outreach tool. The key questions and areas to be investigated include: What are the requirements for effective digital intervention within the cultural heritage sector? What are the affordances of mixed and augmented reality? What mobile technology is currently being utilised to explore cultural heritage? What are the key projects? Finally, through a series of case studies designed and implemented by the author, some broad design guidelines are outlined. The chapter concludes with an overview of the main issues to consider when (re)engineering cultural heritage contexts.


2019 ◽  
Vol 17 (4) ◽  
pp. 357-381
Author(s):  
Emek Erdolu

This article contributes to the larger quest of increasing our capacities as designers, researchers, and scholars to understand and develop human-computer interaction in computer-aided design. The central question is how to ground the related research work on input technologies and interaction techniques for computer-aided design applications, which primarily focuses on technology and implementation, within the actual territories of computer-aided design processes. To discuss that, the article first reviews a collection of research studies and projects that present input technologies and interaction techniques developed as alternatives or complements to the mouse as used in computer-aided design applications. Based on the mode of interaction, these studies and projects are traced in four categories: hand-mediated systems that involve gesture- and touch-based techniques, multimodal systems that combine various ways of interaction including speech-based techniques, experimental systems such as brain-computer interaction and emotive-based techniques, and explorations in virtual reality- and augmented reality-based systems. The article then critically examines the limitations of these alternative systems related to the ways they have been envisioned, designed, and situated in studies, as well as the limitations of the two existing research bases in human-computer interaction in which these studies could potentially be grounded and improved. The substance of the examination is what is conceptualized as "frameworks of thought": the variables and interrelations treated as elements of consideration within these efforts. Building upon the existing frameworks of thought, the final part discusses an alternative as a vehicle for incorporating layers of the material cultures of computer-aided design into designing, analyzing, and evaluating computer-aided design-geared input technologies and interaction techniques. The alternative framework offers the potential to help generate richer questions, considerations, and avenues of investigation.


2015 ◽  
Vol 75 (3) ◽  
Author(s):  
Erman Hamid ◽  
Azizah Jaafar ◽  
Ang Mei Choo

A number of studies have been carried out on the impact of Human-Computer Interaction (HCI) on home networking. Many researchers have stated that HCI elements are the most important aspects to consider in helping users understand issues concerning the home network. This paper reviews existing research related to human-computer interaction, home networks, and network management. It seeks to identify the effectiveness of existing network management tools and the importance of HCI in dealing with them. In addition, it looks into potential future work that could be done in order to achieve the desirable goals of home networking.


1988 ◽  
Vol 32 (5) ◽  
pp. 284-287 ◽  
Author(s):  
Sharon L. Greene ◽  
John D. Gould ◽  
Stephen J. Boies ◽  
Antonia Meluson ◽  
Marwan Rasamny

Five different human-computer interaction techniques were studied to determine the relative advantages of entry-based and selection-based methods. Gould, Boies, Meluson, Rasamny, and Vosburgh (1988) found that entry techniques aided by either automatic or requested string completion were superior to various selection-based techniques. This study examines unaided as well as aided entry techniques and compares them to selection-based methods. Variations in spelling difficulty and database size were studied for their effect on user performance and preferences. The main results were that automatic string completion was the fastest method and that selection techniques were better than unaided entry techniques, especially for hard-to-spell words; this was particularly true for computer-inexperienced participants. Database size had its main influence on performance with the selection techniques. In the selection and aided-entry methods there was a strong correlation between the observed keystroke times and the minimum number of keystrokes required by a task.
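
For readers unfamiliar with the aided-entry techniques compared here, the following is a minimal sketch of automatic string completion over a small word list; the vocabulary, threshold behavior, and function names are illustrative assumptions, not the original experimental apparatus.

```python
# Illustrative sketch of automatic string completion: completion fires as soon
# as the typed prefix matches exactly one vocabulary entry.
import bisect


def complete(prefix: str, vocabulary: list[str]) -> str | None:
    """Return the unique vocabulary entry starting with `prefix`, if any."""
    vocab = sorted(vocabulary)
    lo = bisect.bisect_left(vocab, prefix)
    hi = bisect.bisect_right(vocab, prefix + "\uffff")
    matches = vocab[lo:hi]
    return matches[0] if len(matches) == 1 else None


if __name__ == "__main__":
    words = ["acetaminophen", "acetone", "aspirin", "ibuprofen"]
    for typed in ["a", "ace", "aceta", "i"]:
        print(typed, "->", complete(typed, words))
    # "aceta" and "i" complete immediately; "a" and "ace" remain ambiguous,
    # so the user keeps typing (fewer keystrokes for hard-to-spell words).
```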


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2308 ◽  
Author(s):  
Dilana Hazer-Rau ◽  
Sascha Meudt ◽  
Andreas Daucher ◽  
Jennifer Spohrs ◽  
Holger Hoffmann ◽  
...  

In this paper, we present a multimodal dataset for affective computing research acquired in a human-computer interaction (HCI) setting. An experimental mobile and interactive scenario was designed and implemented based on a gamified generic paradigm for the induction of dialog-based HCI-relevant emotional and cognitive load states. It consists of six experimental sequences, inducing Interest, Overload, Normal, Easy, Underload, and Frustration. Each sequence is followed by subjective feedback to validate the induction, a respiration baseline to level off the physiological reactions, and a summary of results. Further, prior to the experiment, three questionnaires related to emotion regulation (ERQ), emotional control (TEIQue-SF), and personality traits (TIPI) were collected from each subject to evaluate the stability of the induction paradigm. Based on this HCI scenario, the University of Ulm Multimodal Affective Corpus (uulmMAC), consisting of two homogeneous samples of 60 participants and 100 recording sessions, was generated. We recorded 16 sensor modalities including 4 × video, 3 × audio, and 7 × biophysiological, depth, and pose streams. Additional labels and annotations were also collected. After recording, all data were post-processed and checked for technical and signal quality, resulting in the final uulmMAC dataset of 57 subjects and 95 recording sessions. The evaluation of the reported subjective feedback shows significant differences between the sequences, consistent with the induced states, and the analysis of the questionnaires shows stable results. In summary, our uulmMAC database is a valuable contribution to the field of affective computing and multimodal data analysis: acquired in a mobile interactive scenario close to real HCI, it consists of a large number of subjects and allows transtemporal investigations. Validated via subjective feedback and checked for quality issues, it can be used for affective computing and machine learning applications.
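
As a rough illustration of how such a corpus might be organized for analysis, the sketch below models session metadata around the six induced states and a quality-check filter; the class names, field names, and filtering step are assumptions for illustration, not the published dataset's actual format or API.

```python
# Hypothetical organization of uulmMAC-style session metadata; names and
# structure are assumptions, not the dataset's real layout.
from dataclasses import dataclass, field
from enum import Enum


class InducedState(Enum):
    """The six experimental sequences described for the corpus."""
    INTEREST = "Interest"
    OVERLOAD = "Overload"
    NORMAL = "Normal"
    EASY = "Easy"
    UNDERLOAD = "Underload"
    FRUSTRATION = "Frustration"


@dataclass
class RecordingSession:
    subject_id: int
    session_id: int
    # Paths to recorded streams keyed by modality name (e.g., "video_1",
    # "audio_1", "ecg"); the keys here are purely illustrative.
    streams: dict[str, str] = field(default_factory=dict)
    passed_quality_check: bool = True


def usable_sessions(sessions: list[RecordingSession]) -> list[RecordingSession]:
    """Keep only sessions that survived technical/signal quality checking,
    mirroring the reduction from 100 recorded to 95 usable sessions."""
    return [s for s in sessions if s.passed_quality_check]
```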

