Classification of Beyond-Reality Interaction Techniques in Spatial Human-Computer Interaction

Author(s):
Bastian Dewitz, Philipp Ladwig, Frank Steinicke, Christian Geiger

Author(s):
Carl Smith

The contribution of this research is to argue that truly creative patterns for interaction within cultural heritage contexts must create situations and concepts that could not have been realised without the intervention of those interaction patterns. New forms of human-computer interaction, and therefore new tools for navigation, must be designed that unite the strengths, features, and possibilities of both the physical and the virtual space. The human-computer interaction techniques and mixed reality methodologies formulated during this research are intended to enhance spatial cognition while implicitly improving pattern recognition. This research reports on the current state of location-based technology, including Mobile Augmented Reality (MAR) and GPS, with a focus on its application within cultural heritage as an educational and outreach tool. The key questions and areas to be investigated include: What are the requirements for effective digital intervention within the cultural heritage sector? What are the affordances of mixed and augmented reality? What mobile technology is currently being utilised to explore cultural heritage? What are the key projects? Finally, through a series of case studies designed and implemented by the author, some broad design guidelines are outlined. The chapter concludes with an overview of the main issues to consider when (re)engineering cultural heritage contexts.


2019, Vol 17 (4), pp. 357-381
Author(s):  
Emek Erdolu

This article contributes to the larger quest of increasing our capacities as designers, researchers, and scholars in understanding and developing human-computer interaction in computer-aided design. The central question is how to ground the related research work on input technologies and interaction techniques for computer-aided design applications, which primarily focuses on technology and implementation, within the actual territories of computer-aided design processes. To discuss that, the article first reviews a collection of research studies and projects that present input technologies and interaction techniques developed as alternative or complementary to the mouse as used in computer-aided design applications. Based on the mode of interaction, these studies and projects are traced in four categories: hand-mediated systems that involve gesture- and touch-based techniques, multimodal systems that combine various ways of interaction including speech-based techniques, experimental systems such as brain-computer interaction and emotive-based techniques, and explorations in virtual reality- and augmented reality-based systems. The article then critically examines the limitations of these alternative systems related to the ways they have been envisioned, designed, and situated in studies, as well as of the two existing research bases in human-computer interaction in which these studies could potentially be grounded and improved. The examination centers on what is conceptualized as "frameworks of thought"—on variables and interrelations as elements of consideration within these efforts. Building upon the existing frameworks of thought, the final part discusses an alternative as a vehicle for incorporating layers of the material cultures of computer-aided design in designing, analyzing, and evaluating computer-aided design-geared input technologies and interaction techniques. The alternative framework offers the potential to help generate richer questions, considerations, and avenues of investigation.


Photonics, 2019, Vol 6 (3), pp. 90
Author(s):
Bosworth, Russell, Jacob

Over the past decade, the Human–Computer Interaction (HCI) Lab at Tufts University has been developing real-time, implicit Brain–Computer Interfaces (BCIs) using functional near-infrared spectroscopy (fNIRS). This paper reviews the work of the lab; we explore how we have used fNIRS to develop BCIs that are based on a variety of human states, including cognitive workload, multitasking, musical learning applications, and preference detection. Our work indicates that fNIRS is a robust tool for the real-time classification of brain states, which can provide programmers with useful information to develop interfaces that are more intuitive and beneficial for the user than is currently possible with today’s human-input devices (e.g., mouse and keyboard).


1988, Vol 32 (5), pp. 284-287
Author(s):
Sharon L. Greene, John D. Gould, Stephen J. Boies, Antonia Meluson, Marwan Rasamny

Five different human-computer interaction techniques were studied to determine the relative advantages of entry-based and selection-based methods. Gould, Boies, Meluson, Rasamny, and Vosburgh (1988) found that entry techniques aided by either automatic or requested string completion were superior to various selection-based techniques. This study examines unaided as well as aided entry techniques, and compares them to selection-based methods. Variations in spelling difficulty and database size were studied for their effect on user performance and preferences. The main results were that automatic string completion was the fastest method and that selection techniques were better than unaided entry techniques, especially for hard-to-spell words. This was particularly true for computer-inexperienced participants. Database size had its main influence on performance with the selection techniques. In the selection and aided-entry methods there was a strong correlation between the observed keystroke times and the minimum number of keystrokes required by a task.
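The aided-entry condition can be illustrated with a minimal sketch of automatic string completion: after each keystroke, the system completes the entry as soon as the typed prefix matches exactly one database item. The function and the sample database below are hypothetical illustrations, not artifacts of the original study.

```python
def auto_complete(prefix, database):
    """Return the unique completion of `prefix`, or None if the
    prefix is empty or matches zero or several database items."""
    if not prefix:
        return None
    matches = [item for item in database if item.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None

# Typing "ch" is enough to complete "chrysanthemum" here, because no
# other item shares that prefix; "c" alone is still ambiguous.
db = ["chrysanthemum", "carnation", "daisy"]
```

This also hints at why database size mattered in the study: the larger the database, the longer the prefix a user must type before it becomes unique.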


Sensors, 2021, Vol 21 (17), pp. 5963
Author(s):
Agata Kołakowska, Agnieszka Landowska

This paper deals with the analysis of behavioural patterns in human–computer interaction. In the study, keystroke dynamics were analysed while participants were writing positive and negative opinions. A semi-experiment with 50 participants was performed. The participants were asked to recall their most negative and most positive learning experiences (subject and teacher) and write an opinion about each. Keystroke dynamics were captured, and over 50 diverse features were calculated and evaluated for their ability to differentiate positive and negative opinions. Moreover, classification of opinions was performed, yielding accuracy slightly above the random-guess level. A second classification approach used self-report labels of pleasure and arousal and showed more accurate results. The study confirmed that it is possible to recognize positive and negative opinions from keystroke patterns with accuracy above the random-guess level; however, combination with other modalities might produce more accurate results.
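Typical keystroke-dynamics features of the kind the study computes include dwell time (how long a key is held down) and flight time (the gap between releasing one key and pressing the next). A minimal sketch, assuming timestamped press/release events; the event format and feature names are illustrative, not the paper's actual feature set:

```python
def keystroke_features(events):
    """Compute mean dwell and flight times from a list of
    (key, press_time, release_time) tuples, times in seconds."""
    dwells = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2]
               for i in range(len(events) - 1)]
    return {
        "mean_dwell": sum(dwells) / len(dwells),
        "mean_flight": sum(flights) / len(flights) if flights else 0.0,
    }

# Two keystrokes: "h" held 80 ms, "i" held 100 ms, 120 ms between them.
events = [("h", 0.00, 0.08), ("i", 0.20, 0.30)]
```

Feature vectors of this kind would then be fed to a classifier trained on the positive/negative (or pleasure/arousal) labels.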


Author(s):  
Robert J. K. Jacob

The problem of human-computer interaction can be viewed as two powerful information processors (human and computer) attempting to communicate with each other via a narrow-bandwidth, highly constrained interface (Tufte, 1989). To address it, we seek faster, more natural, and more convenient means for users and computers to exchange information. The user’s side is constrained by the nature of human communication organs and abilities; the computer’s is constrained only by input/output devices and interaction techniques that we can invent. Current technology has been stronger in the computer-to-user direction than the user-to-computer, hence today’s user-computer dialogues are rather one-sided, with the bandwidth from the computer to the user far greater than that from user to computer. Using eye movements as a user-to-computer communication medium can help redress this imbalance. This chapter describes the relevant characteristics of the human eye, eye-tracking technology, how to design interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way, and the relationship between eye-movement interfaces and virtual environments. As with other areas of research and design in human-computer interaction, it is helpful to build on the equipment and skills humans have acquired through evolution and experience and search for ways to apply them to communicating with a computer. Direct manipulation interfaces have enjoyed great success largely because they draw on analogies to existing human skills (pointing, grabbing, moving objects in space), rather than trained behaviors. Similarly, we try to make use of natural eye movements in designing interaction techniques for the eye. 
Because eye movements are so different from conventional computer inputs, our overall approach in designing interaction techniques is, wherever possible, to obtain information from a user’s natural eye movements while viewing the screen, rather than requiring the user to make specific trained eye movements to actuate the system. This requires careful attention to issues of human design, as does any successful work in virtual environments. The goal is for human-computer interaction to start with studies of the characteristics of human communication channels and skills and then develop devices, interaction techniques, and interfaces that communicate effectively to and from those channels.
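One common way to turn natural fixations into commands without trained eye movements is dwell-time selection: an on-screen object activates only after gaze has rested on it continuously for a threshold duration. A minimal sketch; the sample format, threshold, and function name are illustrative assumptions, not parameters from this chapter:

```python
def dwell_select(gaze_samples, target, dwell_threshold):
    """Return the timestamp at which `target` is selected, or None.

    gaze_samples: time-ordered list of (timestamp, object_id) pairs.
    Selection fires once gaze has stayed on `target` for at least
    `dwell_threshold` seconds without looking away."""
    dwell_start = None
    for t, obj in gaze_samples:
        if obj == target:
            if dwell_start is None:
                dwell_start = t
            if t - dwell_start >= dwell_threshold:
                return t
        else:
            dwell_start = None  # gaze left the target; reset the timer
    return None

# Gaze drifts from object "A" to "B" and rests there for 150 ms.
samples = [(0.00, "A"), (0.05, "B"), (0.10, "B"), (0.15, "B"), (0.20, "B")]
```

The dwell threshold embodies the design tension discussed above: too short and natural viewing triggers unwanted commands (the "Midas touch" problem), too long and the interface feels sluggish.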

