Device Motion via Head Tracking for Mobile Interaction

Author(s):  
Sean T. Hayes ◽  
Julie A. Adams

Linear changes in position are difficult to measure using only a mobile device's onboard sensors. Prior research has relied on external sensors or known environmental references to develop mobile phone interaction techniques. The unique head-tracking capabilities of the Amazon Fire Phone® were leveraged and evaluated for navigating large application spaces using device-motion gestures. Although touch interaction is shown to outperform device motion, this research demonstrates the feasibility of effective device-motion gestures that rely on changes in the device's position and orientation. Design guidance for future device-motion interaction capabilities is provided.
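
A minimal sketch of how such a gesture might be detected: the hypothetical function below infers relative device translation from the apparent position and size of the user's face in front-camera frames. All names, conventions, and thresholds are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class FaceObservation:
    """Face bounding box reported by a head tracker (normalized image coords)."""
    cx: float      # box center x, 0..1
    cy: float      # box center y, 0..1
    width: float   # box width, 0..1 (proxy for distance to the face)

def device_translation(prev: FaceObservation, curr: FaceObservation) -> tuple:
    """Estimate relative device motion from the change in the observed face.

    Assumes the head is roughly stationary, so apparent face motion in the
    front camera mirrors the device's own translation (signs are flipped).
    """
    dx = -(curr.cx - prev.cx)                    # device moved opposite to face shift
    dy = -(curr.cy - prev.cy)
    dz = (curr.width - prev.width) / prev.width  # face grows as device moves closer
    return dx, dy, dz

# Illustrative use: pan left once the device moves past a (hypothetical) threshold.
dx, dy, dz = device_translation(FaceObservation(0.50, 0.5, 0.30),
                                FaceObservation(0.56, 0.5, 0.30))
if dx < -0.05:
    print("pan viewport left")
```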

Author(s):  
Andrew Dekker ◽  
Justin Marrington ◽  
Stephen Viller

Unlike traditional forms of Human-Computer Interaction (such as desktop or Web-based design), mobile design by its nature affords little control over the contextual variables of its research. Short-term evaluations of novel mobile interaction techniques are abundant, but these controlled studies address only limited contexts through artificial deployments, which cannot hope to reveal the patterns of use that arise as people appropriate a tool and take it with them into the varying social and physical contexts of their lives. The authors propose a rapid and reflective model of in-situ deployment of high-fidelity prototypes, borrowing the tested habits of industry, in which researchers relinquish tight control over their prototypes in exchange for an opportunity to observe patterns of use that would be intractable to plan for in controlled studies. The approach moves the emphasis in prototyping away from evaluation and towards exploration and reflection, promoting an iterative prototyping methodology that captures the complexities of the real world.


2018 ◽  
Author(s):  
Stefan Schneegass ◽  
Thomas Olsson ◽  
Sven Mayer ◽  
Kristof van Laerhoven

Wearable computing has huge potential to shape the way we interact with mobile devices in the future. Interaction with mobile devices is still mainly limited to visual output and tactile finger-based input. Despite the visions of next-generation mobile interaction, the hand-held form factor hinders new interaction techniques from becoming commonplace. In contrast, wearable devices and sensors are intended for more continuous and close-to-body use. This makes it possible to design novel wearable-augmented mobile interaction methods, both explicit and implicit. For example, the ECG signal from a wearable chest strap could be used to identify user status and change the device state accordingly (implicit), and optical tracking with a head-mounted camera could be used to recognize gestural input (explicit). In this paper, the authors outline the design space for how existing and envisioned wearable devices and sensors could augment mobile interaction techniques. Based on designs and discussions in a recently organized workshop on the topic, as well as other related work, the authors present an overview of this design space and highlight use cases that underline its potential.
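
As a concrete illustration of the implicit case, the sketch below mutes notifications when a chest-strap heart-rate reading suggests the user is exerting themselves. The threshold, class names, and device API are hypothetical assumptions for illustration, not anything proposed in the paper.

```python
from dataclasses import dataclass

# Hypothetical resting/exertion boundary (beats per minute); illustrative only.
EXERTION_BPM = 120

@dataclass
class DeviceState:
    notifications_muted: bool = False

def update_from_chest_strap(heart_rate_bpm: float, state: DeviceState) -> DeviceState:
    """Implicit interaction: adapt device state from a wearable heart-rate
    signal without any explicit user input."""
    state.notifications_muted = heart_rate_bpm >= EXERTION_BPM
    return state

state = update_from_chest_strap(135.0, DeviceState())
print(state)  # DeviceState(notifications_muted=True)
```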


Author(s):  
Sean T. Hayes ◽  
Julie A. Adams

Smartphones pose new design challenges for precise interactions, prompting the development of indirect interaction techniques that improve performance by reducing the occlusion caused by touch input. Direct touch interaction (e.g., tap to select) is imprecise due to occlusion and the finger's surface area. Many cursor-based interaction techniques address this issue; however, these techniques do not dynamically adjust the control-to-display movement ratio (CD ratio) to improve accuracy and interaction times. This paper analyzes the performance benefits of applying adaptive CD-ratio enhancements to smartphone interaction for target-selection tasks. Existing desktop computer enhancements and a new enhancement method, Magnetic Targets, are compared. Magnetic Targets resulted in significantly shorter target-selection times compared to the existing enhancements. Further, a simple method that combined enhancements to provide a CD ratio based on a greater context of the interactions demonstrated performance improvements.
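
A minimal sketch of an adaptive mapping in the spirit of a magnet-like enhancement: cursor gain (the inverse of the CD ratio) drops as the cursor nears a target, making small corrective finger movements more precise. The gain curve and all constants below are illustrative guesses, not the paper's actual Magnetic Targets method.

```python
import math

def adaptive_cd_gain(cursor_xy, target_xy, base_gain=2.0,
                     precise_gain=0.5, capture_radius=80.0):
    """Return a display-movement gain that shrinks near the target.

    Far from the target, finger motion is amplified (fast travel); inside
    capture_radius, gain falls toward precise_gain for fine selection.
    All constants are hypothetical.
    """
    dist = math.hypot(target_xy[0] - cursor_xy[0], target_xy[1] - cursor_xy[1])
    if dist >= capture_radius:
        return base_gain
    # Linearly blend from precise_gain (at the target) up to base_gain (at the edge).
    t = dist / capture_radius
    return precise_gain + t * (base_gain - precise_gain)

def move_cursor(cursor_xy, finger_delta, target_xy):
    """Apply one finger movement, scaled by the adaptive gain."""
    g = adaptive_cd_gain(cursor_xy, target_xy)
    return (cursor_xy[0] + g * finger_delta[0],
            cursor_xy[1] + g * finger_delta[1])

print(move_cursor((0.0, 0.0), (10.0, 0.0), (300.0, 0.0)))    # far: full gain 2.0
print(move_cursor((260.0, 0.0), (10.0, 0.0), (300.0, 0.0)))  # near: reduced gain
```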


2019 ◽  
Author(s):  
Walter Vanzella ◽  
Natalia Grion ◽  
Daniele Bertolini ◽  
Andrea Perissinotto ◽  
Davide Zoccolan

Tracking the position and orientation of a small mammal's head is crucial in many behavioral neurophysiology studies. Yet full reconstruction of the head's pose in 3D is a challenging problem that typically requires implanting custom headsets made of multiple LEDs or inertial units. These assemblies need to be powered in order to operate, thus preventing wireless experiments, and, while suitable for studying navigation in large arenas, their application is impractical in the narrow operant boxes employed in perceptual studies. Here we propose an alternative approach, based on passively imaging a 3D-printed structure painted with a pattern of black dots over a white background. We show that this method is highly precise and accurate, and we demonstrate that, given its minimal weight and encumbrance, it can be used to study how rodents sample sensory stimuli during a perceptual discrimination task and how hippocampal place cells represent head position over extremely small spatial scales.
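
The core geometric step of such an approach can be sketched as a standard perspective-n-point (PnP) solve: given the known 3D layout of the printed dots and their detected 2D image positions, recover the rigid pose of the headset relative to the camera. The dot coordinates, camera intrinsics, and detections below are placeholder assumptions; the paper's actual pipeline is more elaborate.

```python
import numpy as np
import cv2

# Known 3D positions (mm) of the dots on the printed pattern, in the pattern's
# own coordinate frame. These five points are made-up placeholders.
object_points = np.array([
    [0.0,  0.0, 0.0],
    [10.0, 0.0, 0.0],
    [0.0, 10.0, 0.0],
    [10.0, 10.0, 0.0],
    [5.0,  5.0, 2.0],
], dtype=np.float64)

# 2D pixel locations of the same dots, as they would come out of a blob
# detector run on one camera frame (placeholder values).
image_points = np.array([
    [320.0, 240.0],
    [380.0, 242.0],
    [322.0, 300.0],
    [382.0, 303.0],
    [352.0, 270.0],
], dtype=np.float64)

# Idealized pinhole intrinsics (fx, fy, cx, cy); a real setup would calibrate these.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros((5, 1))  # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation: pattern frame -> camera frame
    print("rotation:\n", R)
    print("translation (mm):", tvec.ravel())
```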


Author(s):  
Rafael Ballagas ◽  
Michael Rohs ◽  
Jennifer G. Sheridan ◽  
Jan Borchers

The mobile phone is the first truly pervasive computer. In addition to its core communications functionality, it is increasingly used for interaction with the physical world. This chapter examines the design space of input techniques, using established desktop taxonomies and design spaces to provide an in-depth discussion of existing interaction techniques. A new five-part spatial classification is proposed for the ubiquitous mobile phone interaction tasks discussed in our survey. It includes supported subtasks (position, orient, and selection), dimensionality, relative vs. absolute movement, interaction style (direct vs. indirect), and feedback from the environment (continuous vs. discrete). Key design considerations are identified for deploying these interaction techniques in real-world applications. Our analysis aims to inspire and inform the design of future smartphone interaction techniques.
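
The five classification dimensions named in the abstract map naturally onto a small data structure; the sketch below encodes them as enums. The type names and the example classification are our own illustrative reading, not the chapter's notation.

```python
from dataclasses import dataclass
from enum import Enum

class Subtask(Enum):
    POSITION = "position"
    ORIENT = "orient"
    SELECTION = "selection"

class Movement(Enum):
    RELATIVE = "relative"
    ABSOLUTE = "absolute"

class Style(Enum):
    DIRECT = "direct"
    INDIRECT = "indirect"

class Feedback(Enum):
    CONTINUOUS = "continuous"
    DISCRETE = "discrete"

@dataclass
class InteractionTechnique:
    """One mobile-phone interaction technique placed in the five-part space."""
    name: str
    subtask: Subtask
    dimensionality: int      # e.g. 1, 2, or 3 spatial dimensions
    movement: Movement
    style: Style
    feedback: Feedback

# Example classification (illustrative, not taken from the chapter's tables):
camera_pointing = InteractionTechnique(
    "camera-based pointing", Subtask.POSITION, 2,
    Movement.RELATIVE, Style.INDIRECT, Feedback.CONTINUOUS)
print(camera_pointing)
```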


Author(s):  
Anders Henrysson ◽  
Mark Ollila ◽  
Mark Billinghurst

Mobile phones are evolving into the ideal platform for Augmented Reality (AR). In this chapter we describe how augmented reality applications can be developed for mobile phones and the interaction metaphors that are ideally suited for this platform. Several sample applications are described that explore different interaction techniques. User study results show that moving the phone to interact with virtual content is an intuitive way to select and position virtual objects. A collaborative AR game is also presented with an evaluation study. Users preferred playing with the collaborative AR interface over a non-AR interface and also found physical phone motion to be a very natural input method. The results discussed in this chapter should assist researchers in developing their own mobile phone based AR applications.
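
One way to read "moving the phone to position virtual objects" in code: while an object is selected, rigidly attach it to the phone's tracked pose. This is a generic sketch under assumed 4x4 homogeneous-transform conventions, not the chapter's implementation.

```python
import numpy as np

def reattach(object_in_world, phone_in_world):
    """Compute the object's pose in the phone's frame at selection time,
    so it stays rigidly attached while the phone moves."""
    return np.linalg.inv(phone_in_world) @ object_in_world

def follow(object_in_phone, phone_in_world):
    """Each frame: re-express the attached object back in world coordinates."""
    return phone_in_world @ object_in_phone

# Illustrative use with 4x4 homogeneous transforms.
phone = np.eye(4)
obj = np.eye(4); obj[0, 3] = 0.3   # object 30 cm along x in world
grip = reattach(obj, phone)        # user selects: freeze relative pose
phone[1, 3] = 0.1                  # phone translates 10 cm along y
print(follow(grip, phone)[:3, 3])  # object follows: [0.3, 0.1, 0.0]
```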

