Emerging Human-Toy Interaction Techniques with Augmented and Mixed Reality

Author(s):  
Jeff K. T. Tang ◽  
Jordan Tewell


Author(s):  
Adrian David Cheok

In this chapter, we explore applications of mixed reality technology for future social and physical entertainment systems. Through the case studies presented here, we show the broad and significant impact of mixed reality technology on many aspects of human interactivity in entertainment. On the technological side, the systems discussed incorporate a range of technologies, from current mainstream ones such as GPS tracking, Bluetooth, and RFID to pioneering research in vision-based tracking, augmented reality, tangible interaction techniques, and 3D live mixed reality capture. We discuss each project in detail in terms of its motivation and the requirements of its application domain, its system design and design decisions, and its future impact on the field of social and physical entertainment.


2020 ◽  
Vol 14 (4) ◽  
pp. 373-385 ◽  
Author(s):  
Theophilus Teo ◽  
Mitchell Norman ◽  
Gun A. Lee ◽  
Mark Billinghurst ◽  
Matt Adcock

2010 ◽  
Vol 19 (2) ◽  
pp. 118-130
Author(s):  
Pablo Figueroa

This paper describes some details of the design of InTml, the Interaction Techniques Markup Language. We explain three main elements of its architecture: a simple mixed reality (MR)-based component model, a communication model between components that allows fusion of multimodal information at a fine level of granularity, and an indirection mechanism for dataflows that is useful for keeping state inside a dataflow. We also briefly discuss the advantages we have found in the use of formal methods, model-driven development, and encapsulation mechanisms. The purpose of this description is to make explicit the design rationale behind these mechanisms, which may be fruitful for other developments in our field.
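To make the dataflow concepts concrete, the sketch below is a hypothetical illustration in Python, not InTml's actual markup syntax or runtime: filters expose input and output ports, connections carry values between individual ports (so multimodal streams can be fused port by port), and a Variable-style filter provides the indirection that keeps state inside the dataflow.

```python
# Minimal, hypothetical sketch of a dataflow of "filters" in the spirit of
# InTml: components expose input/output ports, connections move values
# between them, and a Variable filter provides indirection for state.

class Filter:
    def __init__(self, name):
        self.name = name
        self.inputs = {}    # input port name -> latest value received this frame
        self.outputs = {}   # output port name -> list of (target filter, target port)

    def connect(self, out_port, target, in_port):
        self.outputs.setdefault(out_port, []).append((target, in_port))

    def send(self, out_port, value):
        for target, in_port in self.outputs.get(out_port, []):
            target.inputs[in_port] = value

    def run(self):          # called once per frame by the scheduler
        pass


class Variable(Filter):
    """Indirection filter: stores the last value written to it and
    re-emits it every frame, so state survives between frames."""
    def __init__(self, name, initial=None):
        super().__init__(name)
        self.value = initial

    def run(self):
        if "set" in self.inputs:
            self.value = self.inputs.pop("set")
        self.send("value", self.value)


class SelectByTouch(Filter):
    """Toy filter fusing two input streams (a 3D position and a button
    event) into a selection event, at the granularity of single ports."""
    def run(self):
        pos = self.inputs.get("position")
        pressed = self.inputs.get("button")
        if pos is not None and pressed:
            self.send("selected", pos)


def run_frame(filters):
    """Execute every filter once; a real scheduler would call this per rendered frame."""
    for f in filters:
        f.run()
```

In InTml itself the equivalent structure is described declaratively as markup rather than in imperative code like this sketch; the point here is only to show ports, fine-grained connections, and state held behind an indirection.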


2014 ◽  
pp. 1489-1499
Author(s):  
Carl Smith

The contribution of this research is to argue that truly creative patterns for interaction within cultural heritage contexts must create situations and concepts that could not have been realised without the intervention of those interaction patterns. New forms of human-computer interaction, and therefore new tools for navigation, must be designed that unite the strengths, features, and possibilities of both physical and virtual space. The human-computer interaction techniques and mixed reality methodologies formulated during this research are intended to enhance spatial cognition while implicitly improving pattern recognition. This research reports on the current state of location-based technology, including Mobile Augmented Reality (MAR) and GPS, with a focus on its application within cultural heritage as an educational and outreach tool. The key questions and areas to be investigated include: What are the requirements for effective digital intervention within the cultural heritage sector? What are the affordances of mixed and augmented reality? What mobile technology is currently being utilised to explore cultural heritage? What are the key projects? Finally, through a series of case studies designed and implemented by the author, some broad design guidelines are outlined. The chapter concludes with an overview of the main issues to consider when (re)engineering cultural heritage contexts.
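To ground the location-based component, the sketch below shows one common pattern for GPS-triggered heritage content: compute the great-circle (haversine) distance from the visitor to each point of interest and surface AR content once the visitor is inside a chosen radius. The points of interest, coordinates, and 25 m threshold are invented for illustration and are not taken from the chapter's case studies.

```python
import math

# Hypothetical points of interest for a heritage walk: (name, latitude, longitude).
POINTS_OF_INTEREST = [
    ("Old City Gate", 51.5007, -0.1246),
    ("Roman Wall Fragment", 51.5014, -0.1230),
]

TRIGGER_RADIUS_M = 25.0  # distance at which AR content is shown (assumption)


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    r = 6_371_000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def nearby_poi(lat, lon):
    """Return the names of points of interest within the trigger radius."""
    return [name for name, plat, plon in POINTS_OF_INTEREST
            if haversine_m(lat, lon, plat, plon) <= TRIGGER_RADIUS_M]


# Example: a GPS fix a few metres from the first point of interest.
print(nearby_poi(51.5008, -0.1245))
```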


Author(s):  
A. Devaux ◽  
C. Hoarau ◽  
M. Brédif ◽  
S. Christophe

In this paper, we assume that augmented reality (AR) and mixed reality (MR) are relevant contexts for 3D urban geovisualization, especially for supporting the design of urban spaces. We propose to design an in situ MR application that could help urban designers by providing tools to interactively remove or replace buildings in situ. This use case requires advances over existing geovisualization methods. We highlight the need to adapt and extend existing 3D geovisualization pipelines to meet the specific requirements of AR/MR applications, in particular for data rendering and interaction. To reach this goal, we focus on and implement four elementary in situ and ex situ AR/MR experiments: each type of experiment helps us consider and specify a particular sub-problem, i.e. scale modification, pose estimation, matching between scene and urban project realism, and the mixing of real and virtual elements through portals, while proposing occlusion handling, rendering, and interaction techniques to solve them.
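Of the four sub-problems, pose estimation is the easiest to show in isolation. The sketch below uses a generic approach (OpenCV's solvePnP over 2D-3D correspondences) rather than the authors' own pipeline; the facade points, pixel detections, and camera intrinsics are made up for illustration.

```python
import numpy as np
import cv2

# Hypothetical 3D points on a building facade, in metres, in a local frame
# (e.g. corners of a doorway and two windows). All values are illustrative.
object_points = np.array([
    [0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [2.0, 3.0, 0.0],
    [0.0, 3.0, 0.0],
    [4.0, 1.0, 0.0],
    [4.0, 2.5, 0.0],
], dtype=np.float64)

# Their detections in the camera image, in pixels (made up for illustration).
image_points = np.array([
    [320.0, 410.0],
    [480.0, 405.0],
    [475.0, 180.0],
    [322.0, 185.0],
    [610.0, 330.0],
    [605.0, 215.0],
], dtype=np.float64)

# Assumed pinhole intrinsics (focal length and principal point in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)               # rotation matrix, world -> camera
    camera_position = (-R.T @ tvec).ravel()  # camera centre in the facade frame
    print("camera position in facade frame (m):", camera_position)
```

Recovering this pose is the prerequisite for anchoring virtual replacements of buildings in the live view and for the occlusion handling discussed above.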


Author(s):  
Jacqueline A. Towson ◽  
Matthew S. Taylor ◽  
Diana L. Abarca ◽  
Claire Donehower Paul ◽  
Faith Ezekiel-Wilder

Purpose: Communication between allied health professionals, teachers, and family members is a critical skill when addressing and providing for the individual needs of patients. Graduate students in speech-language pathology programs often have limited opportunities to practice these skills prior to or during externship placements. The purpose of this study was to investigate a mixed-reality simulator as a viable option for speech-language pathology graduate students to practice interprofessional communication (IPC) skills when delivering diagnostic information to different stakeholders, compared to traditional role-play scenarios. Method: Eighty graduate students (N = 80) completing their third semester in one speech-language pathology program were randomly assigned to one of four conditions: mixed-reality simulation with or without coaching, or role play with or without coaching. Data were collected on students' self-efficacy, IPC skills pre- and postintervention, and perceptions of the intervention. Results: The students in the two coaching groups scored significantly higher than the students in the noncoaching groups on observed IPC skills. There were no significant differences in students' self-efficacy. Students' responses on social validity measures showed that both interventions, including coaching, were acceptable and feasible. Conclusions: Findings indicated that coaching paired with either mixed-reality simulation or role play is a viable method for improving the IPC skills of graduate students in speech-language pathology. These findings are particularly relevant given the recent approval for students to obtain clinical hours in simulated environments.
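For readers who want the 2 × 2 design laid out concretely, the sketch below re-creates only the assignment structure (simulation versus role play crossed with coaching versus no coaching); the seed, balancing scheme, and condition labels are assumptions for illustration, not the authors' procedure.

```python
import random

# Hypothetical re-creation of the study's 2 x 2 assignment: 80 students
# split across (mixed-reality simulation | role play) x (coaching | no coaching).
CONDITIONS = [
    "simulation + coaching",
    "simulation, no coaching",
    "role play + coaching",
    "role play, no coaching",
]

def assign(participant_ids, seed=0):
    """Randomly assign participants, balanced across the four conditions."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    per_group = len(ids) // len(CONDITIONS)
    return {cond: ids[i * per_group:(i + 1) * per_group]
            for i, cond in enumerate(CONDITIONS)}

groups = assign(range(1, 81))
print({cond: len(members) for cond, members in groups.items()})  # 20 per condition
```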

