Learning and Tracking Ad Hoc Fiducial Markers in Spatial Augmented Reality

Author(s):  
Emma Gould
Stephen Guerin
Cody Smith
Steve Smith
Brian Bush
...  

We describe a spatial augmented reality system with a tangible user interface used to control computer simulations of complex systems. In spatial augmented reality, the user’s physical space is augmented with projected imagery, blending real objects with projected information, and a tangible user interface enables users to manipulate physical objects as controllers for interactive visualizations. Our system learns ad hoc objects in the user’s environment as fiducial markers (i.e., objects that are visually recognized and tracked). When combined with simulation and visualization tools, these interfaces allow the user to control simulations or ensembles of simulations via physical objects using apt metaphors. While other research has leveraged the use of depth cameras, our system enables the use of standard cameras in readily available smartphones and webcams and has an implementation that runs completely in JavaScript in the web browser. We discuss the prerequisite object-recognition requirements for such tangible user interfaces and describe computer-vision and machine-learning algorithms meeting those requirements. We conclude by presenting example applications, which are also available online.
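The paper's own implementation runs in JavaScript in the web browser and is not reproduced here; purely as a hedged illustration of the object-recognition and tracking step such a tangible interface requires, the sketch below "learns" an arbitrary household object from a single reference photo and locates it in webcam frames using ORB feature matching and a RANSAC homography. The file name and parameters are hypothetical, and this stands in for the general technique rather than the authors' actual algorithm.

```python
# Hypothetical sketch: learn an ad hoc object as a fiducial marker and track it
# with a standard webcam. Illustrative only; not the paper's browser-based
# JavaScript implementation.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# "Learn" the ad hoc object from one reference photo (hypothetical file name).
reference = cv2.imread("object_reference.png", cv2.IMREAD_GRAYSCALE)
ref_kp, ref_desc = orb.detectAndCompute(reference, None)
h, w = reference.shape
ref_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)

cap = cv2.VideoCapture(0)  # standard webcam, no depth camera required
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, desc = orb.detectAndCompute(gray, None)
    if desc is not None and len(kp) > 10:
        matches = sorted(matcher.match(ref_desc, desc), key=lambda m: m.distance)[:50]
        if len(matches) >= 8:
            src = np.float32([ref_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is not None:
                # Project the reference outline into the frame: this pose estimate
                # is what could drive a projected visualization or a simulation control.
                outline = cv2.perspectiveTransform(ref_corners, H)
                cv2.polylines(frame, [np.int32(outline)], True, (0, 255, 0), 2)
    cv2.imshow("tracked object", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Binary ORB descriptors keep matching cheap enough for real-time use on commodity hardware, which is in the spirit of the paper's emphasis on standard smartphone cameras and webcams rather than depth sensors.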

Author(s):  
Sandra Cano
Victor Peñeñory
César A. Collazos
Sergio Albiol

A Tangible User Interface (TUI) is a new interaction option that uses nontraditional input and output elements. A tangible interface allows digital information to be manipulated through physical objects. The exploration and manipulation of physical objects is a factor to be considered in children's learning, especially for children with a disability such as hearing impairment, who maximize the use of other senses such as vision and touch. A tangible interface relates three elements: physical, digital, and social. The potential of IoT for children is growing. IoT technology integrated with TUI can help parents or teachers monitor a child's activities and identify behavior patterns in a child with hearing impairment. This article presents four case studies in which different low-cost Tangible Internet of Things products were designed and applied in several contexts.


Author(s):  
Thomas Ludwig
Oliver Stickel
Peter Tolmie
Malte Sellmer

Abstract Ten years ago, Castellani et al. (Journal of Computer Supported Cooperative Work, vol. 18, no. 2–3, pp. 199–227, 2009) showed that using just an audio channel for remote troubleshooting can lead to a range of problems, and they already envisioned a future in which augmented reality (AR) could solve many of these issues. In the meantime, AR technologies have found their way into our everyday lives, and using such technologies to support remote collaboration has been widely studied within the fields of Human-Computer Interaction and Computer-Supported Cooperative Work. In this paper, we contribute to this body of research by reporting on an extensive empirical study of troubleshooting and expertise sharing within a Fab Lab, and on the potential relevance of articulation work to their realization. Based on the findings of this study, we derived design challenges that led to an AR-based concept, implemented as a HoloLens application called shARe-it. This application is designed to support remote troubleshooting and expertise sharing through different communication channels and AR-based interaction modalities. Early testing of the application revealed that novel interaction modalities such as AR-based markers and drawings play only a minor role in remote collaboration due to various limiting factors. Instead, the transmission of a shared view, and especially arriving at a shared understanding of the situation as a prerequisite for articulation work, continue to be the decisive factors in remote troubleshooting.


Author(s):  
Tim Bosch
Gu van Rhijn
Frank Krause
Reinier Könemann
Ellen S. Wilschut
...  

Author(s):  
Leonardo Tanzi
Pietro Piazzolla
Francesco Porpiglia
Enrico Vezzetti

Abstract Purpose: The current study aimed to propose a Deep Learning (DL) and Augmented Reality (AR) based solution for in-vivo robot-assisted radical prostatectomy (RARP), to improve the precision of a published work from our group. We implemented a two-step automatic system to align a 3D virtual ad-hoc model of a patient's organ with its 2D endoscopic image, to assist surgeons during the procedure. Methods: This approach was carried out using a Convolutional Neural Network (CNN) based structure for semantic segmentation and subsequent processing of the obtained output, which produced the parameters needed for overlaying the 3D model. We used a dataset obtained from 5 endoscopic videos (A, B, C, D, E), selected and tagged by our team's specialists. We then evaluated the best-performing combination of segmentation architecture and neural network backbone and tested the overlay performance. Results: U-Net stood out as the most effective architecture for segmentation. ResNet and MobileNet obtained similar Intersection over Union (IoU) results, but MobileNet was able to process almost twice as many operations per second. This segmentation technique outperformed the results of the former work, obtaining an average IoU for the catheter of 0.894 (σ = 0.076) compared to 0.339 (σ = 0.195). These modifications also improved the 3D overlay performance, in particular the Euclidean distance between the predicted and actual model anchor points, from 12.569 (σ = 4.456) to 4.160 (σ = 1.448), and the geodesic distance between the predicted and actual model rotations, from 0.266 (σ = 0.131) to 0.169 (σ = 0.073). Conclusion: This work is a further step toward the adoption of DL and AR in the surgical domain. In future work, we will address the limitations of this approach and further improve every step of the surgical procedure.
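For reference, the sketch below shows standard formulations of the evaluation metrics named in the abstract: Intersection over Union between segmentation masks and the geodesic distance between rotations. It assumes the common definitions of these metrics, which may differ in detail from the authors' evaluation code, and is illustrative only.

```python
# Minimal sketch of the evaluation metrics mentioned in the abstract
# (standard, assumed formulations; not necessarily identical to the authors' code).
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(pred, true).sum() / union)

def geodesic_distance(R_pred: np.ndarray, R_true: np.ndarray) -> float:
    """Geodesic distance (relative rotation angle, in radians) between two 3x3 rotation matrices."""
    R_rel = R_pred @ R_true.T
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)  # clamp for numerical safety
    return float(np.arccos(cos_theta))

# Toy usage: a 2x2 predicted region vs. a 2x3 ground-truth region, and identical rotations.
pred = np.zeros((4, 4), dtype=np.uint8); pred[1:3, 1:3] = 1
true = np.zeros((4, 4), dtype=np.uint8); true[1:3, 1:4] = 1
print(iou(pred, true))                          # 4/6 ≈ 0.667
print(geodesic_distance(np.eye(3), np.eye(3)))  # 0.0
```

The Euclidean distance between predicted and actual anchor points reported in the abstract corresponds to the ordinary vector norm of their difference (e.g., np.linalg.norm(p_pred - p_true)).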

