Low-latency head-tracking for Augmented Reality

Author(s): Sayed Muchallil ◽ Konrad Halnum ◽ Carsten Griwodz
1997 ◽ Vol 6 (4) ◽ pp. 413-432
Author(s): Richard L. Holloway

Augmented reality (AR) systems typically use see-through head-mounted displays (STHMDs) to superimpose images of computer-generated objects onto the user's view of the real environment in order to augment it with additional information. The main failing of current AR systems is that the virtual objects displayed in the STHMD appear in the wrong position relative to the real environment. This registration error has many causes: system delay, tracker error, calibration error, optical distortion, and misalignment of the model, to name only a few. Although some work has been done in the area of system calibration and error correction, very little work has been done on characterizing the nature and sensitivity of the errors that cause misregistration in AR systems. This paper presents the main results of an end-to-end error analysis of an optical STHMD-based tool for surgery planning. The analysis was done with a mathematical model of the system and the main results were checked by taking measurements on a real system under controlled circumstances. The model makes it possible to analyze the sensitivity of the system-registration error to errors in each part of the system. The major results of the analysis are: (1) Even for moderate head velocities, system delay causes more registration error than all other sources combined; (2) eye tracking is probably not necessary; (3) tracker error is a significant problem both in head tracking and in system calibration; (4) the World (or reference) coordinate system adds error and should be omitted when possible; (5) computational correction of optical distortion may introduce more delay-induced registration error than the distortion error it corrects; and (6) there are many small error sources that will make submillimeter registration almost impossible in an optical STHMD system without feedback. Although this model was developed for optical STHMDs for surgical planning, many of the results apply to other HMDs as well.
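
Holloway's first result can be made concrete with a back-of-the-envelope calculation: during the system delay the head keeps rotating, so the displayed imagery corresponds to a stale pose and the virtual object lags by the head's angular velocity times the delay. A minimal Python sketch of this relationship follows; the delay, head velocity, and working distance are illustrative assumptions, not values taken from the paper.

    import math

    def delay_registration_error(delay_s, head_velocity_deg_s, working_distance_mm):
        """Angular and lateral registration error caused purely by system delay.

        While the head rotates, the display shows imagery rendered for a pose
        that is delay_s seconds old, so the virtual object lags by
        angular_error = head_velocity * delay.
        """
        angular_error_deg = head_velocity_deg_s * delay_s
        # Small-angle lateral offset of a virtual object at the working distance.
        lateral_error_mm = working_distance_mm * math.tan(math.radians(angular_error_deg))
        return angular_error_deg, lateral_error_mm

    # Assumed numbers: 60 ms end-to-end delay, a moderate 50 deg/s head
    # rotation, and a 500 mm working distance for surgery planning.
    ang, lat = delay_registration_error(0.060, 50.0, 500.0)
    print(f"angular error: {ang:.1f} deg, lateral error: {lat:.1f} mm")
    # -> roughly 3.0 deg and 26 mm of misregistration, dwarfing typical
    #    millimetre-scale tracker and calibration errors.

Even these rough numbers make clear why delay dominates the error budget and why submillimeter registration is out of reach without feedback.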


2002 ◽ Vol 11 (2) ◽ pp. 158-175
Author(s): Steve Mann ◽ James Fung

Diminished reality is as important as augmented reality, and both are possible with a device called the Reality Mediator. Over the past two decades, we have designed, built, worn, and tested many different embodiments of this device in the context of wearable computing. Incorporated into the Reality Mediator is an “EyeTap” system, a device that quantifies and resynthesizes light that would otherwise pass through one or both lenses of the eye(s) of a wearer. The functional principles of EyeTap devices are discussed in detail. The EyeTap diverts into a spatial measurement system at least a portion of the light that would otherwise pass through the center of projection of at least one lens of an eye of a wearer. The Reality Mediator has at least one mode of operation in which it reconstructs these rays of light under the control of a wearable computer system. The computer system then uses new results in algebraic projective geometry and comparametric equations to perform head tracking, as well as to track the motion of rigid planar patches present in the scene. We describe how our tracking algorithm allows an EyeTap to alter the light from a particular portion of the scene to give rise to a computer-controlled, selectively mediated reality. An important difference between mediated reality and augmented reality is the ability not just to augment but also to deliberately diminish or otherwise alter the visual perception of reality. For example, diminished reality allows additional information to be inserted without causing the user to experience information overload. Our tracking algorithm also accounts for the effects of automatic gain control by performing motion estimation in both spatial and tonal motion coordinates.
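
As a rough illustration of the two ingredients named above, the sketch below tracks a rigid planar patch between two frames. It is not the authors' algorithm: the spatial part substitutes standard OpenCV Lucas-Kanade feature tracking plus a RANSAC-fitted homography for their projective-geometry formulation, and the tonal part replaces the comparametric model with a single least-squares gain factor standing in for automatic gain control.

    import cv2
    import numpy as np

    def track_planar_patch(prev_gray, curr_gray):
        """Estimate inter-frame motion of a planar scene patch.

        Returns (H, gain): a 3x3 homography mapping prev -> curr pixel
        coordinates, and a scalar tonal gain relating the two exposures.
        """
        # Spatial part: track corner features with pyramidal Lucas-Kanade,
        # then fit a homography robustly with RANSAC.
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                           qualityLevel=0.01, minDistance=7)
        pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                       pts_prev, None)
        ok = status.ravel() == 1
        H, _ = cv2.findHomography(pts_prev[ok], pts_curr[ok], cv2.RANSAC, 3.0)

        # Tonal part (a crude placeholder for the comparametric model): one
        # multiplicative gain fitted in the least-squares sense, standing in
        # for the camera's automatic gain control between the two frames.
        p = prev_gray.astype(np.float64).ravel()
        c = curr_gray.astype(np.float64).ravel()
        gain = float(p @ c / (p @ p))
        return H, gain

In a full mediated-reality pipeline, H would determine where the altered or diminished content is re-rendered in the next frame, while the gain keeps the synthetic content tonally consistent with the live scene.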


Author(s): Rafael Radkowski ◽ James Oliver

The paper presents a method for simulating motion parallax in monitor-based Augmented Reality (AR) applications. Motion parallax is the relative movement between far and near objects: near objects appear to move faster than far objects. It facilitates the perception of depth, distances, and the structure of geometrically complex objects. Today, many industrial AR applications, e.g., for design reviews, are equipped with monitor-based output devices. There, this important depth cue is lost because all objects appear on a single flat layer on screen; as a result, the assessment of complex structures becomes more difficult. The method presented in this paper utilizes depth images to create layered images: multiple images in which the objects in a video image are split up with respect to their distance to the video camera. Using head tracking, the individual layers are moved relative to the user's head position, which simulates motion parallax. Virtual objects are superimposed on the final image to complete the AR scene. The method was prototypically implemented, and the results demonstrate its feasibility.
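
A minimal sketch of the layering step, assuming a per-pixel depth image aligned with the video frame: pixels are binned into depth layers, each layer is shifted in proportion to the tracked head offset and inversely with its depth, and the layers are re-composited far-to-near so that near content occludes far content. The layer boundaries and parallax gain below are illustrative assumptions, not values from the paper.

    import numpy as np

    def simulate_parallax(frame, depth, head_offset_px,
                          layer_edges=(0.5, 1.5, 3.0, np.inf)):
        """Re-composite a video frame as depth layers shifted by head motion.

        frame: HxWx3 image; depth: HxW distances in metres aligned with frame;
        head_offset_px: tracked lateral head displacement, expressed as pixels
        of shift for an object at 1 m. Far layers are drawn first so that
        near layers occlude them.
        """
        out = np.zeros_like(frame)
        near = 0.0
        layers = []
        for far in layer_edges:
            mask = (depth >= near) & (depth < far)
            # Parallax shift falls off with distance; use the layer's mean depth.
            d = depth[mask].mean() if mask.any() else far
            shift = int(round(head_offset_px / max(d, 0.1)))
            layers.append((d, mask, shift))
            near = far
        for d, mask, shift in sorted(layers, key=lambda t: -t[0]):  # far to near
            shifted_mask = np.roll(mask, shift, axis=1)
            out[shifted_mask] = np.roll(frame, shift, axis=1)[shifted_mask]
        return out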


2020 ◽ Vol 2 (1) ◽ pp. 52
Author(s): Aida Vidal-Balea ◽ Oscar Blanco-Novoa ◽ Imanol Picallo-Guembe ◽ Mikel Celaya-Echarri ◽ Paula Fraga-Lamas ◽ ...

In recent years, the education sector has incorporated new technologies and computing devices into classrooms, which has enabled new ways of enhancing teaching and learning. One of these new technologies is augmented reality (AR), which enables experiences that mix real and virtual elements in an attractive and visual way, thus helping teachers foster student interest in certain subjects and present abstract concepts in novel visual ways. This paper proposes to harness the potential of the latest AR devices to enable AR-enhanced lectures and hands-on labs. Specifically, it proposes an architecture for providing low-latency AR education services in a classroom or a laboratory. Such low latency is achieved through the use of edge computing devices, which offload from the cloud the tasks required by dynamic AR applications (e.g., near-real-time data processing, communications among AR devices). Depending on the specific AR application and the number of users, the wireless link (usually WiFi) can become overloaded if the network has not been properly designed, and the overall performance of the application can be compromised, leading to high latency and even to wireless communication failure. To tackle this issue, radio channel measurements and simulations were carried out with an in-house 3D ray-launching tool, which can model and simulate the behaviour of an AR-enabled classroom/laboratory in terms of radio propagation and quality of service. To corroborate the theoretical results, a Microsoft HoloLens 2 teaching application was devised and tested, demonstrating the feasibility of the proposed approach.
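
The latency argument for edge computing can be summarized as a simple round-trip budget: the motion-to-photon time of the AR device must absorb the wireless hop, backhaul, processing, and rendering. The sketch below compares an edge and a cloud deployment with assumed component latencies; none of the numbers are measurements from the paper.

    # Illustrative motion-to-photon latency budget for an AR classroom app.
    # All component latencies (in ms) are assumptions for this sketch.

    BUDGET_MS = 50.0  # a common rule-of-thumb target for comfortable AR

    def round_trip_ms(wifi_up, backhaul, processing, wifi_down, render):
        return wifi_up + backhaul + processing + wifi_down + render

    edge = round_trip_ms(wifi_up=5, backhaul=1, processing=10, wifi_down=5, render=12)
    cloud = round_trip_ms(wifi_up=5, backhaul=40, processing=10, wifi_down=5, render=12)

    for name, total in (("edge", edge), ("cloud", cloud)):
        verdict = "within" if total <= BUDGET_MS else "over"
        print(f"{name}: {total:.0f} ms ({verdict} the {BUDGET_MS:.0f} ms budget)")
    # edge: 33 ms (within the budget); cloud: 72 ms (over the budget) --
    # keeping the dynamic AR tasks on an on-premises edge node removes the
    # backhaul term that pushes the cloud deployment over the budget.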

