Simulation of Motion Parallax for Monitor-Based Augmented Reality Applications

Author(s):  
Rafael Radkowski ◽  
James Oliver

The paper presents a method for simulating motion parallax in monitor-based Augmented Reality (AR) applications. Motion parallax describes the relative movement between near and far objects: near objects appear to move faster than far objects do. This facilitates the perception of depth, distance, and the structure of geometrically complex objects. Today, industrial AR applications are commonly equipped with monitor-based output devices, e.g., for design reviews. On a monitor this important depth cue is lost, because all objects appear on a single flat layer on screen. As a result, assessing complex structures becomes more difficult. The method presented in this paper utilizes depth images to create layered images: multiple images in which the objects in a video image are separated according to their distance to the video camera. Using head tracking, the individual layers are moved relative to one another according to the user's head position, which simulates motion parallax. Virtual objects are superimposed on the final image to complete the AR scene. The method was prototypically implemented, and the results demonstrate its feasibility.
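To make the layering idea concrete, here is a minimal sketch of the parallax step: each depth layer is shifted in proportion to the tracked head offset and inversely to its distance, then the layers are composited back to front. The function names, the pinhole-style shift formula, and the RGBA layer format are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def shift_layers(layers, depths, head_offset_x, focal_length=800.0):
    """Shift each layer horizontally in proportion to the head offset
    and inversely to the layer's depth: near layers move farther than
    distant ones, which is the motion-parallax effect."""
    shifted = []
    for layer, z in zip(layers, depths):
        dx = int(round(focal_length * head_offset_x / z))  # pinhole-style shift
        shifted.append(np.roll(layer, dx, axis=1))         # wrap-around kept simple
    return shifted

def composite(shifted_layers):
    """Alpha-composite straight-alpha RGBA float layers, ordered far to near."""
    out = np.zeros_like(shifted_layers[0])
    for layer in shifted_layers:
        a = layer[..., 3:4]
        out[..., :3] = layer[..., :3] * a + out[..., :3] * (1.0 - a)
        out[..., 3:4] = a + out[..., 3:4] * (1.0 - a)
    return out
```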

Author(s):  
J.R. McIntosh ◽  
D.L. Stemple ◽  
William Bishop ◽  
G.W. Hannaway

EM specimens often contain 3-dimensional information that is lost during micrography on a single photographic film. Two images of one specimen at appropriate orientations give a stereo view, but complex structures composed of multiple objects of graded density that superimpose in each projection are often difficult to decipher in stereo. Several analytical methods for 3-D reconstruction from multiple images of a serially tilted specimen are available, but they are all time-consuming and computationally intensive.


Nanophotonics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 3003-3010
Author(s):  
Jiacheng Shi ◽  
Wen Qiao ◽  
Jianyu Hua ◽  
Ruibin Li ◽  
Linsen Chen

Glasses-free augmented reality is of great interest because it fuses virtual 3D images naturally with the physical world without the aid of any wearable equipment. Here we propose a large-scale spatially multiplexed holographic see-through combiner for full-color 3D display. The pixelated metagratings, with varied orientation and spatial frequency, discretely reconstruct the propagating light field. The irradiance pattern of each view is tailored into a super-Gaussian distribution with minimized crosstalk. Moreover, a spatial-multiplexing holographic combiner with customized aperture size is adopted to white-balance the virtually displayed full-color 3D scene. In a 32-inch prototype, 16 views form a smooth parallax over a viewing angle of 47°. A high transmission (>75%) over the entire visible spectrum is achieved. We demonstrate that the displayed virtual 3D scene not only preserves natural motion parallax but also blends well with real objects. Potential applications of this work include education, communication, product design, advertising, and head-up displays.
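The super-Gaussian view profile mentioned in the abstract has a simple closed form; the sketch below shows how a higher order flattens the top and steepens the roll-off, which is what suppresses crosstalk between adjacent views. The specific width, order, and view spacing are illustrative assumptions, not values from the paper.

```python
import numpy as np

def super_gaussian(x, center, width, order, peak=1.0):
    """Super-Gaussian profile: order = 1 is an ordinary Gaussian;
    higher orders flatten the top and steepen the edges, so adjacent
    views overlap (and cross-talk) less."""
    return peak * np.exp(-2.0 * ((x - center) / width) ** (2 * order))

# 16 view centers spread across the 47-degree viewing zone of the prototype.
angles = np.linspace(-23.5, 23.5, 1000)
centers = np.linspace(-22.0, 22.0, 16)
views = [super_gaussian(angles, c, width=2.2, order=3) for c in centers]
```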


1997 ◽  
Vol 6 (4) ◽  
pp. 413-432 ◽  
Author(s):  
Richard L. Holloway

Augmented reality (AR) systems typically use see-through head-mounted displays (STHMDs) to superimpose images of computer-generated objects onto the user's view of the real environment in order to augment it with additional information. The main failing of current AR systems is that the virtual objects displayed in the STHMD appear in the wrong position relative to the real environment. This registration error has many causes: system delay, tracker error, calibration error, optical distortion, and misalignment of the model, to name only a few. Although some work has been done in the area of system calibration and error correction, very little work has been done on characterizing the nature and sensitivity of the errors that cause misregistration in AR systems. This paper presents the main results of an end-to-end error analysis of an optical STHMD-based tool for surgery planning. The analysis was done with a mathematical model of the system and the main results were checked by taking measurements on a real system under controlled circumstances. The model makes it possible to analyze the sensitivity of the system-registration error to errors in each part of the system. The major results of the analysis are: (1) Even for moderate head velocities, system delay causes more registration error than all other sources combined; (2) eye tracking is probably not necessary; (3) tracker error is a significant problem both in head tracking and in system calibration; (4) the World (or reference) coordinate system adds error and should be omitted when possible; (5) computational correction of optical distortion may introduce more delay-induced registration error than the distortion error it corrects, and (6) there are many small error sources that will make submillimeter registration almost impossible in an optical STHMD system without feedback. Although this model was developed for optical STHMDs for surgical planning, many of the results apply to other HMDs as well.
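Result (1), that system delay dominates, follows from simple geometry: during the delay the head keeps turning, so the virtual object is rendered for a stale viewpoint. A back-of-the-envelope sketch, with illustrative numbers rather than the paper's measured ones:

```python
import math

def delay_registration_error(head_velocity_deg_s, delay_s, distance_m):
    """Approximate misregistration caused by end-to-end system delay:
    the virtual object is drawn where the head *was*, so the angular
    error is roughly head angular velocity times delay, and the linear
    error scales with the distance to the object."""
    angular_error_deg = head_velocity_deg_s * delay_s
    linear_error_m = distance_m * math.tan(math.radians(angular_error_deg))
    return angular_error_deg, linear_error_m

# Illustrative numbers: a 50 deg/s head turn with 100 ms total delay and an
# object 1 m away -> ~5 deg angular error, ~87 mm on the object.
ang, lin = delay_registration_error(50.0, 0.100, 1.0)
print(f"{ang:.1f} deg, {lin * 1000:.0f} mm")
```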


Robotica ◽  
1985 ◽  
Vol 3 (1) ◽  
pp. 7-11 ◽  
Author(s):  
Ernest W. Kent ◽  
Thomas Wheatley ◽  
Marilyn Nashman

When applied to rapidly moving objects with complex trajectories, the information-rate limitation imposed by video-camera frame rates impairs the effectiveness of structured-light techniques in real-time robot servoing. To improve the performance of such systems, the use of fast infra-red proximity detectors to augment visual guidance in the final phase of target acquisition was explored. It was found that this approach was limited by the necessity of employing a different range/intensity calibration curve for the proximity detectors for every object and for every angle of approach to complex objects. Consideration of the physics of the detector process suggested that a single log-linear parametric family could describe all such calibration curves, and this was confirmed by experiment. From this result, a technique was devised for cooperative interaction between modalities, in which the vision sense provided on-the-fly determination of calibration parameters for the proximity detectors, for every approach to a target, before passing control of the system to the other modality. This technique provided a three hundred percent increase in useful manipulator velocity, and improved performance during the transition of control from one modality to the other.
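The single-family calibration idea can be shown in a short fitting sketch. The exact parameterization below, ln(intensity) linear in ln(range) fitted by least squares, is one plausible reading of "log-linear" chosen for illustration; the paper derives its family from the physics of the detector.

```python
import numpy as np

def fit_calibration(ranges, intensities):
    """Fit ln(I) = a + b * ln(r) by least squares: one parametric
    family, with (a, b) re-estimated on the fly for each object and
    approach angle before control is handed to the proximity sense."""
    b, a = np.polyfit(np.log(ranges), np.log(intensities), 1)
    return a, b

def range_from_intensity(intensity, a, b):
    """Invert the fitted curve to turn a raw detector reading into range."""
    return np.exp((np.log(intensity) - a) / b)
```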


2013 ◽  
Vol 756-759 ◽  
pp. 2014-2018
Author(s):  
Lei Kai ◽  
Ning Rui ◽  
Wen Min Wang ◽  
Qiang Ma

This paper presents the design and development of a Mobile Augmented Reality Map (MARM), which shows map information on real-world video rather than on a flat plane. The proposed system uses a wireless Geographic Information System (GIS) together with the video camera and gyroscope of a smartphone. MARM combines the advantages of GIS with the convenience of mobile phones, and offers an extremely intuitive way to use a map.
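A minimal sketch of the core overlay computation in such a system: given the user's GPS fix, a map feature's coordinates, and the device heading from the gyroscope/compass, compute the feature's bearing and map it to a horizontal screen position. The field-of-view and resolution values are illustrative assumptions, not parameters from the paper.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user to a map feature."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def screen_x(bearing, heading, h_fov=60.0, width_px=1080):
    """Map the feature's bearing into a horizontal pixel position, given
    the device heading; returns None when the feature is out of view."""
    rel = (bearing - heading + 180.0) % 360.0 - 180.0
    if abs(rel) > h_fov / 2:
        return None
    return int((rel / h_fov + 0.5) * width_px)
```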


2019 ◽  
Author(s):  
Walter Vanzella ◽  
Natalia Grion ◽  
Daniele Bertolini ◽  
Andrea Perissinotto ◽  
Davide Zoccolan

Tracking the position and orientation of a small mammal's head is crucial in many behavioral neurophysiology studies. Yet full reconstruction of the head's pose in 3D is a challenging problem that typically requires implanting custom headsets made of multiple LEDs or inertial units. These assemblies need to be powered in order to operate, thus preventing wireless experiments, and, while suitable for studying navigation in large arenas, they are impractical in the narrow operant boxes employed in perceptual studies. Here we propose an alternative approach, based on passively imaging a 3D-printed structure painted with a pattern of black dots over a white background. We show that this method is highly precise and accurate, and we demonstrate that, given its minimal weight and encumbrance, it can be used to study how rodents sample sensory stimuli during a perceptual discrimination task and how hippocampal place cells represent head position over extremely small spatial scales.
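Recovering a 6-DoF head pose from the imaged dot pattern is, at its core, a Perspective-n-Point problem: the 3D layout of the dots on the printed structure is known, and their 2D centroids are detected in each frame. Below is a minimal sketch using OpenCV's solvePnP; this is the generic PnP step under assumed inputs, not necessarily the authors' exact pipeline, which must also identify individual dots and handle occlusion.

```python
import numpy as np
import cv2

def head_pose_from_dots(object_pts, image_pts, K, dist=None):
    """Recover the 6-DoF pose of the dot-patterned structure from the
    known 3D dot layout (object_pts, Nx3, structure frame) and their
    detected 2D centroids (image_pts, Nx2) via Perspective-n-Point.
    K is the 3x3 camera intrinsics matrix; dist the lens distortion."""
    ok, rvec, tvec = cv2.solvePnP(
        object_pts.astype(np.float64),
        image_pts.astype(np.float64),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix: head orientation
    return R, tvec               # tvec: head position in camera frame
```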


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Chengjun Chen ◽  
Zhongke Tian ◽  
Dongnian Li ◽  
Lieyong Pang ◽  
Tiannuo Wang ◽  
...  

Purpose: This study aims to monitor and guide the assembly process. During manual assembly in mass-customized production, operators need to change the assembly process according to each product's specifications. Traditional information inquiry and display methods, such as manually looking up assembly drawings or electronic manuals, are inefficient and error-prone.

Design/methodology/approach: This paper proposes a projection-based augmented reality system (PBARS) for assembly guidance and monitoring. The system includes a projection method based on viewpoint tracking, in which the position of the operator's head is tracked and the projected images change correspondingly. The assembly monitoring phase applies a method for parts recognition. First, the pixel local binary pattern (PX-LBP) operator is obtained by merging the classical LBP operator with a pixel classification process. Then, PX-LBP features of the depth images are extracted and a randomized decision forest classifier is used to obtain the pixel classification prediction image (PCPI). Parts recognition and assembly monitoring are performed by PCPI analysis.

Findings: The projected image changes with the viewpoint of the operator, so operators always perceive the three-dimensional guiding scene from their own viewpoint, improving human-computer interaction. Parts recognition and assembly monitoring were achieved by comparing PCPIs, so that missing and erroneous assembly steps can be detected online.

Originality/value: This paper designed the PBARS to monitor and guide the assembly process simultaneously, with potential applications in mass-customized production. The parts recognition and assembly monitoring based on pixel classification provide a novel method for assembly monitoring.
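The pixel-classification stage can be sketched compactly: extract a per-pixel texture feature from the depth image and let a random forest label every pixel, producing the PCPI. The sketch below substitutes scikit-image's generic LBP for the paper's PX-LBP operator and trains on a toy labelled image, so it illustrates the structure of the method rather than reproducing it.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def pixel_features(depth, P=8, R=1.0):
    """Per-pixel feature vector: LBP code of the depth image plus the
    raw depth value (a simplified stand-in for the paper's PX-LBP)."""
    lbp = local_binary_pattern(depth, P, R, method="uniform")
    return np.column_stack([lbp.ravel(), depth.ravel()])

# Toy training pass on one labelled depth frame; a real system would
# pool many labelled frames covering all parts and assembly states.
rng = np.random.default_rng(0)
depth = (rng.random((64, 64)) * 255).astype(np.uint8)  # fake depth frame
labels = (depth > 127).astype(int)                     # fake per-pixel part labels

clf = RandomForestClassifier(n_estimators=20, max_depth=12, random_state=0)
clf.fit(pixel_features(depth), labels.ravel())

# Pixel classification prediction image (PCPI) for a new frame.
pcpi = clf.predict(pixel_features(depth)).reshape(depth.shape)
```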

