Real Time Augmented Reality Tracking Registration Based on Motion Blur Template Matching Image Construction Model

Author(s):  
Lei Tian ◽  
Jin Zhou

Author(s):  
A. Audi ◽  
M. Pierrot-Deseilligny ◽  
C. Meynard ◽  
C. Thom

In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool for aerial photography and photogrammetry. In this context, some applications (such as cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality, with an equivalent long exposure time, from several images acquired with short exposure times.

Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the lightweight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for resampling the images, the presented method accurately estimates the geometric relation between the first and the N-th image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector; homologous points in the other images are then obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, since only the resulting final image is written out, to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource-usage summary, processing times, resulting images, and block diagrams of the described architecture. The stacked images obtained on real surveys show no visible impairment. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the time needed to write an image to the storage device.
An interesting by-product of this algorithm is the 3D rotation between poses, estimated by a photogrammetric method, which can be used to recalibrate the gyrometers of the IMU in real time.
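The abstract's template-matching step (locating homologous points around FAST features, with the search window narrowed by the IMU prediction) can be illustrated in miniature. The sketch below is not the authors' FPGA implementation: it does an exhaustive sum-of-squared-differences (SSD) search in pure Python over a small grayscale patch, with no IMU prior, purely to show what "template matching" computes.

```python
def match_template(image, template):
    """Locate `template` inside `image` (both 2D lists of grayscale values)
    by minimising the sum of squared differences (SSD).
    Returns the (row, col) of the best-matching top-left corner."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_pos, best_ssd = (0, 0), float("inf")
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = 0
            for i in range(th):
                for j in range(tw):
                    d = image[r + i][c + j] - template[i][j]
                    ssd += d * d
                if ssd >= best_ssd:
                    break  # early exit: this position is already worse
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos
```

In practice one would use a normalised correlation score to be robust to exposure changes, and restrict `r`/`c` to the IMU-predicted window, which is what makes the real-time budget achievable.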


2015 ◽  
Vol 6 (2) ◽  
Author(s):  
Rujianto Eko Saputro ◽  
Dhanar Intan Surya Saputra
Learning media have always followed the development of available technology, from print technology and audio-visual technology to computers and to combinations of print and computer technology. Today, learning media that combine print and computer technology can be realized with Augmented Reality (AR). Augmented Reality (AR) is a technology used to bring the virtual world into the real world in real time. The human digestive system consists of the mouth, the esophagus, the stomach, the small intestine and the large intestine. Current learning media for recognizing the human digestive organs are very monotonous: pictures, books or other projection aids. Using Augmented Reality, which can bring the virtual world into the real world, these objects can be turned into 3D objects, so that the learning method is no longer monotonous and children are motivated to learn more, such as the name of each organ and its description.


2018 ◽  
Author(s):  
Kyle Plunkett

This manuscript provides two demonstrations of how Augmented Reality (AR), which is the projection of virtual information onto a real-world object, can be applied in the classroom and in the laboratory. Using only a smartphone and the free HP Reveal app, content-rich AR notecards were prepared. The physical notecards are based on Organic Chemistry I reactions and show only a reagent and substrate. Upon interacting with the HP Reveal app, an AR video projection shows the product of the reaction as well as a real-time, hand-drawn curved-arrow mechanism of how the product is formed. Thirty AR notecards based on common Organic Chemistry I reactions and mechanisms are provided in the Supporting Information and are available for widespread use. In addition, the HP Reveal app was used to create AR video projections onto laboratory instrumentation so that a virtual expert can guide the user during equipment setup and operation.


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used for recognizing some of the constituent parts of an automobile. A dataset of car engine images was created and eight car parts were marked in the images. Then, the neural network was trained to detect each part. The results show that YOLOv5s is able to detect the parts in real-time video streams with high accuracy, and is thus useful as an aid for training professionals who are learning to deal with new equipment using augmented reality. An architecture for an object recognition system using augmented reality glasses is also designed.
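The abstract does not publish its dataset tooling, but marking parts for YOLOv5 training conventionally uses the YOLO label format: one `class x_center y_center width height` line per object, with coordinates normalised to [0, 1]. As a small illustrative sketch (the class name below is hypothetical; the eight actual car parts are not named in the abstract), a helper converting one label line into pixel-space corner coordinates might look like:

```python
def yolo_to_pixels(line, img_w, img_h, class_names):
    """Convert one YOLO-format label line
    ('class x_center y_center width height', normalised to [0, 1])
    into (name, x_min, y_min, x_max, y_max) in pixel coordinates."""
    cls, xc, yc, w, h = line.split()
    xc, w = float(xc) * img_w, float(w) * img_w
    yc, h = float(yc) * img_h, float(h) * img_h
    return (class_names[int(cls)],
            round(xc - w / 2), round(yc - h / 2),
            round(xc + w / 2), round(yc + h / 2))
```

For example, `yolo_to_pixels("0 0.5 0.5 0.25 0.5", 640, 480, ["oil_filter"])` maps a centred box to `("oil_filter", 240, 120, 400, 360)`.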


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Svenja Ipsen ◽  
Sven Böttger ◽  
Holger Schwegmann ◽  
Floris Ernst

Ultrasound (US) imaging, in contrast to other image guidance techniques, offers the distinct advantage of providing volumetric image data in real-time (4D) without using ionizing radiation. The goal of this study was to perform the first quantitative comparison of three different 4D US systems with fast matrix array probes and real-time data streaming regarding their target tracking accuracy and system latency. Sinusoidal motion of varying amplitudes and frequencies was used to simulate breathing motion with a robotic arm and a static US phantom. US volumes and robot positions were acquired online and stored for retrospective analysis. A template matching approach was used for target localization in the US data. Target motion measured in US was compared to the reference trajectory performed by the robot to determine localization accuracy and system latency. Using the robotic setup, all investigated 4D US systems could detect a moving target with sub-millimeter accuracy. However, especially high system latency increased tracking errors substantially and should be compensated with prediction algorithms for respiratory motion compensation.

