Real-Time Augmented Reality on 3-D Mobile Display using Stereo Camera Tracking

2013 ◽ Vol 18 (3) ◽ pp. 362-371 ◽ Author(s): Jungsik Park, Byung-Kuk Seo, Jong-Il Park
2011 ◽ Vol 16 (4) ◽ pp. 614-623 ◽ Author(s): Ju-Hyun Oh, Kwang-Hoon Sohn

2018 ◽ Vol 7 (12) ◽ pp. 479 ◽ Author(s): Piotr Siekański, Jakub Michoński, Eryk Bunsch, Robert Sitnik

Camera pose tracking is a fundamental task in Augmented Reality (AR) applications. In this paper, we present CATCHA, a method for camera pose tracking in cultural heritage interiors with strict conservation policies. Our solution performs real-time model-based camera tracking against a textured point cloud, regardless of the technique used to register it. We use orthographic model rendering, which lets us maintain real-time performance regardless of point cloud density. The developed algorithm underpins a novel tool that helps both cultural heritage restorers and individual visitors visually compare, in real time, the current state of a cultural heritage location with its previously scanned state from the same point of view. The application achieves a frame rate of over 15 Hz on VGA frames on a mobile device and over 40 Hz using remote processing. The performance of our approach is evaluated using a model of the King’s Chinese Cabinet (Museum of King Jan III’s Palace at Wilanów, Warsaw, Poland) that was scanned in 2009 using the structured light technique and renovated and scanned again in 2015. Additional tests are performed on a model of the Al Fresco Cabinet in the same museum, scanned with a time-of-flight laser scanner.
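The abstract's key performance claim rests on orthographic rendering of the scanned point cloud: without a perspective divide, per-point cost is constant, so render time scales only with point count, not depth or density distribution. A minimal NumPy sketch of such a renderer is below; the function name and parameters are illustrative assumptions, not the paper's API.

```python
import numpy as np

def orthographic_depth_render(points, pose, width, height, scale):
    """Render a point cloud into a depth buffer with an orthographic camera.

    Orthographic projection needs no perspective divide, so each point is
    handled in constant time and the render stays real-time regardless of
    cloud density. `points` is an (N, 3) array in model coordinates;
    `pose` is a 4x4 model-to-camera transform.
    """
    # Transform the cloud into the camera frame (homogeneous coordinates).
    homo = np.hstack([points, np.ones((len(points), 1))])
    cam = (pose @ homo.T).T[:, :3]

    # Orthographic projection: x and y map linearly to pixels; z is kept as depth.
    u = np.round(cam[:, 0] * scale + width / 2).astype(int)
    v = np.round(cam[:, 1] * scale + height / 2).astype(int)

    depth = np.full((height, width), np.inf)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], cam[inside, 2]):
        if zi < depth[vi, ui]:  # z-buffer: keep the closest point per pixel
            depth[vi, ui] = zi
    return depth
```

A rendered view like this can then be compared against the live camera frame to estimate the pose update, which is the essence of model-based tracking.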


2016 ◽ Vol 13 (3) ◽ pp. 571-580 ◽ Author(s): Jungsik Park, Byung-Kuk Seo, Jong-Il Park

2008 ◽ Vol 26 (5) ◽ pp. 673-689 ◽ Author(s): Ke Xu, Kar Wee Chia, Adrian David Cheok

Sensors ◽ 2018 ◽ Vol 18 (11) ◽ pp. 3936 ◽ Author(s): Matija Rossi, Petar Trslić, Satja Sivčev, James Riordan, Daniel Toal, ...

Many current and future applications of underwater robotics require real-time sensing and interpretation of the environment. As the vast majority of robots are equipped with cameras, computer vision is playing an increasingly important role in this field. This paper presents the implementation and experimental results of underwater StereoFusion, an algorithm for real-time 3D dense reconstruction and camera tracking. Unlike KinectFusion, on which it is based, StereoFusion relies on a stereo camera as its main sensor. The algorithm uses the depth map obtained from the stereo camera to incrementally build a volumetric 3D model of the environment, while simultaneously using that model for camera tracking. It has been successfully tested both in a lake and in the ocean, using two different state-of-the-art underwater Remotely Operated Vehicles (ROVs). Ongoing work focuses on applying the same algorithm to acoustic sensors, and on implementing a vision-based monocular system with the same capabilities.
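The incremental volumetric fusion the abstract describes is, in KinectFusion-style systems, a truncated signed distance function (TSDF) update: every voxel is projected into the new depth map, and the truncated distance to the observed surface is blended into the volume with a running weighted average. A minimal NumPy sketch under an assumed pinhole model follows; all names and the intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative, not taken from the paper.

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, pose, voxel_size, trunc, fx, fy, cx, cy):
    """Fuse one depth map into a TSDF volume (KinectFusion-style update).

    `tsdf` and `weights` are equally shaped 3D float arrays, updated in
    place; `pose` is a 4x4 world-to-camera transform; `depth` is the
    stereo depth image in the same metric units as `voxel_size`.
    """
    nx, ny, nz = tsdf.shape
    # Voxel centers in world coordinates.
    ix, iy, iz = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    world = np.stack([ix, iy, iz], axis=-1).reshape(-1, 3) * voxel_size
    homo = np.hstack([world, np.ones((len(world), 1))])
    cam = (pose @ homo.T).T[:, :3]

    # Project voxel centers into the depth image with a pinhole model.
    z = cam[:, 2]
    valid = z > 0
    safe_z = np.where(valid, z, 1.0)        # avoid division by zero
    u = np.round(cam[:, 0] * fx / safe_z + cx).astype(int)
    v = np.round(cam[:, 1] * fy / safe_z + cy).astype(int)
    h, w = depth.shape
    valid &= (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Truncated signed distance: observed surface depth minus voxel depth.
    d_obs = np.where(valid, depth[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)], 0)
    sdf = np.clip(d_obs - z, -trunc, trunc) / trunc
    update = valid & (d_obs > 0) & ((d_obs - z) >= -trunc)

    # Running weighted average per voxel (views into the original arrays).
    flat_t, flat_w = tsdf.reshape(-1), weights.reshape(-1)
    flat_t[update] = (flat_t[update] * flat_w[update] + sdf[update]) / (flat_w[update] + 1)
    flat_w[update] += 1
    return tsdf, weights
```

The zero-crossing of the fused TSDF is the reconstructed surface; tracking then aligns each new frame against a rendering of that surface, closing the reconstruction/tracking loop the abstract mentions.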


2012 ◽ Vol 38 ◽ pp. 456-461 ◽ Author(s): S. Suganya, N.R. Raajan, M.V. Priya, A. Jenifer Philomina, D. Parthiban, ...
