Real-Time Calibration and Registration Method for Indoor Scene with Joint Depth and Color Camera

Author(s):  
Fengquan Zhang ◽  
Tingshen Lei ◽  
Jinhong Li ◽  
Xingquan Cai ◽  
Xuqiang Shao ◽  
...  

Traditional vision-based registration technologies require precisely designed markers or rich texture information captured from the video scene, and they have high computational complexity, while hardware-based registration technologies lack accuracy. Therefore, in this paper, we propose a novel registration method that takes advantage of an RGB-D camera to obtain depth information in real time: a binocular system combining a Time-of-Flight (ToF) camera and a commercial color camera is constructed to realize three-dimensional registration. First, we calibrate the binocular system to obtain the positional relationship between the two cameras. The systematic errors are fitted and corrected with a B-spline curve. To reduce outliers and random noise, an elimination algorithm and an improved bilateral filtering algorithm are proposed to optimize the depth map. To meet the real-time requirement of the system, the pipeline is further accelerated by parallel computing with CUDA. Then, a CamShift-based tracking algorithm is applied to capture the real object registered in the video stream, and the position and orientation of the object are tracked according to the correspondence between the color image and the 3D data. Finally, experiments are carried out and compared using our binocular system, and the results demonstrate the feasibility and effectiveness of our method.
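
As a rough illustration of the tracking step, the sketch below shows how a CamShift-based tracker can follow an object in the color stream using OpenCV; the camera index, initial window, and histogram thresholds are placeholder assumptions, not values from the paper.

```python
# Minimal sketch of CamShift-based object tracking on a color stream (OpenCV).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                        # color camera of the binocular rig (assumed index)
ok, frame = cap.read()
x, y, w, h = 300, 200, 100, 100                  # hypothetical initial window around the object

# Build a hue histogram of the initial region of interest.
roi = frame[y:y+h, x:x+w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the window size and orientation to the tracked object.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    # track_window (x, y, w, h) can then be mapped onto the registered depth map
    # to look up the object's 3D position, as in the paper's pipeline.
```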

2021 ◽  
Vol 9 ◽  
Author(s):  
Yunpeng Liu ◽  
Xingpeng Yan ◽  
Xinlei Liu ◽  
Xi Wang ◽  
Tao Jing ◽  
...  

In this paper, an optical field coding method for the fusion of real and virtual scenes is proposed to implement an augmented reality (AR)-based holographic stereogram. The occlusion relationship between the real and virtual scenes is analyzed, and a fusion strategy based on instance segmentation and depth determination is proposed. A sampling system for real three-dimensional (3D) scenes is built, and the foreground contour of each sampled perspective image is extracted with the Mask R-CNN instance segmentation algorithm. The virtual 3D scene is rendered by computer to obtain the virtual sampled images together with their depth maps. According to the occlusion relation of the fused scenes, a pseudo-depth map of the real scene is derived, and the fusion coding of the real and virtual 3D scene information is implemented by comparing the depth information. The optical experiment indicates that the AR-based holographic stereogram fabricated with our coding method can reconstruct the fused real and virtual 3D scenes with correct occlusion and depth cues under full parallax.
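
A minimal sketch of the depth-determination idea behind the fusion coding, assuming per-pixel arrays and the convention that a smaller depth value means closer to the viewpoint (function and variable names are illustrative, not from the paper):

```python
# Per-pixel real/virtual fusion by depth comparison.
import numpy as np

def fuse_views(real_rgb, real_mask, real_pseudo_depth, virtual_rgb, virtual_depth):
    """Select, per pixel, whichever scene is closer to the viewpoint.

    real_mask         -- boolean foreground mask from instance segmentation
    real_pseudo_depth -- pseudo-depth assigned to the real foreground
    virtual_depth     -- depth map rendered with the virtual scene
    """
    # A real pixel wins only where it exists (mask) and lies in front of the virtual scene.
    real_wins = real_mask & (real_pseudo_depth < virtual_depth)
    fused_rgb = np.where(real_wins[..., None], real_rgb, virtual_rgb)
    fused_depth = np.where(real_wins, real_pseudo_depth, virtual_depth)
    return fused_rgb, fused_depth
```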


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 451 ◽  
Author(s):  
Limin Guan ◽  
Yi Chen ◽  
Guiping Wang ◽  
Xu Lei

Vehicle detection is essential for driverless systems. However, the current single-sensor detection mode is no longer sufficient in complex and changing traffic environments. Therefore, this paper combines a camera and light detection and ranging (LiDAR) to build a vehicle-detection framework characterized by multi-adaptability, high real-time capacity, and robustness. First, a multi-adaptive, high-precision depth-completion method was proposed to convert the sparse 2D LiDAR depth map into a dense depth map, so that the two sensors are aligned with each other at the data level. Then, the You Only Look Once Version 3 (YOLOv3) real-time object detection model was used to detect vehicles in both the color image and the dense depth map. Finally, a decision-level fusion method based on bounding-box fusion and improved Dempster–Shafer (D–S) evidence theory was proposed to merge the two detection results and obtain the final vehicle position and distance information, which improves not only the detection accuracy but also the robustness of the whole framework. We evaluated our method on the KITTI dataset and the Waymo Open Dataset, and the results show the effectiveness of the proposed depth-completion method and multi-sensor fusion strategy.
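
For the decision-level fusion step, the sketch below shows the classical Dempster rule of combination for two detection sources; the paper uses an improved D–S formulation, so this is only the baseline idea, with an illustrative two-hypothesis frame {vehicle, not_vehicle} and made-up mass values.

```python
# Classical Dempster's rule of combination for two basic probability assignments.
def dempster_combine(m1, m2):
    """Combine two mass functions defined over the same frame of discernment.

    m1, m2 -- dicts mapping frozenset hypotheses to mass, e.g.
              {frozenset({'vehicle'}): 0.7, frozenset({'vehicle', 'not_vehicle'}): 0.3}
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb              # mass assigned to contradictory hypotheses
    # Normalize by the non-conflicting mass.
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Illustrative masses from the camera branch (RGB image) and LiDAR branch (dense depth map).
camera_bpa = {frozenset({'vehicle'}): 0.8, frozenset({'vehicle', 'not_vehicle'}): 0.2}
lidar_bpa  = {frozenset({'vehicle'}): 0.6, frozenset({'vehicle', 'not_vehicle'}): 0.4}
print(dempster_combine(camera_bpa, lidar_bpa))   # combined belief in 'vehicle' rises to 0.92
```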


2011 ◽  
Vol 18 (4) ◽  
pp. 569-574 ◽  
Author(s):  
Masato Hoshino ◽  
Kentaro Uesugi ◽  
James Pearson ◽  
Takashi Sonobe ◽  
Mikiyasu Shirai ◽  
...  

An X-ray stereo imaging system with synchrotron radiation was developed at BL20B2, SPring-8. A portion of a wide X-ray beam was Bragg-reflected by a silicon crystal to produce an X-ray beam which intersects with the direct X-ray beam. Samples were placed at the intersection point of the two beam paths. X-ray stereo images were recorded simultaneously by a detector with a large field of view placed close to the sample. A three-dimensional wire-frame model of a sample was created from the depth information that was obtained from the lateral positions in the stereo image. X-ray stereo angiography of a mouse femoral region was performed as a demonstration of real-time stereo imaging. Three-dimensional arrangements of the femur and blood vessels were obtained.
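
A hedged sketch of how depth might be recovered from the lateral positions in such a crossed-beam stereo image, assuming the direct and Bragg-reflected beams intersect at a known angle (the geometry and variable names are assumptions, not taken from the paper):

```python
# Depth from the lateral shift between the direct-beam and reflected-beam views.
import numpy as np

def depth_from_lateral_shift(x_direct, x_reflected, crossing_angle_deg):
    """Estimate depth along the direct beam from the horizontal disparity.

    x_direct, x_reflected -- lateral positions (e.g. mm) of the same feature in the two views
    crossing_angle_deg    -- angle between the two X-ray beam paths
    """
    disparity = x_reflected - x_direct
    # Small lateral shifts map to depth through the tangent of the crossing angle.
    return disparity / np.tan(np.radians(crossing_angle_deg))
```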


2012 ◽  
Vol 588-589 ◽  
pp. 1320-1323
Author(s):  
Li Xia Wang

Taking virtual reality technology as its core, this paper establishes a housing virtual reality roaming display system. After a detailed analysis of the system architecture, we focus on how to build the terrain database and the three-dimensional scenery database with MultiGen Creator, and on how to call OpenGVS through MSVC to perform real-time scene control and realize complex special effects.


2012 ◽  
Vol 463-464 ◽  
pp. 1147-1150 ◽  
Author(s):  
Constantin Catalin Moldovan ◽  
Ionel Staretu

Object tracking in three-dimensional environments is an area of research that has attracted a lot of attention lately for its potential in the interaction between man and machine. Real-time hand gesture detection and recognition from a video stream plays a significant role in human-computer interaction, and for current digital image processing applications it remains a difficult task. This paper presents a new method for human hand control in virtual environments that eliminates the need for an external device currently used for hand motion capture and digitization. A first step in this direction is the detection of the human hand, followed by the detection of gestures and their use to control a virtual hand in a virtual environment.
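
The abstract does not specify the detection algorithm, but a common first step for markerless hand detection from a video stream is skin-color segmentation followed by contour extraction, sketched below with OpenCV; the HSV thresholds are illustrative and would need tuning.

```python
# Hand detection sketch: skin-color segmentation in HSV + largest-contour extraction.
import cv2
import numpy as np

def detect_hand(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough skin-tone range; depends on lighting and camera.
    mask = cv2.inRange(hsv, np.array((0, 30, 60)), np.array((20, 150, 255)))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)     # assume the largest skin blob is the hand
    hull = cv2.convexHull(hand)                   # convexity defects later hint at finger gestures
    return cv2.boundingRect(hand), hull
```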


2008 ◽  
Vol 62 (10) ◽  
pp. 1084-1087 ◽  
Author(s):  
Shoko Odake ◽  
Satoshi Fukura ◽  
Hiroyuki Kagi

A three-dimensional (3D) Raman mapping system with a real-time calibration function was developed for detecting stress distributions in solid materials from subtle frequency shifts in Raman spectra. An atomic emission line of neon at 918.3 cm⁻¹ (with 514.5 nm excitation) was used as the wavenumber standard. An emission spectrum of neon and a Raman spectrum from the sample were introduced into a single polychromator using a bifurcated optical fiber, and the two spectra were recorded simultaneously on a charge-coupled device (CCD) detector in double-track mode. Energy deviation induced by fluctuations of the laboratory temperature, etc., was removed effectively using the neon emission line, and high stability during long measurements was achieved. By applying curve fitting, the positions of the Raman line were determined with a precision of about 0.05 cm⁻¹. The system was applied to measurements of residual pressure around mineral inclusions in a natural diamond, and 3D stress mapping was achieved.
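
A minimal sketch of the real-time calibration idea, assuming a Lorentzian line shape and simple drift subtraction (the model and variable names are assumptions; the paper's curve fitting may differ):

```python
# Fit the neon reference line, measure its deviation from 918.3 cm^-1, and shift the Raman axis.
import numpy as np
from scipy.optimize import curve_fit

NE_REFERENCE = 918.3  # cm^-1, neon emission line used as the wavenumber standard

def lorentzian(x, x0, gamma, amp, offset):
    return amp * gamma**2 / ((x - x0)**2 + gamma**2) + offset

def calibrate(raman_shift, neon_spectrum, raman_spectrum):
    """Correct the Raman axis for spectrometer drift using the neon line."""
    p0 = [raman_shift[np.argmax(neon_spectrum)], 1.0, neon_spectrum.max(), 0.0]
    popt, _ = curve_fit(lorentzian, raman_shift, neon_spectrum, p0=p0)
    drift = popt[0] - NE_REFERENCE            # instrumental drift in cm^-1
    return raman_shift - drift, raman_spectrum
```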

