Depth from a Motion Algorithm and a Hardware Architecture for Smart Cameras

Sensors ◽  
2018 ◽  
Vol 19 (1) ◽  
pp. 53 ◽  
Author(s):  
Abiel Aguilar-González ◽  
Miguel Arias-Estrada ◽  
François Berry

Applications such as autonomous navigation, robot vision, and autonomous flying require depth map information of a scene. Depth can be estimated with a single moving camera (depth from motion), but traditional depth-from-motion algorithms have low processing speeds and high hardware requirements that limit their embedded use. In this work, we propose a hardware architecture for depth from motion that consists of a flow/depth transformation and a new optical flow algorithm. Our optical flow formulation is an extension of the stereo matching problem: a pixel-parallel/window-parallel approach in which a correlation function based on the sum of absolute differences (SAD) computes the optical flow. Further, to improve the SAD, we propose the curl of the intensity gradient as a preprocessing step. Experimental results demonstrate that higher accuracy (90%) can be reached compared with previous Field Programmable Gate Array (FPGA)-based optical flow algorithms. For depth estimation, our algorithm delivers dense maps with motion and depth information for all image pixels, at a processing speed up to 128 times faster than that of previous work, making high performance possible in embedded applications.
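The core correlation step can be sketched compactly. Below is a minimal, unoptimized Python/NumPy rendition of SAD block matching for optical flow; the block and search sizes are illustrative assumptions, the hardware evaluates the windows in parallel rather than in loops, and the paper's curl-of-the-intensity-gradient preprocessing is omitted.

```python
import numpy as np

def sad_optical_flow(prev, curr, block=7, search=4):
    """Per-pixel flow by SAD block matching (sketch; the FPGA design
    computes this pixel-parallel/window-parallel)."""
    half = block // 2
    H, W = prev.shape
    pad = half + search
    prev_p = np.pad(prev.astype(np.int32), pad, mode='edge')
    curr_p = np.pad(curr.astype(np.int32), pad, mode='edge')
    flow = np.zeros((H, W, 2), dtype=np.int32)  # (dx, dy) per pixel
    for y in range(H):
        for x in range(W):
            ref = prev_p[y+search:y+search+block, x+search:x+search+block]
            best_cost, best_uv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = curr_p[y+search+dy:y+search+dy+block,
                                  x+search+dx:x+search+dx+block]
                    cost = np.abs(ref - cand).sum()  # sum of absolute differences
                    if best_cost is None or cost < best_cost:
                        best_cost, best_uv = cost, (dx, dy)
            flow[y, x] = best_uv
    return flow
```

The flow/depth transformation then converts each pixel's flow into a depth value using the known camera motion; for a translating camera, depth is roughly inversely proportional to the flow magnitude.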

2016 ◽  
Vol 28 (4) ◽  
pp. 523-532 ◽  
Author(s):  
Akihiro Obara ◽  
Xu Yang ◽  
Hiromasa Oku

[Figure: Concept of the SLF generated by two projectors]
Triangulation is commonly used to restore 3D scenes, but its frame rate of less than 30 fps, due to time-consuming stereo matching, is an obstacle for applications that require results to be fed back in real time. The structured light field (SLF) our group proposed previously reduces the amount of calculation in 3D restoration, realizing high-speed measurement. Specifically, the SLF estimates depth by projecting information on distance directly onto a target. The SLF synthesized as previously reported, however, makes it difficult to extract image features for depth estimation. In this paper, we propose synthesizing the SLF using two projectors in a specific layout. We derive the proposed SLF's basic properties from an optical model, evaluate its performance using a prototype we developed, and apply it to high-speed depth estimation of a randomly moving target at 1000 Hz. We also demonstrate high-speed tracking of the target based on high-speed depth information feedback.
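For context, the per-pixel stereo matching that limits conventional triangulation feeds the standard depth-from-disparity relation; a minimal sketch with generic symbols (not taken from the paper):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classical triangulation: Z = f * B / d, where d is the disparity
    found by stereo matching. The SLF approach avoids computing d per
    pixel, which is the expensive step."""
    return focal_px * baseline_m / disparity_px

# e.g., a 20 px disparity with a 700 px focal length and 10 cm baseline
print(depth_from_disparity(20, 700, 0.1))  # -> 3.5 (meters)
```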


Integration ◽  
2015 ◽  
Vol 51 ◽  
pp. 81-91 ◽  
Author(s):  
Young-Ho Seo ◽  
Ji-Sang Yoo ◽  
Dong-Wook Kim

2013 ◽  
Vol 2013 ◽  
pp. 1-12
Author(s):  
Fang-Hsuan Cheng ◽  
Tze-Yun Sung

A method for estimating the depth information of a general monocular image sequence and then creating a 3D stereo video is proposed. Foreground and background can be distinguished without additional information, and foreground pixels are then shifted to create the binocular image. The proposed depth estimation method is based on a coarse-to-fine strategy. By applying the CID method in the spatial domain, the distance of each region can be estimated from its sharpness, contrast, and color, and a coarse depth map of the image is generated. An optical flow method based on temporal information is then used to search and compare block motion between the previous and current frames, and the distance of each block is estimated from the amount of its motion. Finally, the static and motion depth information is integrated to create the fine depth map. By shifting foreground pixels according to the depth information, a binocular image pair is created, and a 3D stereo effect can be perceived without glasses on an autostereoscopic 3D display.
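The final step, shifting foreground pixels by their depth to synthesize the second view, can be sketched as follows. This is a minimal Python/NumPy version of depth-image-based rendering; the shift scaling, the assumption that larger depth values mean nearer pixels, and the row-wise hole filling are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def synthesize_second_view(left, depth, max_shift=16):
    """Shift pixels horizontally in proportion to estimated depth to form
    the other half of a binocular pair (assumes larger depth value =
    nearer, so the foreground shifts more). Disocclusion holes are
    filled from the nearest valid pixel on the same row."""
    H, W = depth.shape
    right = np.zeros_like(left)
    filled = np.zeros((H, W), dtype=bool)
    shift = (depth.astype(float) / (depth.max() + 1e-6) * max_shift).astype(int)
    for y in range(H):
        for x in range(W):
            xs = x - shift[y, x]
            if 0 <= xs < W:
                right[y, xs] = left[y, x]
                filled[y, xs] = True
        for x in range(1, W):          # simple hole filling
            if not filled[y, x]:
                right[y, x] = right[y, x - 1]
    return right
```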


Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 479
Author(s):  
Join Kang ◽  
Seong-Won Lee

Finding depth information with stereo matching using a deep learning algorithm on embedded systems has recently gained significant attention owing to emerging high-performance mobile graphics processing units (GPUs). Several researchers have proposed feasible small-scale CNNs that can run on a local GPU, but these still suffer from low accuracy and/or high computational requirements. In the method proposed in this study, pooling layers with padding and an asymmetric convolution filter are used to reduce computational costs while maintaining disparity accuracy. The patch size and number of layers are adjusted by analyzing the feature and activation maps. The proposed method forms a small-scale network suitable for a vision system at the edge and still exhibits high disparity accuracy and low computational load compared to existing stereo-matching networks.
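The saving from the asymmetric filter can be illustrated in isolation. The toy NumPy sketch below shows why replacing a 3×3 convolution with a 1×3 pass followed by a 3×1 pass cuts the per-pixel multiply count from 9 to 6 while reproducing any rank-1 3×3 kernel exactly; the actual network's filters and layout are not reproduced here.

```python
import numpy as np

def conv2d(x, k):
    """Naive 'valid' 2-D cross-correlation, enough to count the cost."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i+kh, j:j+kw] * k).sum()
    return out

v = np.array([[1.0], [2.0], [1.0]])   # 3x1 column filter
h = np.array([[1.0, 0.0, -1.0]])      # 1x3 row filter
x = np.random.rand(8, 8)

full = conv2d(x, v @ h)               # one 3x3 pass: 9 MACs per pixel
split = conv2d(conv2d(x, h), v)       # 1x3 then 3x1: 3 + 3 = 6 MACs per pixel
assert np.allclose(full, split)
```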


Author(s):  
Yong Deng ◽  
Jimin Xiao ◽  
Steven Zhiying Zhou ◽  
Jiashi Feng

Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 54
Author(s):  
Peng Liu ◽  
Zonghua Zhang ◽  
Zhaozong Meng ◽  
Nan Gao

Depth estimation is a crucial component in many 3D vision applications. Monocular depth estimation is gaining increasing interest due to its flexible use and extremely low system requirements, but its inherently ill-posed and ambiguous nature still causes unsatisfactory results. This paper proposes a new deep convolutional neural network for monocular depth estimation. The network applies joint attention feature distillation and a wavelet-based loss function to recover the depth information of a scene. Compared with previous methods, two improvements are made. First, we combine feature distillation and joint attention mechanisms to boost the discriminative power of feature modulation: the network extracts hierarchical features using a progressive feature distillation and refinement strategy and aggregates them with a joint attention operation. Second, we adopt a wavelet-based loss function for network training, which makes the loss more effective at capturing structural detail. Experimental results on challenging indoor and outdoor benchmark datasets verify the proposed method's superiority over current state-of-the-art methods.
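A minimal sketch of what a wavelet-based depth loss can look like is given below, using Haar decompositions and an L1 penalty per sub-band; the paper's exact wavelet, number of levels, and band weights are not specified here, so these are assumptions.

```python
import numpy as np

def haar_dwt(x):
    """One level of the 2-D Haar transform (even-sized input assumed)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical detail
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,   # LL (approximation)
            (a[:, 0::2] - a[:, 1::2]) / 2.0,   # LH
            (d[:, 0::2] + d[:, 1::2]) / 2.0,   # HL
            (d[:, 0::2] - d[:, 1::2]) / 2.0)   # HH

def wavelet_l1_loss(pred, gt, levels=2, w_detail=1.0):
    """L1 over wavelet sub-bands of predicted vs. ground-truth depth;
    the detail bands (LH, HL, HH) emphasize edges and fine structure."""
    loss = 0.0
    for _ in range(levels):
        pll, plh, phl, phh = haar_dwt(pred)
        gll, glh, ghl, ghh = haar_dwt(gt)
        loss += w_detail * (np.abs(plh - glh).mean()
                            + np.abs(phl - ghl).mean()
                            + np.abs(phh - ghh).mean())
        pred, gt = pll, gll
    return loss + np.abs(pred - gt).mean()  # coarsest approximation band

# usage: dimensions must be divisible by 2**levels
p, g = np.random.rand(64, 64), np.random.rand(64, 64)
print(wavelet_l1_loss(p, g))
```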


2021 ◽  
Vol 11 (12) ◽  
pp. 5383
Author(s):  
Huachen Gao ◽  
Xiaoyu Liu ◽  
Meixia Qu ◽  
Shijie Huang

In recent studies, self-supervised learning methods have been explored for monocular depth estimation; they minimize an image reconstruction loss instead of using depth as the supervised signal. However, existing methods usually assume that corresponding points in different views have the same color, which leads to unreliable supervision signals and ultimately corrupts the reconstruction loss during training. Meanwhile, in low-texture regions, the disparity of pixels cannot be predicted correctly because few features can be extracted there. To address these issues, we propose a network, PDANet, that integrates perceptual consistency and data augmentation consistency, which are more reliable unsupervised signals, into a regular unsupervised depth estimation model. Specifically, we apply a data augmentation mechanism that minimizes the difference between the disparity maps generated from the original image and from the augmented image, which makes the prediction more robust to color fluctuation. At the same time, we aggregate features from different layers of a pre-trained VGG16 network to capture higher-level perceptual differences between the input image and the generated one. Ablation studies demonstrate the effectiveness of each component, and PDANet achieves high-quality depth estimation on the KITTI benchmark, improving the absolute relative error of the state-of-the-art method from 0.114 to 0.084.
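The data augmentation consistency term can be sketched independently of the network. Below is a minimal Python/NumPy version using a horizontal flip as the (invertible) augmentation; `predict` is a hypothetical stand-in for the depth network, and the perceptual term, which needs a pre-trained VGG16, is omitted.

```python
import numpy as np

def flip_aug(x):
    """Horizontal flip: a simple, invertible augmentation (works on
    H x W disparity maps and H x W x C images alike)."""
    return x[:, ::-1]

def augmentation_consistency_loss(predict, image):
    """Penalize disagreement between the disparity predicted from the
    augmented image and the augmented disparity of the original image."""
    d_orig = predict(image)
    d_aug = predict(flip_aug(image))
    return np.abs(d_aug - flip_aug(d_orig)).mean()

# usage with a dummy predictor (hypothetical stand-in for the network)
dummy_net = lambda im: im.mean(axis=-1)   # "disparity" = channel mean
img = np.random.rand(4, 6, 3)
print(augmentation_consistency_loss(dummy_net, img))  # 0.0 for this dummy
```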


2021 ◽  
Author(s):  
Shahrokh Heidari ◽  
Mitchell Rogers ◽  
Patrice Delmas
