disparity range
Recently Published Documents

TOTAL DOCUMENTS: 22 (five years: 4)
H-INDEX: 6 (five years: 0)

Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1045
Author(s):  
Jaecheol Jeong ◽  
Suyeon Jeon ◽  
Yong Seok Heo

Recent stereo matching networks adopt 4D cost volumes and 3D convolutions for processing those volumes. Although these methods show good performance in terms of accuracy, they have an inherent disadvantage in that they require a great deal of computing resources and memory. These requirements limit their application in mobile environments, which are subject to inherent computing hardware constraints. Both accuracy and consumption of computing resources are important, and improving both at the same time is a non-trivial task. To deal with this problem, we propose a simple yet efficient network, called the Sequential Feature Fusion Network (SFFNet), which sequentially generates and processes the cost volume using only 2D convolutions. The main building block of our network is the Sequential Feature Fusion (SFF) module, which generates 3D cost volumes covering a part of the disparity range by shifting and concatenating the target features, and processes the cost volume using 2D convolutions. A series of SFF modules in our SFFNet is designed to gradually cover the full disparity range. Our method avoids heavy computation and allows for efficient generation of an accurate final disparity map. Various experiments show that our method has an advantage in terms of accuracy versus efficiency compared to other networks.
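The shift-and-concatenate construction described in the abstract can be sketched in plain Python. This is a minimal illustration, not the paper's code: the function names and the toy one-row, one-channel feature maps are assumptions, and real implementations operate on multi-channel tensors.

```python
def shift_right_features(feat, d):
    """Shift a 2D feature map (list of rows) right by d pixels, zero-padding the left edge."""
    return [[0.0] * d + row[:len(row) - d] for row in feat]

def partial_cost_volume(left, right, d_start, d_end):
    """For each disparity hypothesis in [d_start, d_end), shift the target (right)
    features and concatenate them channel-wise with the reference (left) features,
    yielding a cost volume slice that ordinary 2D convolutions can process."""
    volume = []
    for d in range(d_start, d_end):
        shifted = shift_right_features(right, d)
        volume.append((d, [l_row + s_row for l_row, s_row in zip(left, shifted)]))
    return volume

left = [[1.0, 2.0, 3.0]]
right = [[1.0, 2.0, 3.0]]
vol = partial_cost_volume(left, right, 0, 2)
# vol[0] pairs disparity 0 with the unshifted concatenation;
# vol[1] uses the right features shifted right by one pixel.
```

A sequence of such partial volumes, each covering a different disparity sub-range, would together span the full range, which is the gradual-coverage idea the abstract describes.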


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1430
Author(s):  
Xiaogang Jia ◽  
Wei Chen ◽  
Zhengfa Liang ◽  
Xin Luo ◽  
Mingfei Wu ◽  
...  

Stereo matching is an important research field in computer vision. Due to the dimensionality of cost aggregation, current neural-network-based stereo methods find it difficult to trade off speed and accuracy. To this end, we integrate fast 2D stereo methods with accurate 3D networks to improve performance and reduce running time. We leverage a 2D encoder-decoder network to generate a rough disparity map and construct a disparity range to guide the 3D aggregation network, which significantly improves accuracy and reduces computational cost. We use a stacked hourglass structure to refine the disparity from coarse to fine. We evaluated our method on three public datasets. According to the official KITTI website results, our network generates an accurate result in 80 ms on a modern GPU. Compared to other 2D stereo networks (AANet, DeepPruner, FADNet, etc.), our network achieves a large improvement in accuracy. Meanwhile, it is significantly faster than other 3D stereo networks (5× faster than PSMNet, 7.5× than CSN, and 22.5× than GANet), demonstrating the effectiveness of our method.
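The range-guidance idea in this abstract, building a narrow per-pixel search window around the rough 2D estimate before the 3D aggregation stage, might look like the following sketch. The function name and the fixed radius are illustrative assumptions, not taken from the paper.

```python
def guided_range(rough_disp, radius):
    """For each pixel of a rough disparity map (nested lists), return the
    (low, high) disparity window that a 3D aggregation stage would search,
    clamped at zero since disparities here are non-negative."""
    return [[(max(0, d - radius), d + radius) for d in row]
            for row in rough_disp]

windows = guided_range([[5, 1, 12]], 2)
# each pixel now carries a narrow window instead of the full disparity range,
# which is what shrinks the 3D cost volume and the running time
```

Restricting the 3D network to these windows rather than the full range is the stated source of the speedup over full-range 3D methods such as PSMNet.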


2020 ◽  
Vol 12 (24) ◽  
pp. 4025
Author(s):  
Rongshu Tao ◽  
Yuming Xiang ◽  
Hongjian You

As an essential step in 3D reconstruction, stereo matching still faces significant challenges due to the high resolution and complex structures of remote sensing images. Especially in areas occluded by tall buildings and in textureless areas such as water and woods, precise disparity estimation is a difficult but important task. In this paper, we develop a novel edge-sense bidirectional pyramid stereo matching network to solve these problems. The cost volume is constructed from negative to positive disparities, since the disparity range in remote sensing images varies greatly and traditional deep learning networks only work well for positive disparities. Then, occlusion-aware maps based on the forward-backward consistency assumption are applied to reduce the influence of occluded areas. Moreover, we design an edge-sense smoothness loss to improve performance in textureless areas while maintaining the main structure. The proposed network is compared with two baselines, DenseMapNet and PSMNet. The experimental results show that our method outperforms both in terms of average endpoint error (EPE) and the fraction of erroneous pixels (D1), and the improvements in occluded and textureless areas are significant.
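The forward-backward consistency check behind the occlusion-aware maps can be sketched as follows. All names and the threshold value are illustrative assumptions; the paper applies this idea inside a learned network rather than as a standalone post-process.

```python
def occlusion_mask(disp_left, disp_right, thresh=1.0):
    """Forward-backward consistency: a left pixel x with disparity d maps to
    right pixel x - d; if the right image's disparity there disagrees with d
    by more than thresh (or the pixel maps outside the image), flag it as
    occluded (0), otherwise consistent (1)."""
    mask = []
    for row_l, row_r in zip(disp_left, disp_right):
        width = len(row_l)
        row_mask = []
        for x, d in enumerate(row_l):
            xr = int(round(x - d))
            if 0 <= xr < width and abs(row_r[xr] - d) <= thresh:
                row_mask.append(1)  # consistent match
            else:
                row_mask.append(0)  # occluded or inconsistent
        mask.append(row_mask)
    return mask

mask = occlusion_mask([[1.0, 1.0, 1.0]], [[1.0, 1.0, 1.0]])
# the left-most pixel maps outside the right image and is flagged occluded
```

Down-weighting the pixels flagged 0 is one plausible way such a map could "reduce the influence of occluded areas" during training.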


2019 ◽  
Author(s):  
Robert F Hess ◽  
Rebecca Dillon ◽  
Rifeng Ding ◽  
Jiawei Zhou

Abstract

Significance statement: For applied applications such as occupational screening, clinical tests should assess sensitivity to the sign as well as the magnitude of disparity.

Purpose: To determine why the high incidence of stereo anomaly found using laboratory tests with polarity-based increment judgements (i.e., depth sign) is not reflected in clinical measurements that involve single-polarity increment judgements (i.e., depth magnitude).

Methods: An iPod-based measurement was used that involved the detection of an oriented shape defined by a single-polarity depth increment within a random-dot display. A staircase procedure was used to gather sufficient trials to derive a meaningful measure of variance for the measurement of stereopsis over a large disparity range. Forty-five adults with normal binocular vision (20-65 years old) and normal or corrected-to-normal (0 logMAR or better) monocular vision participated in this study.

Results: Observers' stereo acuities ranged between 10 and 100 arc seconds and were normally distributed on a log scale (p = 0.90, two-tailed Shapiro-Wilk test). The present results using a single-polarity depth increment task (i.e., depth magnitude) show a distribution similar to those obtained with a comparable task using the Randot preschool stereo test on individuals aged 19-35, using either the 4-book test (n = 33) or the 3-book test (n = 40), but very different results when the iPod test involved a polarity-based increment judgement (i.e., depth sign).

Conclusions: Present clinical stereo tests are based on magnitude judgements and are unable to detect the high percentage of stereo-anomalous individuals in the normal population revealed using depth-sign judgements.


2018 ◽  
Vol 38 (8) ◽  
pp. 0811001
Author(s):  
胡佳洁 Hu Jiajie ◽  
李素梅 Li Sumei ◽  
常永莉 Chang Yongli ◽  
侯春萍 Hou Chunping

2016 ◽  
Author(s):  
Xiongbiao Luo ◽  
Uditha L. Jayarathne ◽  
A. Jonathan McLeod ◽  
Stephen E. Pautler ◽  
Christopher M. Schlacta ◽  
...  
