TSFE-Net: Two-Stream Feature Extraction Networks for Active Stereo Matching

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 33954-33962
Author(s):  
Haojie Zeng ◽  
Bin Wang ◽  
Xiaoping Zhou ◽  
Xiaojing Sun ◽  
Longxiang Huang ◽  
...  


2013 ◽  
Vol 718-720 ◽  
pp. 1108-1112
Author(s):  
Jian Li ◽  
Cheng Yan Zhang ◽  
Xue Li Xu ◽  
Hai Feng Chen

A body-size measurement method based on checkerboard matching is proposed. First, calibrated cameras acquire two images of the human body after a checkerboard pattern is projected onto it with a projector. Then, the disparity between the two images is obtained by feature extraction and stereo matching. Finally, the 3D coordinates of the body surface are calculated according to the principle of binocular vision, completing the acquisition of the body size. The results show that the measurement error is within ±4%. Compared with traditional methods, this approach measures automatically and improves precision, while offering lower cost and simpler operation than other non-contact measurement techniques, and its accuracy is sufficient for general practical application.
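As a minimal sketch of the binocular-vision step described above (not the authors' implementation), the snippet below recovers the 3D coordinates of a matched checkerboard corner from its disparity. The focal length, baseline, and principal point are hypothetical calibration values chosen only for illustration.

```python
# Sketch of binocular triangulation from disparity (assumed calibration values).
import numpy as np

def triangulate(u, v, disparity, f=1200.0, baseline=0.12, cx=640.0, cy=360.0):
    """Recover 3D coordinates (metres) of a matched point from its pixel
    position (u, v) in the left image and its disparity in pixels."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a valid match")
    z = f * baseline / disparity          # standard stereo relation Z = f * B / d
    x = (u - cx) * z / f                  # back-project through the pinhole model
    y = (v - cy) * z / f
    return np.array([x, y, z])

# Example: a checkerboard corner matched with a 48-pixel disparity.
point = triangulate(u=700.0, v=400.0, disparity=48.0)
# Distances between such reconstructed points yield the body dimensions.
```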


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1881
Author(s):  
Yuhui Chang ◽  
Jiangtao Xu ◽  
Zhiyuan Gao

To improve the accuracy of stereo matching, the multi-scale dense attention network (MDA-Net) is proposed. The network introduces two novel modules in the feature extraction stage to better exploit context information: a dual-path upsampling (DU) block and an attention-guided context-aware pyramid feature extraction (ACPFE) block. The DU block fuses feature maps of different scales; it introduces sub-pixel convolution to compensate for the information loss caused by traditional interpolation-based upsampling. The ACPFE block extracts multi-scale context information: pyramid atrous convolution is adopted to exploit multi-scale features, and channel attention is used to fuse them. The proposed network has been evaluated on several benchmark datasets. The three-pixel error evaluated over all ground-truth pixels is 2.10% on the KITTI 2015 dataset. The experimental results show that MDA-Net achieves state-of-the-art accuracy on the KITTI 2012 and 2015 datasets.
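The following PyTorch sketch illustrates the sub-pixel convolution idea behind a DU-style upsampling block: learn r²·C output channels and rearrange them spatially instead of interpolating. Channel counts, the 2× scale factor, and the concatenation-based fusion are illustrative assumptions, not the published MDA-Net configuration.

```python
# Hedged sketch of sub-pixel (PixelShuffle) upsampling for feature fusion.
import torch
import torch.nn as nn

class SubPixelUpsample(nn.Module):
    """Upsample a feature map by predicting scale^2 * C channels and
    rearranging them spatially, rather than interpolating."""
    def __init__(self, in_channels: int, out_channels: int, scale: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels * scale ** 2,
                              kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.conv(x))

# Fuse a coarse feature map with a finer one, as a DU-style block might:
coarse = torch.randn(1, 64, 32, 64)      # low-resolution features
fine = torch.randn(1, 32, 64, 128)       # higher-resolution features
up = SubPixelUpsample(64, 32)(coarse)    # -> (1, 32, 64, 128)
fused = torch.cat([up, fine], dim=1)     # concatenate before further convolution
```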


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0251657
Author(s):  
Zedong Huang ◽  
Jinan Gu ◽  
Jing Li ◽  
Xuefei Yu

Deep learning based on convolutional neural networks (CNNs) has been successfully applied to stereo matching, with large gains in speed and accuracy over traditional methods. However, existing CNN-based stereo matching frameworks often face two problems. First, they have many parameters, which makes the matching runtime too long. Second, disparity estimation is inadequate in regions where reflections, repeated textures, and fine structures lead to ill-posed matching. Through a lightweight improvement of the PSMNet (Pyramid Stereo Matching Network) model, matching in ill-conditioned areas such as repeated-texture and weak-texture regions is improved. In the feature extraction part, ResNeXt is introduced to learn unary feature extraction, and an ASPP (Atrous Spatial Pyramid Pooling) module is trained to extract multiscale spatial feature information. A feature fusion module is designed to effectively fuse feature information of different scales and construct the matching cost volume. The improved 3D CNN uses a stacked encoder-decoder structure to further regularize the cost volume and obtain the correspondence between feature points under different disparity hypotheses. Finally, the disparity map is obtained by regression. We evaluate our method on the Scene Flow, KITTI 2012, and KITTI 2015 stereo datasets. The experiments show that the proposed stereo matching network achieves comparable prediction accuracy and a much faster running speed than PSMNet.
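The final regression step mentioned above is, in PSMNet-style networks, commonly a soft-argmin over the regularized cost volume. The sketch below shows that step only; tensor shapes and the 192-disparity range are illustrative assumptions rather than the authors' exact settings.

```python
# Hedged sketch of soft-argmin disparity regression over a cost volume.
import torch
import torch.nn.functional as F

def disparity_regression(cost_volume: torch.Tensor, max_disp: int) -> torch.Tensor:
    """cost_volume: (N, max_disp, H, W) matching costs after the 3D CNN.
    Returns a sub-pixel disparity map of shape (N, H, W)."""
    prob = F.softmax(-cost_volume, dim=1)            # lower cost -> higher probability
    disp_values = torch.arange(max_disp, dtype=prob.dtype,
                               device=prob.device).view(1, max_disp, 1, 1)
    return torch.sum(prob * disp_values, dim=1)      # expectation over disparity hypotheses

# Example with a random cost volume:
cost = torch.randn(1, 192, 64, 128)
disparity = disparity_regression(cost, max_disp=192)   # -> (1, 64, 128)
```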


2021 ◽  
Author(s):  
Aixin Chong ◽  
Hui Yin ◽  
Yanting Liu ◽  
Jin Wan ◽  
Zhihao Liu ◽  
...  
