3D Scene Reconstruction
Recently Published Documents


TOTAL DOCUMENTS: 164 (five years: 35)

H-INDEX: 13 (five years: 2)

Sensors ◽ 2021 ◽ Vol 22 (1) ◽ pp. 79
Author(s): Jonatán Felipe, Marta Sigut, Leopoldo Acosta

U-V disparity is a technique commonly used to detect obstacles in 3D scenes, modeling them as a set of vertical planes. In this paper, the authors describe the general lines of a method based on this technique for fully reconstructing 3D scenes, and conduct an analytical study of its performance and its sensitivity to errors in the pitch angle of the stereoscopic vision system. For a large test set of planes with different orientations, the equations of the planes calculated under a given pitch-angle error yield the deviation with respect to the ideal planes (those obtained with zero angular error); these deviations are plotted to analyze the method's qualitative and quantitative performance. The relationship between the deviation of the planes and the error in the pitch angle is observed to be linear. Two major conclusions are drawn from this study: first, the deviation between the calculated and ideal planes is always less than or equal to the pitch-angle error considered; and second, even though in some cases the deviation of a plane is zero or very small, the probability that a plane of the scene deviates from the ideal by the maximum possible amount, which equals the pitch-angle error, is very high.
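The U-V disparity representation the abstract builds on can be illustrated with a minimal sketch (not taken from the paper): from a dense disparity map, the U-disparity image histograms disparities per column, so vertical obstacles appear as horizontal segments, while the V-disparity image histograms disparities per row, so the ground plane appears as a slanted line. The function name and interface below are illustrative assumptions.

```python
import numpy as np

def uv_disparity(disparity, num_levels=64):
    """Build U- and V-disparity histograms from a dense disparity map.

    disparity: (H, W) integer array of disparity values in [0, num_levels).
    Returns (u_disp, v_disp):
      u_disp: (num_levels, W) column-wise histogram -- vertical
              obstacles show up as horizontal segments.
      v_disp: (H, num_levels) row-wise histogram -- the ground
              plane shows up as a slanted line.
    """
    h, w = disparity.shape
    u_disp = np.zeros((num_levels, w), dtype=np.int32)
    v_disp = np.zeros((h, num_levels), dtype=np.int32)
    for v in range(h):
        for u in range(w):
            d = disparity[v, u]
            if 0 <= d < num_levels:
                u_disp[d, u] += 1  # count of pixels in column u with disparity d
                v_disp[v, d] += 1  # count of pixels in row v with disparity d
    return u_disp, v_disp
```

A constant-disparity image column (a fronto-parallel obstacle) produces a single bright cell per column in `u_disp`, which is the cue the obstacle-detection step thresholds.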


2021 ◽ Vol 7 (9) ◽ pp. 167
Author(s): Martin De Pellegrini, Lorenzo Orlandi, Daniele Sevegnani, Nicola Conci

Indoor environment modeling has become a relevant topic in several application fields, including augmented, virtual, and extended reality. With the digital transformation, many industries have investigated two possibilities: generating detailed models of indoor environments that allow viewers to navigate through them, and mapping surfaces so that virtual elements can be inserted into real scenes. The scope of the paper is twofold. We first review the existing state of the art (SoA) in learning-based methods for 3D scene reconstruction based on structure from motion (SfM) that predict depth maps and camera poses from video streams. We then present an extensive evaluation of a recent SoA network, with particular attention to its capability of generalizing to new, unseen indoor environments. The evaluation uses the absolute relative error (AbsRel) of the depth-map predictions as the baseline metric.
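The AbsRel metric mentioned as the baseline is a standard depth-evaluation measure: the mean of the per-pixel absolute error normalized by the ground-truth depth. A minimal sketch (the function name and the validity mask convention are assumptions, not taken from the paper):

```python
import numpy as np

def abs_rel(pred, gt, eps=1e-8):
    """Absolute relative error between predicted and ground-truth depth.

    AbsRel = mean(|pred - gt| / gt), averaged over pixels with
    valid (positive) ground-truth depth.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    mask = gt > eps  # ignore pixels without ground-truth depth
    return float(np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask]))
```

Lower is better; because the error is normalized by depth, the metric weights near-range mistakes more heavily than far-range ones of the same absolute size.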


2021 ◽ Vol 40 (3) ◽ pp. 1-15
Author(s): Shi-Sheng Huang, Ze-Yu Ma, Tai-Jiang Mu, Hongbo Fu, Shi-Min Hu

Online 3D semantic segmentation, which aims to perform real-time 3D scene reconstruction along with semantic segmentation, is an important but challenging topic. A key challenge is to strike a balance between efficiency and segmentation accuracy. There are very few deep-learning-based solutions to this problem, since the commonly used deep representations based on volumetric grids or points do not provide an efficient 3D representation and organization structure for online segmentation. Observing that on-surface supervoxels, i.e., clusters of on-surface voxels, provide a compact representation of 3D surfaces and bring an efficient connectivity structure via supervoxel clustering, we explore a supervoxel-based deep learning solution for this task. To this end, we contribute a novel convolution operation (SVConv) that operates directly on supervoxels. SVConv can efficiently fuse the multi-view 2D features and 3D features projected onto supervoxels during online 3D reconstruction, leading to an effective supervoxel-based convolutional neural network, termed Supervoxel-CNN, that enables joint 2D-3D learning for 3D semantic prediction. With the Supervoxel-CNN, we propose a clustering-then-prediction approach to online 3D semantic segmentation. Extensive evaluations on public 3D indoor scene datasets show that our approach significantly outperforms existing online semantic segmentation systems in terms of efficiency or accuracy.
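The clustering step the abstract relies on, grouping on-surface voxels into supervoxels and pooling their features, can be sketched in a heavily simplified form. The grid-bucket clustering below is an illustrative stand-in for the paper's geometry-aware supervoxel clustering, and all names (`grid_supervoxels`, `cell`) are assumptions:

```python
import numpy as np
from collections import defaultdict

def grid_supervoxels(voxel_coords, voxel_feats, cell=4):
    """Group voxels into supervoxels by a coarse spatial grid and
    average their features -- a simplified stand-in for the paper's
    supervoxel clustering, shown only to illustrate the idea of a
    compact per-supervoxel representation.

    voxel_coords: (N, 3) integer voxel indices.
    voxel_feats:  (N, C) per-voxel feature vectors.
    Returns (sv_keys, sv_feats): one pooled feature per supervoxel.
    """
    buckets = defaultdict(list)
    for i, c in enumerate(voxel_coords):
        key = tuple(np.asarray(c) // cell)  # coarse-grid cell id
        buckets[key].append(i)
    sv_keys = sorted(buckets)
    sv_feats = np.stack(
        [np.mean(voxel_feats[buckets[k]], axis=0) for k in sv_keys]
    )
    return sv_keys, sv_feats
```

A convolution defined over such pooled supervoxel features touches far fewer elements than one over raw voxels, which is the efficiency argument the abstract makes.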


Author(s): Vladimir V. Kniaz, Vladimir A. Knyaz, Evgeny V. Ippolitov, Mikhail M. Novikov, Lev Grodzitsky, ...

2021 ◽ Vol 61 ◽ pp. 100817
Author(s): Oscar E. Perez-Cham, Cesar Puente, Carlos Soubervielle-Montalvo, Gustavo Olague, Francisco-Edgar Castillo-Barrera, ...

2021 ◽ Vol 23 ◽ pp. 257-267
Author(s): Ke Li, Yuxia Wu, Yao Xue, Xueming Qian
