ENHANCING GEOMETRIC EDGE DETAILS IN MVS RECONSTRUCTION

Author(s):  
E. K. Stathopoulou ◽  
S. Rigon ◽  
R. Battisti ◽  
F. Remondino

Abstract. Mesh models generated by multi-view stereo (MVS) algorithms often fail to adequately represent the sharp, natural edge details of the scene. The harsh depth discontinuities of edge regions remain a challenging case for dense reconstruction, while vertex displacement during mesh refinement frequently leads to smoothed edges that do not coincide with the fine details of the scene. Meanwhile, 3D edges have long been used for scene representation, particularly of man-made built environments, which are dominated by regular planar and linear structures. Indeed, 3D edge detection and matching are commonly exploited to constrain camera pose estimation, to generate an abstract representation of the most salient parts of the scene, or to support mesh reconstruction. In this work, we jointly use 3D edge extraction and MVS mesh generation to promote edge detail preservation in the final result. Salient 3D edges of the scene are reconstructed with state-of-the-art algorithms and integrated into the dense point cloud to support the mesh triangulation step. Experiments on benchmark dataset sequences, using both metric and appearance-based measures, are performed to evaluate our hypothesis.
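A common way to locate such salient edge points in a point cloud, sketched here as a generic illustration rather than the authors' actual pipeline, is an eigenvalue-based linearity test on local neighborhoods: a neighborhood stretched along one dominant direction is edge-like. The function names, search radius, and 0.8 linearity threshold below are illustrative assumptions.

```python
import numpy as np

def linearity(neighbors):
    """Eigenvalue-based linearity of a local neighborhood:
    (lam1 - lam2) / lam1 with lam1 >= lam2 >= lam3, close to 1
    when the points stretch along a single dominant direction."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(neighbors.T)))[::-1]
    return (lam[0] - lam[1]) / lam[0] if lam[0] > 0 else 0.0

def edge_points(cloud, radius=0.1, thresh=0.8):
    """Flag points whose local neighborhood is strongly linear (edge-like).
    Brute-force neighbor search; a k-d tree would be used at scale."""
    flags = np.zeros(len(cloud), dtype=bool)
    for i, p in enumerate(cloud):
        nbrs = cloud[np.linalg.norm(cloud - p, axis=1) < radius]
        if len(nbrs) >= 3:
            flags[i] = linearity(nbrs) > thresh
    return flags

# Points sampled along a straight crease score high in linearity,
# while a planar patch scores low.
line = np.c_[np.linspace(0, 1, 50), np.zeros(50), np.zeros(50)]
```

Points flagged this way could then be merged into the dense cloud so that the subsequent triangulation has explicit samples along depth discontinuities.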

2019 ◽  
Vol 31 (1) ◽  
pp. 39
Author(s):  
Yongchuan Zheng ◽  
Boliang Guan ◽  
Shujin Lin ◽  
Xiaonan Luo ◽  
Ruomei Wang

2021 ◽  
Vol 13 (22) ◽  
pp. 4569
Author(s):  
Liyang Zhou ◽  
Zhuang Zhang ◽  
Hanqing Jiang ◽  
Han Sun ◽  
Hujun Bao ◽  
...  

This paper presents an accurate and robust dense 3D reconstruction system, named DP-MVS, for detail-preserving surface modeling of large-scale scenes from multi-view images. Our system performs high-quality large-scale dense reconstruction that preserves geometric details for thin structures, especially linear objects. The framework begins with a sparse reconstruction carried out by incremental Structure-from-Motion. Based on the reconstructed sparse map, a novel detail-preserving PatchMatch approach is applied for depth estimation in each image view. The estimated depth maps of multiple views are then fused into a dense point cloud in a memory-efficient way, followed by a detail-aware surface meshing method that extracts the final surface mesh of the captured scene. Experiments on the ETH3D benchmark show that the proposed method outperforms other state-of-the-art methods in F1-score, while running more than four times faster. Further experiments on large-scale photo collections demonstrate the effectiveness of the proposed framework for large-scale scene reconstruction in terms of accuracy, completeness, memory saving, and time efficiency.
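The depth-map fusion step described above can be illustrated with a minimal, hedged sketch (not the DP-MVS implementation): back-project each per-view depth map through pinhole intrinsics, concatenate the points, and voxel-downsample so memory stays bounded as views accumulate. The intrinsics, voxel size, and the simplifying assumption that all views share one camera frame (identity poses) are illustrative only.

```python
import numpy as np

def backproject(depth, K):
    """Back-project a depth map to 3D points with pinhole intrinsics K
    (all views assumed in one shared frame for this sketch)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    return np.c_[x, y, z][z > 0]  # drop invalid (zero) depths

def fuse(depth_maps, K, voxel=0.05):
    """Concatenate back-projected views, then keep one point per voxel
    so the fused cloud stays bounded however many views arrive."""
    pts = np.vstack([backproject(d, K) for d in depth_maps])
    keys = np.floor(pts / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[idx]
```

A real pipeline would additionally transform each view by its estimated pose and reject depths that fail a multi-view consistency check before fusion.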


Author(s):  
Louis Wiesmann ◽  
Andres Milioto ◽  
Xieyuanli Chen ◽  
Cyrill Stachniss ◽  
Jens Behley

2021 ◽  
Vol 13 (10) ◽  
pp. 1985
Author(s):  
Emre Özdemir ◽  
Fabio Remondino ◽  
Alessandro Golkar

With recent advances in technology, deep learning is being applied to more and more tasks. In particular, point cloud processing and classification have been studied for some time, with various methods developed. Some of the available classification approaches are tied to a specific data source, such as LiDAR, while others focus on specific scenarios, such as indoor scenes. A major general issue is computational efficiency (in terms of power consumption, memory requirements, and training/inference time). In this study, we propose an efficient framework (named TONIC) that can work with any kind of aerial data source (LiDAR or photogrammetry) and does not require high computational power, while achieving accuracy on par with current state-of-the-art methods. We also test our framework for its generalization ability, showing its capability to learn from one dataset and predict on unseen aerial scenarios.


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3416
Author(s):  
Pawel Burdziakowski ◽  
Angelika Zakrzewska

The continuous and intensive development of measurement technologies for reality modelling, together with appropriate data processing algorithms, is currently being observed. The most popular methods include remote sensing techniques based on digital cameras recording reflected light, and active methods in which the device emits a beam. This research paper presents a process of data integration from terrestrial laser scanning (TLS) and image data from an unmanned aerial vehicle (UAV), aimed at the spatial mapping of a complicated steel structure, together with a new automatic structure extraction method. We propose an innovative method to minimize the data size and automatically extract a set of points (in the form of structural elements) that is vital for engineering and comparative analyses. The outcome of the research is a complete technology for acquiring precise information about complex, tall steel structures. The developed technology includes a data integration method, a redundant data elimination method, integrated photogrammetric data filtration, and a new adaptive method for structure edge extraction. To extract significant geometric structures, a new automatic and adaptive algorithm for edge extraction from an unorganized point cloud was developed and is presented herein. The proposed algorithm was tested on real measurement data. It substantially reduces the amount of redundant data and correctly extracts stable edges representing the geometric structures of the studied object without losing important information. The new algorithm automatically adapts to the received data: it requires no pre-set or initial parameters, and the detection threshold is selected adaptively from the acquired data.
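One generic way to realize such an adaptive, parameter-free edge detector, offered only as an illustrative sketch of the idea (the paper's actual algorithm is not reproduced here), is to score each point by its local surface variation and derive the detection threshold from the score distribution itself, here as mean plus one standard deviation. The neighborhood size k and the threshold rule are assumptions for the sketch.

```python
import numpy as np

def surface_variation(cloud, k=8):
    """Per-point surface variation lam_min / (lam1 + lam2 + lam3) from
    the covariance of the k nearest neighbors; high values mark points
    whose neighborhood is not flat, i.e. edges and corners."""
    sv = np.zeros(len(cloud))
    for i, p in enumerate(cloud):
        nbrs = cloud[np.argsort(np.linalg.norm(cloud - p, axis=1))[:k]]
        lam = np.linalg.eigvalsh(np.cov(nbrs.T))  # ascending order
        total = lam.sum()
        sv[i] = lam[0] / total if total > 0 else 0.0
    return sv

def adaptive_edges(cloud, k=8):
    """Threshold derived from the score distribution itself (mean plus
    one standard deviation), so no detection threshold is supplied."""
    sv = surface_variation(cloud, k)
    return sv > sv.mean() + sv.std()
```

On a cloud made of two planes meeting at a crease, points along the crease have non-flat neighborhoods and exceed the data-driven threshold, while interior plane points do not.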


Proceedings ◽  
2018 ◽  
Vol 2 (18) ◽  
pp. 1193
Author(s):  
Roi Santos ◽  
Xose Pardo ◽  
Xose Fdez-Vidal

The increasing use of autonomous UAVs inside buildings and around human-made structures demands new, accurate, and comprehensive representations of their operating environments. Most 3D scene abstraction methods rely on invariant feature point matching; nevertheless, the resulting sparse 3D point clouds often do not concisely represent the structure of the environment. Likewise, line clouds built from short, redundant segments with inaccurate directions limit the understanding of scenes such as poorly textured environments or those whose texture resembles a repetitive pattern. The presented approach is based on observation and representation models using straight line segments, which resemble the boundaries of an urban indoor or outdoor environment. The goal of this work is a complete method based on line matching that provides a complementary approach to state-of-the-art methods for 3D scene representation of poorly textured environments for future autonomous UAVs.
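A classic building block behind line-based 3D representation, shown here as a hedged textbook sketch rather than the authors' method, is two-view line triangulation: each matched image line back-projects to a plane through its camera center (pi = P^T l, with P the 3x4 projection matrix), and the 3D line is the intersection of the two planes.

```python
import numpy as np

def backproject_line(P, l):
    """Plane through the camera center and image line l (homogeneous
    3-vector of line coefficients): pi = P^T l, with P the 3x4 camera."""
    return P.T @ l

def triangulate_line(P1, l1, P2, l2):
    """3D line as the intersection of the two back-projected planes;
    returns one point on the line and a unit direction vector."""
    pi1, pi2 = backproject_line(P1, l1), backproject_line(P2, l2)
    n1, d1 = pi1[:3], pi1[3]
    n2, d2 = pi2[:3], pi2[3]
    direction = np.cross(n1, n2)           # line lies in both planes
    # A point on both planes, pinned along the line by the third row:
    A = np.vstack([n1, n2, direction])
    point = np.linalg.solve(A, np.array([-d1, -d2, 0.0]))
    return point, direction / np.linalg.norm(direction)
```

This degenerates when the 3D line lies in the epipolar plane of the two views (the back-projected planes coincide), which is one reason line-based pipelines combine many views and segment grouping.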


2021 ◽  
pp. 107057
Author(s):  
Ping Wang ◽  
Li Liu ◽  
Huaxiang Zhang ◽  
Tianshi Wang
