A SYNTHETIC 3D SCENE FOR THE VALIDATION OF PHOTOGRAMMETRIC ALGORITHMS

Author(s):  
D. Frommholz

<p><strong>Abstract.</strong> This paper describes the construction and composition of a synthetic test world for the validation of photogrammetric algorithms. Since its 3D objects are entirely generated by software, the geometric accuracy of the scene does not suffer from measurement errors which existing real-world ground truth is inherently afflicted with. The resulting data set covers an area of 13188 by 6144 length units and exposes positional residuals as small as the machine epsilon of the double-precision floating point numbers used exclusively for the coordinates. It is colored with high-resolution textures to accommodate the simulation of virtual flight campaigns with large optical sensors and laser scanners in both aerial and close-range scenarios. To specifically support the derivation of image samples and point clouds, the synthetic scene gets stored in the human-readable Alias/Wavefront OBJ and POV-Ray data formats. While conventional rasterization remains possible, using the open-source ray tracer as a render tool facilitates the creation of ideal pinhole bitmaps, consistent digital surface models (DSMs), true ortho-mosaics (TOMs) and orientation metadata without programming knowledge. To demonstrate the application of the constructed 3D scene, example validation recipes are discussed in detail for a state-of-the-art implementation of semi-global matching and a perspective-correct multi-source texture mapper. For the latter, beyond the visual assessment, a statistical evaluation of the achieved texture quality is given.</p>
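The ideal pinhole bitmaps mentioned above follow the closed-form central projection, whose accuracy is limited only by floating-point rounding. The sketch below (with an assumed focal length and principal point, not values from the data set) illustrates such a projection and the double-precision machine epsilon that bounds the scene's positional residuals:

```python
import sys

# Ideal pinhole projection of a camera-frame 3D point (illustrative
# parameters; f, cx, cy are assumptions, not taken from the paper).
def project_pinhole(point, f, cx, cy):
    """Map (x, y, z) to pixel coordinates (u, v) via central projection."""
    x, y, z = point
    return (f * x / z + cx, f * y / z + cy)

u, v = project_pinhole((1.0, 2.0, 10.0), f=1000.0, cx=512.0, cy=384.0)

# Positional residuals of such closed-form projections are bounded by
# the machine epsilon of IEEE 754 double precision:
eps = sys.float_info.epsilon
```

Because the scene coordinates are purely synthetic, any deviation between a rendered and an analytically projected position can be attributed to the algorithm under test rather than to survey error.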

Author(s):  
A. Georgopoulos ◽  
C. Oikonomou ◽  
E. Adamopoulos ◽  
E. K. Stathopoulou

When it comes to large-scale mapping of limited areas, especially cultural heritage sites, requirements become critical. Optical and non-optical sensors, such as LiDAR units, are now built at sizes and weights that such platforms can lift. At the same time, increasing emphasis is placed on solutions that give users faster and cheaper access to 3D information. Given the multitude of platforms and cameras, and the advancement of algorithms alongside the growth of available computing power, this challenge should be, and indeed is, being investigated further. This paper attempts a short review of current UAS technologies, followed by a discussion of their applicability and advantages, which depend on their widely varying specifications. The available on-board cameras are also compared and evaluated for large-scale mapping. Furthermore, a thorough analysis, review and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process such dense and mostly unordered sequences of digital images, is conducted and presented. As test data, we use a rich optical and thermal data set acquired with different cameras from both fixed-wing and multi-rotor platforms over an archaeological excavation with adverse height variations. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.


2020 ◽  
Author(s):  
Tuomas Yrttimaa ◽  
Ninni Saarinen ◽  
Ville Luoma ◽  
Topi Tanhuanpää ◽  
Ville Kankare ◽  
...  

The feasibility of terrestrial laser scanning (TLS) in characterizing standing trees has been frequently investigated, while less effort has been put into quantifying downed dead wood using TLS. To advance dead wood characterization using TLS, we collected TLS point clouds and downed dead wood information from 20 sample plots (32 m × 32 m in size) located in southern Finland. This data set can be used for developing new algorithms for downed dead wood detection and characterization, as well as for understanding spatial patterns of downed dead wood in boreal forests.


Author(s):  
H. Zavar ◽  
H. Arefi ◽  
S. Malihi ◽  
M. Maboudi

Abstract. In this paper we introduce a topology-aware, data-driven approach for the 3D reconstruction of indoor spaces, an active research topic with several practical applications. After separating floor and ceiling, segmentation is followed by computing the α-shape of each segment. The adjacency graph of all α-shapes is used to find the intersecting planes. By employing a B-rep approach, an initial 3D model is computed. Afterwards, the adjacency graph of the intersecting planes that constitute the initial model is analyzed in order to refine the 3D model. This leads to a watertight and topologically correct 3D model. The performance of our proposed approach is evaluated qualitatively and quantitatively on an ISPRS benchmark data set, on which we achieved 77% completeness, 53% correctness and an accuracy of 1.7–5 cm when comparing the final 3D model to the ground truth.
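One building block of such a B-rep construction is intersecting adjacent planes found via the adjacency graph. A minimal sketch (an assumed formulation, not the authors' implementation) of computing the intersection line of two planes n·x = d:

```python
import numpy as np

# Intersection line of two planes n1·x = d1 and n2·x = d2.
def plane_intersection(n1, d1, n2, d2):
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)  # line direction
    # A point on both planes: solve a 3x3 system that additionally pins
    # the point to the plane through the origin normal to the line.
    A = np.vstack([n1, n2, direction])
    p = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return p, direction / np.linalg.norm(direction)

# Two perpendicular walls x = 1 and y = 2 meet along a vertical edge.
p, d = plane_intersection([1, 0, 0], 1.0, [0, 1, 0], 2.0)
```

In a full B-rep pipeline such intersection lines would then be trimmed against the α-shape boundaries to obtain the model's edges.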


Author(s):  
P. Glira ◽  
N. Pfeifer ◽  
C. Briese ◽  
C. Ressl

Airborne Laser Scanning (ALS) is an efficient method for the acquisition of dense and accurate point clouds over extended areas. To ensure gapless coverage of the area, point clouds are collected strip-wise with considerable overlap. The redundant information contained in these overlap areas can be used, together with ground-truth data, to re-calibrate the ALS system and to compensate for systematic measurement errors. This process, usually denoted as <i>strip adjustment</i>, leads to an improved georeferencing of the ALS strips, or in other words, to a higher data quality of the acquired point clouds. We present a fully automatic strip adjustment method that (a) uses the original scanner and trajectory measurements, (b) performs an on-the-job calibration of the entire ALS multisensor system, and (c) corrects the trajectory errors individually for each strip. As in the Iterative Closest Point (ICP) algorithm, correspondences are established iteratively and directly between points of overlapping ALS strips, avoiding a time-consuming segmentation and/or interpolation of the point clouds. The suitability of the method for large amounts of data is demonstrated on an ALS block consisting of 103 strips.
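The ICP-style correspondence step pairs each point of one strip with its closest point in the overlapping strip. A brute-force sketch with toy coordinates (the production implementation would use a spatial index rather than a full distance matrix):

```python
import numpy as np

# Nearest-neighbour correspondences between two overlapping strips
# (toy data; not the authors' implementation).
def closest_points(strip_a, strip_b):
    """For each point of strip_a, return the index of and distance to
    its closest point in strip_b."""
    d = np.linalg.norm(strip_a[:, None, :] - strip_b[None, :, :], axis=2)
    idx = d.argmin(axis=1)
    return idx, d[np.arange(len(strip_a)), idx]

strip_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
strip_b = np.array([[0.1, 0.0, 0.0], [2.0, 0.0, 0.0]])
idx, dist = closest_points(strip_a, strip_b)
```

These point pairs feed the adjustment that estimates calibration and per-strip trajectory corrections; the pairing is then repeated with the corrected strips until convergence.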


Author(s):  
G. Pavoni ◽  
M. Palma ◽  
M. Callieri ◽  
M. Dellepiane ◽  
C. Cerrano ◽  
...  

This study presents a practical method to estimate the dimensions of <i>Paramuricea clavata</i> colonies using generic photographic datasets collected across wide areas. <i>Paramuricea clavata</i> is a non-rigid, tree-like octocoral; this morphology greatly degrades the multi-view stereo reconstruction of the sea fans, resulting in hazy and incoherent clouds full of “false” points with random orientation. The standard procedure of taking measurements over a reconstructed textured surface in 3D space is therefore impractical. Our method overcomes this problem by using quasi-orthorectified images, produced by projecting the registered photos onto the plane that best fits the point cloud of the colony. The collected measurements were assessed by comparison against a ground-truth data set and against time-series images of the same set of colonies; the measurement errors fall below the requirements for this type of ecological observation.<br> Compared to previous works, the presented method does not require a detailed reconstruction of individual colonies, but relies on a global multi-view stereo reconstruction performed through a comprehensive photographic coverage of the area of interest, using a low-cost pre-calibrated camera. This approach drastically reduces the time spent working in the field, helping practitioners and scientists improve the efficiency and accuracy of their monitoring plans.
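The plane that best fits a colony's point cloud, onto which the registered photos are projected, can be obtained as the least-squares plane through the centroid. A common SVD-based sketch (toy points standing in for colony data):

```python
import numpy as np

# Least-squares plane fit: the normal is the right singular vector of
# the centred points that belongs to the smallest singular value.
def fit_plane(points):
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]  # point on plane, unit normal

# Noisy points near the plane z = 0 (assumed toy data).
pts = np.array([[0, 0, 0.01], [1, 0, -0.01], [0, 1, 0.0], [1, 1, 0.0]], float)
c, n = fit_plane(pts)
```

Projecting each registered photo onto this plane yields the quasi-orthorectified view on which in-plane distances can be measured.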


Author(s):  
R. Boerner ◽  
L. Hoegner ◽  
U. Stilla

<p><strong>Abstract.</strong> This paper proposes a method to obtain semantic information about changes in bathymetric point clouds. The method assigns labels to river-ground points that indicate whether a point can be compared with a reference DEM, whether no reference data exist there, or whether no water points exist in the new data for wet areas of the reference. These labels can then be used to delimit the comparable areas, i.e. the regions where DEM differences can be calculated. Areas without reference data mark regions where the reference DEM has a higher variance due to interpolation, which should be considered in the comparison. Areas where no water was found in the new data mark regions where no refraction correction of the new data is possible; there, the ground points should either be treated with a higher variance or the water surface should be reconstructed. The proposed approach uses semantic reference data to identify water areas in the new scan. An occupancy analysis determines whether voxels of the new data exist in the reference: if they do, the reference labels are transferred to the new data; otherwise, the points are labelled as changed. A histogram-based method separates ground and water points in wet areas, and a second occupancy analysis identifies the semantic changes in wet areas. The proposed method is evaluated on a data set of the Mangfall area with manually labelled ground truth.</p>
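The occupancy analysis can be sketched as binning both clouds into a common voxel grid and flagging new-scan voxels that are empty in the reference. A minimal illustration with an assumed 1-unit voxel size (toy coordinates, not the Mangfall data):

```python
import numpy as np

# Voxel occupancy: bin points into an integer grid of the given size.
def occupied_voxels(points, voxel_size):
    return {tuple(v) for v in np.floor(points / voxel_size).astype(int)}

reference = np.array([[0.1, 0.1, 0.1], [1.2, 0.1, 0.1]])
new_scan = np.array([[0.2, 0.2, 0.2], [2.5, 0.1, 0.1]])

ref_occ = occupied_voxels(reference, 1.0)
# New-scan voxels with no reference occupancy are labelled as changed;
# occupied ones would inherit the reference labels instead.
changed = occupied_voxels(new_scan, 1.0) - ref_occ
```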


Author(s):  
Z. Sun ◽  
Y. Xu ◽  
L. Hoegner ◽  
U. Stilla

In this work, we propose a classification method designed for the labeling of MLS point clouds, with detrended geometric features extracted from the points of a supervoxel-based local context. To analyze complex 3D urban scenes, the acquired points of the scene should be tagged with individual labels of different classes; assigning a unique label to the points of an object belonging to the same category thus plays an essential role in the entire 3D scene analysis workflow. Although plenty of studies in this field have been reported, the task remains challenging. Specifically, in this work: 1) A novel geometric feature extraction method is proposed that detrends redundant and non-salient information in the local context and is shown to be effective for extracting local geometric features from the 3D scene. 2) Instead of using individual points as basic elements, the supervoxel-based local context is designed to encapsulate the geometric characteristics of points, providing a flexible and robust solution for feature extraction. 3) Experiments on a complex urban scene with manually labeled ground truth are conducted, and the performance of the proposed method is analyzed with respect to different methods. On the testing dataset, we obtained an overall accuracy of 0.92 when assigning eight semantic classes.
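Local geometric features of the kind extracted from a supervoxel neighbourhood are commonly derived from the eigenvalues of the local 3D covariance. The sketch below uses the standard linearity/planarity/scattering measures (an assumed, widely used formulation, not necessarily the authors' detrended features):

```python
import numpy as np

# Eigenvalue-based shape features of a local point neighbourhood,
# from eigenvalues l1 >= l2 >= l3 of the 3x3 covariance matrix.
def eigen_features(points):
    cov = np.cov(points.T)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return {"linearity": (l1 - l2) / l1,
            "planarity": (l2 - l3) / l1,
            "scattering": l3 / l1}

# Points spread over a near-flat patch should score high planarity
# (assumed toy data).
rng = np.random.default_rng(0)
pts = np.c_[rng.uniform(size=(100, 2)), 1e-3 * rng.uniform(size=100)]
feats = eigen_features(pts)
```

Feature vectors like this, computed per supervoxel, are what a classifier then maps to the eight semantic classes.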


2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown in the open field under difficult circumstances: complex lighting conditions and non-ideal crop maintenance practices as defined by local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, as well as the aerial image data set and a hand-made ground-truth segmentation with pixel precision, to facilitate the comparison among different algorithms.
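A mean pixel accuracy of the kind reported above compares the predicted crop/non-crop mask against the hand-made ground truth, pixel by pixel. A toy sketch with assumed 2×2 masks (not the paper's data):

```python
import numpy as np

# Fraction of pixels where the predicted binary mask matches the
# ground-truth mask (1 = crop, 0 = non-crop).
def mean_pixel_accuracy(pred, gt):
    return (pred == gt).mean()

pred = np.array([[1, 0], [1, 1]])
gt = np.array([[1, 0], [0, 1]])
acc = mean_pixel_accuracy(pred, gt)  # 3 of 4 pixels agree
```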


2020 ◽  
Vol 499 (4) ◽  
pp. 5641-5652
Author(s):  
Georgios Vernardos ◽  
Grigorios Tsagkatakis ◽  
Yannis Pantazis

ABSTRACT Gravitational lensing is a powerful tool for constraining substructure in the mass distribution of galaxies, be it from the presence of dark matter sub-haloes or due to physical mechanisms affecting the baryons throughout galaxy evolution. Such substructure is hard to model and is either ignored by traditional, smooth modelling approaches, or treated as well-localized massive perturbers. In this work, we propose a deep learning approach to quantify the statistical properties of such perturbations directly from images, where only the extended lensed source features within a mask are considered, without the need for any lens modelling. Our training data consist of mock lensed images assuming perturbing Gaussian Random Fields permeating the smooth overall lens potential, and, for the first time, using images of real galaxies as the lensed source. We employ a novel deep neural network that takes as input arbitrary uncertainty intervals associated with the training data set labels, provides probability distributions as output, and adopts a composite loss function. The method succeeds not only in accurately estimating the actual parameter values, but also reduces the predicted confidence intervals by 10 per cent in an unsupervised manner, i.e. without having access to the actual ground truth values. Our results are invariant to the inherent degeneracy between mass perturbations in the lens and complex brightness profiles for the source. Hence, we can quantitatively and robustly quantify the smoothness of the mass density of thousands of lenses, including confidence intervals, and provide a consistent ranking for follow-up science.
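A network that outputs a probability distribution per parameter is typically trained with a likelihood-based term. The Gaussian negative log-likelihood below is one illustrative ingredient such a composite loss could combine with other terms; it is an assumption for exposition, not the paper's exact formulation:

```python
import numpy as np

# Gaussian negative log-likelihood of targets y under predicted
# per-sample means mu and standard deviations sigma.
def gaussian_nll(y_true, mu, sigma):
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + (y_true - mu)**2 / (2 * sigma**2))

# A perfect mean prediction with sigma = 0.5 still pays the
# log-variance penalty, which discourages over-wide intervals.
loss = gaussian_nll(np.array([1.0]), np.array([1.0]), np.array([0.5]))
```

Minimizing such a loss rewards both accurate means and well-calibrated, narrow predicted intervals, consistent with the reported shrinking of confidence intervals.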

