INLINING 3D RECONSTRUCTION, MULTI-SOURCE TEXTURE MAPPING AND SEMANTIC ANALYSIS USING OBLIQUE AERIAL IMAGERY

Author(s):  
D. Frommholz ◽  
M. Linkiewicz ◽  
A. M. Poznanska

This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that are simultaneously used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements like windows, which are reintegrated into the original 3D models. Tests on real-world data of Heligoland, Germany, comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows and good visual quality when rendered with GPU-based perspective correction. As part of the process, building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information, the detected roof, wall and ground surfaces are intersected and limited in their extension to form a closed 3D building hull. For texture mapping, the hull polygons are projected into each candidate input bitmap to find suitable color sources regarding coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients that are applied when the visible image parts for each building polygon are copied into a compact texture atlas, without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches.
Following multi-resolution segmentation and classification based on brightness and contrast differences, potential window objects are evaluated against geometric constraints and conditionally grown, fused and filtered morphologically. The output polygons are vectorized and reintegrated into the previously reconstructed buildings by sparsely ray-tracing their vertices. Finally, the enhanced 3D models are stored as textured geometry for visualization and as semantically annotated "LOD-2.5" CityGML objects for GIS applications.
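The local RANSAC-based plane regression at the core of the reconstruction step can be sketched as follows. This is a minimal illustrative NumPy implementation, not the authors' code; the iteration count and inlier tolerance `tol` are assumed parameters.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Estimate the dominant plane (unit normal n, offset d with n.x = d)
    in a 3D point set by RANSAC, then refine on the inliers via SVD."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:            # degenerate sample
            continue
        n /= np.linalg.norm(n)
        inliers = np.abs(points @ n - n @ p0) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refinement: normal = singular vector of the smallest
    # singular value of the centered inlier set
    q = points[best_inliers]
    c = q.mean(axis=0)
    n = np.linalg.svd(q - c)[2][-1]
    return n, n @ c, best_inliers
```

In the paper's pipeline this regression runs locally per point neighborhood before the topology analysis; the sketch above only recovers one global plane.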


Author(s):  
D. Frommholz ◽  
M. Linkiewicz ◽  
H. Meissner ◽  
D. Dahlke

This paper proposes a two-stage method for the reconstruction of city buildings with discontinuities and roof overhangs from oriented nadir and oblique aerial images. To model the structures, the input data is transformed into a dense point cloud, segmented and filtered with a modified marching cubes algorithm to reduce the positional noise. Assuming a monolithic building, the remaining vertices are initially projected onto a 2D grid and passed to RANSAC-based regression and topology analysis to geometrically determine finite wall, ground and roof planes. If this fails due to the presence of discontinuities, the regression is repeated on a 3D level by traversing voxels within the regularly subdivided bounding box of the building point set. For each cube, a planar piece of the current surface is approximated and expanded. The resulting segments are mutually intersected, yielding both topological and geometrical nodes and edges. These entities are eliminated if their distance-based affiliation to the defining point sets is violated, leaving a consistent building hull including its structural breaks. To add the roof overhangs, the computed polygonal meshes are projected onto the digital surface model derived from the point cloud. Their shapes are offset equally along the edge normals with subpixel accuracy by detecting the zero-crossings of the second-order directional derivative in the gradient direction of the height bitmap, and are translated back into world space to become a component of the building. Once the reconstructed objects are finished, the aerial images are further used to generate a compact texture atlas for visualization purposes. An optimized atlas bitmap is generated that allows perspective-correct multi-source texture mapping without prior rectification, using a partially parallel placement algorithm.
Moreover, the texture atlases undergo object-based image analysis (OBIA) to detect window areas, which are reintegrated into the building models. To evaluate the performance of the proposed method, a proof-of-concept test on sample structures obtained from real-world data of Heligoland/Germany has been conducted. It revealed good reconstruction accuracy in comparison to the cadastral map, a speed-up in texture atlas optimization and visually attractive rendering results.
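The subpixel edge localization via zero-crossings of the second-order derivative can be illustrated in one dimension: detect the sign change of the second-order finite difference along a height profile and interpolate linearly. This 1D sketch only hints at the directional 2D operator described in the abstract.

```python
import numpy as np

def subpixel_inflection(profile):
    """Locate the inflection point of a sampled 1D height profile with
    subpixel accuracy: find the sign change of the second-order finite
    difference and interpolate the zero-crossing linearly."""
    d2 = np.diff(profile, 2)                  # centred at samples 1..n-2
    sign = np.sign(d2)
    idx = np.where((sign[:-1] > 0) & (sign[1:] < 0))[0]
    if idx.size == 0:
        return None
    i = idx[0]
    t = d2[i] / (d2[i] - d2[i + 1])           # linear zero interpolation
    return i + 1 + t                          # +1 restores the diff offset
```

For a smooth step centred between two samples, the routine recovers the edge position between grid nodes, which is what enables the equal-offset overhang outline with subpixel accuracy.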


Author(s):  
Leena Matikainen ◽  
Juha Hyyppä ◽  
Paula Litkey

During the last 20 years, airborne laser scanning (ALS), often combined with multispectral information from aerial images, has shown its high feasibility for automated mapping processes. Recently, the first multispectral airborne laser scanners have been launched, and multispectral information is for the first time directly available for 3D ALS point clouds. This article discusses the potential of this new single-sensor technology in map updating, especially in automated object detection and change detection. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from a random forests analysis suggest that the multispectral intensity information is useful for land cover classification, also when considering ground surface objects and classes, such as roads. An out-of-bag estimate for classification error was about 3% for separating the classes asphalt, gravel, rocky areas and low vegetation from each other. For buildings and trees, it was under 1%. According to feature importance analyses, multispectral features based on several channels were more useful than those based on one channel. Automatic change detection utilizing the new multispectral ALS data, an old digital surface model (DSM) and old building vectors was also demonstrated. Overall, our first analyses suggest that the new data are very promising for further increasing the automation level in mapping. The multispectral ALS technology is independent of external illumination conditions, and intensity images produced from the data do not include shadows. These are significant advantages for the development of automated classification and change detection procedures.
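The random forests workflow with an out-of-bag (OOB) error estimate and feature importances can be sketched with scikit-learn as a stand-in for the authors' implementation; the three-channel intensity features and two classes below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy stand-in for per-point multispectral intensity features
# (three channels) and two well-separated land cover classes.
n = 300
X = np.vstack([rng.normal(0.2, 0.05, (n, 3)),
               rng.normal(0.8, 0.05, (n, 3))])
y = np.repeat([0, 1], n)

clf = RandomForestClassifier(n_estimators=100, oob_score=True,
                             random_state=0).fit(X, y)
oob_error = 1.0 - clf.oob_score_      # out-of-bag classification error
print(f"OOB error: {oob_error:.3f}")
print("feature importances:", clf.feature_importances_)
```

The OOB estimate needs no separate test set, which matches how the 3% and sub-1% error figures in the abstract are reported; `feature_importances_` is the mechanism behind the channel-importance comparison.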


Author(s):  
W. Ostrowski ◽  
M. Pilarska ◽  
J. Charyton ◽  
K. Bakuła

Creating 3D building models at large scale is becoming more popular and finds many applications. Nowadays, the broad term “3D building models” covers several types of products: the well-known CityGML solid models (available at several levels of detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining 3D building models. Apart from the completeness of the models, accuracy is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysis of statistical parameters of normal heights for the reference point cloud and the tested planes, together with segmentation of the point cloud, provides a tool that can indicate which buildings and which roof planes do not fulfil the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
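The per-plane comparison against ALS points reduces to statistics over signed normal distances between the point cloud and each model plane. A minimal sketch with synthetic data follows; the plane, noise level, and the choice of summary statistics are illustrative assumptions, not the paper's exact test criteria.

```python
import numpy as np

def plane_residual_stats(points, n, d):
    """Signed normal distances of ALS points to a model plane n.x = d,
    summarized by statistics usable to flag inaccurate roof planes."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    r = points @ n - d
    return {"median": float(np.median(r)),
            "rmse": float(np.sqrt(np.mean(r ** 2))),
            "max_abs": float(np.abs(r).max())}

# Synthetic ALS returns scattered around a horizontal roof plane z = 10
rng = np.random.default_rng(1)
pts = rng.uniform(0, 5, (1000, 3))
pts[:, 2] = 10 + rng.normal(0, 0.03, 1000)
stats = plane_residual_stats(pts, [0, 0, 1], 10.0)
```

A roof plane whose median or RMSE exceeds the accuracy requirement would be flagged for inspection; a large `max_abs` with a small median hints at missing detail (e.g. an unmodeled dormer) rather than a systematic offset.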


2019 ◽  
Vol 7 (1) ◽  
pp. 1-20
Author(s):  
Fotis Giagkas ◽  
Petros Patias ◽  
Charalampos Georgiadis

The purpose of this study is the photogrammetric survey of a forested area using unmanned aerial vehicles (UAV), and the estimation of the digital terrain model (DTM) of the area based on the photogrammetrically produced digital surface model (DSM). Furthermore, through the classification of the height difference between the DSM and the DTM, a vegetation height model is estimated and a vegetation type map is produced. Finally, the generated DTM was used in a hydrological analysis study to determine its suitability compared to the usage of the DSM. The selected study area was the forest of Seih-Sou (Thessaloniki). The DTM extraction methodology applies classification and filtering of point clouds and aims to produce a surface model including only terrain points (DTM). The method yielded a DTM that functioned satisfactorily as a basis for the hydrological analysis. Also, by classifying the DSM–DTM difference, a vegetation height model was generated. For the photogrammetric survey, 495 aerial images were used, taken by a UAV from a height of ∼200 m. A total of 44 ground control points were measured with an accuracy of 5 cm. The accuracy of the aerial triangulation was approximately 13 cm. The produced dense point cloud counted 146,593,725 points.
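The DSM–DTM differencing and height classification described above can be sketched in a few lines of NumPy; the class boundaries below are illustrative placeholders, not the thresholds used in the study.

```python
import numpy as np

def vegetation_height_classes(dsm, dtm, bins=(0.5, 2.0, 8.0)):
    """Derive a vegetation height model as the DSM-DTM difference and
    classify it: 0 = ground/low cover, 1..len(bins) = height intervals
    (e.g. shrub, low tree, tall tree for the assumed bins)."""
    vhm = np.clip(dsm - dtm, 0.0, None)   # negative differences -> ground
    return vhm, np.digitize(vhm, bins)
```

Applied cell-wise over the raster grids, the class raster is the vegetation type map; clipping at zero absorbs small DTM interpolation artifacts where the filtered terrain lies above the DSM.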


2020 ◽  
Vol 12 (18) ◽  
pp. 3030
Author(s):  
Ram Avtar ◽  
Stanley Anak Suab ◽  
Mohd Shahrizan Syukur ◽  
Alexius Korom ◽  
Deha Agus Umarhadi ◽  
...  

The information on biophysical parameters—such as height, crown area, and vegetation indices such as the normalized difference vegetation index (NDVI) and normalized difference red edge index (NDRE)—is useful for monitoring the health condition and growth of oil palm trees in precision agriculture practices. The use of multispectral sensors mounted on unmanned aerial vehicles (UAV) provides high spatio-temporal resolution data to study plant health. However, the influence of UAV altitude when extracting biophysical parameters of oil palm from a multispectral sensor has not yet been well explored. Therefore, this study utilized the MicaSense RedEdge sensor mounted on a DJI Phantom–4 UAV platform for aerial photogrammetry. Three close-range multispectral aerial image sets were acquired at flight altitudes of 20 m, 60 m, and 80 m above ground level (AGL) over a young oil palm plantation area in Malaysia. The images were processed using the structure from motion (SfM) technique in Pix4DMapper software, producing multispectral orthomosaic aerial images, a digital surface model (DSM), and point clouds. Meanwhile, canopy height models (CHM) were generated by subtracting the digital elevation model (DEM) from the DSM. Oil palm tree heights and crown projected area (CPA) were extracted from the CHM and the orthomosaic. NDVI and NDRE were calculated using the red, red-edge, and near-infrared spectral bands of the orthomosaic data. The accuracy of the extracted heights and CPAs was evaluated by comparing the values derived from each UAV altitude with ground-measured CPA and height. Correlations, root mean square deviation (RMSD), and central tendency were used to compare the UAV-extracted biophysical parameters with the ground data. Based on our results, an altitude of 60 m is the optimal flight altitude for estimating biophysical parameters, followed by 80 m.
The 20 m UAV altitude showed a tendency to overestimate the biophysical parameters of young oil palm and was the least consistent of the three altitudes. The methodology and results are a step toward precision agriculture in oil palm plantation areas.
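NDVI and NDRE are standard normalized band differences of the orthomosaic; a minimal sketch follows (the small epsilon guarding against division by zero is an implementation choice, not part of the index definitions).

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red bands."""
    return (nir - red) / (nir + red + eps)

def ndre(nir, red_edge, eps=1e-9):
    """Normalized difference red edge index from NIR and red-edge bands."""
    return (nir - red_edge) / (nir + red_edge + eps)
```

Both functions broadcast over whole raster arrays, so per-pixel index maps come from passing the orthomosaic's NIR, red, and red-edge bands directly; both indices lie in [-1, 1].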


2019 ◽  
Vol 11 (10) ◽  
pp. 1204 ◽  
Author(s):  
Yue Pan ◽  
Yiqing Dong ◽  
Dalei Wang ◽  
Airong Chen ◽  
Zhen Ye

Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In the field of bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. Currently, the widespread use of the unmanned aerial vehicle (UAV) provides a practical solution for generating 3D point clouds as well as models, which can drastically reduce the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. Specifically, we propose to tackle this challenge via a novel top-down method for segmenting main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of the heritage bridge are generated from the captured UAV images through the structure-from-motion workflow. A segmentation method is developed based on the supervoxel structure and global graph optimization, which can effectively separate bridge components based on geometric features. Then, recognition by means of a classification tree and bridge geometry is utilized to recognize different structural elements from the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments on two bridges in China demonstrate the potential of the presented structural model reconstruction method using UAV photogrammetry and point cloud processing for the 3D digital documentation of heritage bridges. By using given markers, the reconstruction error of the point clouds can be as small as 0.4%. Moreover, the precision and recall of the segmentation results on testing data are better than 0.8, and a recognition accuracy better than 0.8 is achieved.
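The rule-based recognition step maps geometric features of each segment to a component label. The toy sketch below uses hypothetical axis-aligned-extent rules and thresholds purely to illustrate the idea; the actual classification tree and features are those described in the paper.

```python
def classify_segment(extent, centroid_z, deck_z, tol=1.0):
    """Toy rule-based labeling of a bridge segment from its axis-aligned
    extent (dx, dy, dz) and centroid height. All thresholds are
    illustrative assumptions, not the paper's values."""
    dx, dy, dz = extent
    if abs(centroid_z - deck_z) < tol and max(dx, dy) > 5 * dz:
        return "deck"        # flat, elongated, at deck height
    if dz > 2 * max(dx, dy):
        return "pier"        # slender and predominantly vertical
    return "other"           # railings, arches, etc. need further rules
```

Each supervoxel-based segment from the graph optimization would be summarized by such features and pushed through the tree to yield the labeled 3D model.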


Author(s):  
F. Bracci ◽  
M. Drauschke ◽  
S. Kühne ◽  
Z.-C. Márton

Different platforms and sensors are used to derive 3D models of urban scenes. 3D reconstructions from satellite and aerial images are used to derive sparse models mainly showing ground and roof surfaces of entire cities. In contrast to such sparse models, 3D reconstructions from UAV or ground images are much denser and show building facades and street furniture such as traffic signs and garbage bins. Furthermore, point clouds may also be acquired with LiDAR sensors. The point clouds do not only differ in their viewpoints, but also in their scales and point densities. Consequently, the fusion of such heterogeneous point clouds is highly challenging. Regarding urban scenes, another challenge is the occurrence of only a few parallel planes, which makes it difficult to find the correct rotation parameters. We discuss the limitations of the general fusion methodology based on an initial alignment step followed by a local coregistration using ICP and present strategies to overcome them.
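The local coregistration step can be illustrated with a minimal point-to-point ICP: alternate brute-force nearest-neighbour matching with a closed-form Kabsch alignment. This is a didactic sketch assuming small initial misalignment and small clouds; it is not the robust pipeline the abstract discusses.

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst for
    known correspondences - the inner step of each ICP iteration."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                  # sign fix avoids reflections
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Minimal nearest-neighbour point-to-point ICP (O(N*M) matching,
    fine for small demo clouds only)."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        R, t = kabsch(cur, dst[d.argmin(1)])
        cur = cur @ R.T + t
    return cur
```

The failure mode named in the abstract is visible here: if the scene offers only a few parallel planes, many nearest-neighbour pairs constrain the same normal direction and the rotation estimate in `kabsch` becomes ill-conditioned.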


2021 ◽  
Vol 13 (24) ◽  
pp. 5135
Author(s):  
Yahya Alshawabkeh ◽  
Ahmad Baik ◽  
Ahmad Fallatah

The work described in this paper emphasizes the importance of integrating imagery and terrestrial laser scanner (TLS) techniques to optimize the geometry and visual quality of Heritage BIM. The fusion-based workflow was applied during the recording of Zee Ain Historical Village in Saudi Arabia. The village is a unique example of traditional human settlements and represents a complex natural and cultural heritage site. The proposed workflow divides data integration into two levels. At the basic level, UAV photogrammetry with enhanced mobility and visibility is used to map the rugged terrain and supplement the TLS point data in upper and inaccessible building zones where occlusion shadows leave data gaps. The merging of point clouds ensures that the building's overall geometry is correctly rebuilt and that data interpretation is improved during HBIM digitization. In addition to correct geometry, texture mapping is particularly important in the area of cultural heritage. Constructing a realistic texture remains a challenge in HBIM because the standard textures and materials provided in BIM libraries do not allow for a reliable representation of heritage structures, so mapping and sharing information are not always truthful. Therefore, at the second level, the workflow proposes a true-orthophoto texturing method for HBIM models that combines close-range imagery and laser data. True orthophotos have a uniform scale that depicts all objects in their respective planimetric positions, providing reliable and realistic mapping. The process begins with the development of a Digital Surface Model (DSM) by sampling TLS 3D points in a regular grid, with each cell uniquely associated with a model point. Each DSM cell is then projected into the corresponding perspective imagery in order to map the relevant spectral information. The method allows for flexible data fusion and image capture using either a TLS-installed camera or a separate camera at the optimal time and viewpoint for radiometric data.
The developed workflow demonstrated adequate results in terms of complete and realistic textured HBIM, allowing for a better understanding of the complex heritage structures.
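The per-cell mapping at the heart of the true-orthophoto step (projecting each DSM grid point into a perspective image to fetch its spectral value) can be sketched with a simple pinhole model. The camera parameters are assumed, and occlusion handling is deliberately omitted here.

```python
import numpy as np

def project(K, R, t, X):
    """Project world points X (Nx3) through a pinhole camera with
    intrinsics K, rotation R and translation t; returns (u, v) pixels."""
    x = K @ (R @ X.T + t[:, None])      # world -> camera -> image plane
    return (x[:2] / x[2]).T

def texture_dsm(K, R, t, dsm_xyz, image):
    """Sample a spectral value for each DSM grid point from a perspective
    image - the per-cell mapping step of true-orthophoto generation.
    Points projecting outside the image receive 0."""
    uv = np.rint(project(K, R, t, dsm_xyz)).astype(int)
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    out = np.zeros(len(uv), image.dtype)
    out[ok] = image[uv[ok, 1], uv[ok, 0]]
    return out
```

A production implementation would additionally z-buffer the DSM against the camera to avoid mapping occluded cells, which is the difference between a plain orthophoto and a true orthophoto.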


Author(s):  
S. Song ◽  
R. Qin

Abstract. Image-based 3D modelling is rather mature nowadays for well-acquired images processed through the standard photogrammetric pipeline, while fusing 3D datasets generated from images with different views for surface reconstruction remains a challenge. Meshing algorithms for image-based 3D datasets require visibility information for surfaces, and such information can be difficult to obtain for 3D point clouds generated from images with different views, sources, resolutions and uncertainties. In this paper, we propose a novel multi-source mesh reconstruction and texture mapping pipeline optimized to address this challenge. Our key contributions are: 1) we extend a state-of-the-art image-based surface reconstruction method by incorporating geometric information produced by satellite images to create wide-area surface models; 2) we extend a texture mapping method to accommodate images acquired from different sensors, i.e. side-view perspective images and satellite images. Experiments show that our method creates a conforming surface model from these two sources, as well as consistent and well-balanced textures from images with drastically different radiometry (satellite images vs. street-view level images). We compared our proposed pipeline with a typical fusion pipeline, Poisson reconstruction, and the results show that our pipeline has distinct advantages.
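Balancing textures of drastically different radiometry can be approximated by a per-pair linear fit over the overlap region. The gain/bias model below is an illustrative assumption, not the paper's exact adjustment scheme.

```python
import numpy as np

def radiometric_gain_bias(src_overlap, ref_overlap):
    """Least-squares gain/bias mapping source-image intensities onto the
    reference in their overlap region: ref ~= gain * src + bias."""
    gain, bias = np.polyfit(np.ravel(src_overlap).astype(float),
                            np.ravel(ref_overlap).astype(float), 1)
    return gain, bias

def apply_adjustment(src, gain, bias, lo=0.0, hi=255.0):
    """Adjust a source texture and clip to the valid intensity range."""
    return np.clip(gain * src.astype(float) + bias, lo, hi)
```

Fitting one gain/bias pair per source image against a chosen reference (e.g. the satellite orthoimage) already removes most of the global brightness mismatch; local seams would still need blending at texture-patch borders.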

