ON THE USE OF UAVS IN MINING AND ARCHAEOLOGY - GEO-ACCURATE 3D RECONSTRUCTIONS USING VARIOUS PLATFORMS AND TERRESTRIAL VIEWS

Author(s):  
A. Tscharf ◽  
M. Rumpler ◽  
F. Fraundorfer ◽  
G. Mayer ◽  
H. Bischof

During the last decades, photogrammetric computer vision systems have become well established in scientific and commercial applications. In particular, the increasing affordability of unmanned aerial vehicles (UAVs), in conjunction with automated multi-view processing pipelines, has made it easy to acquire spatial data and create realistic and accurate 3D models. Because multicopter UAVs can navigate slowly, hover, and capture images at nearly any position, they can record highly overlapping images ranging from almost terrestrial camera positions to oblique and nadir aerial views. Multicopter UAVs thus bridge the gap between terrestrial and traditional aerial image acquisition and are therefore ideally suited for easy and safe data collection and inspection tasks in complex or hazardous environments. In this paper we present a fully automated processing pipeline for precise, metric and geo-accurate 3D reconstructions of complex geometries using various imaging platforms. Our workflow allows for georeferencing of UAV imagery based on GPS measurements of camera stations from an on-board GPS receiver as well as tie and control point information. Ground control points (GCPs) are integrated directly into the bundle adjustment to refine the georegistration and correct for systematic distortions of the image block. We discuss our approach based on three different case studies for applications in mining and archaeology and present several accuracy-related analyses investigating georegistration, camera network configuration and ground sampling distance. Our approach is furthermore suited for seamlessly matching and integrating images from different viewpoints and cameras (aerial and terrestrial as well as inside views) into a single reconstruction. Together with aerial images from a UAV, we are able to enrich 3D models by jointly processing terrestrial images as well as inside views of an object to generate highly detailed, accurate and complete reconstructions.
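
In spirit, the GCP integration described above amounts to adding extra residual terms to the least-squares problem solved by bundle adjustment. The following is a minimal illustrative sketch only, not the authors' pipeline: the simple pinhole model, the 7-parameter camera vector and the weight w_gcp are assumptions made for illustration.

```python
# Sketch: ground control points as additional residuals in a bundle adjustment.
# Not the authors' implementation; model and parameterization are assumed.
import numpy as np
from scipy.spatial.transform import Rotation

def project(cam, pts):
    """Minimal pinhole projection; cam = [rx, ry, rz, tx, ty, tz, f]."""
    p_cam = Rotation.from_rotvec(cam[:3]).apply(pts) + cam[3:6]
    return cam[6] * p_cam[:, :2] / p_cam[:, 2:3]

def residuals(x, n_cams, n_pts, obs_cam_idx, obs_pt_idx, obs_uv,
              gcp_idx, gcp_xyz, w_gcp=10.0):
    cams = x[:n_cams * 7].reshape(n_cams, 7)
    pts = x[n_cams * 7:].reshape(n_pts, 3)
    # reprojection residuals of all tie-point observations
    r_proj = []
    for c in range(n_cams):
        sel = obs_cam_idx == c
        if np.any(sel):
            r_proj.append(project(cams[c], pts[obs_pt_idx[sel]]) - obs_uv[sel])
    # GCP residuals: surveyed coordinates pull the corresponding 3D points
    # (and hence the whole image block) into the geodetic reference frame
    r_gcp = w_gcp * (pts[gcp_idx] - gcp_xyz)
    return np.concatenate([np.concatenate(r_proj).ravel(), r_gcp.ravel()])

# scipy.optimize.least_squares(residuals, x0, args=(...)) would then jointly
# refine cameras and points; GPS priors on camera centres can be appended alike.
```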

2019 ◽  
Vol 11 (19) ◽  
pp. 2219 ◽  
Author(s):  
Fatemeh Alidoost ◽  
Hossein Arefi ◽  
Federico Tombari

In this study, a deep learning (DL)-based approach is proposed for the detection and reconstruction of buildings from a single aerial image. The knowledge required to reconstruct the 3D shapes of buildings, including the height data as well as the linear elements of individual roofs, is derived from the RGB image using an optimized multi-scale convolutional–deconvolutional network (MSCDN). The proposed network is composed of two feature extraction levels to first predict the coarse features and then automatically refine them. The predicted features include the normalized digital surface models (nDSMs) and linear elements of roofs in three classes: eave, ridge, and hip lines. The prismatic models of buildings are then generated by analyzing the eave lines. The parametric models of individual roofs are also reconstructed using the predicted ridge and hip lines. The experiments show that, even in the presence of noise in the height values, the proposed method performs well on 3D reconstruction of buildings with different shapes and complexities. The average root mean square error (RMSE) and normalized median absolute deviation (NMAD) metrics are about 3.43 m and 1.13 m, respectively, for the predicted nDSM. Moreover, the quality of the extracted linear elements is about 91.31% and 83.69% for the Potsdam and Zeebrugge test data, respectively. Unlike state-of-the-art methods, the proposed approach does not need any additional or auxiliary data and employs a single image to reconstruct the 3D models of buildings with a competitive precision of about 1.2 m and 0.8 m for the horizontal and vertical RMSEs over the Potsdam data and about 3.9 m and 2.4 m over the Zeebrugge test data.
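
A rough idea of such multi-task prediction from a single RGB image can be sketched as a shared encoder with two decoder heads, one regressing the nDSM and one classifying roof-line pixels. This is an illustrative stand-in, not the authors' MSCDN; the layer sizes and the four line classes (background, eave, ridge, hip) are assumptions.

```python
# Illustrative multi-task head: nDSM regression + roof-line classification.
import torch
import torch.nn as nn

class MultiTaskRoofNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(              # shared convolutional features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        def head(out_ch):                          # small deconvolutional decoder
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1),
            )
        self.ndsm_head = head(1)                   # per-pixel height regression
        self.lines_head = head(4)                  # background/eave/ridge/hip logits

    def forward(self, rgb):
        feats = self.encoder(rgb)
        return self.ndsm_head(feats), self.lines_head(feats)

ndsm, lines = MultiTaskRoofNet()(torch.randn(1, 3, 256, 256))
print(ndsm.shape, lines.shape)                     # (1,1,256,256), (1,4,256,256)
```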


Author(s):  
X. Zhuo ◽  
F. Kurz ◽  
P. Reinartz

Manned aircraft have long been used for capturing large-scale aerial images, yet high costs and weather dependence restrict their availability in emergency situations. In recent years, the MAV (Micro Aerial Vehicle) has emerged as a novel modality for aerial image acquisition. Its maneuverability and flexibility enable rapid awareness of the scene of interest. Since these two platforms deliver scene information at different scales and from different views, it makes sense to fuse these two types of complementary imagery to achieve a quick, accurate and detailed description of the scene, which is the main concern of real-time situation awareness. This paper proposes a method to fuse multi-view and multi-scale aerial imagery by establishing a common reference frame. In particular, common features among MAV images and geo-referenced airplane images can be extracted by a scale-invariant feature detector such as SIFT. From the tie points of the geo-referenced images we derive the coordinates of the corresponding ground points, which are then utilized as ground control points in a global bundle adjustment of the MAV images. In this way, the MAV block is aligned to the reference frame. Experimental results show that this method can achieve fully automatic geo-referencing of MAV images even if the GPS/IMU acquisition has dropouts, and that the orientation accuracy is improved compared to GPS/IMU-based georeferencing. The concept for a subsequent 3D classification method is also described in this paper.
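
The cross-platform tie-point step can be illustrated with a standard SIFT matching workflow (an assumed OpenCV sketch, not the authors' exact implementation); the file names are placeholders.

```python
# Match SIFT features between an MAV image and a geo-referenced airplane image;
# matched points in the airplane image can then be converted to ground
# coordinates and used as control points for the MAV bundle adjustment.
import cv2

mav = cv2.imread("mav_image.jpg", cv2.IMREAD_GRAYSCALE)         # placeholder paths
aerial = cv2.imread("airplane_image.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(mav, None)
kp2, des2 = sift.detectAndCompute(aerial, None)

# Lowe ratio test keeps only distinctive correspondences
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Pixel positions of the tie points in the geo-referenced image
tie_points_aerial = [kp2[m.trainIdx].pt for m in good]
print(f"{len(good)} cross-platform tie points")
```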


Author(s):  
Z. Li ◽  
B. Wu ◽  
Y. Li

Abstract. Photorealistic three-dimensional (3D) models play an indispensable role in the spatial data infrastructure (SDI) of a smart city. Recent developments in aerial oblique photogrammetry and the popularity of terrestrial mobile mapping systems (MMSs) offer possibilities for deriving 3D models with centimeter-level accuracy in urban areas. Additionally, advances in image matching and bundle adjustment have allowed 3D models derived from the integration of aerial and ground imagery to overcome typical problems of 3D mapping in urban areas (e.g., geometric defects, blurred textures on building façades). Nevertheless, this approach may not be suitable for all scenarios owing to innate differences between the platforms. Moreover, MMS images may not cover regions that cannot be reached by mobile vehicles in urban areas (e.g., narrow alleys, areas far from roads). Meanwhile, backpack systems have garnered attention from the photogrammetry community in recent years due to their flexibility, and regions neglected in previous works can be adequately reconstructed from images collected by backpack systems. This paper presents an approach for effectively integrating multi-source images collected by aerial, MMS, and backpack platforms for seamless 3D mapping in urban areas. The approach includes three main steps: (1) data pre-processing, (2) combined structure-from-motion, and (3) optimal generation of a textured 3D mesh model. The experimental results using aerial, MMS, and backpack datasets collected in a typical urban area in Hong Kong demonstrate the promising performance of the proposed approach. The described work is significant for exploiting various types of imagery for integrated 3D mapping at both the city scale and the street level to facilitate various applications.
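
As a rough illustration of step (2), images from all three platforms can be fed into one structure-from-motion run so that cross-platform matches end up in a single bundle adjustment. The sketch below uses the COLMAP command-line tools as a generic stand-in; the paper's own pipeline and the folder layout shown here are assumptions.

```python
# Combined SfM sketch: one database and one reconstruction for all platforms.
import subprocess
from pathlib import Path

workspace = Path("combined_sfm")            # hypothetical workspace
images = workspace / "images"               # aerial/, mms/ and backpack/ subfolders
database = workspace / "database.db"
sparse = workspace / "sparse"
sparse.mkdir(parents=True, exist_ok=True)

for cmd in (
    ["colmap", "feature_extractor", "--database_path", str(database),
     "--image_path", str(images)],
    ["colmap", "exhaustive_matcher", "--database_path", str(database)],
    ["colmap", "mapper", "--database_path", str(database),
     "--image_path", str(images), "--output_path", str(sparse)],
):
    subprocess.run(cmd, check=True)
```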


2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown in open fields under difficult circumstances: complex lighting conditions and the non-ideal crop maintenance practices of local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, as well as the aerial image data set and a hand-made, pixel-precise ground truth segmentation, to facilitate the comparison among different algorithms.
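
For context, the reported figure can be read as a per-pixel evaluation against the hand-made ground truth. The sketch below shows one common definition of mean accuracy (the average of per-class pixel accuracies); the authors' exact metric may differ, and the random arrays only stand in for a real prediction/ground-truth pair.

```python
# Pixel-wise evaluation of a binary crop/non-crop segmentation.
import numpy as np

def mean_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """pred, gt: boolean arrays of equal shape (True = crop pixel)."""
    accs = []
    for cls in (True, False):                     # crop and non-crop classes
        mask = gt == cls
        if mask.any():
            accs.append(np.mean(pred[mask] == cls))
    return float(np.mean(accs))

rng = np.random.default_rng(0)
gt = rng.random((512, 512)) > 0.5                 # stand-in ground truth
pred = gt ^ (rng.random((512, 512)) > 0.95)       # ~5% of pixels flipped
print(f"mean accuracy: {mean_accuracy(pred, gt):.4f}")
```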


2021 ◽  
Vol 13 (14) ◽  
pp. 2656
Author(s):  
Furong Shi ◽  
Tong Zhang

Deep-learning technologies, especially convolutional neural networks (CNNs), have achieved great success in building extraction from aerial images. However, shape details are often lost during the down-sampling process, which results in discontinuous segmentation or inaccurate segmentation boundaries. To compensate for the loss of shape information, two shape-related auxiliary tasks (i.e., boundary prediction and distance estimation) were jointly learned with the building segmentation task in our proposed network. Meanwhile, two consistency-constraint losses were designed based on the multi-task network to exploit the duality between the mask prediction and the two shape-related predictions. Specifically, an atrous spatial pyramid pooling (ASPP) module was appended to the top of the encoder of a U-shaped network to obtain multi-scale features. Based on the multi-scale features, one regression loss and two classification losses were used for predicting the distance-transform map, segmentation, and boundary. Two inter-task consistency-loss functions were constructed to ensure the consistency between distance maps and masks, and the consistency between masks and boundary maps. Experimental results on three public aerial image data sets showed that our method achieved superior performance over recent state-of-the-art models.
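
One plausible form of such a multi-task objective with an inter-task consistency term is sketched below. This is an assumption-laden illustration rather than the paper's exact losses: the soft conversion from the predicted distance map to a mask and the weighting factor lam are made up for the example.

```python
# Multi-task loss: mask + boundary classification, distance regression,
# plus one consistency term tying the distance map to the mask prediction.
import torch
import torch.nn.functional as F

def multitask_loss(mask_logits, boundary_logits, dist_pred,
                   mask_gt, boundary_gt, dist_gt, lam=0.1):
    # mask_gt and boundary_gt are float tensors in [0, 1]
    l_mask = F.binary_cross_entropy_with_logits(mask_logits, mask_gt)
    l_boundary = F.binary_cross_entropy_with_logits(boundary_logits, boundary_gt)
    l_dist = F.l1_loss(dist_pred, dist_gt)
    # consistency: a soft mask derived from the predicted (normalized) distance
    # map should agree with the mask head's own prediction
    mask_from_dist = torch.sigmoid(10.0 * dist_pred)      # assumed conversion
    l_consist = F.mse_loss(mask_from_dist, torch.sigmoid(mask_logits))
    return l_mask + l_boundary + l_dist + lam * l_consist
```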


Author(s):  
Linying Zhou ◽  
Zhou Zhou ◽  
Hang Ning

Road detection from aerial images is still a challenging task, since it is heavily influenced by spectral reflectance, shadows and occlusions. To increase road detection accuracy, this paper studies a method for road detection based on the geodesic active contour (GAC) model with edge feature extraction and segmentation. First, edge features are extracted using the proposed gradient magnitude combined with the Canny operator. Then, a reconstructed gradient map is used in a watershed transformation, whose segmentation provides the initial contour. Last, combining the edge features and the initial contour, a boundary stopping function is applied in the GAC model, and the road boundary result is finally obtained. Experimental results show, by comparison with other methods in terms of the F-measure, that the proposed method achieves satisfying results.
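
The overall flow (edge map, watershed initialization, geodesic active contour with an edge-stopping function) can be approximated with scikit-image building blocks. This follows the spirit of the method rather than its exact gradient-magnitude formulation; the input file and the choice of the largest watershed region as initial contour are assumptions.

```python
# Edge-guided geodesic active contour sketch for road boundary extraction.
import numpy as np
from skimage import io, color, feature, filters, segmentation

image = color.rgb2gray(io.imread("aerial_road.png"))      # placeholder input

# 1) edge features: gradient magnitude reinforced by Canny edges
gradient = filters.sobel(image)
edges = feature.canny(image, sigma=2.0)
gradient[edges] = gradient.max()

# 2) watershed on the gradient map; one region serves as the initial contour
labels = segmentation.watershed(gradient, markers=250, compactness=0.001)
largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
init = labels == largest

# 3) geodesic active contour driven by an edge-stopping function
stop = segmentation.inverse_gaussian_gradient(image)
road = segmentation.morphological_geodesic_active_contour(
    stop, 100, init_level_set=init, smoothing=2, balloon=-1)
```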


2021 ◽  
Author(s):  
Renato Somma ◽  
Alfredo Trocciola ◽  
Daniele Spizzichino ◽  
Alessandro Fedele ◽  
Gabriele Leoni ◽  
...  

The archaeological site of Villa Arianna, located on Varano Hill, south of Vesuvius, offers tantalizing information regarding first-century AD resilience to hydrogeological risk. Additionally, the site provides an important test case for the mitigation of current and future geo-hazards. Villa Arianna, notable in particular for its wall frescoes, is part of a complex of Roman villas built between 89 BC and AD 79 in the ancient coastal resort area of Stabiae. This villa complex is located on a morphological terrace that separates the ruins from the present-day urban center of Castellammare di Stabia. The Varano hill is formed of alternating pyroclastic deposits from the Vesuvius complex and alluvial sediments from the Sarno River. In AD 79, the area was completely covered by pyroclastic density currents (PDCs) from the Plinian eruption of Vesuvius. Due to its geomorphological structure, the slope is prone to instability phenomena, mainly earth and debris flows, usually triggered by heavy rainfall. The susceptibility is worsened by changes in hydraulic and land-use conditions, mainly caused by a lack of maintenance of mitigation works. Villa Arianna is the subject of a joint INGV-ENEA-ISPRA pilot project that includes non-invasive monitoring techniques, such as the use of UAVs, to study the areas of the slope at higher risk of instability. The project, in particular, seeks to implement innovative mitigation solutions that are non-destructive to the cultural heritage. UAVs represent the fastest way to produce high-resolution 3D models of large sites and allow archaeologists to collect accurate spatial data that can be used for 3D GIS analyses. Through this pilot project, we have used detailed 3D models and high-resolution ortho-images for new analyses and documentation of the site and to map the slope instabilities that threaten the Villa Arianna site. Through multi-temporal analyses of different data acquisitions, we intend to define the detailed morphological evolution of the entire Varano slope. These analyses will allow us to highlight priority areas for future low-impact mitigation interventions.


Author(s):  
WANG WEI ◽  
YANG XIN

This paper describes an innovative aerial image segmentation algorithm. The algorithm is based on multiscale geometric analysis of the image using the contourlet transform, which can efficiently extract the image's intrinsic geometric structure. The contourlet transform is introduced to represent the most distinctive and rotation-invariant features of the image. A modified Mumford–Shah model is then applied to segment the aerial image by a multi-feature level set evolution. To avoid possible local minima in the level set evolution, we adjust the weighting coefficients of the multiscale features in different evolution periods: the global features have larger weighting coefficients in the beginning stages, which roughly define the shape of the contour, and larger weighting coefficients are then assigned to the detailed features to segment the precise shape. When the algorithm is applied to segment aerial images with several classes, satisfactory experimental results are achieved by the proposed method.
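
The weight-scheduling idea can be made concrete with a tiny sketch: early iterations weight the global (coarse) features, later iterations weight the detailed features. The linear schedule and the two-way split are assumptions for illustration only.

```python
# Linear shift of feature weights from global to detailed features.
import numpy as np

def feature_weights(step: int, total_steps: int) -> np.ndarray:
    """Returns [w_global, w_detail], moving linearly from (1, 0) to (0, 1)."""
    t = step / max(total_steps - 1, 1)
    return np.array([1.0 - t, t])

for step in (0, 50, 99):
    print(step, feature_weights(step, 100))
```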


Author(s):  
Yi-Ta Hsieh ◽  
Shou-Tsung Wu ◽  
Chaur-Tzuhn Chen ◽  
Jan-Chang Chen

The shadows in optical remote sensing images are regarded as image nuisances in numerous applications. The classification and interpretation of shadow areas in a remote sensing image are a challenge because of the reduction or total loss of spectral information in those areas. In recent years, airborne multispectral aerial imaging devices, including the Leica ADS-40 and the Intergraph DMC, have been developed to provide data with 12-bit or higher radiometric resolution. The increased radiometric resolution of digital imagery provides more radiometric detail of potential use in the classification or interpretation of land cover in shadow areas. Therefore, the objectives of this study are to analyze the spectral properties of the land cover in shadow areas using ADS-40 high-radiometric-resolution aerial images, and to investigate the spectral and vegetation index differences between various shadow and non-shadow land covers. The spectral analysis of the ADS-40 images shows that: (i) the DN values in shadow areas are much lower than in non-shadow areas; (ii) the DN values received from shadowed areas are also affected by the land cover, which indicates the possibility of retrieving land cover properties as in non-shadow areas; (iii) the DN values received from shadowed regions decrease in the visible bands from short to long wavelengths due to scattering; (iv) in shadow areas, the NIR band of the vegetation category still shows strong reflection; and (v) generally, vegetation indices (NDVI) remain useful for classifying vegetation and non-vegetation in shadow areas. The spectral data of high-radiometric-resolution images (ADS-40) thus have potential for extracting land cover information in shadow areas.
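
Point (v) rests on the standard NDVI formula, NDVI = (NIR - Red) / (NIR + Red), which can be evaluated separately inside and outside a shadow mask. The sketch below uses random 12-bit values and an arbitrary shadow mask purely as stand-ins for real ADS-40 bands and a prior shadow detection step.

```python
# NDVI per pixel, compared between shadow and non-shadow areas.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

rng = np.random.default_rng(1)
red = rng.integers(0, 4096, (100, 100))           # stand-in 12-bit red band
nir = rng.integers(0, 4096, (100, 100))           # stand-in 12-bit NIR band
shadow = rng.random((100, 100)) > 0.7             # stand-in shadow mask

v = ndvi(red, nir)
print("mean NDVI in shadow:     ", v[shadow].mean())
print("mean NDVI outside shadow:", v[~shadow].mean())
```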


2011 ◽  
Vol 6 ◽  
pp. 267-274
Author(s):  
Stanislav Popelka ◽  
Alžběta Brychtová

Olomouc, nowadays a city with 100,000 inhabitants, has always been considered one of the most prominent Czech cities. It is a social and economic centre whose history began around the 11th century. The present appearance of the city has its roots in the 18th century, when the city was almost razed to the ground after the Thirty Years' War and a great fire in 1709. After that, the city was rebuilt as a baroque military fortress against the Prussian army. At the beginning of the 20th century the majority of the fortress was demolished. The character of the town is dominated by a large number of churches, burghers' houses and other architecturally significant buildings, such as the Holy Trinity Column, a UNESCO World Heritage Site. The aim of this project was to determine the most suitable methods of visualizing spatio-temporal change in a historical built-up area from the tourist's point of view, and to design and evaluate possibilities of spatial data acquisition. There are many methods of 2D and 3D visualization suitable for depicting historical and contemporary situations. Four approaches are discussed in the article: comparison of historical and recent pictures or photos, overlaying historical maps on an orthophoto, enhanced visualization of a large-scale historical map using the third dimension, and photorealistic 3D models of the same area in different ages. All of these methods were geolocated in the Google Earth environment, and multimedia features were added to enhance the impression of perception. The outlined possibilities of visualization were realized in a case study of the city of Olomouc. Rapport plans of the bastion fortress from the 17th century were used as a source of historical data. The accuracy of the historical maps was confirmed by cartometric methods using the MapAnalyst software. Recording information on spatio-temporal changes has great potential in urban planning and the realization of reconstructions, and particularly in the promotion of the region and in increasing citizens' knowledge of the history of Olomouc.

