Application of Deep Learning for Delineation of Visible Cadastral Boundaries from Remote Sensing Imagery

2019 ◽  
Vol 11 (21) ◽  
pp. 2505 ◽  
Author(s):  
Crommelinck ◽  
Koeva ◽  
Yang ◽  
Vosselman

Cadastral boundaries are often demarcated by objects that are visible in remote sensing imagery. Indirect surveying relies on the delineation of visible parcel boundaries from such images. Despite advances in automated detection and localization of objects from images, indirect surveying is rarely automated and relies on manual on-screen delineation. We have previously introduced a boundary delineation workflow, comprising image segmentation, boundary classification, and interactive delineation, which we applied to Unmanned Aerial Vehicle (UAV) data to delineate roads. In this study, we improve each of these steps. For image segmentation, we remove the need to reduce the image resolution and we limit over-segmentation by reducing the number of segment lines by 80% through filtering. For boundary classification, we show how Convolutional Neural Networks (CNN) can be used for boundary line classification, eliminating the previous need for Random Forest (RF) feature generation and achieving 71% accuracy. For interactive delineation, we develop additional and more intuitive delineation functionalities that cover more application cases. We test our approach on larger and more varied data sets by applying it to UAV and aerial imagery of 0.02–0.25 m resolution from Kenya, Rwanda, and Ethiopia. We show that it is more effective in terms of clicks and time compared to manual delineation for parcels surrounded by visible boundaries. The strongest advantages are obtained for rural scenes delineated from aerial imagery, where the delineation effort per parcel requires 38% less time and 80% fewer clicks than manual delineation.
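The abstract does not include code; the sketch below is a minimal illustration (not the authors' implementation) of how a small CNN could score image patches sampled along candidate segment lines as boundary or non-boundary, replacing hand-crafted RF features with learned ones. The patch size, architecture, and the rule for aggregating patch scores into a segment score are all assumptions.

```python
# Hypothetical sketch: a small CNN that classifies fixed-size patches
# centred on candidate segment lines as "boundary" vs. "not boundary".
# Patch extraction and the training loop are assumed to exist elsewhere.
import torch
import torch.nn as nn

class BoundaryPatchCNN(nn.Module):
    def __init__(self, patch_size: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Three poolings halve each spatial dimension three times.
        self.classifier = nn.Linear(64 * (patch_size // 8) ** 2, 1)

    def forward(self, x):              # x: (N, 3, patch, patch)
        z = self.features(x).flatten(1)
        return self.classifier(z)      # one boundary logit per patch

# Illustrative aggregation rule (an assumption, not from the paper):
# keep a candidate segment if the mean sigmoid score of its patches
# exceeds a chosen threshold.
```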

Author(s):  
Lauren Gillespie ◽  
Megan Ruffley ◽  
Moisés Expósito-Alonso

Accurately mapping biodiversity at high resolution across ecosystems has been a historically difficult task. One major hurdle to accurate biodiversity modeling is the power-law relationship between the abundances of different species in an environment: a few species are relatively abundant while many more are rare. This "commonness of rarity," confounded with differential detectability of species, can lead to misestimations of where a species lives. To overcome these confounding factors, many biodiversity models employ species distribution models (SDMs), which predict the full extent of where a species lives from observations of where it has been found, correlated with environmental variables. Most SDMs use bioclimatic environmental variables as predictors of a species' range, but these approaches often rely on biased pseudo-absence generation methods and model species using coarse-grained bioclimatic variables with a useful resolution floor of roughly 1 km per pixel. Here, we pair iNaturalist citizen science plant observations from the Global Biodiversity Information Facility with RGB-Infrared aerial imagery from the National Aerial Imagery Program to develop a deep convolutional neural network model that can predict the presence of nearly 2,500 plant species across California. We utilize a state-of-the-art multilabel image recognition model from the computer vision community, paired with a cutting-edge multilabel classification loss, which yields accuracy comparable to or better than traditional SDMs, but at a resolution of 250 m (Ben-Baruch et al. 2020, Ridnik et al. 2020). Furthermore, this deep convolutional model accurately predicts species presence across multiple biomes of California and can be used to build a plant biodiversity map across the state with unparalleled accuracy. Given the widespread availability of citizen science observations and remote sensing imagery across the globe, this deep learning-enabled method could be deployed to automatically map biodiversity at large scales.
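As a hedged sketch of the setup described here (not the authors' released model), the snippet below adapts a standard torchvision ResNet-50 to four-channel RGB-Infrared input with one logit per species, trained with a simplified version of the asymmetric multilabel loss of Ben-Baruch et al. 2020. The 2,500-species output size comes from the abstract; the backbone choice, hyperparameters, and loss simplifications are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class SimplifiedAsymmetricLoss(nn.Module):
    """Simplified multilabel asymmetric loss: easy negatives are
    down-weighted (gamma_neg > gamma_pos) and negative probabilities are
    shifted by a margin, following the idea in Ben-Baruch et al. 2020."""
    def __init__(self, gamma_pos=0.0, gamma_neg=4.0, margin=0.05, eps=1e-8):
        super().__init__()
        self.gp, self.gn, self.m, self.eps = gamma_pos, gamma_neg, margin, eps

    def forward(self, logits, targets):        # targets in {0, 1}, same shape
        p = torch.sigmoid(logits)
        p_shift = (p - self.m).clamp(min=0)    # probability shifting
        pos = targets * (1 - p).pow(self.gp) * torch.log(p.clamp(min=self.eps))
        neg = (1 - targets) * p_shift.pow(self.gn) * \
            torch.log((1 - p_shift).clamp(min=self.eps))
        return -(pos + neg).mean()

# Stand-in backbone (assumption): ResNet-50 with a 4-channel first
# convolution for RGB-IR tiles and ~2,500 per-species presence logits.
model = models.resnet50(weights=None)
model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2500)
criterion = SimplifiedAsymmetricLoss()
```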


2019 ◽  
Vol 11 (20) ◽  
pp. 2380 ◽  
Author(s):  
Liu ◽  
Luo ◽  
Huang ◽  
Hu ◽  
Sun ◽  
...  

Deep convolutional neural networks have driven significant progress in building extraction from high-resolution remote sensing imagery. While most such work modifies existing image segmentation networks from computer vision, in this paper we propose a new network, Deep Encoding Network (DE-Net), designed specifically for this problem and built on recently introduced image segmentation techniques. Four modules are used to construct DE-Net: inception-style downsampling modules combining a striding convolution layer and a max-pooling layer; encoding modules comprising six linear residual blocks with a scaled exponential linear unit (SELU) activation function; compressing modules that reduce the feature channels; and a densely upsampling module that enables the network to encode spatial information inside feature maps. DE-Net achieves state-of-the-art performance on the WHU Building Dataset in recall, F1-score, and intersection over union (IoU) without pretraining. It also outperformed several segmentation networks on our self-built Suzhou Satellite Building Dataset. The experimental results validate the effectiveness of DE-Net for building extraction from both aerial and satellite imagery, and suggest that, given enough training data, designing and training a network from scratch can outperform fine-tuning models pre-trained on datasets unrelated to building extraction.
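The abstract names the ingredients of the downsampling module (a striding convolution branch plus a max-pooling branch, with SELU activations in the encoder); a minimal PyTorch sketch of such an inception-style block follows. Channel counts and the concatenation layout are assumptions, since the paper's exact configuration is not given here.

```python
import torch
import torch.nn as nn

class InceptionStyleDownsample(nn.Module):
    """Sketch of a DE-Net-style downsampling block: a stride-2 convolution
    branch and a stride-2 max-pooling branch, concatenated on channels."""
    def __init__(self, in_ch: int, conv_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, conv_ch, kernel_size=3, stride=2, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.act = nn.SELU()  # SELU activation, as described for the encoder

    def forward(self, x):
        # Output: (N, conv_ch + in_ch, H/2, W/2)
        return self.act(torch.cat([self.conv(x), self.pool(x)], dim=1))
```

Concatenating the two branches lets the block halve resolution while keeping both learned (convolution) and parameter-free (pooling) views of the input, which is the usual motivation for this inception-style pairing.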


2020 ◽  
Vol 12 (7) ◽  
pp. 1085 ◽  
Author(s):  
Weixing Zhang ◽  
Anna K. Liljedahl ◽  
Mikhail Kanevskiy ◽  
Howard E. Epstein ◽  
Benjamin M. Jones ◽  
...  

State-of-the-art deep learning technology has been successfully applied to relatively small selected areas of very high spatial resolution (0.15 and 0.25 m) optical aerial imagery acquired by a fixed-wing aircraft to automatically characterize ice-wedge polygons (IWPs) in the Arctic tundra. However, any mapping of IWPs at regional to continental scales requires images acquired on different sensor platforms (particularly satellite) and a refined understanding of the performance stability of the method across sensor platforms through reliable evaluation assessments. In this study, we examined the transferability of a deep learning Mask Region-Based Convolutional Neural Network (Mask R-CNN) model for mapping IWPs in satellite remote sensing imagery (~0.5 m) covering 272 km² and unmanned aerial vehicle (UAV) imagery (0.02 m) covering 0.32 km². Multi-spectral images were obtained from the WorldView-2 satellite sensor (pan-sharpened to ~0.5 m) and from a 20 MP CMOS camera onboard a UAV, respectively. The training dataset included 25,489 and 6022 manually delineated IWPs from satellite and fixed-wing aircraft aerial imagery near the Arctic Coastal Plain, northern Alaska. Quantitative assessments showed that individual IWPs were correctly detected at up to 72% and 70%, and delineated at up to 73% and 68% F1-score accuracy levels for satellite and UAV images, respectively. Expert-based qualitative assessments showed that IWPs were correctly detected at good (40–60%) and excellent (80–100%) accuracy levels for satellite and UAV images, respectively, and delineated at an excellent (80–100%) level for both. We found that (1) regardless of spatial resolution and spectral bands, the deep learning Mask R-CNN model effectively mapped IWPs in both satellite and UAV images; (2) the model achieved better detection accuracy with finer image resolution, such as UAV imagery, yet better delineation accuracy with coarser image resolution, such as satellite imagery; (3) increasing the amount of training data when resolutions differ between the training and application imagery does not necessarily improve the Mask R-CNN's performance in IWP mapping; and (4) overall, the model underestimates the total number of IWPs, particularly disjoint/incomplete IWPs.
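A common way to set up such a model is torchvision's Mask R-CNN with a ResNet-50 FPN backbone, with the box and mask heads replaced for a two-class (background vs. ice-wedge polygon) problem. The sketch below shows that generic setup, not the authors' exact configuration, which the abstract does not specify.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_iwp_maskrcnn(num_classes: int = 2):
    """Generic torchvision Mask R-CNN for instance segmentation of IWPs;
    num_classes = 2 covers background plus one 'ice-wedge polygon' class."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the box-classification head for our class count.
    in_box = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_box, num_classes)
    # Swap the mask-prediction head likewise (256 hidden channels assumed).
    in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)
    return model
```

Fine-tuning the same architecture on imagery from each sensor platform is what makes the cross-resolution transferability question in this study well posed.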


2020 ◽  
Vol 12 (2) ◽  
pp. 213 ◽  
Author(s):  
Chengming Zhang ◽  
Yan Chen ◽  
Xiaoxia Yang ◽  
Shuai Gao ◽  
Feng Li ◽  
...  

When extracting land-use information from remote sensing imagery using image segmentation, obtaining fine edges for extracted objects is a key unsolved problem. In this study, we developed a new weight feature value convolutional neural network (WFCNN) to perform fine remote sensing image segmentation and extract improved land-use information from remote sensing imagery. The WFCNN comprises one encoder and one classifier. The encoder obtains a set of spectral features and five levels of semantic features. It uses a linear fusion method to hierarchically fuse the semantic features, employs an adjustment layer to optimize each level of fused features to ensure the stability of the pixel features, and combines the fused semantic and spectral features to form a feature graph. The classifier then uses a Softmax model to perform pixel-by-pixel classification. The WFCNN was trained using stochastic gradient descent; the WFCNN and two variants were then experimentally tested on Gaofen-6 and aerial images and compared with the commonly used SegNet, U-Net, and RefineNet models. The accuracy, precision, recall, and F1-score of the WFCNN were higher than those of the other models, indicating clear advantages in pixel-by-pixel segmentation. The results show that the WFCNN can improve the accuracy and automation level of large-scale land-use mapping and the extraction of other information from remote sensing imagery.
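A minimal sketch of the described fusion scheme follows: each semantic level passes through a 1x1 "adjustment" convolution, levels are fused linearly from deep to shallow, and a Softmax layer classifies pixel by pixel. Channel widths, the fusion order, and the shared feature width are assumptions, since the abstract does not fix them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalFusionHead(nn.Module):
    """Sketch of a WFCNN-style head: per-level 1x1 adjustment convolutions,
    deep-to-shallow linear fusion, and per-pixel Softmax classification."""
    def __init__(self, in_channels, n_classes, width=64):
        super().__init__()
        # One adjustment layer per semantic level (width is an assumption).
        self.adjust = nn.ModuleList(nn.Conv2d(c, width, 1) for c in in_channels)
        self.classify = nn.Conv2d(width, n_classes, 1)

    def forward(self, feats):                  # feats: deepest level first
        fused = self.adjust[0](feats[0])
        for adj, f in zip(self.adjust[1:], feats[1:]):
            # Upsample the running fusion to the next (shallower) level
            # and add the adjusted features: a linear fusion step.
            fused = F.interpolate(fused, size=f.shape[-2:],
                                  mode="bilinear", align_corners=False)
            fused = fused + adj(f)
        return F.softmax(self.classify(fused), dim=1)  # per-pixel class probs
```

In practice the raw logits (before Softmax) would feed a cross-entropy loss during training; the explicit Softmax here mirrors the classifier as the abstract describes it.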


2019 ◽  
Vol 19 (01) ◽  
pp. 1950002 ◽  
Author(s):  
P. K. Dutta

Classification of remote sensing spatial information from multi-spectral satellite imagery can be used to obtain multiple representations of an image and capture different structural lineaments. Pixels are grouped using clustering and morphology-based segmentation to obtain region-based spatial information, which is used to calculate the spatial features of contiguous regions by classifying each region according to the statistics of its pixel properties. In the proposed work, Google Earth images are analysed with graph cuts to identify morphological patterns of river flow. Multi-temporal satellite images acquired from Google Earth are used to identify digital elevation and to formulate an energy function that compares pixel displacements using a similarity measure. A method is proposed to solve non-rigid image transformation via the graph-cuts algorithm by modeling the registration process as a discrete labeling problem: a displacement vector associated with each pixel in the source image indicates the corresponding position in the moving image. The transformation matrix produced from changes in regional pixel intensities is then optimized for energy minimization using the graph-cuts algorithm and the demon registration technique. The proposed study exploits the advantages of regional segmentation to identify homogeneous areas for optimal image segmentation and to extract digital footprints of changing river-bed patterns from temporal LANDSAT data. By applying the proposed multi-level registration method, the number of labels used at each level is greatly reduced, because coarser levels operate at lower image resolutions. The results demonstrate that the lineament detection achieves better accuracy than traditional lineament identification methods, and it provides a better geotectonic understanding of the Cuddapah quartzite rocks at Ahobilam; the imprints of the Eastern Ghats orogeny are seen in the upper stream section through the graph-cut-based segmentation approach.
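The registration itself is a multi-label problem (one displacement label per pixel) typically optimised with alpha-expansion graph cuts, which is beyond a short sketch. The binary example below only illustrates the energy structure, per-pixel data terms plus a pairwise smoothness term minimised by a single min-cut; the use of the PyMaxflow library, the cost definitions, and the change-detection framing are all assumptions for illustration, not the paper's implementation.

```python
# Minimal graph-cut labelling sketch (pip install PyMaxflow).
import numpy as np
import maxflow

def binary_graph_cut(cost_fg, cost_bg, smoothness=0.5):
    """cost_fg/cost_bg: per-pixel data terms for the two labels;
    smoothness: Potts-style penalty for neighbouring label disagreement."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(cost_fg.shape)
    g.add_grid_edges(nodes, smoothness)           # pairwise smoothness term
    g.add_grid_tedges(nodes, cost_fg, cost_bg)    # per-pixel data terms
    g.maxflow()
    return g.get_grid_segments(nodes)             # boolean label map from the cut

# Illustrative use: label "changed" pixels between two co-registered
# temporal images from their absolute intensity difference (toy data).
img_t0 = np.random.rand(64, 64)
img_t1 = np.random.rand(64, 64)
diff = np.abs(img_t1 - img_t0)
changed = binary_graph_cut(cost_fg=1.0 - diff, cost_bg=diff)
```

In the multi-level scheme the abstract describes, the same energy is minimised over displacement labels at each pyramid level, so the shrinking label set at coarse resolutions is what keeps the optimisation tractable.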

