Graph Convolutional Networks for the Automated Production of Building Vector Maps From Aerial Images

Author(s):  
Shiqing Wei ◽  
Shunping Ji
2020


Author(s):  
Keiller Nogueira ◽  
William Robson Schwartz ◽  
Jefersson Alex Dos Santos

A great deal of information can be extracted from the Earth’s surface through aerial images. This information can assist in a myriad of applications, such as urban planning, crop and forest management, and disaster relief. However, the process of distilling this information depends heavily on efficiently encoding spatial features, a challenging task. To address this, deep learning is able to learn specific data-driven features. This PhD thesis introduces deep learning into the remote sensing domain. Specifically, we tackled two main tasks, scene classification and pixel classification, using deep learning to encode spatial features in high-resolution remote sensing images. First, we proposed an architecture and analyzed different strategies for exploiting convolutional networks for image classification. Second, we introduced a network and proposed a new strategy to better exploit multi-context information in order to improve pixelwise classification. Finally, we proposed a new network based on morphological operations for better learning of relevant visual features.
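The multi-context idea behind the pixelwise part can be illustrated with a small sketch. This is an illustrative assumption, not the thesis's actual architecture: context windows of several sizes are extracted around a pixel and crudely resampled to a common shape, so a classifier sees both local detail and wider surroundings.

```python
import numpy as np

def multi_context_patches(image, row, col, sizes=(9, 17, 33)):
    """Extract square context windows of several sizes centered on one
    pixel and resample each to the smallest size by striding."""
    pad = max(sizes) // 2
    padded = np.pad(image, pad, mode="reflect")
    r, c = row + pad, col + pad  # pixel position inside the padded image
    patches = []
    for s in sizes:
        half = s // 2
        win = padded[r - half:r + half + 1, c - half:c + half + 1]
        step = max(1, round(s / sizes[0]))  # crude stride-based down-sampling
        patches.append(win[::step, ::step][:sizes[0], :sizes[0]])
    return np.stack(patches)  # shape: (len(sizes), sizes[0], sizes[0])
```

The stacked patches could then feed a multi-branch classifier; the window sizes here are arbitrary placeholders.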


Electronics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 583 ◽  
Author(s):  
Khang Nguyen ◽  
Nhut T. Huynh ◽  
Phat C. Nguyen ◽  
Khanh-Duy Nguyen ◽  
Nguyen D. Vo ◽  
...  

Unmanned aircraft systems, or drones, enable us to record or capture many scenes from a bird’s-eye view, and they have been rapidly deployed across a wide range of practical domains, e.g., agriculture, aerial photography, fast delivery, and surveillance. Object detection is one of the core steps in understanding videos collected from drones. However, this task is very challenging due to the unconstrained viewpoints and low resolution of the captured videos. While modern deep-learning object detectors have recently achieved great success on general benchmarks, e.g., PASCAL-VOC and MS-COCO, the robustness of these detectors on aerial images captured by drones has not been well studied. In this paper, we present an evaluation of state-of-the-art deep-learning detectors, including Faster R-CNN (Faster Region-based CNN), R-FCN (Region-based Fully Convolutional Networks), SNIPER (Scale Normalization for Image Pyramids with Efficient Resampling), the Single-Shot Detector (SSD), YOLO (You Only Look Once), RetinaNet, and CenterNet, for object detection in videos captured by drones. We conduct experiments on the VisDrone2019 dataset, which contains 96 videos with 39,988 annotated frames, and provide insights into efficient object detectors for aerial images.
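The matching step at the heart of such an evaluation can be sketched in a few lines. This is a generic sketch of IoU-based greedy matching at a 0.5 threshold, not the exact VisDrone2019 protocol:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_detections(detections, ground_truth, thresh=0.5):
    """Greedily match score-sorted detections to ground-truth boxes at an
    IoU threshold; returns (true positives, false positives)."""
    unmatched = list(range(len(ground_truth)))
    tp = fp = 0
    for det in sorted(detections, key=lambda d: -d["score"]):
        best, best_iou = None, thresh
        for g in unmatched:
            v = iou(det["box"], ground_truth[g])
            if v >= best_iou:
                best, best_iou = g, v
        if best is None:
            fp += 1
        else:
            tp += 1
            unmatched.remove(best)
    return tp, fp
```

Running this per frame and per class, and sweeping the score threshold, yields the precision-recall curves from which AP-style metrics are computed.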


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1983 ◽  
Author(s):  
Weipeng Shi ◽  
Wenhu Qin ◽  
Zhonghua Yun ◽  
Peng Ping ◽  
Kaiyang Wu ◽  
...  

It is essential for researchers to have a proper interpretation of remote sensing images (RSIs) and precise semantic labeling of their component parts. Although FCN (Fully Convolutional Network)-like deep convolutional network architectures have been widely applied in the perception of autonomous cars, two challenges remain in the semantic segmentation of RSIs. The first is to identify details in high-resolution images with complex scenes and to solve class-mismatch issues; the second is to capture the edges of objects finely without being confused by their surroundings. HRNet maintains high-resolution representations by fusing feature information across parallel multi-resolution convolution branches. We adopt HRNet as a backbone and propose to incorporate the Class-Oriented Region Attention Module (CRAM) and Class-Oriented Context Fusion Module (CCFM) to analyze the relationships between classes and patch regions and between classes and local or global pixels, respectively. Thus, the model's ability to perceive fine detail in aerial images can be enhanced. We leverage these modules to develop an end-to-end semantic segmentation model for aerial images and validate it on the ISPRS Potsdam and Vaihingen datasets. The experimental results show that our model improves the baseline accuracy and outperforms several commonly used CNN architectures.
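The general idea of class-oriented attention over patch regions can be illustrated with a minimal NumPy sketch. The shapes and the softmax-pooling scheme below are illustrative assumptions, not the paper's actual CRAM design:

```python
import numpy as np

def class_region_attention(features, class_scores):
    """Sketch of class-oriented region attention: for each class,
    softmax-normalize its per-region scores into attention weights and
    pool the region features with them.
    features:     (regions, channels) one feature vector per patch region
    class_scores: (classes, regions) affinity of each class to each region
    returns:      (classes, channels) one attended descriptor per class
    """
    # Numerically stable softmax over the region axis.
    s = class_scores - class_scores.max(axis=1, keepdims=True)
    weights = np.exp(s) / np.exp(s).sum(axis=1, keepdims=True)
    return weights @ features
```

Uniform scores reduce this to mean pooling; sharply peaked scores let each class attend to the regions most relevant to it.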


Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1915 ◽  
Author(s):  
Weigang Song ◽  
Baojiang Zhong ◽  
Xun Sun

In aerial images, corner points can be detected to describe the structural information of buildings for city modeling, geo-localization, and so on. For this specific vision task, existing generic corner detectors perform poorly, as they are incapable of distinguishing corner points on buildings from those on other objects such as trees and shadows. Recently, fully convolutional networks (FCNs) have been developed for semantic image segmentation that are able to recognize a designated kind of object through a training process with a manually labeled dataset. Motivated by this achievement, an FCN-based approach is proposed in the present work to detect building corners in aerial images. First, a DeepLab model composed of improved FCNs and fully connected conditional random fields (CRFs) is trained end-to-end for building region segmentation. The segmentation is then further refined with a morphological opening operation to increase its accuracy. Corner points are finally detected on the contour curves of building regions using a scale-space detector. Experimental results show that the proposed building corner detection approach achieves an F-measure of 0.83 on the test image set and outperforms a number of state-of-the-art corner detectors by a large margin.
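The morphological opening step (erosion followed by dilation) can be sketched in plain NumPy; a real pipeline would typically use OpenCV or scipy.ndimage, and the 3x3 structuring element here is an assumption:

```python
import numpy as np

def shift_views(mask):
    """Yield the 3x3 neighborhood of every pixel as shifted array views."""
    p = np.pad(mask, 1)  # zero-padded border
    h, w = mask.shape
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            yield p[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]

def erode(mask):
    """A pixel survives only if its whole 3x3 neighborhood is set."""
    out = np.ones_like(mask)
    for view in shift_views(mask):
        out &= view
    return out

def dilate(mask):
    """A pixel is set if any pixel in its 3x3 neighborhood is set."""
    out = np.zeros_like(mask)
    for view in shift_views(mask):
        out |= view
    return out

def opening(mask):
    """Opening = erosion then dilation: removes specks smaller than the
    structuring element while restoring larger regions to full size."""
    return dilate(erode(mask))
```

Applied to a binary building mask, this suppresses isolated misclassified pixels before contours are traced for corner detection.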


2000 ◽  
pp. 16-25
Author(s):  
E. I. Rachkovskaya ◽  
S. S. Temirbekov ◽  
R. E. Sadvokasov

The paper presents the capabilities of remote sensing methods for making maps of actual and potential vegetation and for assessing the extent of anthropogenic transformation of rangelands. The study area is a large intermountain depression under intensive agricultural use. Color photographs taken with a Wild Heerbrugg RC-30 aerial camera and digital aerial data from the Daedalus (AMS) multispectral scanner (6 bands, 3.5 m resolution) were used to analyze the distribution and assess the state of vegetation. The digital data were processed with the specialized program ENVI 3.0. The main stages in the development of the cartographic models are described: initial processing of the aerial images and their visualization; preliminary pre-field interpretation (classification) of the images on the basis of unsupervised automated classification; and field studies (geobotanical records and GPS measurements at sites chosen in the previous stage). The post-field stage comprised the following sub-stages: final geometric correction of the digital images, elaboration of the classification system for the main mapping subdivisions, and final supervised automated classification on the basis of expert assessment. By systematizing clusters of the resulting classified image, cartographic models of the study area were produced. Application of this remote sensing technology enabled a qualitative and quantitative assessment of the current state of the rangelands.
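Unsupervised automated classification of multispectral pixels is commonly done with clustering; a minimal k-means sketch is shown below. This is a generic illustration of the idea, not the specific algorithm implemented in ENVI 3.0:

```python
import numpy as np

def kmeans_classify(pixels, k, iters=20, seed=0):
    """Minimal Lloyd's k-means: rows of `pixels` are pixel spectra
    (one column per spectral band); returns a cluster label per pixel."""
    pixels = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize centers with k distinct randomly chosen pixels.
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Squared Euclidean distance of every pixel to every center.
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

The resulting clusters would then be assigned to vegetation classes by expert interpretation and field data, as in the supervised stage described above.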


Author(s):  
A.V. Stomatov ◽  
D.V. Stomatov ◽  
P.V. Ivanov ◽  
V.V. Marchenko ◽  
E.V. Piitsky ◽  
...  

In this work, the authors studied and compared the two main methods used in dental practice for the automated production of orthopedic structures: the widely used CAD/CAM milling method and 3D printing technology. Temporary crowns made from the same digital model served as the object of research: (a) by CAD/CAM milling from polymethylmethacrylate disks; (b) by 3D printing from photopolymer resin based on LCD technology. The production methods and finished structures were compared according to the following characteristics: strength, durability, aesthetic qualities, accuracy of the orthopedic structures, etc. Based on the results of the study, it was concluded that 3D printing can be a good alternative to CAD/CAM milling for temporary prosthetics.

