A Multi-Task Deep Learning Framework Coupling Semantic Segmentation and Image Reconstruction for Very High Resolution Imagery

Author(s):  
Maria Papadomanolaki ◽  
Konstantinos Karantzalos ◽  
Maria Vakalopoulou


2018 ◽  
Vol 10 (11) ◽  
pp. 1768 ◽  
Author(s):  
Hui Yang ◽  
Penghai Wu ◽  
Xuedong Yao ◽  
Yanlan Wu ◽  
Biao Wang ◽  
...  

Building extraction from very high resolution (VHR) imagery plays an important role in urban planning, disaster management, navigation, updating geographic databases, and several other geospatial applications. Compared with traditional building extraction approaches, deep learning networks have recently shown outstanding performance in this task by using both high-level and low-level feature maps. However, it remains difficult for existing deep learning networks to use features from different levels effectively. To tackle this problem, a novel network based on DenseNets and the attention mechanism, called the dense-attention network (DAN), was proposed. The DAN consists of an encoder part and a decoder part, composed of lightweight DenseNets and a spatial attention fusion module, respectively. The proposed encoder–decoder architecture strengthens feature propagation and effectively uses higher-level feature information to suppress low-level features and noise. Experimental results on the public International Society for Photogrammetry and Remote Sensing (ISPRS) datasets, using only red–green–blue (RGB) images, demonstrated that the proposed DAN achieved higher scores (96.16% overall accuracy (OA), 92.56% F1 score, and 90.56% mean intersection over union (MIoU)), shorter training and response times, and a higher quality value compared with other deep learning methods.
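The abstract describes an encoder built from lightweight DenseNet blocks and a decoder whose spatial attention fusion module lets high-level features re-weight (and thus suppress noise in) low-level skip features before fusion. The PyTorch sketch below is only an illustration of that general idea, not the authors' DAN implementation; all module names, layer counts, and channel sizes are assumptions.

```python
# Minimal sketch (not the authors' code) of a dense-attention style
# encoder-decoder: DenseNet-like encoder blocks plus a spatial attention
# fusion step in the decoder. Sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseBlock(nn.Module):
    """Lightweight DenseNet-style block: each layer sees all earlier feature maps."""
    def __init__(self, in_ch, growth, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, 3, padding=1, bias=False)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class SpatialAttentionFusion(nn.Module):
    """Fuse a low-level skip feature with a high-level decoder feature: the
    high-level feature yields a spatial attention map that re-weights the
    low-level feature (suppressing its noise) before the two are fused."""
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(high_ch, 1, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(low_ch + high_ch, out_ch, 3, padding=1)

    def forward(self, low, high):
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                             align_corners=False)
        low = low * self.attn(high)          # attention-weighted low-level feature
        return self.fuse(torch.cat([low, high], dim=1))

class TinyDAN(nn.Module):
    """Toy two-stage version: dense encoder blocks, attention-fusion decoder."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, 3, padding=1)
        self.enc1 = DenseBlock(32, growth=16)                       # full resolution
        self.enc2 = DenseBlock(self.enc1.out_channels, growth=16)   # 1/2 resolution
        self.fusion = SpatialAttentionFusion(self.enc1.out_channels,
                                             self.enc2.out_channels, 64)
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        f1 = self.enc1(self.stem(x))
        f2 = self.enc2(F.max_pool2d(f1, 2))
        return self.head(self.fusion(f1, f2))

# quick shape check on a fake RGB tile
logits = TinyDAN()(torch.randn(1, 3, 128, 128))
print(logits.shape)   # torch.Size([1, 2, 128, 128])
```

The attention map is computed from the high-level feature and multiplied onto the low-level skip connection, which is the mechanism the abstract credits with bringing higher-level information to bear on low-level noise.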


Author(s):  
Sangram Ganguly ◽  
Saikat Basu ◽  
Ramakrishna Nemani ◽  
Supratik Mukhopadhyay ◽  
Andrew Michaelis ◽  
...  

2019 ◽  
Vol 8 (11) ◽  
pp. 478 ◽  
Author(s):  
Songbing Wu ◽  
Chun Du ◽  
Hao Chen ◽  
Yingxiao Xu ◽  
Ning Guo ◽  
...  

Road networks play a significant role in modern city management. It is necessary to extract the current road structure continually, as it changes rapidly with the development of the city. Owing to the success of deep-learning-based semantic segmentation in computer vision, extracting road networks from VHR (very high resolution) imagery has become a practical way of updating geographic databases. The major shortcoming of deep learning methods for road network extraction is that they need massive amounts of high-quality pixel-wise training data, which are hard to obtain. Meanwhile, a large amount of volunteered geographic information (VGI) of different types, including road centerlines, has accumulated over the past few decades. However, most road centerlines in VGI data lack precise width information and therefore cannot be directly applied to conventional supervised deep learning models. In this paper, we propose a novel weakly supervised method to extract road networks from VHR images using only OpenStreetMap (OSM) road centerlines as training data instead of high-quality pixel-wise road labels. Large amounts of paired Google Earth images and OSM data are used to validate the approach. The results show that the proposed method can extract road networks from VHR images accurately and effectively without using pixel-wise road training data.
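The core difficulty the abstract points to is that OSM centerlines carry position but not width, so they must be turned into approximate pixel labels before they can weakly supervise a segmentation network. The snippet below is a minimal, hypothetical illustration of one such weak-labelling step (rasterize the centerline, then dilate it by an assumed nominal half-width); it is not the paper's method, and the buffer width and helper names are assumptions.

```python
# Sketch (an assumption, not the paper's pipeline): convert width-less,
# OSM-style road centerlines into approximate pixel masks for weak supervision.
import numpy as np
from skimage.draw import line
from skimage.morphology import dilation, disk

def weak_road_mask(centerlines, shape, half_width_px=4):
    """centerlines: list of polylines [(row, col), ...] in pixel coordinates."""
    mask = np.zeros(shape, dtype=np.uint8)
    for poly in centerlines:
        for (r0, c0), (r1, c1) in zip(poly[:-1], poly[1:]):
            rr, cc = line(r0, c0, r1, c1)   # rasterise one centerline segment
            mask[rr, cc] = 1
    # assumed nominal road width: dilate the one-pixel centerline
    return dilation(mask, disk(half_width_px))

# toy example: two road polylines on a 256x256 tile
roads = [[(10, 10), (120, 200)], [(200, 20), (60, 240), (30, 250)]]
label = weak_road_mask(roads, (256, 256))
print(label.sum(), "weakly labelled road pixels")
```

A segmentation network trained on such masks is then expected to recover the true road extent despite the imprecise widths, which is the weakly supervised setting the abstract describes.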


2021 ◽  
Vol 13 (13) ◽  
pp. 2508
Author(s):  
Loredana Oreti ◽  
Diego Giuliarelli ◽  
Antonio Tomao ◽  
Anna Barbati

The importance of mixed forests is increasingly recognized at the scientific level, owing to their greater productivity and efficiency in resource use compared with pure stands. However, a reliable quantification of the actual spatial extent of mixed stands at a fine spatial scale is still lacking. Indeed, the classification and mapping of mixed stands, especially with semi-automatic procedures, has been a challenging issue to date. The main objective of this study is to evaluate the potential of Object-Based Image Analysis (OBIA) and Very-High-Resolution (VHR) imagery to detect and map mixed forests of broadleaved and coniferous trees with a Minimum Mapping Unit (MMU) of 500 m2. This study evaluates segmentation-based classification paired with the non-parametric k-nearest-neighbors (k-NN) method, trained on a dataset independent of the validation set. The forest area mapped as mixed forest canopies in the study area amounts to 11%, with an overall accuracy of 85% and a K of 0.78. Higher user and producer accuracies (85–93%) are reached in conifer- and broadleaf-dominated stands. The study findings demonstrate that very high resolution images (0.20 m spatial resolution) can be reliably used to detect the fine-grained pattern of rare mixed forests, thus supporting the monitoring and management of forest resources also at fine spatial scales.
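As a rough illustration of the OBIA-plus-k-NN chain the abstract outlines (segment the scene into image objects, derive per-object features, classify objects with a k-NN model trained on an independent reference set), the following Python sketch uses SLIC segmentation and scikit-learn. The segmentation algorithm, the feature set, and the value of k are assumptions, not the study's actual settings.

```python
# Illustrative OBIA-style chain (not the study's workflow): object segmentation,
# per-object features, k-NN classification, and a per-pixel class map.
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
image = rng.random((200, 200, 3)).astype(np.float32)     # stand-in for an RGB VHR tile

# 1. segmentation: group pixels into image objects
segments = slic(image, n_segments=300, compactness=10, start_label=0)
ids = np.unique(segments)

# 2. per-object features: mean and standard deviation of each band
X_objects = np.array([
    np.concatenate([image[segments == i].mean(axis=0),
                    image[segments == i].std(axis=0)])
    for i in ids
])

# 3. k-NN classifier trained on an independent reference set (synthetic
#    placeholders here for broadleaved / coniferous / mixed training objects)
X_train = rng.random((90, X_objects.shape[1]))
y_train = rng.integers(0, 3, 90)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# 4. classify each object and map the labels back onto the pixel grid
object_labels = knn.predict(X_objects)
lut = np.zeros(ids.max() + 1, dtype=object_labels.dtype)
lut[ids] = object_labels
class_map = lut[segments]                                # per-pixel class map
print(class_map.shape, np.bincount(object_labels))
```

Classifying whole objects rather than individual pixels is what allows the mapped classes to respect a minimum mapping unit, as in the study's 500 m2 MMU.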

