outline extraction
Recently Published Documents


TOTAL DOCUMENTS: 32 (FIVE YEARS: 5)
H-INDEX: 8 (FIVE YEARS: 1)

2021, Vol 13 (16), pp. 3187
Author(s): Xinchun Wei, Xing Li, Wei Liu, Lianpeng Zhang, Dayu Cheng, ...

Deep learning techniques have greatly improved the efficiency and accuracy of building extraction from remote sensing images. However, producing building outlines of sufficient quality for surveying and mapping applications remains a significant challenge, and in practice most building extraction tasks are still executed manually. An automated procedure that delivers building outlines with precise positions is therefore required. In this study, we applied the U2-Net semantic segmentation model directly to building outline extraction. On two datasets (Nanjing and Wuhan University (WHU)), U2-Net delivered outlines with better accuracy and more precise positions than the compared semantic segmentation models (SegNet, U-Net, and FCN) and edge detection models (RCF, HED, and DexiNed). We also replaced the binary cross-entropy loss function in U2-Net with a multiclass cross-entropy loss function so that the network directly generates a binary map separating the building outline from the background. This further refines the extracted outline and shows that, unlike the other edge detection models, the modified U2-Net does not require non-maximum suppression as a post-processing step to refine the edge map. Moreover, the modified model is less affected by the sample imbalance problem. Finally, we created an image-to-image program to further validate the modified U2-Net semantic segmentation model for building outline extraction.
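A minimal PyTorch sketch of the loss modification described above, assuming a U2-Net-style network whose fused output and side outputs each emit two-channel logits (background vs. outline); the function name, the number of outputs, and the class weights used to ease the outline/background imbalance are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def multiclass_outline_loss(outputs, target, class_weights=(0.1, 0.9)):
    """outputs: list of [B, 2, H, W] logit tensors (fused output first, then side outputs).
    target:  [B, H, W] long tensor with 0 = background, 1 = building outline.
    class_weights: hypothetical weighting to counter the outline/background imbalance.
    """
    weight = torch.tensor(class_weights, device=target.device)
    ce = nn.CrossEntropyLoss(weight=weight)
    # Deep supervision: sum the cross-entropy over every output map.
    return sum(ce(logits, target) for logits in outputs)

# At inference time the binary outline map is simply the per-pixel argmax of the
# fused logits, so no non-maximum suppression post-processing is needed:
# outline_map = torch.argmax(fused_logits, dim=1)  # [B, H, W] in {0, 1}
```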


2020, Vol 106, pp. 107447
Author(s): Elyta Widyaningrum, Ravi Y. Peters, Roderik C. Lindenbergh

Author(s): U. Drešček, M. Kosmatin Fras, A. Lisec, D. Grigillo

Abstract. Building outline extraction from point clouds has recently gained momentum, particularly in the context of data-driven 3D building modelling, which is also our motivation. For accurate building outline extraction from a point cloud, the various factors affecting quality must be considered. In this research, we analysed the influence of point cloud density on the quality of the extracted building outlines. The input data was a classified photogrammetric point cloud obtained by dense image matching of images acquired with an optical sensor mounted on an unmanned aerial vehicle (UAV). For outline extraction, we compared two procedures: a direct approach and a raster approach. In the direct approach, building outlines are extracted directly from the points classified as buildings: a hull of the building points is first estimated with the alpha-shape algorithm, then generalised with the Douglas-Peucker algorithm, and finally regularised to enforce perpendicular angles in the outline. In the raster approach, the building points are first rasterised and the building outlines are then extracted using the Hough transform. In both approaches, the result is a roof outline in a 2D plane representing the maximum extent of the building above the surface. The building outlines were extracted from point clouds with five different densities. For both approaches, the quality assessment showed that point cloud density affects building outline extraction, especially the completeness of the outlines.
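A minimal Python sketch of the direct approach, assuming Shapely ≥ 2.0 and building points that are already classified and projected to the 2D plane. Shapely's concave_hull stands in for the alpha-shape step, simplify() applies Douglas-Peucker generalisation, and the right-angle regularisation is only indicated as a comment; the parameter values are placeholders.

```python
import numpy as np
import shapely
from shapely.geometry import MultiPoint

def extract_building_outline(building_xy: np.ndarray, ratio: float = 0.3, tolerance: float = 0.5):
    """building_xy: (N, 2) array of x/y coordinates of points classified as building."""
    points = MultiPoint([(float(x), float(y)) for x, y in building_xy])
    # Concave outline of the roof points (stand-in for the alpha-shape step).
    hull = shapely.concave_hull(points, ratio=ratio)
    # Douglas-Peucker generalisation of the outline vertices.
    outline = hull.simplify(tolerance)
    # A further regularisation step would snap the edges to the two dominant
    # orthogonal directions to enforce perpendicular building corners.
    return outline
```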


Author(s): Jingxuan Yang, Haotian Cui, Si Li, Sheng Gao, Jun Guo, ...

2018, Vol 15 (11), pp. 1775-1779
Author(s): Xuebin Qin, Shida He, Xiucheng Yang, Masood Dehghan, Qiming Qin, ...

2018, Vol 10 (8), pp. 1195
Author(s): Guangming Wu, Zhiling Guo, Xiaodan Shi, Qi Chen, Yongwei Xu, ...

The automatic extraction of building outlines from aerial imagery for navigation and urban planning is a long-standing problem in remote sensing. Most current methods use variants of fully convolutional networks (FCNs), which have significantly improved performance on this task, but more accurate segmentation results are still needed for applications such as automatic mapping and building change detection. In this study, we propose a boundary regulated network called BR-Net, which utilizes both local and global information, to perform roof segmentation and outline extraction. BR-Net consists of a shared backend based on a modified U-Net and a multitask framework that predicts both segmentation maps and building outlines from the backend's consistent feature representation. Because the additional boundary information restricts and regulates the segmentation, the proposed model achieves superior performance compared to existing methods. Experiments on an aerial image dataset covering 32 km² and containing more than 58,000 buildings show that our method performs well at both roof segmentation and outline extraction. BR-Net significantly outperforms the classic FCN8s model, and compared to the state-of-the-art U-Net model it achieves relative improvements of 6.2% (0.869 vs. 0.818) in F1 score, 10.6% (0.772 vs. 0.698) in Jaccard index, and 8.7% (0.840 vs. 0.773) in kappa coefficient.
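The multitask idea behind BR-Net can be illustrated with a short PyTorch sketch: a shared backend produces one feature map, and two lightweight heads predict the roof mask and the building outline from it. The backbone argument, head sizes, and loss weighting below are assumptions for illustration, not the paper's modified U-Net.

```python
import torch
import torch.nn as nn

class MultitaskRoofNet(nn.Module):
    def __init__(self, backbone: nn.Module, feat_channels: int = 64):
        super().__init__()
        self.backbone = backbone  # shared feature extractor (stand-in for the modified U-Net)
        self.mask_head = nn.Conv2d(feat_channels, 1, kernel_size=1)      # roof segmentation logits
        self.boundary_head = nn.Conv2d(feat_channels, 1, kernel_size=1)  # building outline logits

    def forward(self, x):
        feats = self.backbone(x)  # [B, feat_channels, H, W]
        return self.mask_head(feats), self.boundary_head(feats)

def multitask_loss(mask_logits, boundary_logits, mask_gt, boundary_gt, lam: float = 1.0):
    """mask_gt and boundary_gt are float tensors with the same shape as the logits."""
    bce = nn.BCEWithLogitsLoss()
    # The boundary term regulates the segmentation by penalising outline errors.
    return bce(mask_logits, mask_gt) + lam * bce(boundary_logits, boundary_gt)
```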

