Deep Learning - a New Approach for Multi-Label Scene Classification in PlanetScope and Sentinel-2 Imagery

Author(s):  
Iurii Shendryk ◽  
Yannik Rist ◽  
Rob Lucas ◽  
Peter Thorburn ◽  
Catherine Ticehurst
2021 ◽  
Vol 13 (22) ◽  
pp. 4698
Author(s):  
Hejar Shahabi ◽  
Maryam Rahimzad ◽  
Sepideh Tavakkoli Piralilou ◽  
Omid Ghorbanzadeh ◽  
Saied Homayouni ◽  
...  

This paper proposes a new approach based on an unsupervised deep learning (DL) model for landslide detection. Supervised DL models using convolutional neural networks (CNNs) have recently been widely studied for landslide detection. Although these models provide robust performance and reliable results, they depend heavily on a large labeled dataset for their training step. As an alternative, in this paper, we developed an unsupervised learning model by employing a convolutional auto-encoder (CAE) to deal with the problem of limited labeled data for training. The CAE was used to learn and extract abstract, high-level features without using training data. To assess the performance of the proposed approach, we used Sentinel-2 imagery and a digital elevation model (DEM) to map landslides in three different case studies in India, China, and Taiwan. Using the minimum noise fraction (MNF) transformation, we reduced the multispectral dimension to three features containing more than 80% of the scene information. Next, these features were stacked with slope data and NDVI as inputs to the CAE model. The Huber loss was used as the reconstruction criterion, yielding reconstruction losses ranging from 0.10 to 0.147 for the stack of MNF features, slope, and NDVI across the three study areas. The mini-batch K-means clustering method was used to cluster the features into two to five classes. To evaluate the impact of deep features on landslide detection, we first clustered a stack of MNF features, slope, and NDVI, and then the same stack together with the deep features. For all cases, clustering based on deep features provided the highest precision, recall, F1-score, and mean intersection over union in landslide detection.
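The clustering step described above (stacking per-pixel features and running mini-batch K-means) can be sketched as follows. This is a minimal illustration on synthetic arrays, not the authors' pipeline; the feature shapes and channel counts are assumptions for the example.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)

# Synthetic stand-ins for the per-pixel feature stack described above:
# three MNF components, slope, NDVI, plus a few CAE deep features.
# Shapes and channel counts are illustrative, not the paper's data.
h, w = 64, 64
mnf = rng.normal(size=(h, w, 3))
slope = rng.uniform(0, 45, size=(h, w, 1))
ndvi = rng.uniform(-1, 1, size=(h, w, 1))
deep = rng.normal(size=(h, w, 8))

# Stack features and flatten to (n_pixels, n_features) for clustering.
stack = np.concatenate([mnf, slope, ndvi, deep], axis=-1)
X = stack.reshape(-1, stack.shape[-1])

# Mini-batch K-means into, e.g., two classes (landslide / non-landslide).
km = MiniBatchKMeans(n_clusters=2, batch_size=1024, n_init=10, random_state=0)
labels = km.fit_predict(X).reshape(h, w)
```

In practice features on very different scales (slope in degrees vs. NDVI in [-1, 1]) would usually be standardized before clustering so no single channel dominates the distance metric.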


2021 ◽  
Vol 13 (8) ◽  
pp. 1509
Author(s):  
Xikun Hu ◽  
Yifang Ban ◽  
Andrea Nascetti

Accurate burned area information is needed to assess the impacts of wildfires on people, communities, and natural ecosystems. Various burned area detection methods have been developed using satellite remote sensing measurements with wide coverage and frequent revisits. Our study aims to expound on the capability of deep learning (DL) models for automatically mapping burned areas from uni-temporal multispectral imagery. Specifically, several semantic segmentation network architectures, i.e., U-Net, HRNet, Fast-SCNN, and DeepLabv3+, and machine learning (ML) algorithms were applied to Sentinel-2 imagery and Landsat-8 imagery at three wildfire sites in two different local climate zones. The validation results show that the DL algorithms outperform the ML methods in the two cases with compact burned scars, while ML methods seem to be more suitable for mapping dispersed burns in boreal forests. Using Sentinel-2 images, U-Net and HRNet exhibit nearly identical performance with higher kappa (around 0.9) at one heterogeneous Mediterranean fire site in Greece; Fast-SCNN performs better than the others with kappa over 0.79 in one compact boreal forest fire with varied burn severity in Sweden. Furthermore, when the trained models are transferred directly to the corresponding Landsat-8 data, HRNet dominates among the DL models at the three test sites and preserves high accuracy. The results demonstrate that DL models can make full use of contextual information and capture spatial details at multiple scales from fire-sensitive spectral bands to map burned areas. Using only a post-fire image, the DL methods not only provide an automatic, accurate, and bias-free large-scale mapping option with cross-sensor applicability, but also have potential for onboard processing in the next generation of Earth observation satellites.
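The kappa values reported above compare a predicted burned-area map against a reference map; Cohen's kappa corrects raw per-pixel agreement for the agreement expected by chance. A minimal sketch on synthetic masks (the masks and the 5% disagreement rate are made up for illustration):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)

# Illustrative reference and predicted burned-area masks (1 = burned).
# A real evaluation would use classified Sentinel-2 / Landsat-8 scenes.
reference = rng.integers(0, 2, size=(128, 128))

# Simulate a prediction that disagrees with the reference ~5% of the time.
flip = rng.random(reference.shape) < 0.05
predicted = np.where(flip, 1 - reference, reference)

# Kappa near 0.9, comparable to the figures quoted for U-Net / HRNet.
kappa = cohen_kappa_score(reference.ravel(), predicted.ravel())
```

For balanced classes, ~95% raw agreement corresponds to kappa around 0.9, since roughly half of the raw agreement is expected by chance.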


2021 ◽  
Vol 13 (11) ◽  
pp. 2220
Author(s):  
Yanbing Bai ◽  
Wenqi Wu ◽  
Zhengxin Yang ◽  
Jinze Yu ◽  
Bo Zhao ◽  
...  

Efficiently distinguishing permanent from temporary water in flood disasters has mainly relied on change detection methods applied to multi-temporal remote sensing imagery, and estimating the water type from only post-flood imagery remains challenging. Research progress in recent years has demonstrated the excellent potential of multi-source data fusion and deep learning algorithms for improving flood detection, but this field is still in its early stages owing to the lack of large-scale labelled remote sensing images of flood events. Here, we present new deep learning algorithms and a multi-source data fusion driven flood inundation mapping approach that leverage the large-scale publicly available Sen1Floods11 dataset, consisting of 4,831 labelled Sentinel-1 SAR and Sentinel-2 optical images gathered from flood events worldwide in recent years. Specifically, we propose an automatic segmentation method for surface water, permanent water, and temporary water identification, with all tasks sharing the same convolutional neural network architecture. We use focal loss to deal with the class (water/non-water) imbalance problem. Thorough ablation experiments and analysis confirmed the effectiveness of the proposed designs. In comparison experiments, the proposed method is superior to other classical models. Our model achieves a mean Intersection over Union (mIoU) of 52.99%, Intersection over Union (IoU) of 52.30%, and Overall Accuracy (OA) of 92.81% on the Sen1Floods11 test set. On the Sen1Floods11 Bolivia test set, our model also achieves an mIoU of 47.88%, IoU of 76.74%, and OA of 95.59%, showing good generalization ability.
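Focal loss, used above to handle the water/non-water imbalance, down-weights well-classified pixels so training concentrates on the rarer, harder class. A minimal NumPy sketch of the binary form (the paper's model would use a framework implementation inside its training loop; the gamma and alpha values here are the common defaults from Lin et al., 2017, not necessarily the paper's):

```python
import numpy as np

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: mean of -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive (water) class, y: 0/1 labels.
    The (1 - p_t)**gamma factor shrinks the loss of confident, correct
    pixels, focusing gradient on hard or misclassified ones.
    """
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

# Confident correct predictions contribute far less than hard errors.
y = np.array([1, 1, 0, 0])
easy = binary_focal_loss(np.array([0.95, 0.9, 0.1, 0.05]), y)
hard = binary_focal_loss(np.array([0.3, 0.4, 0.7, 0.6]), y)
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) the ordinary balanced cross-entropy, which is the baseline the focal weighting improves on for imbalanced masks.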


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Stefan Gerlach ◽  
Christoph Fürweger ◽  
Theresa Hofmann ◽  
Alexander Schlaefer

Abstract
Although robotic radiosurgery offers a flexible arrangement of treatment beams, generating treatment plans is computationally challenging and a time-consuming process for the planner. Furthermore, different clinical goals have to be considered during planning, and different sets of beams generally correspond to different clinical goals. Typically, candidate beams sampled from a randomized heuristic form the basis for treatment planning. We propose a new approach to generating candidate beams based on deep learning, using radiological features as well as the desired constraints. We demonstrate that candidate beams generated for specific clinical goals can improve treatment plan quality. Furthermore, we compare two approaches to including information about constraints in the prediction. Our results show that CNN-generated beams can improve treatment plan quality for different clinical goals, increasing coverage from 91.2% to 96.8% on average for 3,000 candidate beams. When the clinical goal is included in the training, coverage improves by a further 1.1 percentage points.
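Coverage, the plan-quality metric quoted above, is conventionally the fraction of target voxels that receive at least the prescription dose. A minimal sketch of that metric on a synthetic dose grid (the grid, the spherical target, and all dose values are invented for illustration, and this is not the authors' planning code):

```python
import numpy as np

def coverage(dose, target_mask, prescription):
    """Fraction of target voxels receiving at least the prescribed dose."""
    target_dose = dose[target_mask]
    return float((target_dose >= prescription).mean())

rng = np.random.default_rng(2)

# Illustrative dose distribution on a 32^3 voxel grid (values in Gy are
# made up) and a spherical target volume in the centre of the grid.
dose = rng.uniform(15, 30, size=(32, 32, 32))
zz, yy, xx = np.mgrid[:32, :32, :32]
target = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 8 ** 2

cov = coverage(dose, target, prescription=18.0)
```

Candidate-beam selection changes the achievable dose distribution, which is why better candidate sets translate directly into higher coverage for the same optimizer.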


2021 ◽  
Vol 54 (1) ◽  
pp. 182-208
Author(s):  
Sani M. Isa ◽  
Suharjito ◽  
Gede Putera Kusuma ◽  
Tjeng Wawan Cenggoro
Keyword(s):  

2021 ◽  
Vol 13 (13) ◽  
pp. 7044
Author(s):  
Dawei Wen ◽  
Song Ma ◽  
Anlu Zhang ◽  
Xinli Ke

Assessment of ecosystem services supply, demand, and budgets can help to achieve sustainable urban development. The Guangdong-Hong Kong-Macao Greater Bay Area (GBA), as one of the most developed megacity regions in China, has set a goal of high-quality development while fostering ecosystem services. Therefore, assessing the ecosystem services in this study area is very important for guiding further development. However, the spatial pattern of ecosystem services, especially at local scales, is not well understood. Using the available 2017 land cover product and Sentinel-1 SAR and Sentinel-2 optical images, a deep learning land cover mapping framework integrating deep change vector analysis and the ResUnet model was proposed. Based on the resulting 10 m land cover map for the year 2020, recent spatial patterns of the ecosystem services at different scales (i.e., the GBA, 11 cities, the urban-rural gradient, and pixel level) were analyzed. The results showed that: (1) Forest was the primary land cover in Guangzhou, Huizhou, Shenzhen, Zhuhai, Jiangmen, Zhaoqing, and Hong Kong, and impervious surface was the main land cover in the other four cities. (2) Although ecosystem services in the GBA were sufficient to meet demand, there was undersupply for all three general services in Macao and for the provisioning services in Zhongshan, Dongguan, Shenzhen, and Foshan. (3) Along the urban-rural gradient in the GBA, supply capacity showed an increasing trend and demand capacity a decreasing one. At the city level, Huizhou and Zhuhai showed a fluctuating pattern, while Jiangmen, Zhaoqing, and Hong Kong presented a decreasing pattern along the gradient. (4) Inclusion of the neighborhood landscape led to increased demand scores in a small proportion of impervious areas and oversupply in a very large share of bare land areas.
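The mapping framework above pairs change vector analysis with a ResUnet classifier; the change-vector part flags likely change pixels by the magnitude of the per-pixel feature difference between two dates. A minimal sketch of that magnitude computation on synthetic deep-feature maps (shapes, channel count, and the 95th-percentile threshold are assumptions for the example, not the paper's settings):

```python
import numpy as np

def change_magnitude(feat_t1, feat_t2):
    """Per-pixel Euclidean change magnitude between two feature images,
    as in change vector analysis; large magnitudes flag likely change."""
    return np.linalg.norm(feat_t2 - feat_t1, axis=-1)

rng = np.random.default_rng(3)

# Illustrative deep-feature maps for two dates (e.g., 2017 vs. 2020).
t1 = rng.normal(size=(64, 64, 16))
t2 = t1.copy()
t2[20:30, 20:30] += 2.0          # inject a changed 10x10 patch

mag = change_magnitude(t1, t2)
changed = mag > np.percentile(mag, 95)   # simple threshold on magnitude
```

In a full pipeline, only the flagged change pixels would need relabelling; unchanged pixels can inherit their class from the existing 2017 land cover product.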


Author(s):  
Hang Yang ◽  
Toru Kouyama ◽  
Fumiharu Suzuki ◽  
Shutaro Sato ◽  
Ichiro Yoshikawa
Keyword(s):  
