DOMAIN ADAPTATION WITH CYCLEGAN FOR CHANGE DETECTION IN THE AMAZON FOREST

Author(s):  
P. J. Soto ◽  
G. A. O. P. Costa ◽  
R. Q. Feitosa ◽  
P. N. Happ ◽  
M. X. Ortega ◽  
...  

Abstract. Deep learning classification models require large amounts of labeled training data to perform properly, but the production of reference data for most Earth observation applications is a labor-intensive, costly process. In that sense, transfer learning is an option to mitigate the demand for labeled data. In many remote sensing applications, however, the accuracy of a deep learning-based classification model trained with a specific dataset drops significantly when it is tested on a different dataset, even after fine-tuning. In general, this behavior can be credited to the domain shift phenomenon. In remote sensing applications, domain shift can be associated with changes in the environmental conditions during the acquisition of new data, variations in objects’ appearances, geographical variability and different sensor properties, among other aspects. In recent years, deep learning-based domain adaptation techniques have been used to alleviate the domain shift problem. Recent improvements in domain adaptation technology rely on techniques based on Generative Adversarial Networks (GANs), such as the Cycle-Consistent Generative Adversarial Network (CycleGAN), which adapts images across different domains by learning nonlinear mapping functions between the domains. In this work, we exploit the CycleGAN approach for domain adaptation in a particular change detection application, namely, deforestation detection in the Amazon forest. Experimental results indicate that the proposed approach is capable of alleviating the effects associated with domain shift in the context of the target application.
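To make the cycle-consistency idea behind CycleGAN concrete, the minimal PyTorch sketch below maps a source-domain image to the target domain and back (and vice versa) and penalizes the L1 reconstruction error. The toy generator, layer sizes and tensor shapes are illustrative assumptions only; they do not reflect the networks or data used by the authors.

```python
# Minimal sketch of the CycleGAN cycle-consistency term used for domain
# adaptation between two image domains. The generator here is a toy
# placeholder, not the architecture used in the paper.
import torch
import torch.nn as nn

class SmallGenerator(nn.Module):
    """Toy image-to-image generator (stand-in for a ResNet/U-Net generator)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

G_ab = SmallGenerator()   # maps domain A (source) -> domain B (target)
G_ba = SmallGenerator()   # maps domain B -> domain A
l1 = nn.L1Loss()

def cycle_consistency_loss(real_a, real_b, lambda_cyc=10.0):
    """L1 reconstruction error after a full A->B->A and B->A->B cycle."""
    rec_a = G_ba(G_ab(real_a))
    rec_b = G_ab(G_ba(real_b))
    return lambda_cyc * (l1(rec_a, real_a) + l1(rec_b, real_b))

# Random tensors stand in for image patches from the two domains.
a = torch.randn(4, 3, 64, 64)
b = torch.randn(4, 3, 64, 64)
print(cycle_consistency_loss(a, b).item())
```

In the full CycleGAN objective this term is combined with adversarial losses from two discriminators; only the cycle term is shown here.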

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Venkata Dasu Marri ◽  
Veera Narayana Reddy P. ◽  
Chandra Mohan Reddy S.

Purpose Image classification is a fundamental form of digital image processing in which pixels are labeled as one of the object classes present in the image. Multispectral image classification is a challenging task due to the complexities associated with images captured by satellites. Accurate image classification is essential in remote sensing applications; however, existing machine learning and deep learning-based classification methods have not provided the desired accuracy. The purpose of this paper is to classify the objects in satellite images with greater accuracy. Design/methodology/approach This paper proposes a deep learning-based automated method for classifying multispectral images. The core of the approach is that data sets collected from public databases are first divided into a number of patches and their features are extracted. The features extracted from the patches are then concatenated before a classification method is used to classify the objects in the image. Findings The performance of the proposed modified velocity-based colliding bodies optimization method is compared with existing methods in terms of type-1 measures such as sensitivity, specificity, accuracy, negative predictive value, F1 score and Matthews correlation coefficient, and type-2 measures such as false discovery rate and false positive rate. The statistical results obtained with the proposed method show better performance than existing methods. Originality/value In this work, multispectral image classification accuracy is improved with an optimization algorithm called modified velocity-based colliding bodies optimization.
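For reference, all of the measures named in the abstract can be derived from the binary confusion-matrix counts. The short sketch below is generic metric code with arbitrary example counts, not the authors' evaluation pipeline.

```python
# Evaluation measures from binary confusion-matrix counts:
# sensitivity, specificity, accuracy, negative predictive value (NPV),
# F1 score, Matthews correlation coefficient (MCC),
# false discovery rate (FDR) and false positive rate (FPR).
import math

def classification_measures(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                  # recall / true positive rate
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    npv = tn / (tn + fn)                          # negative predictive value
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    fdr = fp / (fp + tp)                          # false discovery rate
    fpr = fp / (fp + tn)                          # false positive rate
    return dict(sensitivity=sensitivity, specificity=specificity,
                accuracy=accuracy, npv=npv, f1=f1, mcc=mcc, fdr=fdr, fpr=fpr)

# Example with arbitrary counts:
print(classification_measures(tp=80, fp=10, tn=95, fn=15))
```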


2020 ◽  
pp. 35
Author(s):  
M. Campos-Taberner ◽  
F.J. García-Haro ◽  
B. Martínez ◽  
M.A. Gilabert

The use of deep learning techniques for remote sensing applications has recently increased. These algorithms have proven to be successful in the estimation of parameters and the classification of images. However, little effort has been made to make them understandable, leading to their implementation as “black boxes”. This work aims to evaluate the performance and clarify the operation of a deep learning algorithm based on a bi-directional recurrent network of long short-term memory units (2-BiLSTM). Land use classification in the Valencian Community based on Sentinel-2 image time series, in the framework of the common agricultural policy (CAP), is used as an example. It is verified that the accuracy of the deep learning technique (98.6% overall accuracy) is superior to that of other algorithms such as decision trees (DT), k-nearest neighbors (k-NN), neural networks (NN), support vector machines (SVM) and random forests (RF). The performance of the classifier has been studied as a function of time and of the predictors used. It is concluded that, in the study area, the most relevant information used by the network for the classification is provided by the images corresponding to summer and by the spectral and spatial information derived from the red and near-infrared bands. These results open the door to new studies in the field of explainable deep learning in remote sensing applications.
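As an illustration of the kind of model described above, the following sketch builds a two-layer bidirectional LSTM classifier over per-pixel Sentinel-2 time series in PyTorch. The band count, number of dates, hidden size and class count are placeholder assumptions rather than the configuration used in the study.

```python
# Minimal sketch of a bidirectional LSTM classifier for pixel-level
# Sentinel-2 time series, in the spirit of the 2-BiLSTM model described above.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_bands=10, hidden=64, n_classes=20):
        super().__init__()
        # two stacked bidirectional LSTM layers over the temporal dimension
        self.lstm = nn.LSTM(input_size=n_bands, hidden_size=hidden,
                            num_layers=2, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, n_bands) -- one spectral vector per date
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the last time step

model = BiLSTMClassifier()
dummy = torch.randn(8, 24, 10)             # 8 pixels, 24 dates, 10 bands
print(model(dummy).shape)                  # -> torch.Size([8, 20])
```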


2019 ◽  
Vol 152 ◽  
pp. 166-177 ◽  
Author(s):  
Lei Ma ◽  
Yu Liu ◽  
Xueliang Zhang ◽  
Yuanxin Ye ◽  
Gaofei Yin ◽  
...  

2021 ◽  
Vol 13 (7) ◽  
pp. 1246
Author(s):  
Kyle B. Larson ◽  
Aaron R. Tuor

Cheatgrass (Bromus tectorum) invasion is driving an emerging cycle of increased fire frequency and irreversible loss of wildlife habitat in the western US. Yet, detailed spatial information about its occurrence is still lacking for much of its presumably invaded range. Deep learning (DL) has demonstrated success for remote sensing applications but is less tested on more challenging tasks like identifying biological invasions using sub-pixel phenomena. We compare two DL architectures and the more conventional Random Forest and Logistic Regression methods to improve upon a previous effort to map cheatgrass occurrence at >2% canopy cover. High-dimensional sets of biophysical, MODIS, and Landsat-7 ETM+ predictor variables are also compared to evaluate different multi-modal data strategies. All model configurations improved results relative to the case study and accuracy generally improved by combining data from both sensors with biophysical data. Cheatgrass occurrence is mapped at 30 m ground sample distance (GSD) with an estimated 78.1% accuracy, compared to 250-m GSD and 71% map accuracy in the case study. Furthermore, DL is shown to be competitive with well-established machine learning methods in a limited data regime, suggesting it can be an effective tool for mapping biological invasions and more broadly for multi-modal remote sensing applications.
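The conventional baselines mentioned above can be reproduced in outline with scikit-learn. The sketch below fits a Random Forest and a Logistic Regression on a synthetic stand-in for the stacked biophysical, MODIS and Landsat-7 ETM+ predictor table; the data, feature counts and hyperparameters are assumptions, since the study's data are not part of this listing.

```python
# Generic Random Forest / Logistic Regression comparison on a synthetic
# table standing in for the stacked predictor variables.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))                 # 30 stacked predictor variables
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in for presence/absence labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [
    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("Logistic Regression", LogisticRegression(max_iter=1000)),
]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```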


2021 ◽  
Vol 13 (13) ◽  
pp. 2482
Author(s):  
Pedro Zamboni ◽  
José Marcato Junior ◽  
Jonathan de Andrade Silva ◽  
Gabriela Takahashi Miyoshi ◽  
Edson Takashi Matsubara ◽  
...  

Urban forests contribute to maintaining livability and increase the resilience of cities in the face of population growth and climate change. Information about the geographical distribution of individual trees is essential for the proper management of these systems. RGB high-resolution aerial images have emerged as a cheap and efficient source of data, although detecting and mapping single trees in an urban environment is a challenging task. Thus, we propose the evaluation of novel methods for single tree crown detection, as most of these methods have not been investigated in remote sensing applications. A total of 21 methods were investigated, including anchor-based (one- and two-stage) and anchor-free state-of-the-art deep-learning methods. We used two orthoimages divided into 220 non-overlapping patches of 512 × 512 pixels with a ground sample distance (GSD) of 10 cm. The orthoimages were manually annotated, and 3382 single tree crowns were identified as the ground truth. Our findings show that the anchor-free detectors achieved the best average performance with an AP50 of 0.686. We observed that the two-stage anchor-based and anchor-free methods showed better performance for this task, emphasizing the FSAF, Double Heads, CARAFE, ATSS, and FoveaBox models. RetinaNet, which is commonly applied in remote sensing, did not show satisfactory performance, and Faster R-CNN had lower results than the best methods but with no statistically significant difference. Our findings contribute to a better understanding of the performance of novel deep-learning methods in remote sensing applications and could be used as an indicator of the most suitable methods in such applications.
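The AP50 criterion reported above counts a predicted crown as correct when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A small, generic helper illustrating that matching rule is sketched below; it is not the benchmark code used by the authors.

```python
# IoU-based matching behind the AP50 criterion: a prediction is a true
# positive when its IoU with some ground-truth box is >= 0.5.
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred_box, gt_boxes, threshold=0.5):
    return any(iou(pred_box, gt) >= threshold for gt in gt_boxes)

print(is_true_positive((10, 10, 60, 60), [(12, 8, 58, 62)]))  # True
```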

