EVALUATION OF SEMI-SUPERVISED LEARNING FOR CNN-BASED CHANGE DETECTION

Author(s):  
E. Bousias Alexakis ◽  
C. Armenakis

Abstract. Over the past few years, many research works have utilized Convolutional Neural Networks (CNN) in the development of fully automated change detection pipelines from high resolution satellite imagery. Even though CNN architectures can achieve state-of-the-art results in a wide variety of vision tasks, including change detection applications, they require extensive amounts of labelled training examples to generalize to new data through supervised learning. In this work we experiment with a semi-supervised training approach in an attempt to improve the image semantic segmentation performance of models trained on a small number of labelled image pairs by leveraging information from additional unlabelled image samples. The approach is based on the Mean Teacher method, a semi-supervised approach that has been successfully applied to image classification and to semantic segmentation of medical images. Mean Teacher uses an exponential moving average of the model weights from previous epochs to check the consistency of the model’s predictions under various perturbations. Our goal is to examine whether its application in a change detection setting can yield analogous performance improvements. The preliminary results of the proposed method appear comparable to those of traditional fully supervised training. Research is continuing towards fine-tuning the method and reaching solid conclusions on the potential benefits of semi-supervised learning approaches in image change detection applications.
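To make the consistency mechanism concrete, the following is a minimal PyTorch-style sketch of one Mean Teacher training step for a bitemporal change detection model. The two-input model, the additive-noise perturbation, the EMA decay of 0.99, and the MSE consistency loss are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def update_teacher(student, teacher, alpha=0.99):
    """Teacher weights track an exponential moving average of the student weights."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def mean_teacher_step(student, teacher, labelled, unlabelled, optimizer, cons_weight=1.0):
    (x1, x2, y) = labelled        # labelled bitemporal pair with change mask y
    (u1, u2) = unlabelled         # unlabelled bitemporal pair
    noise = lambda t: t + 0.05 * torch.randn_like(t)   # input perturbation

    # Supervised term on the labelled pair.
    sup_loss = F.cross_entropy(student(x1, x2), y)

    # Consistency term: student and teacher see independently perturbed inputs.
    student_pred = torch.softmax(student(noise(u1), noise(u2)), dim=1)
    with torch.no_grad():
        teacher_pred = torch.softmax(teacher(noise(u1), noise(u2)), dim=1)
    cons_loss = F.mse_loss(student_pred, teacher_pred)

    loss = sup_loss + cons_weight * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    update_teacher(student, teacher)   # EMA update after every optimizer step
    return loss.item()
```

In this setup the teacher is typically initialized as a deep copy of the student and is never updated by gradient descent, only through the EMA.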

2020 ◽  
Vol 12 (17) ◽  
pp. 2669
Author(s):  
Junhao Qian ◽  
Min Xia ◽  
Yonghong Zhang ◽  
Jia Liu ◽  
Yiqing Xu

Change detection is a very important technique for remote sensing data analysis. Its mainstream solutions are either supervised or unsupervised. Among supervised methods, most existing change detection methods that use deep learning are related to semantic segmentation. However, these methods only use deep learning models to process the global information of an image and do not carry out specific training on changed and unchanged areas; as a result, many details of local changes cannot be detected. In this work, a trilateral change detection network is proposed. The proposed network has three branches (a main module and two auxiliary modules, all composed of convolutional neural networks (CNNs)), which focus on the overall information of bitemporal Google Earth image pairs, the changed areas, and the unchanged areas, respectively. The proposed method is end-to-end trainable, and no component in the network needs to be trained separately.
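As an illustration of the three-branch layout described above, here is a minimal PyTorch sketch in which a main branch and two auxiliary branches operate on the concatenated image pair. The tiny backbones, channel sizes, and input layout are assumptions for illustration, not the paper's exact modules.

```python
import torch
import torch.nn as nn

def small_cnn(in_ch, out_ch=2):
    """Placeholder branch backbone; the real modules are more elaborate."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
        nn.Conv2d(64, out_ch, 1),
    )

class TrilateralNet(nn.Module):
    def __init__(self, in_ch=6):                   # concatenated bitemporal RGB pair
        super().__init__()
        self.main_branch = small_cnn(in_ch)        # overall change information
        self.changed_branch = small_cnn(in_ch)     # focuses on changed areas
        self.unchanged_branch = small_cnn(in_ch)   # focuses on unchanged areas

    def forward(self, img_t1, img_t2):
        x = torch.cat([img_t1, img_t2], dim=1)
        # All three outputs are supervised jointly, so the whole network
        # stays end-to-end trainable with no separately trained component.
        return (self.main_branch(x),
                self.changed_branch(x),
                self.unchanged_branch(x))
```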


2020 ◽  
Vol 12 (21) ◽  
pp. 3603 ◽  
Author(s):  
Jiaxin Wang ◽  
Chris H. Q. Ding ◽  
Sibao Chen ◽  
Chenggang He ◽  
Bin Luo

Image segmentation has made great progress in recent years, but the annotation required for image segmentation is usually expensive, especially for remote sensing images. To address this problem, we explore semi-supervised learning methods that make appropriate use of a large amount of unlabeled data to improve the performance of remote sensing image segmentation. This paper proposes a method for remote sensing image segmentation based on semi-supervised learning. We first design a Consistency Regularization (CR) training method for semi-supervised training, then employ the newly learned model for Average Update of Pseudo-label (AUP), and finally combine pseudo labels and strong labels to train the semantic segmentation network. We demonstrate the effectiveness of the proposed method on three remote sensing datasets, achieving better performance without additional labeled data. Extensive experiments show that our semi-supervised method can learn latent information from the unlabeled data to improve segmentation performance.
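The pseudo-label averaging idea can be sketched as follows: predictions from the CR-trained model are accumulated over several passes and averaged before being thresholded into pseudo labels, which are later mixed with the strong (human) labels. The function name, number of rounds, and confidence threshold below are illustrative assumptions rather than the paper's settings.

```python
import torch

@torch.no_grad()
def average_update_pseudo_labels(model, unlabeled_loader, num_rounds=3, threshold=0.8):
    """Average predictions over several passes, then keep only confident pixels."""
    model.eval()
    accumulated = None
    for _ in range(num_rounds):
        # The loader must not shuffle, so predictions align across rounds.
        probs = torch.cat(
            [torch.softmax(model(batch), dim=1) for batch in unlabeled_loader], dim=0)
        accumulated = probs if accumulated is None else accumulated + probs
    avg = accumulated / num_rounds
    confidence, labels = avg.max(dim=1)       # per-pixel class and confidence
    labels[confidence < threshold] = -1       # drop low-confidence pixels (ignore_index)
    return labels                             # later combined with strong labels for training
```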


2021 ◽  
Vol 13 (20) ◽  
pp. 4083
Author(s):  
Antonio Di Pilato ◽  
Nicolò Taggio ◽  
Alexis Pompili ◽  
Michele Iacobellis ◽  
Adriano Di Florio ◽  
...  

The interest in change detection in the field of remote sensing has increased in the last few years. Searching for changes in satellite images has many useful applications, ranging from land cover and land use analysis to anomaly detection. In particular, urban change detection provides an efficient tool to study urban spread and growth over several years of observation. At the same time, change detection is often a computationally challenging and time-consuming task; the standard approach, in which experts in the Earth Observation domain manually detect the elements of interest, therefore needs to be replaced by innovative methods that can deliver reliable results within a reasonable time. In this paper, we present two different approaches to change detection (semantic segmentation and classification), both of which exploit convolutional neural networks to address these needs and can be further refined and used in post-processing workflows for a large variety of applications.


2020 ◽  
Author(s):  
Ruiyi Zhang ◽  
Yunan Luo ◽  
Jianzhu Ma ◽  
Ming Zhang ◽  
Sheng Wang

Abstract. Rapidly generated scRNA-seq datasets enable us to understand cellular differences and the function of each individual cell at single-cell resolution. Cell type classification, which aims at characterizing and labeling groups of cells according to their gene expression, is one of the most important steps in single-cell analysis. To facilitate the manual curation process, supervised learning methods have been used to automatically classify cells. Most existing supervised learning approaches only utilize annotated cells in the training step while ignoring the more abundant unannotated cells. In this paper, we propose scPretrain, a multi-task self-supervised learning approach that jointly considers annotated and unannotated cells for cell type classification. scPretrain consists of a pre-training step and a fine-tuning step. In the pre-training step, scPretrain uses a multi-task learning framework to train a feature extraction encoder based on each dataset’s pseudo-labels, where only unannotated cells are used. In the fine-tuning step, scPretrain fine-tunes this feature extraction encoder using the limited annotated cells in a new dataset. We evaluated scPretrain on 60 diverse datasets from different technologies, species and organs, and obtained a significant improvement on both cell type classification and cell clustering. Moreover, the representations obtained by scPretrain in the pre-training step also enhanced the performance of conventional classifiers such as random forest, logistic regression and support vector machines. scPretrain is able to effectively utilize the massive amount of unlabelled data and can be applied to annotate the growing number of newly generated scRNA-seq datasets.
Availability: https://github.com/ruiyi-zhang/scPretrain
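A conceptual sketch of the two-step workflow follows, assuming the pseudo-labels come from per-dataset clustering and that each dataset gets its own classification head on top of a shared encoder. The encoder architecture, clustering choice, and dimensions are illustrative and not taken from the released implementation.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Shared feature extraction encoder (input/hidden sizes are assumptions).
encoder = nn.Sequential(nn.Linear(2000, 256), nn.ReLU(), nn.Linear(256, 64))

def pretrain_on_dataset(expr_matrix, n_pseudo_classes=10, epochs=5):
    """Pre-training: cluster-based pseudo-labels on unannotated cells supervise the encoder."""
    pseudo = KMeans(n_clusters=n_pseudo_classes, n_init=10).fit_predict(expr_matrix)
    head = nn.Linear(64, n_pseudo_classes)            # dataset-specific task head
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()))
    x = torch.as_tensor(expr_matrix, dtype=torch.float32)
    y = torch.as_tensor(pseudo, dtype=torch.long)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(head(encoder(x)), y)
        opt.zero_grad(); loss.backward(); opt.step()

def finetune_on_annotated(expr_matrix, cell_type_labels, n_types, epochs=20):
    """Fine-tuning: the pre-trained encoder is adapted with the limited annotated cells."""
    head = nn.Linear(64, n_types)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
    x = torch.as_tensor(expr_matrix, dtype=torch.float32)
    y = torch.as_tensor(cell_type_labels, dtype=torch.long)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(head(encoder(x)), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return head
```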


2021 ◽  
Author(s):  
Ranpreet Kaur ◽  
Hamid GholamHosseini ◽  
Roopak Sinha

Abstract Background: Among skin cancers, melanoma is the most dangerous and aggressive form, exhibiting a high mortality rate worldwide. Biopsy and histopathological analysis are common procedures for skin cancer detection and prevention in clinical settings. A significant step in the diagnosis process is a deep understanding of the patterns, size, color, and structure of lesions based on images of the affected area obtained through dermatoscopes. However, manual segmentation of the lesion region is time-consuming because the lesion evolves and changes its shape over time, which makes its prediction challenging. Moreover, at the initial stage it is difficult to predict melanoma, as it closely resembles other skin cancer types that are not as malignant as melanoma; automatic segmentation techniques are therefore required to design a computer-aided system for accurate and timely detection. Methods: As deep learning approaches have gained high attention in recent years due to their remarkable performance, in this work we propose a novel, end-to-end atrous spatial pyramid pooling based convolutional neural network (CNN) framework for automatic lesion segmentation. This architecture is built on the concept of atrous (dilated) convolutions, which are effective for semantic segmentation. A dense deep neural network is designed using several building blocks consisting of convolutional, batch normalization, and leaky ReLU layers, with fine-tuning of hyperparameters contributing towards higher performance. Conclusion: The network was tested on three benchmark datasets from the International Skin Imaging Collaboration, i.e. ISIC 2016, ISIC 2017, and ISIC 2018. The experimental results showed that the proposed network achieved an average Jaccard index of 86.5% on ISIC 2016, 81.2% on ISIC 2017, and 81.2% on ISIC 2018, which is higher than the top three winners of the ISIC challenge. The model also successfully extracts lesions from the whole image in one pass, requiring no pre-processing. We conclude that the network performs accurate lesion segmentation on skin cancer images.
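A minimal sketch of an atrous spatial pyramid pooling block of the kind the abstract describes, with parallel dilated convolutions followed by batch normalization and leaky ReLU. The channel counts and dilation rates are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ASPPBlock(nn.Module):
    def __init__(self, in_ch=256, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.LeakyReLU(0.1),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Each branch sees the same feature map at a different receptive field,
        # and the concatenated responses are projected back to out_ch channels.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```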


2021 ◽  
Vol 13 (17) ◽  
pp. 3394 ◽  
Author(s):  
Le Yang ◽  
Yiming Chen ◽  
Shiji Song ◽  
Fan Li ◽  
Gao Huang

Although considerable success has been achieved in change detection on optical remote sensing images, accurate detection of specific changes is still challenging. Due to the diversity and complexity of ground surface changes and the increasing demand for detecting changes that require high-level semantics, we have to resort to deep learning techniques to extract the intrinsic representations of changed areas. However, one key problem in developing deep learning methods for detecting specific change areas is the limited availability of annotated data. In this paper, we collect a change detection dataset with 862 labeled image pairs, in which urban construction-related changes are labeled. Further, we propose a supervised change detection method based on a deep siamese semantic segmentation network to handle the proposed data effectively. The novelty of the method is that the proposed siamese network treats the change detection problem as a binary semantic segmentation task and learns to extract features from the image pairs directly. The siamese architecture, together with the elaborately designed semantic segmentation networks, significantly improves performance on change detection tasks. Experimental results demonstrate the promising performance of the proposed network compared to existing approaches.
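A rough sketch of the siamese formulation: a weight-shared encoder processes both images, and a decoder turns the feature difference into a two-class (change / no change) segmentation map. The tiny backbone and the absolute-difference fusion are simplifications for illustration, not the authors' network.

```python
import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                 # weights shared across both dates
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 1),                      # change / no-change logits
        )

    def forward(self, img_t1, img_t2):
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        return self.decoder(torch.abs(f1 - f2))      # binary segmentation of change
```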


2020 ◽  
Vol 2020 (8) ◽  
pp. 114-1-114-7
Author(s):  
Bryan Blakeslee ◽  
Andreas Savakis

Change detection in image pairs has traditionally been a binary process, reporting either “Change” or “No Change.” In this paper, we present LambdaNet, a novel deep architecture for performing pixel-level directional change detection based on a four class classification scheme. LambdaNet successfully incorporates the notion of “directional change” and identifies differences between two images as “Additive Change” when a new object appears, “Subtractive Change” when an object is removed, “Exchange” when different objects are present in the same location, and “No Change.” To obtain pixel annotated change maps for training, we generated directional change class labels for the Change Detection 2014 dataset. Our tests illustrate that LambdaNet would be suitable for situations where the type of change is unstructured, such as change detection scenarios in satellite imagery.
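The four-class scheme can be illustrated by deriving a directional change map from a pair of foreground masks. The helper below and its mask inputs are hypothetical, and need not match how the directional labels for the Change Detection 2014 dataset were actually generated.

```python
import numpy as np

NO_CHANGE, ADDITIVE, SUBTRACTIVE, EXCHANGE = 0, 1, 2, 3

def directional_change_map(mask_ref, mask_cur, object_id_ref=None, object_id_cur=None):
    """mask_ref / mask_cur: boolean foreground masks for the reference and current image."""
    change = np.full(mask_ref.shape, NO_CHANGE, dtype=np.uint8)
    change[~mask_ref & mask_cur] = ADDITIVE       # a new object appears
    change[mask_ref & ~mask_cur] = SUBTRACTIVE    # an object is removed
    both = mask_ref & mask_cur
    if object_id_ref is not None and object_id_cur is not None:
        # Different objects occupying the same pixels count as "Exchange".
        change[both & (object_id_ref != object_id_cur)] = EXCHANGE
    return change
```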

