Convolutional Neural Networks for Water Body Extraction from Landsat Imagery

Author(s):  
Long Yu ◽  
Zhiyin Wang ◽  
Shengwei Tian ◽  
Feiyue Ye ◽  
Jianli Ding ◽  
...  

Traditional machine learning methods for water body extraction require complex spectral analysis and feature selection, which in turn rely on a wealth of prior knowledge. They are time-consuming and struggle to meet demands for accuracy, automation, and broad applicability. We present a novel deep learning framework for water body extraction from Landsat imagery that considers both its spectral and spatial information. The framework is a hybrid of convolutional neural networks (CNNs) and a logistic regression (LR) classifier. CNNs, one family of deep learning methods, have achieved great success on a variety of visual tasks. A CNN can hierarchically extract deep features directly from raw images and distill the spectral–spatial regularities of the input data, thus improving classification performance. Experimental results on three Landsat imagery datasets show that our proposed model outperforms support vector machine (SVM) and artificial neural network (ANN) baselines.
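As a hedged illustration of the LR stage of such a hybrid, the sketch below scores a CNN feature vector with a binary logistic-regression head; the feature values and weights are invented for the example, not taken from the paper.

```python
import math

def logistic_regression_head(features, weights, bias):
    """Score a deep feature vector with a binary logistic-regression head.

    Returns P(water) in [0, 1]; features and weights are plain lists here.
    """
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical 4-dim CNN features for one pixel and hypothetical LR parameters.
features = [0.8, -0.2, 0.5, 0.1]
weights = [1.2, 0.4, -0.3, 0.9]
p_water = logistic_regression_head(features, weights, 0.0)
label = "water" if p_water >= 0.5 else "non-water"
```

In the actual framework the feature vector would come from the last CNN layer; only the thresholded LR decision is sketched here.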

2016 ◽  
Vol 21 (9) ◽  
pp. 998-1003 ◽  
Author(s):  
Oliver Dürr ◽  
Beate Sick

Deep learning methods are currently outperforming traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods to high-content screening–based phenotype classification. We trained a deep learning classifier in the form of convolutional neural networks with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%.
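The headline comparison is expressed as a misclassification rate; a minimal sketch of that metric on toy phenotype labels (all values invented):

```python
def misclassification_rate(pred, truth):
    """Fraction of test cells assigned the wrong phenotype class."""
    wrong = sum(1 for p, t in zip(pred, truth) if p != t)
    return wrong / len(truth)

# Toy held-out set with four hypothetical phenotype classes A-D.
pred  = ["A", "B", "C", "D", "A", "B"]
truth = ["A", "B", "C", "C", "A", "D"]
rate = misclassification_rate(pred, truth)  # 2 of 6 wrong
```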


Author(s):  
Ruopeng Xie ◽  
Jiahui Li ◽  
Jiawei Wang ◽  
Wei Dai ◽  
André Leier ◽  
...  

Abstract Virulence factors (VFs) enable pathogens to infect their hosts. A wealth of individual, disease-focused studies has identified a wide variety of VFs, and the growing mass of bacterial genome sequence data provides an opportunity for computational methods aimed at predicting VFs. Despite their attractive advantages and performance improvements, the existing methods have some limitations and drawbacks. Firstly, as the characteristics and mechanisms of VFs continually evolve with the emergence of antibiotic resistance, it is increasingly difficult to identify novel VFs using existing tools developed on outdated datasets; secondly, few systematic feature engineering efforts have been made to examine the utility of different types of features for model performance, as the majority of tools focus on extracting only a few types of features. Addressing these issues would likely improve the accuracy of VF predictors significantly, which in turn would be particularly useful for genome-wide prediction of VFs. In this work, we present a deep learning (DL)-based hybrid framework, termed DeepVF, that utilizes a stacking strategy to achieve more accurate identification of VFs. Using an enlarged, up-to-date dataset, DeepVF comprehensively explores a wide range of heterogeneous features with popular machine learning algorithms. Specifically, four classical algorithms (random forest, support vector machines, extreme gradient boosting and multilayer perceptron) and three DL algorithms (convolutional neural networks, long short-term memory networks and deep neural networks) are employed to train 62 baseline models using these features. To integrate their individual strengths, DeepVF combines these baseline models into a final meta model using the stacking strategy.
Extensive benchmarking experiments demonstrate the effectiveness of DeepVF: it achieves more accurate and stable performance than the baseline models on the benchmark dataset and clearly outperforms state-of-the-art VF predictors on the independent test set. Using the proposed hybrid ensemble model, a user-friendly online predictor, DeepVF (http://deepvf.erc.monash.edu/), has been implemented, and its utility from the user's viewpoint is compared with that of existing toolkits. We believe that DeepVF will serve as a useful tool for screening and identifying potential VFs from protein-coding gene sequences in bacterial genomes.
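The abstract does not specify the exact form of the meta model; as a minimal, hypothetical sketch, stacking can be illustrated by feeding baseline-model probabilities into a simple linear meta-model (all weights and probabilities below are invented):

```python
import math

def stack_predictions(base_probs, meta_weights, meta_bias=0.0):
    """Combine base-model probabilities with a linear meta-model (stacking).

    base_probs: per-model probability that a sequence is a virulence factor.
    """
    z = sum(p * w for p, w in zip(base_probs, meta_weights)) + meta_bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical outputs of four baseline models for one protein sequence.
base_probs = [0.9, 0.7, 0.8, 0.6]
meta_weights = [1.0, 0.5, 1.5, 0.5]  # would be learned on out-of-fold predictions
final_prob = stack_predictions(base_probs, meta_weights, -1.0)
```

In a real stacking setup the meta-model is trained on out-of-fold baseline predictions to avoid leaking training labels; only the combination step is shown.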


2019 ◽  
Vol 11 (2) ◽  
pp. 196 ◽  
Author(s):  
Omid Ghorbanzadeh ◽  
Thomas Blaschke ◽  
Khalil Gholamnia ◽  
Sansar Meena ◽  
Dirk Tiede ◽  
...  

There is a growing demand for detailed and accurate landslide maps and inventories around the globe, particularly in hazard-prone regions such as the Himalayas. Most standard mapping methods require expert knowledge, supervision and fieldwork. In this study, we use optical data from the RapidEye satellite and topographic factors to analyze the potential of machine learning methods, i.e., artificial neural network (ANN), support vector machines (SVM) and random forest (RF), and different deep learning convolutional neural networks (CNNs) for landslide detection. We use two training zones and one test zone to independently evaluate the performance of different methods in the highly landslide-prone Rasuwa district in Nepal. Twenty different maps are created using ANN, SVM, RF and different CNN instantiations and are compared against the results of extensive fieldwork through a mean intersection-over-union (mIOU) and other common metrics. This accuracy assessment yields a best result of 78.26% mIOU for a small-window-size CNN that uses spectral information only. The additional information from a 5 m digital elevation model helps to discriminate between human settlements and landslides but does not improve the overall classification accuracy. CNNs do not automatically outperform ANN, SVM and RF, although this is sometimes claimed. Rather, the performance of CNNs depends strongly on their design, i.e., layer depth, input window sizes and training strategies. We conclude that the CNN method is still in its infancy, as most researchers either use predefined parameters in solutions like Google TensorFlow or apply different settings in a trial-and-error manner. Nevertheless, deep learning can improve landslide mapping in the future if the effects of different designs are better understood, enough training samples exist, and augmentation strategies for artificially increasing the number of existing samples are better understood.
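The mIOU metric used for the accuracy assessment can be sketched in a few lines; the label maps below are toy values, not the study's data:

```python
def mean_iou(pred, truth, classes):
    """Mean intersection-over-union between predicted and reference label maps."""
    ious = []
    for c in classes:
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy flattened maps: 1 = landslide, 0 = background.
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
score = mean_iou(pred, truth, classes=[0, 1])  # 0.5 for both classes
```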


2020 ◽  
Vol 143 ◽  
pp. 02015
Author(s):  
Li Zherui ◽  
Cai Huiwen

Sea ice classification is one of the important tasks of sea ice monitoring. Accurate extraction of sea ice types is of great significance for assessing sea ice conditions and for smooth navigation and safe marine operations. Sentinel-2 is an optical satellite launched by the European Space Agency; its high spatial resolution and wide imaging swath provide powerful support for sea ice monitoring. However, traditional supervised classification methods struggle to achieve fine-grained results for classes with few samples. To solve this problem, this paper proposes a sea ice extraction method based on deep learning and applies it to Liaodong Bay in the Bohai Sea, China. A convolutional neural network was used to extract and classify features from the Sentinel-2 imagery. The results showed that the overall accuracy of the algorithm was 85.79%, a significant improvement over traditional algorithms such as the minimum distance, maximum likelihood, Mahalanobis distance and support vector machine methods. The method proposed in this paper, which combines convolutional neural networks and high-resolution multispectral data, provides a new approach for remote sensing monitoring of sea ice.
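One of the traditional baselines mentioned, the minimum distance method, is simple enough to sketch; the class means and pixel reflectances below are hypothetical:

```python
def minimum_distance_classify(pixel, class_means):
    """Assign a pixel to the class whose spectral mean is nearest (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(class_means, key=lambda c: dist(pixel, class_means[c]))

# Hypothetical per-band mean reflectances for two classes.
class_means = {"sea_ice": [0.8, 0.7, 0.9], "water": [0.1, 0.2, 0.3]}
label = minimum_distance_classify([0.7, 0.6, 0.8], class_means)
```

Unlike the CNN, this baseline uses per-pixel spectra only, with no spatial context, which is one reason it underperforms on fine-grained ice types.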


Electronics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 190 ◽  
Author(s):  
Zhiwei Huang ◽  
Jinzhao Lin ◽  
Liming Xu ◽  
Huiqian Wang ◽  
Tong Bai ◽  
...  

The application of deep convolutional neural networks (CNNs) in medical image processing has attracted extensive attention and demonstrated remarkable progress. An increasing number of deep learning methods have been devoted to classifying chest X-ray (CXR) images, and most of them are based on classic pretrained models trained on global chest X-ray images. In this paper, we are interested in diagnosing chest X-ray images using our proposed Fusion High-Resolution Network (FHRNet). FHRNet consists of three branch convolutional neural networks, concatenates the global average pooling layers of the global and local feature extractors, and is fine-tuned for thorax disease classification. Compared with other available methods, our experimental results show that the proposed model yields better disease classification performance on the ChestX-ray14 dataset, according to the receiver operating characteristic curve and area-under-the-curve score. An ablation study further confirmed the effectiveness of the global and local branch networks in improving the classification accuracy of thorax diseases.
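The global average pooling and concatenation step can be illustrated minimally; the activation maps below are toy values, and only the pooling-plus-concatenation idea is shown, not the full FHRNet:

```python
def global_average_pool(channels):
    """channels: list of 2-D activation maps; returns one mean per channel."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in channels]

# Hypothetical single-channel activation maps from the global and local branches.
global_feats = global_average_pool([[[1.0, 3.0], [5.0, 7.0]]])   # one 2x2 map
local_feats  = global_average_pool([[[2.0, 2.0], [2.0, 2.0]]])
fused = global_feats + local_feats  # list concatenation = feature fusion
```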


2020 ◽  
Author(s):  
Muhammad Awais ◽  
Xi Long ◽  
Bin Yin ◽  
Chen chen ◽  
Saeed Akbarzadeh ◽  
...  

Abstract Objective: In this paper, we evaluate the use of pre-trained convolutional neural networks (CNNs) as feature extractors, followed by principal component analysis (PCA) to find the most discriminant features, with classification performed by a support vector machine (SVM), for neonatal sleep and wake states using Fluke® facial video frames. Using pre-trained CNNs as feature extractors greatly reduces the effort of collecting new neonatal data for training a neural network, which could be computationally very expensive. The features are extracted after the fully connected layers (FCLs), where we compare several pre-trained CNNs, e.g., VGG16, VGG19, InceptionV3, GoogLeNet, ResNet, and AlexNet. Results: From around 2 h of Fluke® video recordings of seven neonates, we achieved a modest classification performance, with an accuracy, sensitivity, and specificity of 65.3%, 69.8%, and 61.0%, respectively, with AlexNet using Fluke® (RGB) video frames. This indicates that a pre-trained model used as a feature extractor does not fully suffice for highly reliable sleep and wake classification in neonates. Therefore, in the future a dedicated neural network trained on neonatal data, or a transfer learning approach, is required.
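The PCA step can be sketched with a tiny power-iteration implementation on hypothetical 2-D features; a real pipeline would run PCA on the high-dimensional FCL features instead:

```python
def principal_direction(points, iters=100):
    """First principal component of mean-centred 2-D points via power iteration."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    xs = [p[0] - mx for p in points]
    ys = [p[1] - my for p in points]
    # Entries of the 2x2 covariance matrix.
    cxx = sum(x * x for x in xs) / n
    cxy = sum(x * y for x, y in zip(xs, ys)) / n
    cyy = sum(y * y for y in ys) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# Hypothetical 2-D CNN features that vary mostly along the (1, 1) direction.
feats = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.8)]
v = principal_direction(feats)  # unit vector close to (1, 1) / sqrt(2)
```

Projecting each feature vector onto the leading components keeps the most discriminant variance before the SVM stage.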


2020 ◽  
Author(s):  
Hryhorii Chereda ◽  
Annalen Bleckmann ◽  
Kerstin Menck ◽  
Júlia Perera-Bel ◽  
Philip Stegmaier ◽  
...  

Abstract Motivation: Contemporary deep learning approaches show cutting-edge performance in a variety of complex prediction tasks. Nonetheless, the application of deep learning in healthcare remains limited, since deep learning methods are often considered non-interpretable black-box models. Layer-wise Relevance Propagation (LRP) is a technique to explain decisions of deep learning methods. It is widely used to interpret convolutional neural networks (CNNs) applied to image data. Recently, CNNs started to extend towards non-Euclidean domains like graphs. Molecular networks are commonly represented as graphs detailing interactions between molecules. Gene expression data can be assigned to the vertices of these graphs; in other words, gene expression data can be structured by utilizing molecular network information as prior knowledge. Graph-CNNs can be applied to structured gene expression data, for example, to predict metastatic events in breast cancer. Therefore, there is a need for explanations showing which part of a molecular network is relevant for predicting an event, e.g. distant metastasis in cancer, for each individual patient. Results: We extended the procedure of LRP to make it available for Graph-CNNs and tested its applicability on a large breast cancer dataset. We present Graph Layer-wise Relevance Propagation (GLRP) as a new method to explain the decisions made by Graph-CNNs. We demonstrate a sanity check of GLRP on a hand-written digits dataset and then apply the method to gene expression data. We show that GLRP provides patient-specific molecular subnetworks that largely agree with clinical knowledge and identify common as well as novel, and potentially druggable, drivers of tumor progression.
As a result, this method could be highly useful for interpreting classification results at the individual patient level, for example in precision medicine approaches or a molecular tumor board.
Availability: https://gitlab.gwdg.de/UKEBpublic/graph-lrp, https://frankkramer-lab.github.io/MetaRelSubNetVis/
Contact: [email protected]
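GLRP extends LRP to graph convolutions; as a hedged illustration of the underlying idea, the basic LRP-0 rule for a single dense layer redistributes output relevance to inputs in proportion to their contributions (the weights and activations below are invented):

```python
def lrp_linear(x, w, relevance_out, eps=1e-9):
    """LRP-0 rule for one linear layer: redistribute output relevance to inputs.

    x: input activations, w[i][j]: weight from input i to output j,
    relevance_out[j]: relevance assigned to output neuron j.
    """
    n_in, n_out = len(x), len(relevance_out)
    z = [sum(x[i] * w[i][j] for i in range(n_in)) for j in range(n_out)]
    r_in = [0.0] * n_in
    for j in range(n_out):
        denom = z[j] + (eps if z[j] >= 0 else -eps)  # stabilizer against z == 0
        for i in range(n_in):
            r_in[i] += x[i] * w[i][j] / denom * relevance_out[j]
    return r_in

x = [1.0, 2.0]
w = [[0.5, 1.0], [0.5, 0.0]]
r = lrp_linear(x, w, relevance_out=[1.0, 1.0])
```

A key property visible here is conservation: the relevance assigned to the inputs sums (up to the stabilizer) to the relevance that entered at the outputs.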


2021 ◽  
Vol 5 (3) ◽  
pp. 584-593
Author(s):  
Naufal Hilmiaji ◽  
Kemas Muslim Lhaksmana ◽  
Mahendra Dwifebri Purbolaksono

especially with the advancement of deep learning methods for text classification. Despite some efforts to identify emotion in Indonesian tweets, the performance evaluation results have not reached acceptable numbers. To solve this problem, this paper implements a classification model using a convolutional neural network (CNN), which has demonstrated strong performance in text classification. To allow easy comparison with previous research, classification is performed on the same dataset, which consists of 4,403 Indonesian tweets labeled with five emotion classes: anger, fear, joy, love, and sadness. The performance evaluation achieves precision, recall, and F1-score of 90.1%, 90.3%, and 90.2%, respectively, while the highest accuracy reaches 89.8%. These results outperform previous research on the same classification task and dataset.
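The reported precision, recall, and F1-score follow the standard definitions; a minimal sketch on toy emotion labels (not the paper's data):

```python
def precision_recall_f1(pred, truth, positive):
    """Per-class precision, recall, and F1 for one class treated as positive."""
    tp = sum(1 for p, t in zip(pred, truth) if p == positive and t == positive)
    fp = sum(1 for p, t in zip(pred, truth) if p == positive and t != positive)
    fn = sum(1 for p, t in zip(pred, truth) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy predictions over three of the five emotion classes.
pred  = ["joy", "anger", "joy", "fear", "joy"]
truth = ["joy", "anger", "anger", "fear", "joy"]
p, r, f = precision_recall_f1(pred, truth, positive="joy")
```

Averaging these per-class scores over all five classes yields the macro figures typically reported for multi-class emotion classification.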


2021 ◽  
pp. 1-11
Author(s):  
Tianshi Mu ◽  
Kequan Lin ◽  
Huabing Zhang ◽  
Jian Wang

Deep learning is gaining significant traction in a wide range of areas. However, recent studies have demonstrated that deep learning exhibits a fatal weakness to adversarial examples. Due to the black-box nature and lack of transparency of deep learning, it is difficult to explain why adversarial examples exist and hard to defend against them. This study focuses on improving the adversarial robustness of convolutional neural networks. We first explore how adversarial examples behave inside the network through visualization. We find that adversarial examples produce perturbations in hidden activations, which form an amplification effect that fools the network. Motivated by this observation, we propose an approach, termed sanitizing hidden activations, to help the network correctly recognize adversarial examples by eliminating or reducing the perturbations in hidden activations. To demonstrate the effectiveness of our approach, we conduct experiments on three widely used datasets: MNIST, CIFAR-10 and ImageNet, and compare with state-of-the-art defense techniques. The experimental results show that our sanitizing approach generalizes better across different kinds of attacks and can effectively improve the adversarial robustness of convolutional neural networks.
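The abstract does not detail how the perturbations are eliminated or reduced; one plausible, purely illustrative reduction step is to clamp each hidden activation into the range observed on clean inputs (ranges and values below are invented):

```python
def sanitize_activations(activations, clean_min, clean_max):
    """Clamp hidden activations into per-unit ranges observed on clean inputs.

    A simple stand-in for 'sanitizing': perturbations that push an activation
    outside its clean range are cut back, damping the amplification effect.
    """
    return [min(max(a, lo), hi)
            for a, lo, hi in zip(activations, clean_min, clean_max)]

# Hypothetical per-unit clean ranges and an adversarially perturbed vector.
clean_min = [0.0, 0.0, 0.0]
clean_max = [1.0, 2.0, 1.5]
perturbed = [1.7, -0.3, 0.9]
sanitized = sanitize_activations(perturbed, clean_min, clean_max)
```

The paper's actual sanitizing procedure may differ; this sketch only conveys the idea of suppressing out-of-distribution hidden activations.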

