Deep learning for the prediction and classification of land use and land cover changes using deep convolutional neural network

2021 ◽  
Vol 65 ◽  
pp. 101412
Author(s):  
J. Jagannathan ◽  
C. Divya
2022 ◽  
Vol 10 (1) ◽  
pp. 0-0

A brain tumor is a severe cancerous disease caused by uncontrolled and abnormal division of cells. Timely detection and treatment planning increase the life expectancy of patients. Automated detection and classification of brain tumors is a challenging task that otherwise depends on the clinician’s knowledge and experience. For this reason, deep learning is one of the most practical and important techniques available. Recent progress in deep learning has helped clinicians use medical imaging for the diagnosis of brain tumors. In this paper, we present a comparison of deep convolutional neural network models for automatic binary classification of an MRI image dataset, with the goal of providing precise tools to health professionals, based on fine-tuned recent versions of DenseNet, Xception, NASNet-A, and VGGNet. The experiments were conducted using an open MRI dataset of 3,762 images. Other performance measures used in the study are the area under the curve, precision, recall, and specificity.
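A minimal sketch of this kind of backbone comparison, not the authors' code: each Keras pretrained model is fine-tuned with a new binary head on an MRI dataset. The directory name, image size, training settings, and the use of NASNetMobile as a stand-in for NASNet-A are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
BACKBONES = {
    "DenseNet121": tf.keras.applications.DenseNet121,
    "Xception": tf.keras.applications.Xception,
    "NASNetMobile": tf.keras.applications.NASNetMobile,  # stand-in for NASNet-A
    "VGG16": tf.keras.applications.VGG16,
}

def build_classifier(backbone_fn):
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=IMG_SIZE + (3,))
    base.trainable = False                      # train only the new head first
    model = models.Sequential([
        # NOTE: each backbone has its own preprocess_input; a plain rescaling
        # is used here for brevity.
        layers.Rescaling(1.0 / 255),
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # binary: tumour vs. no tumour
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(),
                           tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    return model

# Hypothetical directory layout: mri_dataset/{tumour,normal}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "mri_dataset", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "mri_dataset", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="binary")

for name, fn in BACKBONES.items():
    model = build_classifier(fn)
    history = model.fit(train_ds, validation_data=val_ds, epochs=5, verbose=0)
    print(name, "best val accuracy:", max(history.history["val_accuracy"]))
```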


Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2398 ◽  
Author(s):  
Bin Xie ◽  
Hankui K. Zhang ◽  
Jie Xue

In the classification of satellite images acquired over smallholder agricultural landscapes with complex spectral profiles of various crop types, exploiting image spatial information is important. The deep convolutional neural network (CNN), originally designed for natural image recognition in computer vision, can automatically learn high-level spatial information and is thus promising for such tasks. This study evaluated different CNN structures for the classification of four smallholder agricultural landscapes in Heilongjiang, China, using pan-sharpened 2 m GaoFen-1 (meaning “high resolution” in Chinese) satellite images. CNNs with three pooling strategies (no pooling, max pooling, and average pooling) were evaluated and compared with random forest. Two different numbers (~70,000 and ~290,000) of CNN learnable parameters were examined for each pooling strategy. The training and testing samples were systematically sampled from reference land cover maps to ensure a sample distribution proportional to the reference land cover occurrence, and included 60,000–400,000 pixels to ensure effective training. Testing sample classification results in the four study areas showed that average pooling was the best strategy and that the CNN significantly outperformed random forest (2.4–3.3% higher overall accuracy and 0.05–0.24 higher kappa coefficient). Visual examination of the CNN classification maps showed that the CNN discriminates spectrally similar crop types better by effectively exploiting spatial information. The CNN still significantly outperformed random forest when the training samples were evenly distributed among classes. Finally, future research to improve CNN performance is discussed.
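A minimal sketch of how such a pooling-strategy comparison could be set up, under assumed hyper-parameters rather than the paper's exact architecture: three patch-based CNNs that differ only in their pooling layer (none, max, or average).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

PATCH = 9       # assumed patch size (pixels) around each labelled pixel
BANDS = 4       # pan-sharpened GaoFen-1 provides 4 multispectral bands
N_CLASSES = 6   # assumed number of land-cover classes

def build_cnn(pooling):
    """Build a small patch classifier; pooling is None, 'max', or 'avg'."""
    inputs = layers.Input(shape=(PATCH, PATCH, BANDS))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    if pooling == "max":
        x = layers.MaxPooling2D(2)(x)
    elif pooling == "avg":
        x = layers.AveragePooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    outputs = layers.Dense(N_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Compare parameter counts across the three pooling strategies.
for strategy in (None, "max", "avg"):
    print(strategy, "parameters:", build_cnn(strategy).count_params())
```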


2020 ◽  
Vol 12 (4) ◽  
pp. 698 ◽  
Author(s):  
Duo Jia ◽  
Changqing Song ◽  
Changxiu Cheng ◽  
Shi Shen ◽  
Lixin Ning ◽  
...  

Spatiotemporal fusion is considered a feasible and cost-effective way to resolve the trade-off between the spatial and temporal resolution of satellite sensors. Recently proposed learning-based spatiotemporal fusion methods can address the prediction of both phenological and land-cover change. In this paper, we propose a novel deep learning-based spatiotemporal data fusion method that uses a two-stream convolutional neural network. The method combines forward and backward prediction to generate a target fine image, in which a temporal-change-based mapping and a spatial-information-based mapping are formed simultaneously, addressing the prediction of both phenological and land-cover changes with better generalization ability and robustness. Comparative experimental results on test datasets with phenological and land-cover changes verified the effectiveness of our method. Compared to existing learning-based spatiotemporal fusion methods, our method is more effective in predicting phenological change and directly reconstructs the prediction with complete spatial detail, without the need for auxiliary modulation.
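A minimal sketch of a two-stream layout of this general kind, under assumed tile size, band count, and layer widths rather than the authors' architecture: a forward stream predicts the fine image at the target date from the earlier fine/coarse pair, a backward stream does the same from the later pair, and the two predictions are fused (here by simple averaging).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

H = W = 128   # assumed tile size
BANDS = 6     # assumed number of spectral bands

def prediction_stream(name):
    # Input: the fine image at a reference date stacked with the coarse
    # temporal change between the reference date and the target date.
    inp = layers.Input(shape=(H, W, 2 * BANDS), name=f"{name}_input")
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(BANDS, 3, padding="same", name=f"{name}_fine")(x)
    return models.Model(inp, out, name=name)

forward = prediction_stream("forward")    # earlier pair -> target fine image
backward = prediction_stream("backward")  # later pair -> target fine image

fwd_in = layers.Input(shape=(H, W, 2 * BANDS))
bwd_in = layers.Input(shape=(H, W, 2 * BANDS))
fused = layers.Average()([forward(fwd_in), backward(bwd_in)])  # simple fusion
model = models.Model([fwd_in, bwd_in], fused)
model.compile(optimizer="adam", loss="mae")
model.summary()
```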


2018 ◽  
Vol 10 (12) ◽  
pp. 2053 ◽  
Author(s):  
Yunfeng Hu ◽  
Qianli Zhang ◽  
Yunzhi Zhang ◽  
Huimin Yan

Land cover and its dynamics are the basis for characterizing surface conditions, supporting land resource management and optimization, and assessing the impacts of climate change and human activities. In land cover information extraction, the traditional convolutional neural network (CNN) approach has several problems, such as inapplicability to multispectral and hyperspectral satellite imagery, weak generalization ability, and the difficulty of automating the construction of a training database. To solve these problems, this study proposes a new type of deep convolutional neural network based on Landsat-8 Operational Land Imager (OLI) imagery. The network integrates cascaded cross-channel parametric pooling and an average pooling layer, applies a hierarchical sampling strategy to automatically construct the training dataset, determines the technical scheme of the model-related parameters, and finally performs automatic classification of remote sensing images. This study used the new deep convolutional neural network to extract land cover information for Qinhuangdao City, Hebei Province, and compared the experimental results with those obtained by traditional methods. The results show that: (1) The proposed deep convolutional neural network (DCNN) model can automatically construct the training dataset and classify images. The model classifies multispectral and hyperspectral satellite images with deep neural networks, which improves its generalization ability and simplifies its application. (2) The proposed DCNN model provides the best classification results in the Qinhuangdao area. The overall accuracy of the resulting land cover data is 82.0%, and the kappa coefficient is 0.76. The overall accuracy is improved by 5% and 14% compared to the support vector machine method and the maximum likelihood classification method, respectively.
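Cascaded cross-channel parametric pooling can be implemented as stacked 1x1 convolutions (as in Network-in-Network). A minimal sketch with assumed patch size, band count, class count, and layer widths, not the published network:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

PATCH = 7      # assumed patch size
BANDS = 7      # assumed number of Landsat-8 OLI reflective bands used
N_CLASSES = 6  # assumed number of land-cover classes

model = models.Sequential([
    layers.Input(shape=(PATCH, PATCH, BANDS)),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    # Cascaded cross-channel parametric pooling: 1x1 convolutions that
    # recombine information across spectral feature channels.
    layers.Conv2D(64, 1, activation="relu"),
    layers.Conv2D(32, 1, activation="relu"),
    # Average pooling layer before the classifier.
    layers.AveragePooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```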


2021 ◽  
Author(s):  
Chao-Hsin Chen ◽  
Kuo-Fong Tung ◽  
Wen-Chang Lin

Abstract
Background: With the advancement of NGS platforms, large numbers of human variations and SNPs have been discovered in human genomes. It is essential to utilize these massive nucleotide variations for the discovery of disease genes and human phenotypic traits, yet their sheer number poses new challenges for polygenic disease studies. In recent years, deep-learning-based machine learning approaches have achieved great success in many areas, especially image classification. In this preliminary study, we explore the deep convolutional neural network algorithm on genome-wide SNP images for the classification of human populations.
Results: We processed the SNP information from more than 2,500 samples of the 1000 Genomes Project. Five major human races were used as classification categories. We first generated SNP image graphs of chromosome 22, which contains about one million SNPs. Using a residual network (ResNet-50) pipeline as the CNN algorithm, we obtained classification models that successfully classify the validation dataset. F1 scores of the trained CNN models are 95–99%, and validation with an additional separate 150 samples indicates 95.8% accuracy of the CNN model. Misclassification was often observed between the American and European categories, which could be attributed to shared ancestral origins. We further attempted to use SNP image graphs in reduced color representations or images generated in spiral shapes, which also provided good prediction accuracy. When we instead used SNP image graphs from chromosome 20, almost all CNN models failed to classify the human race category successfully, except for the African samples.
Conclusions: We have developed a human race prediction model based on a deep convolutional neural network. It is feasible to use SNP image graphs for the classification of individual genomes.
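A minimal sketch of the general idea, not the authors' pipeline: a chromosome-22 genotype vector (0/1/2 alternate-allele counts per SNP) is packed into a 2-D grayscale image and an ImageNet-pretrained ResNet50 is repurposed to predict one of five population labels. The image size and the encoding are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SIDE = 1000          # a 1000x1000 grid holds roughly one million SNPs
N_CLASSES = 5        # five population categories, as in the study

def genotypes_to_image(genotypes):
    """Map a 1-D genotype array (values 0/1/2) to a square 3-channel image."""
    padded = np.zeros(SIDE * SIDE, dtype=np.float32)
    padded[: len(genotypes)] = np.asarray(genotypes, dtype=np.float32) / 2.0
    img = padded.reshape(SIDE, SIDE, 1)
    return np.repeat(img, 3, axis=-1)          # ResNet50 expects 3 channels

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       input_shape=(SIDE, SIDE, 3))
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical usage with a random genotype vector standing in for real data.
dummy = genotypes_to_image(np.random.randint(0, 3, size=1_000_000))
print(model.predict(dummy[np.newaxis, ...]).shape)   # (1, 5)
```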


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0256500
Author(s):  
Maleika Heenaye-Mamode Khan ◽  
Nazmeen Boodoo-Jahangeer ◽  
Wasiimah Dullull ◽  
Shaista Nathire ◽  
Xiaohong Gao ◽  
...  

The real cause of breast cancer is very challenging to determine, and therefore early detection of the disease is necessary for reducing the death rate due to breast cancer. Early detection can increase the chance of survival by up to 8%. Primarily, breast images from mammograms, X-rays, or MRI are analyzed by radiologists to detect abnormalities. However, even experienced radiologists face problems in identifying features like micro-calcifications, lumps, and masses, leading to high false positive and false negative rates. Recent advancements in image processing and deep learning create hope for devising more enhanced applications for the early detection of breast cancer. In this work, we have developed a deep convolutional neural network (CNN) to segment and classify various types of breast abnormalities, such as calcifications, masses, asymmetry, and carcinomas. Unlike existing research, which has mainly classified cancer into benign and malignant, this finer-grained classification can lead to improved disease management. First, transfer learning was carried out on our dataset using the pre-trained ResNet50 model. Along similar lines, we developed an enhanced deep learning model in which the learning rate is treated as one of the most important attributes when training the neural network. The learning rate is set adaptively in our proposed model based on changes in the error curves during learning. The proposed deep learning model achieved a performance of 88% in the classification of the four types of breast abnormalities: masses, calcifications, carcinomas, and asymmetry.
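A minimal sketch of ResNet50 transfer learning with an error-curve-driven learning rate, under assumed settings rather than the authors' exact model: the rate is reduced whenever the validation loss plateaus, a standard stand-in for adaptive schedules based on the error curve. The directory name, class folders, and hyper-parameters are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
CLASSES = ["mass", "calcification", "carcinoma", "asymmetry"]

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       input_shape=IMG_SIZE + (3,))
base.trainable = False                      # transfer learning: freeze backbone
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.4),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Adapt the learning rate when the validation error curve stops improving.
callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                         patience=3, min_lr=1e-6),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=8,
                                     restore_best_weights=True),
]

# Hypothetical layout: mammograms/{mass,calcification,carcinoma,asymmetry}/
train_ds = tf.keras.utils.image_dataset_from_directory(
    "mammograms", validation_split=0.2, subset="training", seed=1,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "mammograms", validation_split=0.2, subset="validation", seed=1,
    image_size=IMG_SIZE, batch_size=32)
model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=callbacks)
```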

