Co-evolution-based parameter learning for remote sensing scene classification

Author(s):  
Di Zhang ◽  
Yichen Zhou ◽  
Jiaqi Zhao ◽  
Yong Zhou

Appropriate hyperparameter settings are a key factor in determining the performance of a deep learning model. An efficient hyperparameter optimization algorithm can not only improve the speed and efficiency of tuning, but also lower the barrier to applying deep learning models. We therefore propose a co-evolution-based parameter learning algorithm for remote sensing scene classification. First, a co-evolution framework is proposed to optimize the optimizer’s hyperparameters and the weight parameters of the convolutional neural network (CNN) simultaneously. Second, using a two-population co-evolution strategy, the hyperparameters are learned within their own population while the CNN weights are updated using information exchanged between the populations. Finally, a parallel computing mechanism is adopted to speed up learning, since the two populations can evolve simultaneously. Extensive experiments on three public datasets demonstrate the effectiveness of the proposed approach.
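A minimal sketch of the two-population idea, with a toy regression objective standing in for CNN training; the paper's actual variation operators, population sizes, and model are not specified in the abstract, so every operator and constant below is an illustrative assumption.

```python
# Population A holds optimizer hyperparameters (learning rate, momentum);
# Population B holds candidate weight vectors of a toy model. Each generation,
# hyperparameter candidates are scored by how well they advance the current
# best weights, and the best candidate is then used to update the weight population.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task standing in for the CNN training objective.
X = rng.normal(size=(256, 10))
true_w = rng.normal(size=10)
y = X @ true_w + 0.1 * rng.normal(size=256)

def loss(w):
    return float(np.mean((X @ w - y) ** 2))

def sgd_step(w, lr, momentum, velocity):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Population A: hyperparameter candidates; Population B: weight candidates.
hyper_pop = [{"lr": 10 ** rng.uniform(-4, -1), "momentum": rng.uniform(0.0, 0.99)}
             for _ in range(8)]
weight_pop = [rng.normal(size=10) for _ in range(4)]
velocities = [np.zeros(10) for _ in weight_pop]

for generation in range(30):
    # Score each hyperparameter candidate by the loss reached after one step
    # on the current best weights (information flows between populations).
    best_w = min(weight_pop, key=loss)
    scores = []
    for h in hyper_pop:
        w_trial, _ = sgd_step(best_w.copy(), h["lr"], h["momentum"], np.zeros(10))
        scores.append(loss(w_trial))
    # Evolve population A: keep the top half, mutate survivors to refill (hypothetical operator).
    order = np.argsort(scores)
    survivors = [hyper_pop[i] for i in order[: len(hyper_pop) // 2]]
    hyper_pop = survivors + [
        {"lr": max(1e-5, s["lr"] * 10 ** rng.normal(0, 0.1)),
         "momentum": float(np.clip(s["momentum"] + rng.normal(0, 0.05), 0.0, 0.99))}
        for s in survivors
    ]
    # Update population B with the best hyperparameters found this generation.
    best_h = hyper_pop[0]
    for i in range(len(weight_pop)):
        weight_pop[i], velocities[i] = sgd_step(
            weight_pop[i], best_h["lr"], best_h["momentum"], velocities[i])

print("final loss:", loss(min(weight_pop, key=loss)))
```

Because the two populations only exchange the current best individuals, they can be evaluated in parallel, which is the source of the speed-up claimed in the abstract.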

2021 ◽  
Vol 11 (24) ◽  
pp. 11659
Author(s):  
Sheng-Chieh Hung ◽  
Hui-Ching Wu ◽  
Ming-Hseng Tseng

Through the continued development of technology, applying deep learning to remote sensing scene classification has become quite mature. The keys to effective deep learning model training are the model architecture, the training strategy, and image quality. The authors' previous studies using explainable artificial intelligence (XAI) showed that incorrectly classified images can be recovered when the model has adequate capacity and the image quality is corrected manually; however, manual image quality correction takes a significant amount of time. Therefore, this research integrates techniques such as noise reduction, sharpening, partial color area equalization, and color channel adjustment to evaluate a set of automated image quality enhancement strategies. These methods enhance details, light and shadow, color, and other image features, which helps the deep learning model extract image features and further improves classification performance. In this study, we demonstrate that the proposed image quality enhancement strategy, combined with deep learning techniques, effectively improves the scene classification performance on remote sensing images and outperforms previous state-of-the-art approaches.
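A hedged sketch of an automated pre-processing pipeline of the kind described (denoising, sharpening, local contrast equalization, color channel adjustment). The exact operations and parameter values used by the authors are not given in the abstract, so the OpenCV primitives and constants below are illustrative choices.

```python
import cv2
import numpy as np

def enhance(img_bgr: np.ndarray) -> np.ndarray:
    # 1) Noise reduction.
    out = cv2.fastNlMeansDenoisingColored(img_bgr, None, 5, 5, 7, 21)

    # 2) Sharpening via a simple unsharp-style kernel.
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=np.float32)
    out = cv2.filter2D(out, -1, kernel)

    # 3) Partial (local) contrast equalization: CLAHE on the lightness channel.
    lab = cv2.cvtColor(out, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    out = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    # 4) Color channel adjustment: mild per-channel gain (hypothetical values).
    gains = np.array([1.00, 1.02, 1.05])  # B, G, R
    out = np.clip(out.astype(np.float32) * gains, 0, 255).astype(np.uint8)
    return out

if __name__ == "__main__":
    img = cv2.imread("scene.jpg")  # any remote sensing scene tile (placeholder path)
    if img is not None:
        cv2.imwrite("scene_enhanced.jpg", enhance(img))
```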


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rajat Garg ◽  
Anil Kumar ◽  
Nikunj Bansal ◽  
Manish Prateek ◽  
Shashi Kumar

Abstract: Urban area mapping is an important application of remote sensing that aims at estimating both land cover and land-cover change in urban areas. A major challenge in analyzing Synthetic Aperture Radar (SAR) remote sensing data is the strong similarity between highly vegetated urban areas and oriented urban targets on the one hand and actual vegetation on the other. This similarity leads to misclassification of urban areas as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims to minimize the misclassification of such highly vegetated and oriented urban targets into the vegetation class with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), were implemented alongside the deep learning model DeepLabv3+ for semantic segmentation of Polarimetric SAR (PolSAR) data. It is a general perception that a large dataset is required for the successful implementation of any deep learning model, but in SAR-based remote sensing a major issue is the unavailability of a large benchmark labeled dataset for training deep learning algorithms from scratch. The current work shows that the pre-trained deep learning model DeepLabv3+ outperforms the machine learning algorithms for the land use and land cover (LULC) classification task even with a small dataset, using transfer learning. DeepLabv3+ achieved the highest pixel accuracy of 87.78% and an overall pixel accuracy of 85.65%. Among the machine learning algorithms, Random Forest performed best with an overall pixel accuracy of 77.91%, while SVM and KNN trailed with overall accuracies of 77.01% and 76.47%, respectively. For the semantic segmentation task, DeepLabv3+ recorded the highest precision of 0.9228 for the urban class, while the machine learning algorithms SVM and RF gave comparable results with precisions of 0.8977 and 0.8958, respectively.
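A minimal transfer-learning sketch in the spirit of the study: fine-tune a pre-trained segmentation model on a small labeled PolSAR dataset instead of training from scratch. torchvision ships DeepLabv3 rather than the "+" variant, so it is used here as a stand-in (assuming a recent torchvision); the tensors, class count, and step count are placeholders.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 4  # e.g. urban, vegetation, water, bare soil (assumed classes)

model = deeplabv3_resnet50(weights="DEFAULT")          # pre-trained backbone and head
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, 1)   # new segmentation head
if model.aux_classifier is not None:
    model.aux_classifier[4] = nn.Conv2d(256, NUM_CLASSES, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Placeholder batch: 3-channel PolSAR pseudo-color patches with per-pixel labels.
images = torch.randn(2, 3, 256, 256)
labels = torch.randint(0, NUM_CLASSES, (2, 256, 256))

model.train()
for _ in range(3):  # a few illustrative optimization steps
    optimizer.zero_grad()
    logits = model(images)["out"]            # (B, NUM_CLASSES, H, W)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"loss: {loss.item():.3f}")
```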


2021 ◽  
Author(s):  
Jae-Seung Yun ◽  
Jaesik Kim ◽  
Sang-Hyuk Jung ◽  
Seon-Ah Cha ◽  
Seung-Hyun Ko ◽  
...  

Objective: We aimed to develop and evaluate a non-invasive deep learning algorithm for screening type 2 diabetes in UK Biobank participants using retinal images. Research Design and Methods: The deep learning model for prediction of type 2 diabetes was trained on retinal images from 50,077 UK Biobank participants and tested on 12,185 participants. We evaluated its performance in terms of predicting traditional risk factors (TRFs) and genetic risk for diabetes. Next, we compared the performance of three models in predicting type 2 diabetes: 1) an image-only deep learning algorithm, 2) TRFs, and 3) the combination of the algorithm and TRFs. Assessing net reclassification improvement (NRI) allowed us to quantify the improvement afforded by adding the algorithm to the TRF model. Results: When predicting TRFs with the deep learning algorithm, the areas under the curve (AUCs) obtained on the validation set for age, sex, and HbA1c status were 0.931 (0.928-0.934), 0.933 (0.929-0.936), and 0.734 (0.715-0.752), respectively. When predicting type 2 diabetes, the AUC of the composite logistic model using non-invasive TRFs was 0.810 (0.790-0.830), and that of the deep learning model using only fundus images was 0.731 (0.707-0.756). Adding TRFs to the deep learning algorithm improved discriminative performance to 0.844 (0.826-0.861). Adding the algorithm to the TRF model improved risk stratification with an overall NRI of 50.8%. Conclusions: Our results demonstrate that this deep learning algorithm can be a useful tool for stratifying individuals at high risk of type 2 diabetes in the general population.
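A hedged sketch of the three-way model comparison described above: a logistic model on TRFs alone, the image-derived score alone, and the two combined, each scored by AUC on a held-out set. The data below are synthetic placeholders; only the evaluation pattern mirrors the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
trf = rng.normal(size=(n, 4))                     # e.g. age, sex, BMI, blood pressure (assumed)
dl_score = rng.normal(size=(n, 1))                # stand-in for the retinal-image model output
risk = 1.2 * trf[:, 0] + 0.8 * dl_score[:, 0] + rng.normal(size=n)
y = (risk > np.quantile(risk, 0.9)).astype(int)   # ~10% simulated prevalence

X_trf_tr, X_trf_te, X_dl_tr, X_dl_te, y_tr, y_te = train_test_split(
    trf, dl_score, y, test_size=0.25, random_state=0)

def auc(train_X, test_X):
    # Fit a logistic model on the chosen feature set and score it on the test split.
    clf = LogisticRegression(max_iter=1000).fit(train_X, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(test_X)[:, 1])

print("TRFs only:       ", round(auc(X_trf_tr, X_trf_te), 3))
print("Image score only:", round(auc(X_dl_tr, X_dl_te), 3))
print("Combined:        ", round(auc(np.hstack([X_trf_tr, X_dl_tr]),
                                     np.hstack([X_trf_te, X_dl_te])), 3))
```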


Author(s):  
Amit Doegar ◽  
Maitreyee Dutta ◽  
Gaurav Kumar ◽  
...  

In the present scenario, trust in images is one of the major threats facing digital and online applications as well as social media. An individual's reputation can be tarnished by misinformation or manipulation in digital images. Image forgery detection aims to detect and localize the forged components within a manipulated image. Effective image forgery detection requires an adequate number of features, which can be obtained with a deep learning model that does not require manual feature engineering or handcrafted feature approaches. In this paper, we use the GoogleNet deep learning model to extract image features and employ the Random Forest machine learning algorithm to detect whether an image is forged. The proposed approach is evaluated on the publicly available benchmark dataset MICC-F220, using k-fold cross validation to split the data into training and testing sets, and is compared with state-of-the-art approaches.
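A minimal sketch of the described pipeline: GoogleNet as a fixed feature extractor followed by a Random Forest classifier evaluated with k-fold cross validation. Loading and preprocessing the MICC-F220 images is omitted; a random tensor and dummy labels stand in for the real data, so shapes and label values here are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import googlenet
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# 1) Pre-trained GoogleNet with the classification head removed -> 1024-d features.
backbone = googlenet(weights="DEFAULT")
backbone.fc = nn.Identity()
backbone.eval()

with torch.no_grad():
    images = torch.randn(32, 3, 224, 224)       # placeholder for preprocessed images
    features = backbone(images).numpy()         # shape (32, 1024)

# 2) Random Forest on the extracted features, scored with k-fold cross validation.
labels = np.array([0, 1] * 16)                  # 0 = authentic, 1 = forged (placeholder)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, features, labels, cv=5)
print("fold accuracies:", np.round(scores, 3))
```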


2021 ◽  
Vol 11 (16) ◽  
pp. 7355
Author(s):  
Zhiheng Xu ◽  
Xiong Ding ◽  
Kun Yin ◽  
Ziyue Li ◽  
Joan A. Smyth ◽  
...  

Ticks are considered the second leading vectors of human diseases. Different tick species can transmit a variety of pathogens that cause various tick-borne diseases (TBDs), such as Lyme disease. Diagnosing Lyme disease currently remains a challenge because of its non-specific symptoms. Rapid and accurate identification of tick species plays an important role in predicting potential disease risk for tick-bitten patients and ensuring timely and effective treatment. Here, we developed, optimized, and tested a smartphone-based deep learning algorithm (termed the “TickPhone app”) for tick identification. The deep learning model was trained on more than 2000 tick images and optimized over different parameters, including image sizes, deep learning architectures, image styles, and training–testing dataset distributions. The optimized deep learning model achieved a training accuracy of ~90% and a validation accuracy of ~85%. The TickPhone app was used to identify 31 independent tick species and achieved an accuracy of 95.69%. Such a simple and easy-to-use app shows great potential to estimate the epidemiology and risk of tick-borne disease, help health care providers better predict potential disease risk for tick-bitten patients, and ultimately enable timely and effective medical treatment.
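An illustrative sketch of the kind of training loop behind such an app: a lightweight, phone-friendly CNN fine-tuned on tick photographs. The specific architectures, image sizes, and dataset splits the authors compared are not reproduced here; the directory layout, backbone choice, and hyperparameters below are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

NUM_SPECIES = 31  # number of tick species reported in the abstract

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed layout: ticks/train/<species_name>/*.jpg
train_set = ImageFolder("ticks/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# MobileNetV2 chosen as an example of a smartphone-deployable backbone.
model = models.mobilenet_v2(weights="DEFAULT")
model.classifier[1] = nn.Linear(model.last_channel, NUM_SPECIES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch} done, last batch loss {loss.item():.3f}")
```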


2018 ◽  
Vol 36 (4_suppl) ◽  
pp. 266-266
Author(s):  
Sunyoung S. Lee ◽  
Jin Cheon Kim ◽  
Jillian Dolan ◽  
Andrew Baird

Background: The characteristic histological feature of pancreatic adenocarcinoma (PAD) is extensive desmoplasia alongside leukocytes and cancer-associated fibroblasts. Desmoplasia is a known barrier to the absorption and penetration of therapeutic drugs. Stromal cells are key elements of the clinical response to chemotherapy and immunotherapy, but few models exist to analyze the spatial and architectural elements that compose the complex tumor microenvironment in PAD. Methods: We created a deep learning algorithm to analyze images and quantify cells and fibrotic tissue. Histopathology slides of PAD patients (pts) were then used to automate the recognition and mapping of adenocarcinoma cells, leukocytes, fibroblasts, and the degree of desmoplasia, defined as the ratio of the area of fibrosis to that of the tumor gland. This information was correlated with mutational burden, defined as mutations (mts) per megabase (mb) for each pt. Results: Histopathology slides (H&E stain) of 126 pts were obtained from The Cancer Genome Atlas (TCGA) and analyzed with the deep learning model. The pt with the largest mutational burden (733 mts/mb, n = 1 pt) showed the largest number of leukocytes (585/mm2). Those with the smallest mutational burden (0 mts/mb, n = 16 pts) showed the fewest leukocytes (median 14/mm2). Mutational burden was linearly proportional to the number of leukocytes (R2 = 0.7772); the pt with a mutational burden of 733 mts/mb was excluded as an outlier. No statistically significant difference in the number of fibroblasts, degree of desmoplasia, or thickness of the first fibrotic layer (the smooth muscle actin-rich layer outside the tumor gland) was found among pts of varying mutational burden. The median distance from a tumor gland to a leukocyte was inversely proportional to the number of leukocytes in a 1 mm2 box with a tumor gland at the center. Conclusions: A deep learning model enabled automated quantification and mapping of desmoplasia and of stromal and malignant cells, revealing the spatial and architectural relationships of these cells in PAD pts with varying mutational burdens. Further biomarker-driven studies in the context of immunotherapy and anti-fibrosis therapy are warranted.
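A hedged sketch of two of the quantities discussed above, using placeholder arrays in place of the model's per-slide outputs: the degree of desmoplasia (fibrosis area divided by tumor-gland area from a segmentation mask) and a linear fit of leukocyte density against mutational burden reported as R². The label scheme and cohort values are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder label mask: 0 = background, 1 = tumor gland, 2 = fibrosis, 3 = leukocyte.
mask = rng.integers(0, 4, size=(512, 512))

fibrosis_area = np.count_nonzero(mask == 2)
tumor_area = np.count_nonzero(mask == 1)
desmoplasia_ratio = fibrosis_area / tumor_area
print(f"degree of desmoplasia: {desmoplasia_ratio:.2f}")

# Placeholder cohort-level values: mutational burden (mts/mb) vs leukocytes per mm^2.
mut_burden = rng.uniform(0, 60, size=100)
leukocytes = 8 * mut_burden + rng.normal(0, 40, size=100)

# Ordinary least-squares line and its coefficient of determination.
slope, intercept = np.polyfit(mut_burden, leukocytes, 1)
pred = slope * mut_burden + intercept
r2 = 1 - np.sum((leukocytes - pred) ** 2) / np.sum((leukocytes - leukocytes.mean()) ** 2)
print(f"linear fit: leukocytes ~ {slope:.1f} * burden + {intercept:.1f}, R^2 = {r2:.3f}")
```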

