Image Classification Of Infected Potato Leaves Using Deep CNN Transfer Learning

Author(s):  
Yohanes Eudes Hugo Maur ◽  
Djoko Budiyanto Setyohadi
2020 ◽  
Vol 8 (6) ◽  
pp. 2016-2019

The focus of the paper is to classify brain images as tumorous or non-tumorous and then locate the tumor. Among medical imaging applications, brain tumor segmentation is an important and arduous task: the acquired data is disrupted by artifacts and short acquisition times, so classifying the image and finding the exact location of the tumor are critical steps. In the paper, deep learning, specifically a convolutional neural network, is used to demonstrate its potential for the image classification task. Because learning from the available dataset alone would be limited, a transfer learning [4] approach is adopted; it is an emerging AI strategy that achieves strong results on several image classification tasks because pre-trained models have already learned useful features from training on a large number of images. Since medical image datasets are hard to collect, transfer learning with AlexNet [1] is used. After successful classification, the exact location of the tumor is found using basic image processing inspired by the well-known Mask R-CNN technique [9].
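A minimal sketch of this kind of transfer-learning setup, assuming PyTorch/torchvision; the frozen layers, learning rate, and two-class head are illustrative choices, not the authors' exact configuration.

import torch
import torch.nn as nn
from torchvision import models

# Load AlexNet pre-trained on ImageNet so the convolutional layers start
# from general-purpose visual features.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor; only the classifier is trained.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer: 2 outputs for
# tumorous vs. non-tumorous.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

# Pass only the trainable (classifier) parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()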


2020 ◽  
Vol 10 (10) ◽  
pp. 3359 ◽  
Author(s):  
Ibrahem Kandel ◽  
Mauro Castelli

Accurate classification of medical images is of great importance for correct disease diagnosis. Automating medical image classification is highly desirable because it can provide a second opinion, or even a better classification, when experienced medical staff are in short supply. Convolutional neural networks (CNNs) improved the image classification domain by eliminating the need to manually select which features to use to classify images. Training a CNN from scratch requires very large annotated datasets, which are scarce in the medical field. Transferring CNN weights learned on a large non-medical dataset can help overcome this scarcity; transfer learning then consists of fine-tuning CNN layers to suit the new dataset. The main questions when using transfer learning are how deeply to fine-tune the network and what difference this makes to generalization. In this paper, all experiments were run on two histopathology datasets using three state-of-the-art architectures to systematically study the effect of block-wise fine-tuning of CNNs. Results show that fine-tuning the entire network is not always the best option, especially for shallow networks; fine-tuning only the top blocks can save both time and computational power while producing more robust classifiers.
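A minimal sketch of block-wise fine-tuning as discussed above, assuming PyTorch/torchvision with a ResNet-50 backbone; which blocks are unfrozen, the number of classes, and the optimizer settings are illustrative, not the paper's exact protocol.

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Start from a fully frozen backbone.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the top residual block; deeper fine-tuning would also
# unfreeze layer3, layer2, and so on, at extra computational cost.
for param in model.layer4.parameters():
    param.requires_grad = True

# New classification head for the histopathology classes (here: 2).
model.fc = nn.Linear(model.fc.in_features, 2)  # new layers train by default

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3, momentum=0.9,
)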


2022 ◽  
Vol 14 (2) ◽  
pp. 355
Author(s):  
Zhen Cheng ◽  
Guanying Huo ◽  
Haisen Li

Due to the strong speckle noise caused by seabed reverberation, which makes it difficult to extract discriminating, noise-free features of a target, recognition and classification of underwater targets in side-scan sonar (SSS) images is a major challenge. Moreover, unlike classification of optical images, which can use a large dataset to train the classifier, classification of SSS images usually has to rely on a very small training dataset, which may cause classifier overfitting. Compared with traditional feature extraction methods using descriptors such as Haar, SIFT, and LBP, deep learning-based methods are more powerful in capturing discriminating features. After training on a large optical dataset, e.g., ImageNet, the direct fine-tuning method improves sonar image classification on a small SSS image dataset. However, due to the different statistical characteristics of optical and sonar images, transfer learning methods such as fine-tuning lack cross-domain adaptability and therefore cannot achieve very satisfactory results. In this paper, a multi-domain collaborative transfer learning (MDCTL) method with a multi-scale repeated attention mechanism (MSRAM) is proposed to improve the accuracy of underwater sonar image classification. In the MDCTL method, low-level characteristic similarity between SSS images and synthetic aperture radar (SAR) images and high-level representation similarity between SSS images and optical images are used together to enhance the feature extraction ability of the deep learning model. By exploiting the different characteristics of multi-domain data to efficiently capture useful features for sonar image classification, MDCTL offers a new approach to transfer learning. MSRAM effectively combines multi-scale features so that the proposed model pays more attention to the shape details of the target while excluding the noise. Classification experiments show that, using multi-domain datasets, the proposed method is more stable, with an overall accuracy of 99.21%, an improvement of 4.54% over the fine-tuned VGG19. Results from several visualization methods also demonstrate that MDCTL and MSRAM make the model more powerful in feature representation.
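The MDCTL and MSRAM details are specific to the paper, but the general idea of attention-weighted fusion of multi-scale features can be sketched generically; the PyTorch module below is an illustrative example under that assumption, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttentionFusion(nn.Module):
    # Fuse feature maps from several scales with learned attention weights.
    def __init__(self, channels):
        super().__init__()
        # One scalar attention score per scale, computed from pooled features.
        self.score = nn.Linear(channels, 1)

    def forward(self, feature_maps):
        # feature_maps: list of tensors, each shaped (batch, channels, H_i, W_i).
        pooled = [F.adaptive_avg_pool2d(f, 1).flatten(1) for f in feature_maps]
        scores = torch.cat([self.score(p) for p in pooled], dim=1)  # (B, num_scales)
        weights = F.softmax(scores, dim=1)
        # Resize every map to the spatial size of the first scale.
        target = feature_maps[0].shape[-2:]
        resized = [F.interpolate(f, size=target, mode="bilinear", align_corners=False)
                   for f in feature_maps]
        # Weighted sum over scales.
        fused = torch.zeros_like(resized[0])
        for i, fmap in enumerate(resized):
            fused = fused + weights[:, i].view(-1, 1, 1, 1) * fmap
        return fused

# Example: fuse three feature maps at different resolutions.
fusion = MultiScaleAttentionFusion(channels=256)
maps = [torch.randn(2, 256, s, s) for s in (32, 16, 8)]
out = fusion(maps)  # shape: (2, 256, 32, 32)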


2018 ◽  
Vol 14 (2) ◽  
Author(s):  
Sunil Sharma ◽  
Saumil Maheshwari ◽  
Anupam Shukla

Abstract Deep convolutional neural networks (CNNs) have demonstrated their capabilities in modern medical image classification and analysis. The key edge of deep CNNs over other techniques is their ability to learn without expert knowledge. Timely detection is very beneficial for the early treatment of disease. In this paper, a deep CNN architecture is proposed to classify fundus eye images as diabetic retinopathy or non-diabetic retinopathy. The Kaggle 2015 diabetic retinopathy competition dataset and the Messidor dataset are used in this study. The proposed deep CNN produces significant results, with 93% area under the curve (AUC) for the Kaggle dataset and 91% AUC for the Messidor dataset. The sensitivity and specificity for the Kaggle dataset are 90.22% and 85.13%, respectively; the corresponding values for the Messidor dataset are 91.07% and 80.23%. These results outperform many existing studies, and the proposed architecture is a promising tool for diabetic retinopathy image classification.
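A minimal sketch of how the reported metrics (AUC, sensitivity, specificity) can be computed for a binary DR / non-DR classifier, assuming scikit-learn; the labels and probabilities below are placeholders, not the study's outputs.

import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# y_true: ground-truth labels (1 = diabetic retinopathy, 0 = non-DR);
# y_prob: predicted probability of the DR class from the CNN.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.91, 0.20, 0.75, 0.62, 0.33, 0.08, 0.88, 0.45])

auc = roc_auc_score(y_true, y_prob)

# Threshold the probabilities to get hard predictions.
y_pred = (y_prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate

print(f"AUC={auc:.2f}, sensitivity={sensitivity:.2%}, specificity={specificity:.2%}")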


2021 ◽  
pp. 104529
Author(s):  
Sara Dilshad ◽  
Nikhil Singh ◽  
M. Atif ◽  
Atif Hanif ◽  
Nafeesah Yaqub ◽  
...  
