Transfer learning for multi-crop leaf disease image classification using convolutional neural networks VGG

Author(s):  
Ananda S. Paymode ◽  
Vandana B. Malode


Geosciences ◽ 
2021 ◽  
Vol 11 (8) ◽  
pp. 336
Author(s):  
Rafael Pires de Lima ◽  
David Duarte

Convolutional neural networks (CNN) are currently the most widely used tool for the classification of images, especially if such images have large within-group and small between-group variance. Thus, one of the main factors driving the development of CNN models is the creation of large, labelled computer vision datasets, some containing millions of images. Thanks to transfer learning, a technique that modifies a model trained on a primary task to execute a secondary task, the adaptation of CNN models trained on such large datasets has rapidly gained popularity in many fields of science, geosciences included. However, the trade-off between two main components of the transfer learning methodology for geoscience images remains unclear: the difference between the datasets used in the primary and secondary tasks, and the amount of data available for the primary task itself. We evaluate the performance of CNN models pretrained on different types of image datasets (specifically dermatology, histology, and raw food) that are fine-tuned to the task of petrographic thin-section image classification. Results show that CNN models pretrained on ImageNet achieve higher accuracy, owing to the larger number and greater variability of ImageNet samples compared to the other datasets evaluated.
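To make the workflow concrete, below is a minimal Keras sketch of this kind of transfer learning: a backbone pretrained on a primary dataset (ImageNet here) is reused and fine-tuned for the secondary task of thin-section classification. The directory paths, class count, and head architecture are assumptions for illustration, not the authors' configuration.

```python
# Minimal transfer-learning sketch (assumptions, not the paper's pipeline):
# fine-tune an ImageNet-pretrained ResNet50 for thin-section classification.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 6          # hypothetical number of thin-section classes
IMG_SIZE = (224, 224)

# Primary-task weights: "imagenet", or a path to weights pretrained on
# another source dataset (dermatology, histology, raw food, ...).
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=IMG_SIZE + (3,))
base.trainable = False   # stage 1: train only the new classification head

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: thin_sections/{train,val}/<class_name>/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "thin_sections/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "thin_sections/val", image_size=IMG_SIZE, batch_size=32)
model.fit(train_ds, validation_data=val_ds, epochs=5)

# Stage 2: unfreeze the backbone and fine-tune everything at a lower rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

Swapping the `weights` argument for weights saved from a different primary dataset is the knob the study varies; everything else in the fine-tuning loop stays the same.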


2019 ◽  
Vol 7 (4) ◽  
pp. 51-70
Author(s):  
Shawon Ashraf ◽  
Ivan Kadery ◽  
Md Abdul Ahad Chowdhury ◽  
Tahsin Zahin Mahbub ◽  
Rashedur M. Rahman

Convolutional neural networks (CNN) are currently the most popular class of models for image recognition and classification tasks. Most superstores and fruit vendors rely on human inspection to check the quality of the fruit in their inventory; however, this process can be automated. We propose a system that can be trained on a fruit image dataset and then detect, from an input image, whether a fruit is fresh or rotten. We built the initial model on Inception V3 and trained it on our dataset using transfer learning.
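A minimal sketch of this kind of Inception V3 transfer-learning setup in Keras follows; the binary head, directory layout, and hyperparameters are assumptions for illustration, not the authors' exact configuration.

```python
# Illustrative sketch: Inception V3 backbone reused for fresh-vs-rotten
# fruit classification via transfer learning (assumed layout and head).
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (299, 299)   # Inception V3's native input size

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=IMG_SIZE + (3,))
base.trainable = False   # reuse ImageNet features; train only the new head

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
outputs = layers.Dense(1, activation="sigmoid")(x)   # fresh vs. rotten
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory layout: fruits/{train,val}/{fresh,rotten}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "fruits/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "fruits/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
model.fit(train_ds, validation_data=val_ds, epochs=5)
```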


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Jahanzaib Latif ◽  
Shanshan Tu ◽  
Chuangbai Xiao ◽  
Sadaqat Ur Rehman ◽  
Mazhar Sadiq ◽  
...  

As emerging fields such as computer vision and related technologies, e.g., iris identification and glaucoma detection, continue to develop, criminals are also refining their methods, which makes such analyses relevant to digital forensics. Glaucoma, a disease of the eye's optic nerve, is a foremost cause of human blindness. Fundus photography is carried out to examine this eye disease, and medical experts evaluate the fundus photographs through time-consuming visual inspection. Most current systems for automated glaucoma detection in fundus images rely on segmentation-based features and are therefore sensitive to the underlying segmentation methods. Convolutional neural networks (CNNs) are powerful tools for image classification, as they can learn highly discriminative features from raw pixel intensities; however, their applicability to medical image analysis is limited by the lack of the large annotated datasets required for training. In this work, we aim to accelerate diagnosis through computer-aided detection of this severe disease using transfer learning based on deep convolutional neural networks. We adopt the Inception V3 architecture for CNN-based image classification. The developed model has the potential to improve classification accuracy, and on imaging data our proposed method outperforms recent state-of-the-art approaches. Digital forensics is an essential component of emerging technologies, and glaucoma detection plays a vital role as a case study within it.
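As an illustration of how the scarcity of annotated fundus images is commonly handled in such transfer-learning pipelines, the sketch below pairs on-the-fly augmentation with a frozen, ImageNet-pretrained Inception V3 backbone. It is an assumption-laden outline of the general technique, not the authors' model or data pipeline.

```python
# Illustrative sketch only: small annotated medical datasets are often
# stretched with augmentation while the pretrained backbone stays frozen.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (299, 299)

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=IMG_SIZE + (3,))
base.trainable = False   # keep ImageNet features; train only the small head

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.inception_v3.preprocess_input(x)
x = base(x, training=False)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # glaucoma vs. healthy
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
```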


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because real images are often degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks that combine an image restoration network with a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used together for classification. The proposed classification network therefore has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for cases where the degradation parameters of the input images are unknown. The experimental results showed that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images alone.
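The two-input idea can be sketched as follows: one branch processes the degraded image, a second branch takes the degradation parameter (e.g., a noise level), and their features are concatenated before the classification head. The layer sizes and class count below are hypothetical, not the architecture from the paper.

```python
# Minimal sketch of a two-input classifier: degraded image + known
# degradation parameter, merged before the final softmax (assumed sizes).
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 10   # hypothetical

img_in = layers.Input(shape=(32, 32, 3), name="degraded_image")
x = layers.Conv2D(32, 3, activation="relu", padding="same")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D()(x)
x = layers.GlobalAveragePooling2D()(x)

param_in = layers.Input(shape=(1,), name="degradation_parameter")
p = layers.Dense(16, activation="relu")(param_in)

merged = layers.Concatenate()([x, p])
h = layers.Dense(128, activation="relu")(merged)
out = layers.Dense(NUM_CLASSES, activation="softmax")(h)

model = tf.keras.Model([img_in, param_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training expects paired inputs, e.g.:
# model.fit([degraded_images, degradation_params], labels, epochs=10)
# If the degradation parameter is unknown, a separate estimation network
# would predict it from the degraded image and feed this second input.
```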

