Image-based flower species classification using CNN

2019 ◽  
Vol 2 (1) ◽  
pp. 182-186
Author(s):  
Santosh Giri

Deep learning is one of the essential parts of machine learning. Applications such as image classification, text recognition and object detection use deep learning architectures. In this paper, a neural network model was designed for image classification. An NN classifier with one fully connected layer and one softmax layer was designed, and the feature-extraction part of the Inception v3 model was reused to compute the feature values of each image. These feature values were then used to train the NN classifier. By adopting a transfer learning mechanism, the NN classifier was trained on the 17 classes of the Oxford 17 flower image dataset. The system achieved a final training accuracy of 99%. After training, the system was evaluated on the test dataset images. The mean testing accuracy was 86.4%.
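
As a concrete illustration of the setup described above, the following is a minimal sketch in PyTorch/torchvision; the paper's exact layer sizes and training settings are not given, so the values below are illustrative:

```python
# Minimal transfer-learning sketch: reuse Inception v3 as a frozen feature
# extractor and train a small classifier head on the Oxford 17 flower classes.
# Hyperparameters and the data pipeline are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 17  # Oxford 17 flower categories

net = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
for p in net.parameters():
    p.requires_grad = False  # keep the pre-trained feature-extraction layers fixed

# One fully connected layer producing 17 logits; softmax turns them into
# class probabilities at inference time.
net.fc = nn.Linear(2048, NUM_CLASSES)

# Training would use CrossEntropyLoss on the logits, e.g.:
# optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
# net.eval(); probs = torch.softmax(net(torch.randn(1, 3, 299, 299)), dim=1)
```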

Author(s):  
Mohammed Hamzah Abed ◽  
Atheer Hadi Issa Al-Rammahi ◽  
Mustafa Jawad Radif

Real-time image classification is one of the most challenging issues in image understanding and the computer vision domain. Deep learning methods, especially the Convolutional Neural Network (CNN), have increased and improved the performance of image processing and understanding. Real-time image classification based on deep learning achieves good results because of the training style and the features that are extracted from the input image. This work proposes a model for real-time image classification based on deep learning with fully connected layers to extract proper features. The classification is based on the hybrid GoogleNet pre-trained model. The 15-Scene and UC Merced Land-Use datasets are used to test the proposed model. The proposed model achieved higher accuracies of 92.4 and 98.8 on the two datasets, respectively.
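
A minimal sketch of the pre-trained-GoogLeNet transfer-learning pattern described above, assuming PyTorch/torchvision; the abstract does not specify the exact "hybrid" fully connected head, so the layers and class count below are illustrative:

```python
# Replace GoogLeNet's ImageNet classifier with fully connected layers for
# scene classification on a new dataset.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 21  # placeholder: 15 for the 15-Scene dataset, 21 for UC Merced

backbone = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False  # keep the pre-trained convolutional features

# Fully connected layers appended on top of the 1024-d GoogLeNet features.
backbone.fc = nn.Sequential(
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(512, NUM_CLASSES),
)

# backbone.eval(); logits = backbone(torch.randn(1, 3, 224, 224))
```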


2019 ◽  
Vol 26 (11) ◽  
pp. 1181-1188 ◽  
Author(s):  
Isabel Segura-Bedmar ◽  
Pablo Raez

Objective: The goal of the 2018 n2c2 shared task on cohort selection for clinical trials (track 1) is to identify which patients meet the selection criteria for clinical trials. Cohort selection is a particularly demanding task to which natural language processing and deep learning can make a valuable contribution. Our goal is to evaluate several deep learning architectures for this task.
Materials and Methods: Cohort selection can be formulated as a multilabel classification problem whose goal is to determine which criteria are met for each patient record. We explore several deep learning architectures: a simple convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), and a CNN-RNN hybrid architecture. Although our architectures are similar to those proposed in existing deep learning systems for text classification, our research also studies the impact of using a fully connected feedforward layer on the performance of these architectures.
Results: The RNN and hybrid models provide the best results, though without statistical significance. The fully connected feedforward layer improves the results for all the architectures except the hybrid architecture.
Conclusions: Despite the limited size of the dataset, deep learning methods show promising results in learning useful features for cohort selection. They can therefore be used as a preliminary filter for cohort selection for any clinical trial with a minimum of human intervention, significantly reducing the cost and time of clinical trials.
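
As a hedged illustration of the CNN-RNN hybrid evaluated above, the sketch below shows one way such a multilabel classifier could look in PyTorch; the vocabulary size, layer widths, and criterion count are placeholders, not the authors' configuration:

```python
# CNN-RNN hybrid for multilabel cohort selection: a 1-D convolution extracts
# local n-gram features, a bidirectional GRU aggregates them, and an optional
# fully connected feedforward layer precedes the per-criterion output layer.
import torch
import torch.nn as nn

class CnnRnnClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100,
                 num_filters=128, hidden=64, num_criteria=13):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size=5, padding=2)
        self.rnn = nn.GRU(num_filters, hidden, batch_first=True, bidirectional=True)
        self.ff = nn.Sequential(nn.Linear(2 * hidden, 2 * hidden), nn.ReLU())
        self.out = nn.Linear(2 * hidden, num_criteria)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, h = self.rnn(x)                         # h: (2, batch, hidden)
        doc = torch.cat([h[0], h[1]], dim=1)       # final states, both directions
        return self.out(self.ff(doc))              # one logit per criterion

# Multilabel training applies a sigmoid per criterion:
# loss = nn.BCEWithLogitsLoss()(model(token_ids), labels.float())
```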


2020 ◽  
Vol 12 (22) ◽  
pp. 3839
Author(s):  
Xiaomin Tian ◽  
Long Chen ◽  
Xiaoli Zhang ◽  
Erxue Chen

Deep learning has become an effective method for hyperspectral image classification. However, the high band correlation and data volume associated with airborne hyperspectral images, together with the insufficiency of training samples, present challenges to the application of deep learning to airborne image classification. Prototypical networks are practical deep learning networks that have demonstrated effectiveness in handling small-sample classification. In this study, an improved prototypical network is proposed, adding L2 regularization to the convolutional layers and dropout after the max-pooling layers, to address the problem of overfitting in small-sample classification. The proposed network has an optimal sample window for classification, and the window size is related to the area and distribution of the study area. After dimensionality reduction using principal component analysis, the time required to train on the hyperspectral images was shortened significantly, and the test accuracy increased markedly. Furthermore, with a 27 × 27 sample window after dimensionality reduction, the overall accuracy of forest species classification was 98.53% and the Kappa coefficient was 0.9838. Therefore, with a sample window of an appropriate size, the improved prototypical network yielded the desired classification results, demonstrating its suitability for the fine classification and mapping of tree species.
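
A minimal sketch of the improved prototypical-network idea, assuming PyTorch; the band count after PCA, window size, and layer widths are illustrative. Dropout follows each max-pooling layer as described, and L2 regularization of the convolutional weights is applied here through the optimizer's weight decay:

```python
# Prototypical network: an embedding CNN maps 27x27 hyperspectral patches to
# feature vectors; each class prototype is the mean embedding of its support
# samples, and queries are classified by distance to the prototypes.
import torch
import torch.nn as nn

class Embedding(nn.Module):
    def __init__(self, in_bands=10, dim=64, p_drop=0.3):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout),
                nn.ReLU(), nn.MaxPool2d(2), nn.Dropout2d(p_drop))
        self.net = nn.Sequential(block(in_bands, dim), block(dim, dim), block(dim, dim))

    def forward(self, x):              # x: (N, bands_after_PCA, 27, 27)
        return self.net(x).flatten(1)  # (N, feature_dim)

def prototypes(support_emb, support_labels, num_classes):
    # Class prototype = mean embedding of that class's support samples.
    return torch.stack([support_emb[support_labels == c].mean(0)
                        for c in range(num_classes)])

def classify(query_emb, protos):
    # Negative squared Euclidean distance to each prototype acts as the logit.
    return -torch.cdist(query_emb, protos) ** 2

# L2 regularization of the network weights via weight decay:
# optimizer = torch.optim.Adam(Embedding().parameters(), lr=1e-3, weight_decay=1e-4)
```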


2019 ◽  
Vol 1 (4) ◽  
pp. 1039-1057 ◽  
Author(s):  
Lili Zhu ◽  
Petros Spachos

Recent developments in machine learning have engendered many algorithms designed to solve diverse problems. More complicated tasks can now be solved, since deep learning architectures extract numerous features from much larger datasets. The transfer learning approach that has prevailed in recent years enables researchers and engineers to conduct experiments within limited computing and time budgets. In this paper, we evaluated traditional machine learning, deep learning, and transfer learning methodologies, comparing their characteristics by training and testing on a butterfly dataset, and determined the optimal model to deploy in an Android application. The application can identify a butterfly's category either from a picture captured in real time or from a picture chosen from the mobile gallery.
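
The abstract does not name the network that was finally deployed, but as a rough, hypothetical sketch, a fine-tuned backbone (MobileNetV2 is used here purely as an example) could be exported for an Android application with PyTorch Mobile as follows:

```python
# Export a fine-tuned classifier for on-device inference in an Android app.
# The backbone, class count and file name are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile
from torchvision import models

NUM_SPECIES = 10  # placeholder for the number of butterfly categories

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.last_channel, NUM_SPECIES)
# ... fine-tune on the butterfly dataset here ...
model.eval()

# Trace, optimize for mobile, and save for the PyTorch Android runtime.
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
optimize_for_mobile(traced)._save_for_lite_interpreter("butterflies.ptl")
```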


2021 ◽  
pp. 263-274
Author(s):  
Gopal Sakarkar ◽  
Ketan Paithankar ◽  
Prateek Dutta ◽  
Gaurav Patil ◽  
Shivam ◽  
...  

2018 ◽  
Author(s):  
Andrea Bizzego ◽  
Nicole Bussola ◽  
Marco Chierici ◽  
Marco Cristoforetti ◽  
Margherita Francescatto ◽  
...  

Artificial Intelligence is exponentially increasing its impact on healthcare. As deep learning masters computer vision tasks, its application to digital pathology is natural, with the promise of aiding routine reporting and standardizing results across trials. Deep learning features inferred from digital pathology scans can improve the validity and robustness of current clinico-pathological features, up to identifying novel histological patterns, e.g. from tumor infiltrating lymphocytes. In this study, we examine the issue of evaluating the accuracy of predictive models built on deep learning features in digital pathology, as a hallmark of reproducibility. We introduce the DAPPER framework for validation, based on a rigorous Data Analysis Plan derived from the FDA’s MAQC project and designed to analyze causes of variability in predictive biomarkers. We apply the framework to models that identify tissue of origin on 787 Whole Slide Images from the Genotype-Tissue Expression (GTEx) project. We test three different deep learning architectures (VGG, ResNet, Inception) as feature extractors and three classifiers (a fully connected multilayer network, a Support Vector Machine, and Random Forests), and work with four datasets (5, 10, 20 or 30 classes), for a total of 53,000 tiles at 512 × 512 resolution. We analyze the accuracy and feature stability of the machine learning classifiers, also demonstrating the need for random-features and random-labels diagnostic tests to identify selection bias and risks to reproducibility. Further, we use the deep features from the VGG model trained on GTEx on the KIMIA24 dataset for identification of slide of origin (24 classes), training a classifier on 1060 annotated tiles and validating it on 265 unseen ones. The DAPPER software, including its deep learning backbone pipeline and the HINT (Histological Imaging - Newsy Tiles) benchmark dataset derived from GTEx, is released as a basis for standardization and validation initiatives in AI for Digital Pathology.

Author summary: In this study, we examine the issue of evaluating the accuracy of predictive models built on deep learning features in digital pathology, as a hallmark of reproducibility. It is indeed a top priority that reproducibility-by-design be adopted as standard practice in building and validating AI methods in the healthcare domain. Here we introduce DAPPER, a first framework to evaluate deep features and classifiers in digital pathology, based on a rigorous data analysis plan originally developed in the FDA’s MAQC initiative for predictive biomarkers from massive omics data. We apply DAPPER to models trained to identify tissue of origin from the HINT benchmark dataset of 53,000 tiles from 787 Whole Slide Images in the Genotype-Tissue Expression (GTEx) project. We analyze the accuracy and feature stability of different deep learning architectures (VGG, ResNet and Inception) as feature extractors and of classifiers (a fully connected multilayer network, SVMs, and Random Forests) on up to 20 classes. Further, we use the deep features from the VGG model (trained on HINT) on the 1300 annotated tiles of the KIMIA24 dataset for identification of slide of origin (24 classes). The DAPPER software is available together with the HINT benchmark dataset.
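
The "deep features plus classical classifier" pattern evaluated in DAPPER can be sketched as below, assuming PyTorch/torchvision and scikit-learn; the pooling choice, feature dimensionality, and classifier hyperparameters are illustrative rather than the study's exact pipeline:

```python
# Use a pre-trained VGG as a fixed feature extractor for image tiles, then
# train classical classifiers (SVM, Random Forest) on the extracted features.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
extractor.eval()

@torch.no_grad()
def deep_features(tiles):            # tiles: (N, 3, H, W) float tensor
    return extractor(tiles).numpy()  # (N, 512) feature vectors

# X_train, y_train = deep_features(train_tiles), train_labels
# svm = SVC(kernel="rbf").fit(X_train, y_train)
# rf = RandomForestClassifier(n_estimators=500).fit(X_train, y_train)
```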


Author(s):  
Monan Wang ◽  
Donghui Li ◽  
Li Tang

Early classification and diagnosis of lung diseases is essential to give patients the best chance of recovery and survival. When deep learning is used for this purpose, the key is to improve the robustness of the deep learning model and the accuracy of lung image classification. To classify five lung diseases, we used transfer learning to improve and fine-tune the fully connected layer of VGG16, improved the cross-entropy loss function, and combined the network with a gradient boosting decision tree (GBDT) to establish a deep learning classifier. The model was trained using the ChestX-ray14 dataset. On the test set, the classification accuracy of our model for the five lung diseases was 82.43%, 95.37%, 82.11%, 79.81%, and 78.13%, respectively, which is better than the best published results. The F1 value is 0.456 (95% CI 0.415, 0.496). The robustness of the model exceeds that of CheXNet and the average performance of doctors. This study shows that the model is robust and effective in classifying the five lung diseases.
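
A minimal sketch of the VGG16 + GBDT combination described above, assuming PyTorch/torchvision and scikit-learn; the modified loss, the ChestX-ray14 data pipeline, and the layer sizes are omitted or illustrative:

```python
# Fine-tune only VGG16's fully connected classifier, then feed its penultimate
# features to a gradient boosting decision tree (GBDT). Assumes 224x224 inputs.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import GradientBoostingClassifier

NUM_DISEASES = 5

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False  # keep the convolutional weights fixed

# Replacement fully connected head; this is the part that gets fine-tuned
# (e.g. with a class-weighted cross-entropy loss).
vgg.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 1024), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(1024, NUM_DISEASES),
)

@torch.no_grad()
def penultimate(x):                  # 1024-d features passed to the GBDT stage
    h = torch.flatten(vgg.features(x), 1)
    return vgg.classifier[:2](h).numpy()  # Linear + ReLU only, no dropout

# gbdt = GradientBoostingClassifier().fit(penultimate(train_x), train_y)
```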


2021 ◽  
Author(s):  
Alef Iury S. Ferreira ◽  
Frederico S. Oliveira ◽  
Nádia F. Felipe da Silva ◽  
Anderson S. Soares

Gender recognition from speech is a problem related to the analysis of human speech and has several applications, ranging from personalization in product recommendation to forensic science. Identifying the efficiency and costs of the different approaches that address this problem is essential. This work focuses on investigating and comparing the efficiency and costs of different deep learning architectures for gender recognition from speech. The results show that the one-dimensional convolutional model achieves the best results. However, the fully connected model was found to deliver comparable results at a lower cost, both in memory usage and in training time.
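
A minimal sketch of a one-dimensional convolutional classifier for gender recognition from speech, assuming PyTorch and MFCC input features; the feature dimensions, layer widths, and sequence length are illustrative:

```python
# 1-D CNN over MFCC frames: convolutions extract local spectral-temporal
# patterns, global average pooling summarizes the utterance, and a linear
# head predicts one of two gender classes.
import torch
import torch.nn as nn

class Conv1DGender(nn.Module):
    def __init__(self, n_mfcc=40):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_mfcc, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(128, 2)

    def forward(self, mfcc):          # mfcc: (batch, n_mfcc, frames)
        return self.head(self.features(mfcc).squeeze(-1))

# logits = Conv1DGender()(torch.randn(8, 40, 300))
```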

