An efficient classification of flower images with convolutional neural networks

2017 ◽  
Vol 7 (1.1) ◽  
pp. 384 ◽  
Author(s):  
M V.D. Prasad ◽  
B JwalaLakshmamma ◽  
A Hari Chandana ◽  
K Komali ◽  
M V.N. Manoja ◽  
...  

Machine learning is penetrating most of the classification and recognition tasks performed by a computer. This paper proposes the classification of flower images using a powerful artificial intelligence tool, convolutional neural networks (CNN). A flower image database with 9500 images is considered for the experimentation. The entire database is subcategorized into four classes. The CNN training is initiated in five batches, and testing is carried out on all four datasets. Different CNN architectures were designed and tested with our flower image data to obtain better recognition accuracy, and various pooling schemes were implemented to improve the classification rates. We achieved a 97.78% recognition rate, higher than that of other classifier models reported on the same dataset.
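As an illustration of the kind of architecture comparison described above, the sketch below (not the authors' code) builds a small PyTorch CNN for four flower classes in which the pooling scheme can be swapped between max and average pooling; the input size and layer widths are assumptions.

```python
# Minimal sketch: a small CNN with a swappable pooling scheme, assuming
# 128x128 RGB inputs and 4 flower classes. Layer sizes are illustrative.
import torch
import torch.nn as nn

def make_flower_cnn(pooling: str = "max", num_classes: int = 4) -> nn.Sequential:
    Pool = nn.MaxPool2d if pooling == "max" else nn.AvgPool2d
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), Pool(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), Pool(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), Pool(2),
        nn.Flatten(),
        nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
        nn.Linear(128, num_classes),
    )

# Quick shape check: a batch of two 128x128 images yields one logit per class.
logits = make_flower_cnn("avg")(torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 4])
```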

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Adam Goodwin ◽  
Sanket Padmanabhan ◽  
Sanchit Hira ◽  
Margaret Glancey ◽  
Monet Slinowsky ◽  
...  

Abstract With over 3500 mosquito species described, accurate species identification of the few implicated in disease transmission is critical to mosquito-borne disease mitigation. Yet this task is hindered by limited global taxonomic expertise and specimen damage consistent across common capture methods. Convolutional neural networks (CNNs) are promising with limited sets of species, but image database requirements restrict practical implementation. Using an image database of 2696 specimens from 67 mosquito species, we address the practical open-set problem with a detection algorithm for novel species. Closed-set classification of 16 known species achieved 97.04 ± 0.87% accuracy independently, and 89.07 ± 5.58% when cascaded with novelty detection. Closed-set classification of 39 species produces a macro F1-score of 86.07 ± 1.81%. This demonstrates an accurate, scalable, and practical computer vision solution to identify wild-caught mosquitoes for implementation in biosurveillance and targeted vector control programs, without the need for extensive image database development for each new target region.
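The open-set cascade described above can be approximated, for illustration, by thresholding the classifier's maximum softmax probability; the sketch below is an assumption-laden stand-in, not the authors' novelty detection algorithm, and the threshold value is arbitrary.

```python
# Minimal sketch of a closed-set classifier cascaded with a simple novelty
# check (maximum softmax probability threshold).
import torch
import torch.nn.functional as F

@torch.no_grad()
def classify_open_set(model, images, known_species, threshold=0.7):
    """Return a species label for confident predictions, else 'novel species'."""
    probs = F.softmax(model(images), dim=1)
    conf, idx = probs.max(dim=1)
    return [
        known_species[i] if c >= threshold else "novel species"
        for c, i in zip(conf.tolist(), idx.tolist())
    ]
```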


Animals ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. 1263
Author(s):  
Zhaojun Wang ◽  
Jiangning Wang ◽  
Congtian Lin ◽  
Yan Han ◽  
Zhaosheng Wang ◽  
...  

With the rapid development of digital technology, bird images have become an important part of ornithology research data. However, due to the rapid growth of bird image data, it has become a major challenge to effectively process such a large amount of data. In recent years, deep convolutional neural networks (DCNNs) have shown great potential and effectiveness in a variety of tasks regarding the automatic processing of bird images. However, no research has been conducted on the recognition of habitat elements in bird images, which would be of great help in extracting habitat information from bird images. Here, we demonstrate the recognition of habitat elements using four DCNN models trained end-to-end directly on images. To carry out this research, an image database called Habitat Elements of Bird Images (HEOBs-10), composed of 10 categories of habitat elements, was built, making future benchmarks and evaluations possible. Experiments showed that good results can be obtained by all the tested models. The ResNet-152-based model yielded the best test accuracy (95.52%); the AlexNet-based model yielded the lowest test accuracy (89.48%). We conclude that DCNNs can be efficient and useful for automatically identifying habitat elements from bird images, and we believe that the practical application of this technology will be helpful for studying the relationships between birds and habitat elements.
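One plausible way to set up the end-to-end training described above is to fine-tune a pretrained ResNet-152 with its final layer replaced for the 10 habitat element categories; the hyperparameters in this sketch are assumptions, not the paper's configuration.

```python
# Minimal sketch: fine-tune a pretrained ResNet-152 for 10 habitat element
# classes. Optimizer settings are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)  # 10 habitat element classes (HEOBs-10)

# Train all layers end to end, as described above.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```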


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Lara Lloret Iglesias ◽  
Pablo Sanz Bellón ◽  
Amaia Pérez del Barrio ◽  
Pablo Menéndez Fernández-Miranda ◽  
David Rodríguez González ◽  
...  

Abstract Deep learning is nowadays at the forefront of artificial intelligence. More precisely, the use of convolutional neural networks has drastically improved the learning capabilities of computer vision applications, which can directly consider raw data without any prior feature extraction. Advanced methods in the machine learning field, such as adaptive momentum algorithms or dropout regularization, have dramatically improved the predictive ability of convolutional neural networks, outperforming that of conventional fully connected neural networks. This work summarizes, in an intendedly didactic way, the main aspects of these cutting-edge techniques from a medical imaging perspective.
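As a minimal illustration of the two techniques named above, the sketch below adds a dropout layer to a small CNN head and optimizes it with Adam, an adaptive-momentum algorithm; the architecture and input size are assumptions for a generic medical imaging task.

```python
# Minimal sketch: dropout regularization plus an adaptive-momentum optimizer
# (Adam) on a toy CNN, assuming single-channel 224x224 inputs and 2 classes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Dropout(p=0.5),                # randomly zero activations during training
    nn.Linear(32 * 112 * 112, 2),     # e.g. a binary finding on 224x224 scans
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # adaptive momentum estimates
```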


2021 ◽  
Author(s):  
Ramy Abdallah ◽  
Clare E. Bond ◽  
Robert W.H. Butler

Machine learning is being presented as a new solution for a wide range of geoscience problems. Primarily, machine learning has been used for 3D seismic data processing, seismic facies analysis and well log data correlation. The rapid development in technology, with open-source artificial intelligence libraries and the accessibility of affordable computer graphics processing units (GPUs), makes the application of machine learning in geosciences increasingly tractable. However, the application of artificial intelligence in structural interpretation workflows of subsurface datasets is still ambiguous. This study aims to use machine learning techniques to classify images of folds and fold-thrust structures. Here we show that convolutional neural networks (CNNs), as supervised deep learning techniques, provide excellent algorithms to discriminate between geological image datasets. Four different image datasets have been used to train and test the machine learning models: a seismic character dataset with five classes (faults, folds, salt, flat layers and basement), fold types with three classes (buckle, chevron and conjugate), fault types with three classes (normal, reverse and thrust) and fold-thrust geometries with three classes (fault bend fold, fault propagation fold and detachment fold). These image datasets are used to investigate three machine learning models: a feedforward linear neural network and two convolutional neural network models (a sequential model of 2D convolutional layers, and a residual block model, ResNet, with 9, 34 and 50 layers). Validation and testing datasets form a critical part of assessing the models' performance accuracy. The ResNet model records the highest performance accuracy score of the machine learning models tested. Our CNN image classification analysis provides a framework for applying machine learning to increase structural interpretation efficiency and shows that CNN classification models can be applied effectively to geoscience problems. The study provides a starting point for applying unsupervised machine learning approaches to subsurface structural interpretation workflows.
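The residual (skip-connection) block is the building element behind the ResNet models mentioned above; the sketch below shows a generic PyTorch version with illustrative channel counts, not the authors' exact implementation.

```python
# Minimal sketch of a residual block: two convolutions whose output is added
# back to the block's input before the final activation.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # skip connection: add the input back in

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```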


2018 ◽  
Vol 38 (3) ◽  
Author(s):  
Miao Wu ◽  
Chuanbo Yan ◽  
Huiqiang Liu ◽  
Qian Liu

Ovarian cancer is one of the most common gynecologic malignancies. Accurate classification of ovarian cancer types (serous carcinoma, mucinous carcinoma, endometrioid carcinoma, clear cell carcinoma) is an essential part of the differential diagnosis. Computer-aided diagnosis (CADx) can provide useful advice to help pathologists determine the diagnosis correctly. In our study, we employed a deep convolutional neural network (DCNN) based on AlexNet to automatically classify the different types of ovarian cancers from cytological images. The DCNN consists of five convolutional layers, three max pooling layers, and two fully connected layers. We then trained the model on two groups of input data separately: one was the original image data and the other was augmented image data, including image enhancement and image rotation. The testing results, obtained by 10-fold cross-validation, show that the accuracy of the classification models improved from 72.76% to 78.20% by using augmented images as training data. The developed scheme is useful for classifying ovarian cancers from cytological images.
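A rotation-plus-enhancement augmentation pipeline of the kind described above could look like the following torchvision sketch; the specific transforms and parameters are assumptions, not the study's exact preprocessing.

```python
# Minimal sketch of an augmentation pipeline (enhancement + rotation);
# values are illustrative assumptions.
from torchvision import transforms

augment = transforms.Compose([
    transforms.Resize((227, 227)),                          # AlexNet-style input size
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # simple image enhancement
    transforms.RandomRotation(degrees=90),                  # rotation augmentation
    transforms.ToTensor(),
])
```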


Author(s):  
Lucas Garcia Nachtigall ◽  
Ricardo Matsumura Araujo ◽  
Gilmar Ribeiro Nachtigall

Rapid diagnosis of symptoms caused by pest attacks, diseases and nutritional or physiological disorders in apple orchards is essential to avoid greater losses. This paper aimed to evaluate the efficiency of convolutional neural networks (CNNs) in automatically detecting and classifying symptoms of diseases, nutritional deficiencies and damage caused by herbicides in apple trees from images of their leaves and fruits. A novel data set was developed containing labeled examples consisting of approximately 10,000 images of leaves and apple fruits divided into 12 classes, which were classified by machine learning algorithms, with emphasis on deep learning models. The results showed that trained CNNs can surpass the performance of experts and of other machine learning algorithms in classifying symptoms in apple trees from leaf images, with an accuracy of 97.3%, and obtain 91.1% accuracy with fruit images. In this way, the use of convolutional neural networks may enable the diagnosis of symptoms in apple trees in a fast and precise way.
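For a 12-class image set like the one described above, a common starting point is to organise the images into one folder per class and load them with torchvision; the directory name below is hypothetical.

```python
# Minimal sketch: load a 12-class leaf/fruit image set from class subfolders
# and split it into training and test subsets. Path and split are assumptions.
import torch
from torchvision import datasets, transforms

dataset = datasets.ImageFolder(
    "apple_symptoms/",  # hypothetical root directory with 12 subfolders
    transform=transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()]),
)
train_size = int(0.8 * len(dataset))
train_set, test_set = torch.utils.data.random_split(
    dataset, [train_size, len(dataset) - train_size]
)
print(len(dataset.classes))  # expected: 12
```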


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Patrick Beyersdorffer ◽  
Wolfgang Kunert ◽  
Kai Jansen ◽  
Johanna Miller ◽  
Peter Wilhelm ◽  
...  

Abstract Uncontrolled movements of laparoscopic instruments can lead to inadvertent injury of adjacent structures. The risk becomes evident when the dissecting instrument is located outside the field of view of the laparoscopic camera. Technical solutions to ensure patient safety are therefore desirable. The present work evaluated the feasibility of an automated binary classification of laparoscopic image data using convolutional neural networks (CNNs) to determine whether the dissecting instrument is located within the laparoscopic image section. A unique record of images was generated from six laparoscopic cholecystectomies in a surgical training environment to configure and train the CNN. By using a temporary version of the neural network, the annotation of the training image files could be automated and accelerated. A combination of oversampling and selective data augmentation was used to enlarge the fully labeled image data set and prevent loss of accuracy due to imbalanced class volumes. Subsequently, the same approach was applied to the comprehensive, fully annotated Cholec80 database. The described process led to the generation of extensive and balanced training image data sets. The performance of the CNN-based binary classifiers was evaluated on separate test records from both databases. On our recorded data, an accuracy of 0.88 with regard to the safety-relevant classification was achieved. The subsequent evaluation on the Cholec80 data set yielded an accuracy of 0.84. The presented results demonstrate the feasibility of a binary classification of laparoscopic image data for the detection of adverse events in a surgical training environment using a specifically configured CNN architecture.
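One standard way to implement the oversampling step mentioned above is a weighted sampler that draws minority-class images more often; the sketch below is a generic stand-in, not the authors' pipeline.

```python
# Minimal sketch: oversample the minority class of a binary dataset with a
# weighted random sampler so each batch is approximately balanced.
import torch
from collections import Counter
from torch.utils.data import DataLoader, WeightedRandomSampler

def balanced_loader(dataset, labels, batch_size=32):
    counts = Counter(labels)                       # e.g. {0: 900, 1: 100}
    weights = [1.0 / counts[y] for y in labels]    # rare class drawn more often
    sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```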


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
P. V. V. Kishore ◽  
K. V. V. Kumar ◽  
E. Kiran Kumar ◽  
A. S. C. S. Sastry ◽  
M. Teja Kiran ◽  
...  

Extracting and recognizing complex human movements from unconstrained online/offline video sequences is a challenging task in computer vision. This paper proposes the classification of Indian classical dance actions using a powerful artificial intelligence tool: convolutional neural networks (CNNs). In this work, human action recognition on Indian classical dance videos is performed on recordings from both offline (controlled recording) and online (live performances, YouTube) data. The offline data is created with ten different subjects performing 200 familiar dance mudras/poses from different Indian classical dance forms under various background environments. The online dance data is collected from YouTube for ten different subjects. Each dance pose occupies 60 frames or images in a video in both cases. CNN training is performed with 8 different sample sets, each consisting of multiple subjects, and the remaining 2 sample sets are used for testing the trained CNN. Different CNN architectures were designed and tested with our data to obtain better recognition accuracy. We achieved a 93.33% recognition rate, higher than that of other classifier models reported on the same dataset.
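Sampling a fixed number of frames per pose from a video, as in the 60-frames-per-pose layout described above, can be done with OpenCV; the file name in this sketch is hypothetical.

```python
# Minimal sketch: read up to 60 consecutive frames from a dance video and
# resize them for CNN input. File name and frame size are assumptions.
import cv2

def sample_frames(video_path: str, frames_per_pose: int = 60):
    cap = cv2.VideoCapture(video_path)
    frames = []
    while len(frames) < frames_per_pose:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, (128, 128)))
    cap.release()
    return frames

poses = sample_frames("bharatanatyam_clip.mp4")  # hypothetical recording
```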


2021 ◽  
Author(s):  
Peter Warren ◽  
Hessein Ali ◽  
Hossein Ebrahimi ◽  
Ranajay Ghosh

Abstract Several image processing methods have been implemented over recent years to assist and partially replace on-site technicians' visual inspection of both manufactured parts and operational equipment. Convolutional neural networks (CNNs) have seen great success in their ability to both identify and classify anomalies within images, in some cases with a higher degree of accuracy than an expert human. Several parts manufactured for various aspects of turbomachinery operation must undergo a visual inspection prior to qualification. Machine learning techniques can streamline these visual inspection processes and increase both the efficiency and the accuracy of defect detection and classification. The adoption of CNNs for manufactured part inspection can also help to improve manufacturing methods by rapidly retrieving data for overall system improvement. In this work, a dataset of images, some with a variety of surface defects and some without defects, is fed through varying CNN set-ups for the rapid identification and classification of the flaws within the images. This work examines the techniques used to create CNNs, how they can best be applied to part surface image data, and which techniques are the most accurate and efficient to implement. By combining machine learning with non-destructive evaluation methods, component health can be rapidly determined, creating a more robust system for evaluating manufactured parts and operational equipment.
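A simple starting point for comparing the varying CNN set-ups mentioned above is to generate several candidate architectures and inspect their size before training; the configurations in this sketch are illustrative assumptions.

```python
# Minimal sketch: build a few binary defect/no-defect CNN configurations of
# different depth and width and compare their parameter counts.
import torch.nn as nn

def defect_cnn(widths):
    layers, in_ch = [], 3
    for w in widths:
        layers += [nn.Conv2d(in_ch, w, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
        in_ch = w
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, 2)]
    return nn.Sequential(*layers)

for widths in [(16, 32), (32, 64, 128), (64, 128, 256, 512)]:
    model = defect_cnn(widths)
    n_params = sum(p.numel() for p in model.parameters())
    print(widths, f"{n_params:,} parameters")
```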


2019 ◽  
Vol 53 (2) ◽  
pp. 142-155 ◽  
Author(s):  
Wonjoon Kim ◽  
Byungki Jin ◽  
Sanghyun Choo ◽  
Chang S. Nam ◽  
Myung Hwan Yun

Purpose Sitting in a chair is a typical act of modern people. Prolonged sitting and sitting with improper postures can lead to musculoskeletal disorders. Thus, there is a need for a monitoring system that can classify and predict sitting postures. The purpose of this paper is to develop a system for classifying children's sitting postures for the formation of correct postural habits. Design/methodology/approach For the data analysis, a film-type pressure sensor was installed on the seat of the chair, and image data of the posture were collected. A total of 26 children participated in the experiment, and image data were collected for a total of seven postures. The authors used a convolutional neural network (CNN) algorithm consisting of seven layers. In addition, to compare the classification accuracy, the artificial neural network (ANN) technique, one of the machine learning techniques, was used. Findings The CNN algorithm was used for the sitting posture classification, and the average accuracy obtained by tenfold cross-validation was 97.5 percent. The authors confirmed that the classification accuracy of the CNN algorithm is superior to that of conventional machine learning algorithms such as ANN and DNN. Through this study, the authors confirmed the applicability of the CNN-based algorithm, which can be applied to a smart chair to support correct posture in children. Originality/value This study successfully performed posture classification of children using the CNN technique, which had not been used in related studies. In addition, by focusing on children, the authors expanded the scope of the related research area and expect to contribute to the early formation of postural habits in children.
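The tenfold cross-validation protocol mentioned in the Findings could be organised as in the sketch below; the arrays and the per-fold training step are placeholders rather than the authors' seven-layer CNN.

```python
# Minimal sketch of stratified tenfold cross-validation over pressure-map
# images; data, labels, and the per-fold accuracy are placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold

pressure_maps = np.random.rand(182, 32, 32)   # placeholder: 26 children x 7 postures
labels = np.repeat(np.arange(7), 26)          # placeholder posture labels

accuracies = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True).split(
    pressure_maps.reshape(len(labels), -1), labels
):
    # train a CNN on pressure_maps[train_idx], evaluate on pressure_maps[test_idx]
    accuracies.append(0.0)  # placeholder for the fold's test accuracy
print(f"mean accuracy over 10 folds: {np.mean(accuracies):.3f}")
```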

