Fingerprint Classification Based on Deep Learning Approaches: Experimental Findings and Comparisons

Symmetry, 2021, Vol. 13 (5), pp. 750
Author(s):  
Carmelo Militello ◽  
Leonardo Rundo ◽  
Salvatore Vitabile ◽  
Vincenzo Conti

Biometric classification plays a key role in fingerprint characterization, especially in the identification process. In fact, reducing the number of comparisons in biometric recognition systems is essential when dealing with large-scale databases. Fingerprint classification aims to achieve this by splitting fingerprints into different categories. The general approach to fingerprint classification requires pre-processing techniques that are usually computationally expensive. Deep Learning is emerging as the leading field that has been successfully applied to many areas, such as image processing. This work evaluates the performance of pre-trained Convolutional Neural Networks (CNNs), namely AlexNet, GoogLeNet, and ResNet, tested on two fingerprint databases (PolyU and NIST), and compares the results with others reported in the literature in order to establish which type of classification yields the best performance in terms of precision and model efficiency. We present the first study that extensively compares the most widely used CNN architectures by classifying fingerprints into four, five, and eight classes. The experimental results show that all the tested CNN architectures obtained their best performance on the PolyU database, owing to the higher quality of its samples. To confirm the reliability of the study and the results obtained, a statistical analysis based on the McNemar test was performed.
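
The McNemar test mentioned above compares two classifiers evaluated on the same paired test samples. As a rough illustration (not the authors' code), the following is a minimal sketch using statsmodels, assuming the true labels and each model's predictions are already available as NumPy arrays; the example data are hypothetical.

```python
# Hedged sketch: compare two fingerprint classifiers with a McNemar test,
# built from a 2x2 contingency table of paired correct/incorrect predictions.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_compare(y_true, pred_a, pred_b):
    """Return the McNemar statistic and p-value for two paired classifiers."""
    correct_a = np.asarray(pred_a) == np.asarray(y_true)
    correct_b = np.asarray(pred_b) == np.asarray(y_true)

    # 2x2 contingency table of paired agreement/disagreement.
    table = np.array([
        [np.sum( correct_a &  correct_b), np.sum( correct_a & ~correct_b)],
        [np.sum(~correct_a &  correct_b), np.sum(~correct_a & ~correct_b)],
    ])
    result = mcnemar(table, exact=False, correction=True)
    return result.statistic, result.pvalue

# Hypothetical labels and predictions (e.g. ResNet vs. AlexNet on a test split).
y = np.array([0, 1, 2, 3, 0, 1, 2, 3, 0, 1])
a = np.array([0, 1, 2, 3, 0, 1, 2, 1, 0, 1])
b = np.array([0, 1, 2, 0, 0, 3, 2, 3, 0, 1])
stat, p = mcnemar_compare(y, a, b)
print(f"McNemar statistic={stat:.3f}, p-value={p:.3f}")
```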

2019, Vol. 9 (7), pp. 1385
Author(s):  
Luca Donati ◽  
Eleonora Iotti ◽  
Giulio Mordonini ◽  
Andrea Prati

Visual classification of commercial products is a branch of the wider fields of object detection and feature extraction in computer vision and, in particular, an important step in the creative workflow of fashion industries. Automatically classifying garment features makes both designers and data experts aware of their overall production, which is fundamental for organizing marketing campaigns, avoiding duplicates, categorizing apparel products for e-commerce purposes, and so on. There are many different techniques for visual classification, ranging from standard image processing to machine learning approaches. This work, carried out in collaboration with Adidas AG™, uses and tests both kinds of approaches in a real-world study aimed at automatically recognizing and classifying logos, stripes, colors, and other clothing features, solely from final rendering images of the products. Specifically, both deep learning and image processing techniques, such as template matching, were used. The result is a novel system for image recognition and feature extraction that has high classification accuracy and is reliable and robust enough to be used by a company like Adidas. This paper presents the main problems and proposed solutions in the development of this system, along with the experimental results on the Adidas AG™ dataset.
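
For the image processing side, template matching of the kind mentioned above can be sketched with OpenCV as follows. This is not the Adidas system; the file names and the score threshold are assumptions for illustration.

```python
# Hedged sketch: locate a logo template in a product rendering via normalized
# cross-correlation template matching.
import cv2

image = cv2.imread("product_render.png", cv2.IMREAD_GRAYSCALE)
logo = cv2.imread("logo_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation is robust to uniform brightness changes.
scores = cv2.matchTemplate(image, logo, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

if best_score > 0.8:  # threshold is an assumption, tuned per dataset
    h, w = logo.shape
    print(f"Logo found at {best_loc} with score {best_score:.2f}")
    cv2.rectangle(image, best_loc, (best_loc[0] + w, best_loc[1] + h), 255, 2)
else:
    print("No confident logo match.")
```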


Skin lesions are growths of unwanted cells on the uppermost layer of the skin. These lesions may contain cancerous cells, which can cause health problems for the patient and, in severe cases, lead to death. Dermatologists identify the type of skin cancer by examining images generated with a dermatoscope, a procedure known as dermatoscopy. Many previous studies have addressed the classification of these dermatoscopic images using machine learning and deep learning solutions. Machine learning approaches use image processing techniques to locate the mole in a given image and then apply classifiers such as SVMs or random forests. With advances in the field of deep learning, various CNN-based classification methods have been proposed that achieve higher precision and accuracy. In this paper, we propose a CNN-based approach for image classification that reaches a best overall accuracy of 78.08% and a good multiclass AUC for all classes in the HAM10000 dataset.
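
The multiclass AUC reported above can be computed from a CNN's softmax outputs with scikit-learn. The sketch below is not the paper's pipeline; the class count follows the seven HAM10000 lesion categories, and the labels and probabilities are random placeholders.

```python
# Hedged sketch: macro and per-class one-vs-rest AUC from predicted
# class probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

n_classes = 7                      # HAM10000 has seven lesion categories
y_true = np.random.randint(0, n_classes, size=200)          # placeholder labels
y_prob = np.random.dirichlet(np.ones(n_classes), size=200)  # placeholder softmax

# Macro-averaged one-vs-rest AUC over all classes.
macro_auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")

# Per-class AUC, treating each class as the positive one in turn.
y_bin = label_binarize(y_true, classes=range(n_classes))
per_class = [roc_auc_score(y_bin[:, c], y_prob[:, c]) for c in range(n_classes)]

print(f"macro AUC = {macro_auc:.3f}")
print("per-class AUC:", [f"{a:.3f}" for a in per_class])
```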


Author(s):  
F. Matrone ◽  
A. Lingua ◽  
R. Pierdicca ◽  
E. S. Malinverni ◽  
M. Paolanti ◽  
...  

Abstract. The lack of benchmarking data for the semantic segmentation of digital heritage scenarios is hampering the development of automatic classification solutions in this field. Heritage 3D data feature complex structures and uncommon classes that prevent the simple deployment of available methods developed in other fields and for other types of data. The semantic classification of heritage 3D data would support the community in better understanding and analysing digital twins, facilitate restoration and conservation work, etc. In this paper, we present the first benchmark with millions of manually labelled 3D points belonging to heritage scenarios, realised to facilitate the development, training, testing and evaluation of machine and deep learning methods and algorithms in the heritage field. The proposed benchmark, available at http://archdataset.polito.it/, comprises datasets and classification results for better comparisons and insights into the strengths and weaknesses of different machine and deep learning approaches for heritage point cloud semantic segmentation, in addition to promoting a form of crowdsourcing to enrich the already annotated database.
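
Classification results on a point cloud benchmark of this kind are commonly summarized with per-class and mean Intersection over Union. The sketch below is independent of the ArCH benchmark code; the class count and label arrays are hypothetical.

```python
# Hedged sketch: per-class IoU and mean IoU for point cloud semantic
# segmentation, given one ground-truth and one predicted label per 3D point.
import numpy as np

def per_class_iou(y_true, y_pred, n_classes):
    ious = []
    for c in range(n_classes):
        gt = (y_true == c)
        pr = (y_pred == c)
        union = np.logical_or(gt, pr).sum()
        if union == 0:          # class absent from both: skip it
            ious.append(np.nan)
        else:
            ious.append(np.logical_and(gt, pr).sum() / union)
    return np.array(ious)

# Hypothetical labels for a small point cloud with 10 heritage classes.
y_true = np.random.randint(0, 10, size=100000)
y_pred = np.random.randint(0, 10, size=100000)
ious = per_class_iou(y_true, y_pred, n_classes=10)
print("per-class IoU:", np.round(ious, 3))
print("mean IoU:", round(float(np.nanmean(ious)), 3))
```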


2018, Vol. 10 (11), pp. 1746
Author(s):  
Raffaele Gaetano ◽  
Dino Ienco ◽  
Kenji Ose ◽  
Remi Cresson

The use of Very High Spatial Resolution (VHSR) imagery in remote sensing applications is nowadays current practice whenever fine-scale monitoring of the Earth's surface is concerned. VHSR land cover classification, in particular, is a well-established tool to support decisions in several domains, including urban monitoring, agriculture, biodiversity, and environmental assessment. Additionally, land cover classification can be employed to annotate VHSR imagery with the aim of retrieving spatial statistics or areas with similar land cover. Modern VHSR sensors provide data at multiple spatial and spectral resolutions, most commonly as a pair of a higher-resolution single-band panchromatic (PAN) image and a coarser multispectral (MS) image. In the typical land cover classification workflow, the multi-resolution input is preprocessed to generate a single multispectral image at the highest available resolution by means of a pan-sharpening process. Recently, deep learning approaches have shown the advantage of avoiding such preprocessing by letting machine learning algorithms automatically transform the input data to best fit the classification task. Following this rationale, we propose a new deep learning architecture that jointly uses PAN and MS imagery for direct classification, without any prior image sharpening or resampling. Our method, namely MultiResoLCC, consists of a two-branch end-to-end network which extracts features from each source at its native resolution and later combines them to perform land cover classification at the PAN resolution. Experiments are carried out on two real-world scenarios over large areas with contrasting land cover characteristics. The experimental results underline the quality of our method, while the characteristics of the proposed scenarios underline the applicability and generality of our strategy in operational settings.
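
The two-branch idea can be sketched in PyTorch as follows. This is not the published MultiResoLCC implementation: the channel counts, layer sizes, and the 4x PAN/MS resolution ratio are assumptions used purely for illustration.

```python
# Hedged sketch: a minimal two-branch CNN that ingests PAN and MS patches at
# their native resolutions, upsamples the MS features to the PAN grid, and
# fuses them for per-patch land cover classification.
import torch
import torch.nn as nn

class TwoBranchLCC(nn.Module):
    def __init__(self, ms_bands=4, n_classes=8, ratio=4):
        super().__init__()
        # PAN branch: single-band input at full resolution.
        self.pan = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # MS branch: multi-band input at the coarser resolution.
        self.ms = nn.Sequential(
            nn.Conv2d(ms_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.up = nn.Upsample(scale_factor=ratio, mode="bilinear",
                              align_corners=False)
        self.head = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, pan, ms):
        fused = torch.cat([self.pan(pan), self.up(self.ms(ms))], dim=1)
        return self.head(fused)

# Hypothetical patch sizes: 64x64 PAN pixels, 16x16 MS pixels (ratio 4).
model = TwoBranchLCC()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 4, 16, 16))
print(logits.shape)  # torch.Size([2, 8])
```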


2022, Vol. 12 (2), pp. 656
Author(s):  
Attapon Palananda ◽  
Warangkhana Kimpan

In the production of coconut oil for consumption, cleanliness and safety are the first priorities for meeting the standard in Thailand. The presence of color, sediment, or impurities is an important element that affects consumers' or buyers' decision to buy coconut oil. Coconut oil contains impurities that are revealed during the process of compressing the coconut pulp to extract the oil. Therefore, the oil must be filtered by centrifugation and passed through a fine filter. When the oil filtration process is finished, staff inspect the turbidity of the coconut oil by examining its color with the naked eye, expecting to see only the color of the oil itself. However, this method cannot detect small impurities: suspended particles that take time to settle and become sediment. Studies have shown that the turbidity of coconut oil can be measured by passing light through the oil and applying image processing techniques, which makes it possible to detect impurities using a microscopic camera that photographs the coconut oil. This study proposes a method for detecting the impurities that cause turbidity in coconut oil using a deep learning approach, namely a convolutional neural network (CNN), to solve the problem of impurity identification and image analysis. In the experiments, this paper used two coconut oil impurity datasets, PiCO_V1 and PiCO_V2, containing 1000 and 6861 images, respectively. A total of 10 CNN architectures were tested on these two datasets to determine which achieved the best accuracy. The experimental results indicated that the MobileNetV2 architecture had the best performance, with the highest training accuracy rate, 94.05%, and testing accuracy rate, 80.20%.
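
Transfer learning with MobileNetV2, the best-performing architecture reported above, can be sketched with Keras as follows. This is not the paper's training code: the directory layout, the two-class setup (clean vs. impurity), and the hyperparameters are assumptions.

```python
# Hedged sketch: fine-tune MobileNetV2 on microscope images of coconut oil.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # start by training only the new classification head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(2, activation="softmax"),      # clean / impurity
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical dataset folders with one sub-directory per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "pico_dataset/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "pico_dataset/val", image_size=(224, 224), batch_size=32)
model.fit(train_ds, validation_data=val_ds, epochs=10)
```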


2019, Vol. 2019 (1), pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations severely reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem through various image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background/foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To overcome these problems, the authors propose a deep learning based solution. They contribute a new whiteboard image dataset and adopt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over the conventional methods.
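
The kind of conventional enhancement baseline the abstract refers to can be sketched with OpenCV as follows. This is not the authors' deep learning method; the file name and blur kernel size are assumptions.

```python
# Hedged sketch: a simple whiteboard enhancement baseline that estimates the
# board background with a heavy median blur, divides it out to whiten the
# background, and stretches contrast so pen strokes stand out.
import cv2

img = cv2.imread("whiteboard.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Approximate the illumination/background, then normalize it away.
background = cv2.medianBlur(gray, 51)
normalized = cv2.divide(gray, background, scale=255)

# Stretch contrast so faint strokes become darker and the board whiter.
enhanced = cv2.normalize(normalized, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("whiteboard_enhanced.jpg", enhanced)
```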


Author(s):  
Mathieu Turgeon-Pelchat ◽  
Samuel Foucher ◽  
Yacine Bouroubi

Computers, 2021, Vol. 10 (6), pp. 82
Author(s):  
Ahmad O. Aseeri

Deep Learning-based methods have emerged as one of the most effective and practical solutions to a wide range of medical problems, including the diagnosis of cardiac arrhythmias. A critical step toward early diagnosis of many heart dysfunction diseases is the accurate detection and classification of cardiac arrhythmias, which can be achieved via electrocardiograms (ECGs). Motivated by the desire to enhance conventional clinical methods for diagnosing cardiac arrhythmias, we introduce an uncertainty-aware deep learning-based predictive model for accurate large-scale classification of cardiac arrhythmias, successfully trained and evaluated on three benchmark medical datasets. In addition, considering that the quantification of uncertainty estimates is vital for clinical decision-making, our method incorporates a probabilistic approach to capture the model's uncertainty using a Bayesian-based approximation method, without introducing additional parameters or significant changes to the network's architecture. Although many arrhythmia classification solutions with various ECG feature engineering techniques have been reported in the literature, the probabilistic method introduced in this paper outperforms existing methods, with multiclass F1 scores of 98.62% and 96.73% on the MIT-BIH dataset (20 annotations), 99.23% and 96.94% on the INCART dataset (eight annotations), and 97.25% and 96.73% on the BIDMC dataset (six annotations), for the deep ensemble and probabilistic modes, respectively. We also demonstrate the method's high performance and statistical reliability in numerical experiments on language modeling using the gating mechanism of Recurrent Neural Networks.
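
One common Bayesian approximation that adds no parameters is Monte Carlo dropout: dropout stays active at inference, predictions are averaged over several stochastic forward passes, and their spread serves as an uncertainty signal. The sketch below illustrates that general idea in PyTorch; it is not the paper's model, and the feature size, class count, and network layout are assumptions.

```python
# Hedged sketch: MC dropout for uncertainty-aware classification of ECG beats.
import torch
import torch.nn as nn

class BeatClassifier(nn.Module):
    def __init__(self, n_features=180, n_classes=5, p=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    model.train()  # keep dropout layers stochastic at inference time
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)                                  # predictive mean
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)   # uncertainty proxy
    return mean, entropy

# Hypothetical batch of fixed-length ECG beat windows (180 samples each).
model = BeatClassifier()
mean_probs, uncertainty = mc_dropout_predict(model, torch.randn(8, 180))
print(mean_probs.argmax(dim=1), uncertainty)
```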


Author(s):  
Masaya Tanaka ◽  
Atsushi Saito ◽  
Kosuke Shido ◽  
Yasuhiro Fujisawa ◽  
Kenshi Yamasaki ◽  
...  

2021, pp. 1-11
Author(s):  
Tianhong Dai ◽  
Shijie Cong ◽  
Jianping Huang ◽  
Yanwen Zhang ◽  
Xinwang Huang ◽  
...  

In agricultural production, weeds inevitably compete with crops for nutrients, so weed removal is an important part of crop cultivation. Only by identifying and removing weeds can the quality of the harvest be guaranteed; distinguishing weeds from crops is therefore particularly important. Recently, deep learning technology has also been applied to the field of botany and has achieved good results. Convolutional neural networks are widely used in deep learning because of their excellent classification performance. The purpose of this article is to find a new method for plant seedling classification. This method comprises two stages: image segmentation and image classification. The first stage uses an improved U-Net to segment the dataset, and the second stage uses six classification networks to classify the seedlings in the segmented dataset. The dataset used for the experiments contained 12 different types of plants, namely 3 crops and 9 weeds. The model was evaluated by multi-class statistical analysis of accuracy, recall, precision, and F1-score. The results show that the two-stage classification method combining the improved U-Net segmentation network with a classification network is more conducive to the classification of plant seedlings, and the classification accuracy reached 97.7%.
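
The general two-stage idea (segment, then classify) can be sketched in PyTorch as follows. This is not the paper's improved U-Net or its six classifiers: both networks here are placeholders, and the image size is an assumption.

```python
# Hedged sketch: predict a foreground mask, zero out the background, and feed
# the masked image to a 12-class classifier (3 crops + 9 weeds).
import torch
import torch.nn as nn
import torchvision.models as models

# Stage 1: placeholder segmentation network producing a 1-channel mask logit.
seg_net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 1))
# Stage 2: standard classifier with a 12-class head.
cls_net = models.resnet18(weights=None)
cls_net.fc = nn.Linear(cls_net.fc.in_features, 12)

def two_stage_predict(image):
    """image: (1, 3, H, W) tensor. Returns class logits for the masked plant."""
    mask = torch.sigmoid(seg_net(image)) > 0.5      # binary foreground mask
    masked = image * mask                           # suppress soil/background
    return cls_net(masked)

logits = two_stage_predict(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 12])
```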

