Deep Region of Interest and Feature Extraction Models for Palmprint Verification Using Convolutional Neural Networks Transfer Learning

2018 ◽  
Vol 8 (7) ◽  
pp. 1210 ◽  
Author(s):  
Mahdieh Izadpanahkakhk ◽  
Seyyed Razavi ◽  
Mehran Taghipour-Gorjikolaie ◽  
Seyyed Zahiri ◽  
Aurelio Uncini

Palmprint verification is one of the most significant and popular approaches to personal authentication due to its high accuracy and efficiency. A novel approach is proposed that uses deep region of interest (ROI) and feature extraction models for palmprint verification, exploiting convolutional neural networks (CNNs) along with transfer learning. The extracted palmprint ROIs are fed to the final verification system, which is composed of two modules: (i) a pre-trained CNN architecture used as a feature extractor and (ii) a machine learning classifier. In order to evaluate our proposed model, we computed the intersection over union (IoU) metric for ROI extraction, along with accuracy, receiver operating characteristic (ROC) curves, and the equal error rate (EER) for the verification task. The experiments demonstrated that the ROI extraction module could reliably locate the appropriate palmprint ROIs and that the verification results were highly precise. This was confirmed across the different databases and classification methods employed in our proposed model. In comparison with other existing approaches, our model was competitive with the state-of-the-art approaches that rely on hand-crafted descriptor representations. We achieved an IoU score of 93% and an EER of 0.0125 using a support vector machine (SVM) classifier for the contact-based Hong Kong Polytechnic University Palmprint (HKPU) database. It is notable that all code is open-source and can be accessed online.
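
As a hedged illustration of the evaluation metrics mentioned above, the sketch below computes IoU for two ROI bounding boxes and an equal error rate from verification scores; the helper names and toy inputs are illustrative and not taken from the authors' released code.

```python
# Minimal sketch, assuming axis-aligned (x1, y1, x2, y2) ROI boxes and a list
# of genuine/impostor verification scores. Not the authors' implementation.
import numpy as np
from sklearn.metrics import roc_curve

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def equal_error_rate(labels, scores):
    """EER: the operating point where the false positive and false negative rates meet."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2

print(iou((10, 10, 60, 60), (20, 20, 70, 70)))                 # overlap quality of a predicted ROI
print(equal_error_rate([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))   # toy genuine/impostor scores
```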

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yanfei Li ◽  
Xianying Feng ◽  
Yandong Liu ◽  
Xingchang Han

Abstract This work investigated apple quality identification and classification from real images containing complicated disturbance information (the background was similar to the surface of the apples). This paper proposed a novel model based on convolutional neural networks (CNN) aimed at accurate and fast grading of apple quality. Specific, complex, and useful image characteristics for detection and classification were captured by the proposed model. Compared with existing methods, the proposed model could better learn high-order features of two adjacent layers that are not in the same channel but are highly related. The proposed model was trained and validated, with best training and validation accuracies of 99% and 98.98% at the 2590th and 3000th steps, respectively. The overall accuracy of the proposed model, tested on an independent dataset of 300 apples, was 95.33%. The results showed that the training accuracy, overall test accuracy, and training time of the proposed model were better than those of the Google Inception v3 model and a traditional image processing method based on histogram of oriented gradients (HOG) and gray level co-occurrence matrix (GLCM) feature merging with a support vector machine (SVM) classifier. The proposed model has great potential for apple quality detection and classification.
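
For readers unfamiliar with this kind of grading network, the sketch below shows a minimal convolutional classifier for multi-class apple grading; the layer sizes, the 128x128 input resolution, and the four assumed quality grades are placeholders, not the architecture proposed in the paper.

```python
# Hedged sketch of a small CNN grader; architecture details are assumptions.
import torch
import torch.nn as nn

class AppleGrader(nn.Module):
    def __init__(self, num_grades: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(128, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_grades),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = AppleGrader()
logits = model(torch.randn(2, 3, 128, 128))  # toy batch of two RGB apple images
print(logits.shape)                          # torch.Size([2, 4])
```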


Landslides can be devastating to human life and property. The increasing rate of human settlement in mountainous areas has raised safety concerns. Landslides have caused economic losses of between 1% and 2% of GDP in many developing countries. In this study, we present a deep learning approach to landslide detection. Convolutional Neural Networks are used for feature extraction in our proposed model. As no exact and precise dataset was available for feature extraction, a new dataset was built for testing the model. We tested our proposed model and compared it with other machine-learning algorithms such as Logistic Regression, Random Forest, AdaBoost, K-Nearest Neighbors, and Support Vector Machine. Our proposed deep learning model produces a classification accuracy of 96.90%, outperforming the classical machine-learning algorithms.
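
A minimal sketch of this kind of comparison protocol is given below: several classical scikit-learn classifiers are evaluated with cross-validation on a feature matrix. The synthetic features stand in for the CNN-extracted landslide features, so the numbers will not reproduce the reported 96.90%.

```python
# Hedged sketch: cross-validated comparison of classical classifiers on
# pre-extracted features (synthetic placeholders here).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=64, random_state=0)

classifiers = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```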


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Jeong-Hoon Lee ◽  
Hee-Jin Yu ◽  
Min-ji Kim ◽  
Jin-Woo Kim ◽  
Jongeun Choi

Abstract Background Despite the integral role of cephalometric analysis in orthodontics, there have been limitations regarding the reliability and accuracy of cephalometric landmark tracing. Attempts at developing automatic plotting systems have continuously been made, but they remain insufficient for clinical applications due to the low reliability of specific landmarks. In this study, we aimed to develop a novel framework for locating cephalometric landmarks with confidence regions using Bayesian Convolutional Neural Networks (BCNN). Methods We trained our model with the dataset from the ISBI 2015 grand challenge in dental X-ray image analysis. The overall algorithm consisted of region of interest (ROI) extraction for the landmarks and landmark estimation that accounts for uncertainty. Predictions produced by the Bayesian model were post-processed with respect to pixel probabilities and uncertainties. Results Our framework showed a mean landmark error (LE) of 1.53 ± 1.74 mm and achieved successful detection rates (SDR) of 82.11%, 92.28%, and 95.95% in the 2, 3, and 4 mm ranges, respectively. Notably, the error for Gonion, the most erroneous landmark in preceding studies, was reduced by nearly half. Additionally, our results demonstrated significantly higher performance in identifying anatomical abnormalities. By providing 95% confidence regions that account for uncertainty, our framework can provide clinical convenience and contribute to better decision-making. Conclusion Our framework provides cephalometric landmarks and their confidence regions, which could be used as a computer-aided diagnosis tool and for education.
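
A hedged sketch of the uncertainty-aware prediction step follows, using Monte Carlo dropout as one common approximation to a Bayesian CNN; the tiny network and its (x, y) regression head are illustrative stand-ins, not the authors' model.

```python
# Minimal sketch: MC dropout sampling to obtain a landmark estimate plus spread.
import torch
import torch.nn as nn

class LandmarkNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Dropout(0.5), nn.Linear(32, 2))  # (x, y)

    def forward(self, x):
        return self.head(self.body(x))

def predict_with_uncertainty(model, image, n_samples=30):
    """Keep dropout active at inference time and sample repeatedly."""
    model.train()  # enables dropout; batch-norm layers (none here) would need freezing
    with torch.no_grad():
        samples = torch.stack([model(image) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

model = LandmarkNet()
mean_xy, std_xy = predict_with_uncertainty(model, torch.randn(1, 1, 64, 64))
print(mean_xy, std_xy)  # the spread acts as a proxy for a per-landmark confidence region
```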


Author(s):  
Pham Van Hai ◽  
Samson Eloanyi Amaechi

Conventional methods used in brain tumor detection, diagnosis, and classification, such as magnetic resonance imaging and computed tomography scanning, are limited in their results. This paper presents a proposed model that combines convolutional neural networks with fuzzy rules for the detection and classification of medical images, such as healthy brain cells and tumorous brain cells. This model contributes to the automatic classification and detection of medical images related to conditions such as brain tumors, heart diseases, breast cancers, HIV, and flu. The experimental results of the proposed model show an overall accuracy of 97.6%, which indicates that the proposed method achieves better performance than other current methods in the literature, such as classification of tumors in human brain MRI using wavelets and a support vector machine (94.7%) and deep convolutional neural networks with transfer learning for automated brain image classification (95.0%), for use in detection, diagnosis, and classification within medical imaging decision support.
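
The sketch below illustrates, under strong assumptions, how a CNN's softmax output for the tumor class might be combined with a small fuzzy rule base; the membership functions and rules are invented for illustration, since the paper's actual rule set is not given here.

```python
# Hedged sketch: fusing a CNN probability with a tiny hand-written fuzzy rule base.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def fuzzy_decision(tumor_prob: float) -> str:
    low = tri(tumor_prob, -0.1, 0.0, 0.4)
    medium = tri(tumor_prob, 0.2, 0.5, 0.8)
    high = tri(tumor_prob, 0.6, 1.0, 1.1)
    # Illustrative rules: high -> tumor, low -> healthy, medium -> flag for review.
    rules = {"healthy": low, "review": medium, "tumor": high}
    return max(rules, key=rules.get)

for p in (0.05, 0.55, 0.93):      # p would come from the CNN's softmax output
    print(p, fuzzy_decision(p))
```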


2020 ◽  
Vol 10 (17) ◽  
pp. 5792 ◽  
Author(s):  
Biserka Petrovska ◽  
Tatjana Atanasova-Pacemska ◽  
Roberto Corizzo ◽  
Paolo Mignone ◽  
Petre Lameski ◽  
...  

Remote Sensing (RS) image classification has recently attracted great attention for its application in different tasks, including environmental monitoring, battlefield surveillance, and geospatial object detection. The best practices for these tasks often involve transfer learning from pre-trained Convolutional Neural Networks (CNNs). A common approach in the literature is employing CNNs for feature extraction and subsequently training classifiers that exploit such features. In this paper, we propose the adoption of transfer learning by fine-tuning pre-trained CNNs for end-to-end aerial image classification. Our approach performs feature extraction from the fine-tuned neural networks and remote sensing image classification with a Support Vector Machine (SVM) model with linear and Radial Basis Function (RBF) kernels. To tune the learning rate hyperparameter, we employ a linear decay learning rate scheduler as well as cyclical learning rates. Moreover, in order to mitigate the overfitting problem of pre-trained models, we apply label smoothing regularization. For the fine-tuning and feature extraction process, we adopt the Inception-v3 and Xception inception-based CNNs, as well as the residual-based networks ResNet50 and DenseNet121. We present extensive experiments on two real-world remote sensing image datasets: AID and NWPU-RESISC45. The results show that the proposed method exhibits classification accuracy of up to 98%, outperforming other state-of-the-art methods.
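
A hedged sketch of this pipeline is shown below: an ImageNet-pretrained ResNet50 is set up for fine-tuning with label smoothing and a cyclical learning rate, and its penultimate-layer features are then fed to an SVM. The hyperparameters, the assumed 45 classes, and the toy batch are placeholders rather than the paper's exact settings, and the training loop itself is omitted.

```python
# Minimal sketch, assuming a ResNet50 backbone and placeholder hyperparameters.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

num_classes = 45  # e.g. the NWPU-RESISC45 scene classes

backbone = models.resnet50(pretrained=True)            # downloads ImageNet weights
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)   # label smoothing regularization
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-5, max_lr=1e-3, step_size_up=200)  # cyclical learning rate

# ... fine-tuning loop over the aerial image dataset would go here ...

# Feature extraction: drop the classification head and flatten the pooled output.
feature_extractor = nn.Sequential(*list(backbone.children())[:-1], nn.Flatten())
backbone.eval()
with torch.no_grad():
    feats = feature_extractor(torch.randn(8, 3, 224, 224))  # toy batch of images
labels = torch.arange(8)                                    # toy class labels

svm = SVC(kernel="rbf")   # RBF kernel; kernel="linear" is the other variant used
svm.fit(feats.numpy(), labels.numpy())
```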


2019 ◽  
Vol 21 (6) ◽  
pp. 2133-2141 ◽  
Author(s):  
Chen-Chen Li ◽  
Bin Liu

Abstract Protein fold recognition is one of the most critical tasks for exploring the structures and functions of proteins based on their primary sequence information. Existing protein fold recognition approaches rely on features reflecting the characteristics of protein folds. However, the feature extraction methods remain the bottleneck for further performance improvement. In this paper, we propose two new feature extraction methods, called MotifCNN and MotifDCNN, which extract more discriminative fold-specific features based on structural motif kernels used to construct motif-based convolutional neural networks (CNNs). The pairwise sequence similarity scores calculated from the fold-specific features are then fed into support vector machines to construct the predictor for fold recognition, yielding a predictor called MotifCNN-fold. Experimental results on the benchmark dataset showed that MotifCNN-fold clearly outperformed all the other competing methods. In particular, the fold-specific features extracted by MotifCNN and MotifDCNN are more discriminative than those extracted by other deep learning techniques, indicating that incorporating the structural motifs into the CNN is able to capture the characteristics of protein folds.
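
The sketch below loosely illustrates the "similarity scores into an SVM" step: each protein is represented by its cosine similarity to the training proteins, and an SVM is trained on those similarity profiles. The random feature vectors stand in for MotifCNN/MotifDCNN outputs and are not the authors' features.

```python
# Hedged sketch: pairwise similarity profiles as SVM inputs (placeholder data).
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import SVC

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(100, 256))   # fold-specific features of training proteins
train_folds = rng.integers(0, 5, size=100)  # fold labels (5 toy folds)
test_feats = rng.normal(size=(10, 256))

# Each protein is represented by its similarity to every training protein.
train_sim = cosine_similarity(train_feats, train_feats)
test_sim = cosine_similarity(test_feats, train_feats)

clf = SVC(kernel="linear")
clf.fit(train_sim, train_folds)
print(clf.predict(test_sim))
```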


Author(s):  
V. S. Bramhe ◽  
S. K. Ghosh ◽  
P. K. Garg

With rapid globalization, the extent of built-up areas is continuously increasing. Extracting more robust and abstract features for classifying built-up areas has been a leading research topic for many years. Various studies have been carried out in which spatial information along with spectral features has been utilized to enhance classification accuracy; still, these feature extraction techniques require a large number of user-specific parameters and are generally application specific. On the other hand, recently introduced Deep Learning (DL) techniques require fewer parameters to represent more abstract aspects of the data without any manual effort. Since it is difficult to acquire high-resolution datasets for applications that require large-scale monitoring of areas, a Sentinel-2 image has been used in this study for built-up area extraction. In this work, pre-trained Convolutional Neural Networks (ConvNets), i.e. Inception-v3 and VGGNet, are employed for transfer learning. Because these networks are trained on the generic images of the ImageNet dataset, which have very different characteristics from satellite images, the network weights are fine-tuned using data derived from Sentinel-2 images. To compare the accuracies with existing shallow networks, two state-of-the-art classifiers, i.e. a Gaussian Support Vector Machine (SVM) and a Back-Propagation Neural Network (BP-NN), are also implemented. The SVM and BP-NN give overall accuracies of 84.31% and 82.86%, respectively, while the fine-tuned VGGNet and Inception-v3 give 89.43% and 92.10%, respectively. The results indicate the high accuracy of the proposed fine-tuned ConvNets on a 4-channel Sentinel-2 dataset for built-up area extraction.
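
One common way to fine-tune an ImageNet-pretrained network on 4-channel imagery is to widen its first convolution, as sketched below; this is an assumed adaptation for illustration and not necessarily the authors' exact procedure, and the extra band is initialised with the mean of the RGB filters.

```python
# Hedged sketch: adapting a pretrained VGG16 to 4-band Sentinel-2 patches.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2  # built-up vs. non-built-up

net = models.vgg16(pretrained=True)             # downloads ImageNet weights
old_conv = net.features[0]                      # Conv2d(3, 64, kernel_size=3, padding=1)
new_conv = nn.Conv2d(4, old_conv.out_channels, kernel_size=old_conv.kernel_size,
                     stride=old_conv.stride, padding=old_conv.padding)
with torch.no_grad():
    new_conv.weight[:, :3] = old_conv.weight                            # reuse RGB filters
    new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)  # init the 4th band
    new_conv.bias.copy_(old_conv.bias)
net.features[0] = new_conv

# Replace the classifier head for the 2-class built-up extraction task.
net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_classes)

out = net(torch.randn(1, 4, 224, 224))          # toy 4-band patch
print(out.shape)                                # torch.Size([1, 2])
```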


Author(s):  
Nibras Ar Rakib ◽  
SM Zamshed Farhan ◽  
Md Mashrur Bari Sobhan ◽  
Jia Uddin ◽  
Arafat Habib

The field of biometrics has evolved tremendously over the last century. Yet scientists continue to develop more precise and efficient algorithms to facilitate automatic fingerprint recognition systems. As in other applications, an efficient feature extraction method plays an important role in fingerprint-based recognition systems. This paper proposes a novel feature extraction method using the minutiae points of a fingerprint image and their intersections. The method first calculates the ridge endings and ridge bifurcations of each fingerprint image, and then estimates the minutiae points for the intersection of each ridge ending and ridge bifurcation. In the experimental evaluation, we tested the extracted features of our proposed model using a support vector machine (SVM) classifier, and the experimental results show that the proposed method can accurately classify different fingerprint images.
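
A hedged sketch of one standard way to locate ridge endings and bifurcations, the crossing-number method on a thinned binary ridge image, is given below; the paper's full pipeline (image enhancement, intersection-based features, SVM classification) is not reproduced, and the toy skeleton is purely illustrative.

```python
# Minimal sketch: crossing-number minutiae detection on a 0/1 skeleton image.
import numpy as np

def minutiae(skeleton: np.ndarray):
    """Return (endings, bifurcations) pixel coordinates from a thinned binary image."""
    endings, bifurcations = [], []
    # Cyclic order of the 8 neighbours around a pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    rows, cols = skeleton.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skeleton[r, c] != 1:
                continue
            nb = [skeleton[r + dr, c + dc] for dr, dc in offsets]
            crossing = sum(abs(nb[i] - nb[(i + 1) % 8]) for i in range(8)) // 2
            if crossing == 1:
                endings.append((r, c))       # ridge ending
            elif crossing == 3:
                bifurcations.append((r, c))  # ridge bifurcation
    return endings, bifurcations

skel = np.zeros((7, 7), dtype=int)
skel[3, 1:6] = 1          # a horizontal ridge segment
skel[1:3, 3] = 1          # a branch joining it, which creates one bifurcation
ends, bifs = minutiae(skel)
print("endings:", ends, "bifurcations:", bifs)
```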


Author(s):  
Giovanni Diraco ◽  
Pietro Siciliano ◽  
Alessandro Leone

In the current industrial landscape, increasingly pervaded by technological innovations, the adoption of optimized strategies for asset management is becoming a critical key success factor. Among the various strategies available, the "Prognostics and Health Management" strategy is able to support maintenance management decisions more accurately, through continuous monitoring of equipment health and "Remaining Useful Life" forecasting. In the present study, Convolutional Neural Network-based Deep Neural Network techniques are investigated for the Remaining Useful Life prediction of a punch tool, whose degradation is caused by working surface deformations during the machining process. Surface deformation is determined using a 3D scanning sensor capable of returning point clouds with micrometric accuracy during the operation of the punching machine, avoiding both downtime and human intervention. The 3D point clouds thus obtained are transformed into two-dimensional image-type maps, i.e., maps of depths and normal vectors, to fully exploit the potential of convolutional neural networks for feature extraction. Such maps are then processed by comparing 15 genetically optimized architectures with the transfer learning of 19 pre-trained models, using a classic machine learning approach, i.e., Support Vector Regression, as a benchmark. The achieved results clearly show that, in this specific case, the optimized architectures provide performance far superior (MAPE = 0.058) to that of transfer learning, which instead remains at a lower or only slightly higher level (MAPE = 0.416) than Support Vector Regression (MAPE = 0.857).
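
The sketch below illustrates, with synthetic data, two peripheral steps described above: rasterising a 3D point cloud into a depth map and fitting the Support Vector Regression benchmark; the genetically optimized CNN architectures themselves are not reproduced, and the point clouds and RUL targets are placeholders.

```python
# Hedged sketch: point cloud -> depth map -> SVR as a RUL benchmark (toy data).
import numpy as np
from sklearn.svm import SVR

def depth_map(points: np.ndarray, grid: int = 32) -> np.ndarray:
    """points: (N, 3) array of x, y, z. Returns a grid x grid map of maximum depth."""
    xy = points[:, :2]
    z = points[:, 2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    idx = np.floor((xy - mins) / (maxs - mins + 1e-9) * (grid - 1)).astype(int)
    dmap = np.zeros((grid, grid))
    for (i, j), depth in zip(idx, z):
        dmap[i, j] = max(dmap[i, j], depth)
    return dmap

rng = np.random.default_rng(0)
clouds = [rng.normal(size=(2000, 3)) for _ in range(20)]  # 20 scanned working surfaces
rul = rng.uniform(0, 1, size=20)                          # remaining-useful-life targets

X = np.stack([depth_map(c).ravel() for c in clouds])
model = SVR(kernel="rbf").fit(X, rul)
print(model.predict(X[:3]))
```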


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Zheng-Yang Zhao ◽  
Wen-Zhun Huang ◽  
Jie Pan ◽  
Yu-An Huang ◽  
Shan-Wen Zhang ◽  
...  

The identification of drug-target interactions (DTIs) plays a crucial role in drug discovery. However, the traditional high-throughput techniques based on clinical trials are costly, cumbersome, and time-consuming for identifying DTIs. Hence, new intelligent computational methods are urgently needed to overcome these shortcomings in predicting DTIs. In this paper, we propose a novel computational method that combines a position-specific scoring matrix (PSSM), elastic net based sparse feature extraction, and a rotation forest (RF) classifier. Specifically, we convert each protein primary sequence into a PSSM, which contains biological evolutionary information. We then extract the hidden sparse feature descriptors in the PSSM by an elastic net based sparse feature extraction method (ESFE) and fuse them with the drug features, which are represented by molecular fingerprints. Finally, a rotation forest classifier detects the potential drug-target interactions. In fivefold cross-validation (CV) experiments on the enzyme, ion channel, G protein-coupled receptor (GPCR), and nuclear receptor datasets, the proposed method achieves average accuracies of 90.32%, 88.91%, 80.65%, and 79.73%, respectively. We also compared the proposed model with the state-of-the-art support vector machine (SVM) classifier and other effective methods on the same datasets. The comparison results clearly indicate that the proposed model predicts DTIs efficiently and robustly. We expect the new model to be effective for predicting DTIs on a large scale.
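
A hedged sketch of the feature-selection-plus-ensemble idea follows; scikit-learn has no rotation forest implementation, so a random forest stands in for it here, and the synthetic matrix stands in for the fused PSSM and molecular fingerprint features.

```python
# Minimal sketch: elastic-net-based sparse feature selection + ensemble classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in for protein (PSSM-derived) features fused with drug fingerprints.
X, y = make_classification(n_samples=600, n_features=400, n_informative=40,
                           random_state=0)

pipe = make_pipeline(
    SelectFromModel(ElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=5000)),  # sparse selection
    RandomForestClassifier(n_estimators=300, random_state=0),              # rotation forest stand-in
)
scores = cross_val_score(pipe, X, y, cv=5)  # fivefold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```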

