Classification of Large Biometric Data in Database System

Author(s):  
Varisha Alam* ◽  
Dr. Mohammad Arif

"Biometrics" is got from the Greek word 'life' and 'measure' which implies living and evaluation take apart. It simply converts into "life estimation". Biometrics uses computerized acknowledgment of people, dependent on their social and natural attributes. Biometric character are data separated from biometric tests, which can use for examination with a biometric orientation. Biometrics involves techniques to unusually recognize people dependent on at least one inherent physical or behavior attribute. In software engineering, specifically, biometric is used as a form of character retrieve the Committee and retrieve command. Biometric identically utilized to recognize people in bunches that are in observation. Biometric has quickly risen like a auspicious innovation for validation and has effectively discovered a spot in most of the scientific safety regions. An effective bunching method suggest for dividing enormous biometrics data set through recognizable proof. This method depends on the changed B+ tree is decreasing the discs get to. It diminishes the information recovery time and also possible error rates. Hence, for bigger applications, the need to reduce the data set to a more adequate portion emerges to accomplish both higher paces and further developed precision. The main motivation behind ordering is to recover a small data set for looking through the inquiry

Author(s):  
Jianping Ju ◽  
Hong Zheng ◽  
Xiaohang Xu ◽  
Zhongyuan Guo ◽  
Zhaohui Zheng ◽  
...  

Abstract: Although convolutional neural networks have achieved success in image classification, challenges remain in agricultural product quality sorting, such as machine-vision-based jujube defect detection. The performance of jujube defect detection mainly depends on the feature extraction and the classifier used. Because of the diversity of the jujube materials and the variability of the testing environment, traditional manual feature extraction often fails to meet the requirements of practical application. In this paper, a jujube sorting model for small data sets based on a convolutional neural network and transfer learning is proposed to meet the actual demand of jujube defect detection. First, the original images collected from an actual jujube sorting production line were pre-processed and augmented to establish a data set of five categories of jujube defects. The original CNN model was then improved by embedding the SE module and by using the triplet loss function and the center loss function to replace the softmax loss function. Finally, a model pre-trained on the ImageNet data set was trained on the jujube defects data set, so that the parameters of the pre-trained model could fit the parameter distribution of the jujube defect images; this distribution was transferred to the jujube defects data set to complete the transfer of the model and realize the detection and classification of jujube defects. The classification results are visualized by heatmaps, and classification accuracy and confusion matrices are analysed against the comparison models. The experimental results show that the SE-ResNet50-CL model optimizes the fine-grained classification problem of jujube defect recognition, reaching a test accuracy of 94.15%. The model has good stability and high recognition accuracy in complex environments.
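
As a rough illustration of the training setup described above (an ImageNet-pretrained backbone fine-tuned with metric losses replacing softmax), the PyTorch sketch below combines a ResNet50 trunk with a triplet loss and a simple center loss. The SE blocks, data pipeline, and exact hyperparameters of the paper are omitted; NUM_CLASSES, EMBED_DIM and the loss weight are assumptions.

```python
# Hedged PyTorch sketch of transfer learning with triplet + center losses.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5      # five jujube defect categories (from the abstract)
EMBED_DIM = 128      # assumed embedding size

class DefectEmbedder(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Identity()          # keep the 2048-d pooled features
        self.backbone = backbone
        self.embed = nn.Linear(2048, EMBED_DIM)

    def forward(self, x):
        return self.embed(self.backbone(x))

class CenterLoss(nn.Module):
    """Pulls each embedding toward a learnable center of its class."""
    def __init__(self, num_classes, dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, dim))

    def forward(self, embeddings, labels):
        return ((embeddings - self.centers[labels]) ** 2).sum(dim=1).mean()

model = DefectEmbedder()
triplet = nn.TripletMarginLoss(margin=0.5)
center = CenterLoss(NUM_CLASSES, EMBED_DIM)
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(center.parameters()), lr=1e-4)

def training_step(anchor, positive, negative, labels):
    # anchor/positive share a class; negative comes from a different class.
    za, zp, zn = model(anchor), model(positive), model(negative)
    loss = triplet(za, zp, zn) + 0.1 * center(za, labels)   # 0.1 weight is an assumption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```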


2021 ◽  
Vol 45 (4) ◽  
pp. 233-238
Author(s):  
Lazar Kats ◽  
Marilena Vered ◽  
Johnny Kharouba ◽  
Sigalit Blumer

Objective: To apply transfer deep learning to a small data set for automatic classification of X-ray modalities in dentistry. Study design: For the classification problem, convolutional neural networks based on the VGG16, NASNetLarge and Xception architectures were used, each pre-trained on an ImageNet subset. We used an in-house dataset created within the School of Dental Medicine, Tel Aviv University. The training dataset contained 496 anonymized digital panoramic and cephalometric X-ray images from orthodontic examinations, acquired with a CS 8100 Digital Panoramic System (Carestream Dental LLC, Atlanta, USA). The models were trained on an NVIDIA GeForce GTX 1080 Ti GPU. The study was approved by the ethical committee of Tel Aviv University. Results: The test dataset contained 124 X-ray images from two different devices: the CS 8100 Digital Panoramic System and a Planmeca ProMax 2D (Planmeca, Helsinki, Finland). X-ray images in the test set were not pre-processed. The accuracy of all neural network architectures was 100%. Given this near-perfect accuracy, other statistical metrics were not informative. Conclusions: In this study, good results were obtained for the automatic classification of the different modalities of X-ray images used in dentistry, and transfer deep learning is the most promising direction for developing this kind of application. Further studies on automatic classification of modalities, as well as sub-modalities, could greatly reduce the occasional difficulties arising in this field in the daily practice of the dentist and, eventually, improve the quality of diagnosis and treatment.
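
A minimal Keras sketch of this kind of transfer learning is shown below: an ImageNet-pretrained VGG16 base is frozen and a small head is trained to separate the two modalities. Input size, head layout, directory names, and training settings are assumptions, not the authors' configuration.

```python
# Hedged Keras sketch: frozen VGG16 base + small head for two X-ray modalities.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # keep ImageNet features frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),   # panoramic vs. cephalometric
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumed directory layout: train/{panoramic,cephalometric}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train", image_size=(224, 224), batch_size=16)
model.fit(train_ds, epochs=10)
```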


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kuba Weimann ◽  
Tim O. F. Conrad

Abstract: Remote monitoring devices, which can be worn or implanted, have enabled more effective healthcare for patients with periodic heart arrhythmia through their ability to constantly monitor heart activity. However, these devices record considerable amounts of electrocardiogram (ECG) data that need to be interpreted by physicians. Therefore, there is a growing need for reliable methods of automatic ECG interpretation to assist physicians. Here, we use deep convolutional neural networks (CNNs) to classify raw ECG recordings. Training CNNs for ECG classification, however, often requires a large number of annotated samples, which are expensive to acquire. In this work, we tackle this problem with transfer learning. First, we pretrain CNNs on the largest public data set of continuous raw ECG signals. Next, we finetune the networks on a small data set for classification of atrial fibrillation, the most common heart arrhythmia. We show that pretraining improves the performance of CNNs on the target task by up to 6.57%, effectively reducing the number of annotations required to achieve the same performance as CNNs that are not pretrained. We investigate both supervised and unsupervised pretraining approaches, which we believe will grow in relevance because they do not rely on expensive ECG annotations. The code is available on GitHub at https://github.com/kweimann/ecg-transfer-learning.
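
The pretrain-then-finetune workflow can be sketched as follows (this is not the authors' code; their implementation is in the linked repository). A small 1D CNN is first trained on a large labelled ECG corpus, then its convolutional trunk is reused and a new head is trained for binary atrial fibrillation detection; the architecture and class counts below are placeholders.

```python
# Hedged PyTorch sketch of supervised pretraining followed by finetuning.
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):                    # x: (batch, 1, samples)
        return self.head(self.trunk(x).squeeze(-1))

# 1) Pretraining on the large source corpus (number of rhythm classes assumed).
source_model = ECGNet(num_classes=20)
# ... train source_model on the large public ECG data set ...

# 2) Transfer: reuse the trunk, replace the head for binary AF detection.
target_model = ECGNet(num_classes=2)
target_model.trunk.load_state_dict(source_model.trunk.state_dict())
for p in target_model.trunk.parameters():
    p.requires_grad = False                  # freeze, or leave True to finetune end-to-end

optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, target_model.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```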


Author(s):  
M. Jeyanthi ◽  
C. Velayutham

BCI plays a vital role in research within science and technology development. Classification is a data mining technique used to predict group membership for data instances. Analysis of BCI data is challenging because feature extraction and classification of these data are more difficult than for raw data. In this paper, we extract statistical Haralick features from the raw EEG data. The features are then normalized, and binning is applied to improve the accuracy of the predictive models by reducing noise and eliminating some irrelevant attributes. Classification is then performed on the BCI dataset using different techniques, such as naïve Bayes, the k-nearest neighbour classifier, and the SVM classifier. Finally, we propose the SVM classification algorithm for the BCI data set.
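
A hedged scikit-learn/scikit-image sketch of such a pipeline is given below: GLCM (Haralick-style) texture statistics are computed from quantized EEG segments, normalized, binned, and passed to an SVM. The actual features and parameters of the paper are not specified, so everything here is illustrative.

```python
# Illustrative pipeline: GLCM texture features -> normalization -> binning -> SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, KBinsDiscretizer
from sklearn.svm import SVC

def glcm_features(segment: np.ndarray, levels: int = 32) -> np.ndarray:
    """Quantize an EEG segment (channels x samples) and compute GLCM statistics."""
    q = np.interp(segment, (segment.min(), segment.max()),
                  (0, levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation", "dissimilarity"]
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

def build_classifier():
    return make_pipeline(
        StandardScaler(),                                   # normalization step
        KBinsDiscretizer(n_bins=10, encode="ordinal"),      # binning to reduce noise
        SVC(kernel="rbf", C=1.0),                           # final SVM classifier
    )

# Assumed inputs: X_raw is a list of EEG segments, y holds the class labels.
# features = np.vstack([glcm_features(seg) for seg in X_raw])
# clf = build_classifier().fit(features, y)
```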


2020 ◽  
Vol 10 (20) ◽  
pp. 7141
Author(s):  
Ilhwan Lim ◽  
Minhye Seo ◽  
Dong Hoon Lee ◽  
Jong Hwan Park

Fuzzy vector signature (FVS) is a new primitive in which fuzzy (biometric) data w is used to generate a verification key (VKw), and, later, distinct fuzzy (biometric) data w′ (together with a message) is used to generate a signature (σw′). The primary feature of FVS is that the signature (σw′) can be verified under the verification key (VKw) only if w is close to w′ within a certain predefined distance. Recently, Seo et al. proposed an FVS scheme that was constructed (loosely) using a subset-based sampling method to reduce the size of the helper data. However, their construction fails to provide the reusability property, which requires that no adversary gain information about the fuzzy (biometric) data even if multiple verification keys and relevant signatures of a single user, all generated from correlated fuzzy (biometric) data, are exposed to the adversary. In this paper, we propose an improved FVS scheme which is proven to be reusable with respect to arbitrary correlated fuzzy (biometric) inputs. Our efficiency improvement is achieved by strictly applying the subset-based sampling method previously used by Canetti et al. to build a fuzzy extractor, and by slightly modifying the structure of the verification key. Our FVS scheme still tolerates sub-linear error rates of the input sources and reduces the signing cost of a user to about half that of the original FVS scheme. Finally, we present authentication protocols based on the fuzzy extractor and the FVS scheme and compare their performance in terms of computation and transmission costs.
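
The distance-gated behaviour of FVS, where a signature verifies only when the enrolment and signing readings are close, can be illustrated with the following toy Python sketch. It captures only the interface, not the cryptographic construction: the verification key below plainly stores the template, offers no security, and the threshold is an arbitrary assumption.

```python
# Toy interface illustration of distance-gated verification (NOT the FVS scheme).
import hashlib
import numpy as np

THRESHOLD = 5          # predefined Hamming-distance bound (assumption)

def keygen(w: np.ndarray) -> dict:
    # In the real scheme VKw hides w; here we store it directly for illustration.
    return {"template": w.copy()}

def sign(w_prime: np.ndarray, message: bytes) -> dict:
    tag = hashlib.sha256(w_prime.tobytes() + message).hexdigest()
    return {"reading": w_prime.copy(), "message": message, "tag": tag}

def verify(vk: dict, sig: dict) -> bool:
    fresh = hashlib.sha256(sig["reading"].tobytes() + sig["message"]).hexdigest()
    if fresh != sig["tag"]:
        return False
    distance = int(np.count_nonzero(vk["template"] != sig["reading"]))
    return distance <= THRESHOLD

w = np.random.randint(0, 2, 256)              # enrolled fuzzy (biometric) bits
w_close = w.copy(); w_close[:3] ^= 1          # a nearby reading: verifies
w_far = np.random.randint(0, 2, 256)          # an unrelated reading: rejected
vk = keygen(w)
assert verify(vk, sign(w_close, b"login request"))
assert not verify(vk, sign(w_far, b"login request"))
```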


2012 ◽  
Vol 197 ◽  
pp. 271-277
Author(s):  
Zhu Ping Gong

The small-data-set approach is used to estimate the largest Lyapunov exponent (LLE). First, the mean-period drawback of the small-data-set method was corrected. On this basis, the LLEs of the daily qualified-rate time series of HZ, an electronics manufacturing enterprise, were estimated; all LLEs were positive, which indicates that the time series is chaotic and that the corresponding production process is a chaotic process. The variance of the LLEs revealed the struggle between the divergent nature of the quality system and the quality-control effort. The LLEs increased sharply as the quality level worsened, coinciding with the company's shutdown. HZ's daily qualified rate, a chaotic time series, shows the predictable nature of the quality system in the short run.
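
A Rosenstein-style small-data-set estimator of the LLE can be sketched as follows: delay-embed the series, find each point's nearest neighbour separated by more than the mean period, track the average log-divergence, and read the LLE off the slope of that curve. Embedding dimension, delay, and mean period below are placeholders rather than the values used in the study.

```python
# Hedged NumPy sketch of the small-data-set method for the largest Lyapunov exponent.
import numpy as np

def largest_lyapunov(x, dim=5, tau=1, mean_period=10, max_steps=50):
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    # Delay embedding: rows are reconstructed state vectors.
    emb = np.array([x[i:i + dim * tau:tau] for i in range(n)])

    # Nearest neighbour of each point, excluding temporally close points.
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    for i in range(n):
        lo, hi = max(0, i - mean_period), min(n, i + mean_period + 1)
        dists[i, lo:hi] = np.inf
    nn = np.argmin(dists, axis=1)

    # Average log separation after each step, over pairs still inside the series.
    div = []
    for step in range(1, max_steps):
        logs = [np.log(np.linalg.norm(emb[j + step] - emb[nn[j] + step]))
                for j in range(n)
                if j + step < n and nn[j] + step < n
                and not np.allclose(emb[j + step], emb[nn[j] + step])]
        if logs:
            div.append(np.mean(logs))

    # LLE estimate = slope of the divergence curve (per time step).
    steps = np.arange(1, len(div) + 1)
    slope, _ = np.polyfit(steps, div, 1)
    return slope
```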


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ryoya Shiode ◽  
Mototaka Kabashima ◽  
Yuta Hiasa ◽  
Kunihiro Oka ◽  
Tsuyoshi Murase ◽  
...  

Abstract: The purpose of this study was to develop a deep learning network for estimating and constructing highly accurate 3D bone models directly from actual X-ray images and to verify its accuracy. The data used were 173 computed tomography (CT) scans and 105 actual X-ray images of healthy wrist joints. To compensate for the small size of the dataset, digitally reconstructed radiography (DRR) images generated from CT were used as training data instead of actual X-ray images. At test time, DRR-like images were generated from the actual X-ray images and fed to the network, making high-accuracy estimation of a 3D bone model from a small data set possible. The 3D shapes of the radius and ulna were estimated from actual X-ray images with accuracies of 1.05 ± 0.36 and 1.45 ± 0.41 mm, respectively.
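
The data-preparation idea above, generating radiograph-like training images from CT volumes, can be approximated with a simple parallel ray-sum, sketched below. The study used proper DRR rendering; this simplified Beer-Lambert projection and its constants are only illustrative.

```python
# Hedged NumPy sketch: a DRR-like image as a parallel projection through a CT volume.
import numpy as np

def simple_drr(ct_volume: np.ndarray, axis: int = 1, mu_water: float = 0.02) -> np.ndarray:
    """ct_volume: 3D array of Hounsfield units; returns a 2D DRR-like image."""
    # Convert HU to linear attenuation coefficients (rough approximation).
    mu = mu_water * (1.0 + ct_volume / 1000.0)
    mu = np.clip(mu, 0.0, None)
    # Parallel projection: integrate attenuation along one axis (Beer-Lambert law).
    path_integral = mu.sum(axis=axis)
    intensity = np.exp(-path_integral)
    # Invert and normalize so dense bone appears bright, as in a radiograph.
    drr = 1.0 - intensity
    return (drr - drr.min()) / (np.ptp(drr) + 1e-8)

# Example with a synthetic volume (the study used 173 wrist CT scans).
volume = np.random.uniform(-1000, 1500, size=(64, 64, 64))
image = simple_drr(volume)
```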


Author(s):  
Jungeui Hong ◽  
Elizabeth A. Cudney ◽  
Genichi Taguchi ◽  
Rajesh Jugulum ◽  
Kioumars Paryani ◽  
...  

The Mahalanobis-Taguchi System is a diagnostic and predictive method for analyzing patterns in multivariate cases. The goal of this study is to compare the ability of the Mahalanobis-Taguchi System and a neural network to discriminate using small data sets. We examine the discriminant ability as a function of data set size, using an application area where reliable data are publicly available. The study uses the Wisconsin Breast Cancer data set, with nine attributes and one class variable.
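
The core of the Mahalanobis-Taguchi System, building a reference space from normal samples and scoring new cases by a scaled Mahalanobis distance, can be sketched in a few lines of NumPy. The orthogonal-array feature-selection stage of the full MTS and the exact threshold are omitted; X_normal and X_test are placeholders.

```python
# Hedged NumPy sketch of Mahalanobis-distance discrimination in the MTS style.
import numpy as np

def fit_mahalanobis_space(X_normal: np.ndarray):
    mean = X_normal.mean(axis=0)
    cov = np.cov(X_normal, rowvar=False)
    cov_inv = np.linalg.pinv(cov)            # pseudo-inverse guards against singularity
    return mean, cov_inv

def mahalanobis_distance(X: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray):
    diff = X - mean
    # Scaled squared distance per sample (divided by feature count, as in MTS).
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff) / X.shape[1]

# Usage sketch: X_normal holds benign cases, X_test holds unseen cases.
# mean, cov_inv = fit_mahalanobis_space(X_normal)
# md = mahalanobis_distance(X_test, mean, cov_inv)
# predicted_abnormal = md > 1.0               # threshold of 1 is the usual MTS heuristic
```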

