Face–Iris Multimodal Biometric Identification System

Electronics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 85 ◽  
Author(s):  
Basma Ammour ◽  
Larbi Boubchir ◽  
Toufik Bouden ◽  
Messaoud Ramdani

Multimodal biometrics technology has recently gained interest due to its capacity to overcome certain inherent limitations of single biometric modalities and to improve the overall recognition rate. A common biometric recognition system consists of sensing, feature extraction, and matching modules. The robustness of the system depends largely on how reliably relevant information can be extracted from the single biometric traits. This paper proposes a new feature extraction technique for a multimodal biometric system using face–iris traits. The iris feature extraction is carried out using an efficient multi-resolution 2D Log-Gabor filter to capture textural information at different scales and orientations. The facial features, on the other hand, are computed using the powerful method of singular spectrum analysis (SSA) in conjunction with the wavelet transform. SSA aims at expanding signals or images into interpretable and physically meaningful components. In this study, SSA is applied and combined with the normal inverse Gaussian (NIG) statistical features derived from the wavelet transform. Relevant features from the two modalities are then combined at a hybrid fusion level. The evaluation is performed on a chimeric database combining the Olivetti Research Laboratory (ORL) and Face Recognition Technology (FERET) databases for face and the Chinese Academy of Sciences Institute of Automation CASIA v3.0 (CASIA V3) Interval database for iris. Experimental results show the robustness of the proposed approach.
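As an illustration of the iris branch, the following is a minimal sketch of multi-resolution 2D Log-Gabor filtering in the frequency domain; the filter parameters, scales, and the mean-magnitude features are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def log_gabor_filter(rows, cols, f0, sigma_ratio, theta0, theta_sigma):
    """Build one 2D Log-Gabor filter in the (shifted) frequency domain."""
    fy = np.fft.fftshift(np.fft.fftfreq(rows))
    fx = np.fft.fftshift(np.fft.fftfreq(cols))
    x, y = np.meshgrid(fx, fy)
    radius = np.hypot(x, y)
    radius[rows // 2, cols // 2] = 1.0          # avoid log(0) at DC
    # Radial term: Gaussian on a logarithmic frequency axis
    radial = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    radial[rows // 2, cols // 2] = 0.0          # Log-Gabor has no DC component
    # Angular term: Gaussian around the filter orientation
    theta = np.arctan2(y, x)
    d_theta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-d_theta ** 2 / (2 * theta_sigma ** 2))
    return radial * angular

def iris_features(strip, scales=(0.05, 0.1, 0.2), n_orient=4):
    """Mean response magnitude per scale/orientation of a normalized iris strip."""
    spectrum = np.fft.fftshift(np.fft.fft2(strip))
    feats = []
    for f0 in scales:
        for k in range(n_orient):
            g = log_gabor_filter(strip.shape[0], strip.shape[1], f0, 0.55,
                                 k * np.pi / n_orient, np.pi / 8)
            resp = np.fft.ifft2(np.fft.ifftshift(spectrum * g))
            feats.append(np.abs(resp).mean())
    return np.array(feats)
```

The sketch assumes even image dimensions (so the DC bin sits at index `rows//2, cols//2`) and a normalized rectangular iris strip as input.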

2015 ◽  
Vol 40 (1) ◽  
pp. 25-31 ◽  
Author(s):  
Sayf A. Majeed ◽  
Hafizah Husain ◽  
Salina A. Samad

Abstract In this paper, a new feature-extraction method is proposed to improve the robustness of speech recognition systems. This method combines the benefits of phase autocorrelation (PAC) with the bark wavelet transform. PAC uses the angle to measure correlation instead of the traditional autocorrelation measure, whereas the bark wavelet transform is a special type of wavelet transform particularly designed for speech signals. The features extracted by this combined method are called phase autocorrelation bark wavelet transform (PACWT) features. The speech recognition performance of the PACWT features is evaluated and compared to the conventional feature extraction method, mel frequency cepstrum coefficients (MFCC), using the TI-Digits database under different types and levels of noise. This database has been divided into male and female data. The results show that the word recognition rate using the PACWT features for noisy male data (white noise at 0 dB SNR) is 60%, whereas it is 41.35% for the MFCC features under identical conditions.
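The angle-based correlation at the core of PAC can be sketched as follows; the lag range and the use of a circular shift are illustrative assumptions, and the bark wavelet stage of PACWT is omitted:

```python
import numpy as np

def phase_autocorrelation(frame, max_lag):
    """Angle between a speech frame and its circularly shifted copies,
    used in place of the usual dot-product autocorrelation."""
    energy = np.dot(frame, frame)  # norm of a circular shift equals the frame norm
    pac = np.empty(max_lag)
    for k in range(max_lag):
        shifted = np.roll(frame, k)
        cos_angle = np.dot(frame, shifted) / energy
        pac[k] = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return pac
```

At lag 0 the angle is zero, and lags where the frame decorrelates from itself approach π/2 or beyond, so the measure saturates more gracefully under additive noise than the raw dot product.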


Author(s):  
Zainab J. Ahmed ◽  
Loay E. George

This investigation proposes an offline signature identification system that applies rotation compensation based on features saved in the database. The proposed system contains five principal stages: (1) data acquisition, (2) signature data file loading, (3) signature preprocessing, (4) feature extraction, and (5) feature matching. Feature extraction includes determining the center-point coordinates and the rotation-compensation angle (θ), applying the rotation compensation, and computing the discriminating features and statistical measures. Seven feature sets are used to characterize a signature: (i) density (D), (ii) average (A), (iii) standard deviation (S), and their combinations (iv) density and average (DA), (v) density and standard deviation (DS), (vi) average and standard deviation (AS), and finally (vii) density, average, and standard deviation (DAS). The computed feature values are assembled into a feature vector used to distinguish signatures belonging to different persons. Two Euclidean-type distance measures are used in the matching stage: (i) normalized mean absolute distance (nMAD) and (ii) normalized mean squared distance (nMSD). The suggested system is tested on a public dataset of 612 handwritten signature images. The best recognition rate (98.9%) is achieved using 21×21 blocks with the density feature set; with the same number of blocks, the maximum verification accuracy obtained is 100%.
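The two matching distances and a nearest-template decision can be sketched as below; the paper does not spell out the normalization, so the denominators here are an assumption:

```python
import numpy as np

def n_mad(a, b):
    """Normalized mean absolute distance between two feature vectors."""
    return np.mean(np.abs(a - b)) / (np.mean(np.abs(a)) + np.mean(np.abs(b)) + 1e-12)

def n_msd(a, b):
    """Normalized mean squared distance between two feature vectors."""
    return np.mean((a - b) ** 2) / (np.mean(a ** 2) + np.mean(b ** 2) + 1e-12)

def match(query, templates, distance=n_mad):
    """Return the index of the enrolled template closest to the query."""
    dists = [distance(query, t) for t in templates]
    return int(np.argmin(dists))
```

The small epsilon guards against all-zero feature vectors; with either distance, the signature is attributed to the person whose stored template minimizes it.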


Author(s):  
Manish M. Kayasth ◽  
Bharat C. Patel

A character recognition system is logically organized into sections such as scanning, pre-processing, classification, processing, and post-processing. In the targeted system, the scanned image first passes through the pre-processing modules, then feature extraction and classification, in order to achieve a high recognition rate. This paper focuses mainly on feature extraction and classification techniques, which play an important role in identifying offline handwritten characters, specifically in the Gujarati language. Feature extraction provides methods by which characters can be identified uniquely and with a high degree of accuracy, and helps capture the shape contained in the pattern. Several techniques are available for feature extraction and classification; however, selecting a technique appropriate to its input determines the accuracy of recognition.


Author(s):  
El mehdi Cherrat ◽  
Rachid Alaoui ◽  
Hassane Bouzahir

<p>In this paper, we present a multimodal biometric recognition system that combines fingerprint, finger-vein, and face images using cascaded and decision-level fusion. First, in the fingerprint recognition system, the images are enhanced using a Gabor filter, binarized, and thinned; the minutiae points are then extracted to decide whether an individual is genuine or an impostor. In the finger-vein recognition system, the images are preprocessed using linear regression line, Canny, and local histogram equalization techniques to improve their quality, and the features are obtained using the Histogram of Oriented Gradients (HOG). Moreover, a Convolutional Neural Network (CNN) and the Local Binary Pattern (LBP) are applied to detect and extract the features of the face images, respectively. We propose three operating modes. In the first, a person is identified when the recognition system of a single biometric modality matches. In the second, fusion is performed at the cascaded decision level using the AND rule, requiring both biometric traits to validate. In the third, decision-level fusion with the AND rule uses all three biometric traits. Simulation results demonstrate that the proposed fusion algorithm increases accuracy to 99.43% compared with systems based on unimodal or bimodal characteristics.</p>
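The AND-rule decision fusion used in the second and third modes can be sketched as follows (the function name and signature are hypothetical):

```python
def fuse_decisions_and(fingerprint_ok, fingervein_ok, face_ok=None):
    """Decision-level fusion with the AND rule: accept only when every
    available modality accepts (bimodal when face_ok is None, else trimodal)."""
    decisions = [fingerprint_ok, fingervein_ok]
    if face_ok is not None:
        decisions.append(face_ok)
    return all(decisions)
```

The AND rule trades convenience for security: a single rejecting modality rejects the claim, which lowers the false acceptance rate at the cost of a higher false rejection rate.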


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Jingchao Li ◽  
Jian Guo

Identifying communication signals in low-SNR environments has become more difficult due to the increasingly complex communication environment. Most of the relevant literature addresses signal recognition under stable SNR, which is not applicable in time-varying SNR environments. To solve this problem, we propose a new feature extraction method based on entropy cloud characteristics of communication modulation signals. The proposed algorithm first extracts the Shannon entropy and index entropy characteristics of the signals and then effectively combines entropy theory with cloud model theory. Compared with traditional feature extraction methods, the proposed algorithm can further extract the instability distribution of the signals' entropy characteristics from the cloud model's digital characteristics in low-SNR environments, which improves recognition significantly. Numerical simulations show that the entropy cloud feature extraction algorithm achieves better signal recognition: even at an SNR of −11 dB, the recognition rate still reaches 100%.
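Two of the building blocks, the Shannon entropy feature and the cloud model's digital characteristics (Ex, En, He), can be sketched as they are commonly defined; the bin count and the backward cloud estimator below are assumptions rather than the paper's exact formulation, and the index entropy is omitted:

```python
import numpy as np

def shannon_entropy(signal, bins=64):
    """Shannon entropy (bits) of the signal's amplitude distribution."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return -np.sum(p * np.log2(p))

def backward_cloud(entropy_series):
    """Estimate the normal cloud model's digital characteristics
    (expectation Ex, entropy En, hyper-entropy He) from a sequence
    of entropy values measured over successive signal segments."""
    x = np.asarray(entropy_series, dtype=float)
    ex = x.mean()
    en = np.sqrt(np.pi / 2) * np.mean(np.abs(x - ex))
    he = np.sqrt(abs(x.var(ddof=1) - en ** 2))
    return ex, en, he
```

The idea in the abstract is that under time-varying SNR the per-segment entropy fluctuates, and (Ex, En, He) summarize both its level and its instability.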


2020 ◽  
Author(s):  
Hoda Heidari ◽  
Zahra Einalou ◽  
Mehrdad Dadgostar ◽  
Hamidreza Hosseinzadeh

Abstract Studies in the field of electroencephalography-based Brain-Computer Interfaces (BCI) have a wide range of applications, and extracting the Steady-State Visual Evoked Potential (SSVEP) is regarded as one of the most useful tools in BCI systems. In this study, methods from the whole signal-processing stream were compared: feature extraction using different spectral measures (Shannon entropy, skewness, kurtosis, mean, variance) computed from a bank of filters, narrow-band IIR filters, and wavelet transform magnitudes; feature selection performed by various methods (decision tree, principal component analysis (PCA), t-test, Wilcoxon, receiver operating characteristic (ROC)); and classification applying k-nearest neighbors (k-NN), perceptron, support vector machine (SVM), Bayesian, and multilayer perceptron (MLP) classifiers. By combining such methods, the study indicates the accuracy achievable with classical methods. In addition, the present study relies on a rather new feature selection approach based on the decision tree and PCA for BCI-SSVEP systems. The obtained accuracies were calculated for the four recorded frequencies representing the four directions right, left, up, and down.
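The PCA step of the feature selection stage can be sketched as a projection onto the top principal components; the component count and the SVD route are illustrative choices:

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project feature vectors (rows) onto their top principal components."""
    centered = features - features.mean(axis=0)          # center each feature
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T                # scores in component space
```

Because SVD returns components in order of decreasing singular value, the retained columns carry the largest share of the feature variance, which is what makes the projection usable as a selection/reduction step before classification.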


2014 ◽  
Vol 2 (2) ◽  
pp. 43-53 ◽  
Author(s):  
S. Rojathai ◽  
M. Venkatesulu

In speech word recognition systems, feature extraction and recognition play the most significant role, and many feature extraction and recognition methods are available in existing systems. The most recent Tamil speech word recognition system achieved high recognition performance with PAC-ANFIS compared to earlier Tamil systems, so investigating other recognition methods is needed to establish their performance on the same task. This paper presents such an investigation using two well-known artificial intelligence methods: the Feed-Forward Back-Propagation Neural Network (FFBNN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS). The Tamil speech word recognition system with PAC-FFBNN is analyzed in terms of statistical measures and Word Recognition Rate (WRR) and compared with PAC-ANFIS and other existing Tamil speech word recognition systems.
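The Word Recognition Rate (WRR) used in the comparison is simply the percentage of test words recognized correctly:

```python
def word_recognition_rate(predicted, actual):
    """WRR: percentage of test words whose predicted label matches the truth."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return 100.0 * correct / len(actual)
```
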


2019 ◽  
Vol 19 (01) ◽  
pp. 1940008 ◽  
Author(s):  
ÖZAL YILDIRIM

Electrocardiogram (ECG) signals consist of measurements of the electrical activity of heartbeats and contain information used to detect abnormalities such as arrhythmia. In this study, a recognition system is proposed for the detection and classification of heartbeats in ECG signals. Heartbeats in the ECG data were detected using the wavelet transform (WT) method, and these beats were segmented over fixed periods. To obtain distinctive features from the beats, multi-resolution WT is applied to the segmented signals and wavelet coefficients are obtained at different frequency levels. Feature vectors are generated from these coefficients using various statistical methods. During the learning phase, the proposed recognition system is trained on the feature vectors with the Online Sequential Extreme Learning Machine (OSELM) classifier to recognize the signals automatically. Five different beat types were taken from the MIT-BIH arrhythmia dataset, from which a five-class multi-class dataset and a two-class binary dataset were created. Performance tests of the proposed wavelet-based OSELM (W-OSELM) method were carried out on these two datasets. The proposed recognition system achieved a 97.29% correct beat detection rate on raw ECG signals, with classification accuracies of 99.44% on the binary-class dataset and 98.51% on the multi-class dataset. Furthermore, the proposed classifier showed very fast recognition performance on ECG signals.
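The multi-resolution decomposition and per-level statistics can be sketched with a Haar wavelet as a stand-in for the paper's wavelet choice; the level count, the chosen statistics, and the requirement that the beat length be divisible by 2^levels are assumptions:

```python
import numpy as np

def haar_dwt(signal, levels):
    """Multi-level Haar wavelet decomposition of one segmented beat.
    Returns [detail_1, ..., detail_L, approximation_L]."""
    coeffs, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))   # detail band at this level
        approx = (even + odd) / np.sqrt(2)         # coarser approximation
    coeffs.append(approx)
    return coeffs

def beat_features(beat, levels=4):
    """Simple statistics of the coefficients at each frequency level."""
    feats = []
    for c in haar_dwt(beat, levels):
        feats += [c.mean(), c.std(), np.abs(c).max()]
    return np.array(feats)
```

Each segmented beat thus maps to a fixed-length feature vector (three statistics per band) that can be fed to the classifier.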


Symmetry ◽  
2019 ◽  
Vol 11 (5) ◽  
pp. 725 ◽  
Author(s):  
Jian Wan ◽  
Xin Yu ◽  
Qiang Guo

The electronic reconnaissance system is the operational guarantee and premise of electronic warfare, and an important tool for intercepting radar signals and providing intelligence support for sensing the battlefield situation. In this paper, a radar waveform automatic identification system for detecting, tracking, and locating low probability of intercept (LPI) radars is studied. The recognition system can recognize 12 different radar waveforms: binary phase shift keying (Barker code modulation), linear frequency modulation (LFM), Costas codes, polytime codes (T1, T2, T3, and T4), and polyphase codes (Frank, P1, P2, P3, and P4). First, the system performs a time–frequency transform on the LPI radar signal to obtain a two-dimensional time–frequency image. The time–frequency image is then preprocessed (binarization and size conversion) and sent to a convolutional neural network (CNN) for training. After training is completed, the features of the fully connected layer are extracted and sent to the Tree-based Pipeline Optimization Tool (TPOT) classifier to realize offline training and online recognition. Experimental results show that the overall recognition rate of the system reaches 94.42% at a signal-to-noise ratio (SNR) of −4 dB.
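The preprocessing step (binarization and size conversion) can be sketched as follows; the mean threshold and the nearest-neighbour resize are illustrative stand-ins for the paper's exact choices:

```python
import numpy as np

def preprocess_tfi(tf_image, out_size=(64, 64)):
    """Binarize a time-frequency image and resize it to the CNN input shape."""
    # Global threshold at the image mean (a simple stand-in for e.g. Otsu)
    binary = (tf_image > tf_image.mean()).astype(np.float32)
    # Nearest-neighbour size conversion via index resampling
    rows = np.linspace(0, binary.shape[0] - 1, out_size[0]).round().astype(int)
    cols = np.linspace(0, binary.shape[1] - 1, out_size[1]).round().astype(int)
    return binary[np.ix_(rows, cols)]
```

Binarization discards amplitude variation caused by noise and keeps only the modulation trajectory in the time-frequency plane, while the fixed output size matches the CNN's input layer.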

