Using Convolutional Neural Network and a Single Heartbeat for ECG Biometric Recognition

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 733
Author(s):  
Dalal A. AlDuwaile ◽  
Md Saiful Islam

The electrocardiogram (ECG) signal has become a popular biometric modality due to characteristics that make it suitable for developing reliable authentication systems. However, the long segment of signal required for recognition is still one of the limitations of existing ECG biometric recognition methods and affects its acceptability as a biometric modality. This paper investigates how a short segment of an ECG signal can be effectively used for biometric recognition, using deep-learning techniques. A small convolutional neural network (CNN) is designed to achieve better generalization capability by entropy enhancement of a short segment of a heartbeat signal. Additionally, it investigates how various blind and feature-dependent segments with different lengths affect the performance of the recognition system. Experiments were carried out on two databases for performance evaluation that included single and multisession records. In addition, a comparison was made between the performance of the proposed classifier and four well-known CNN models: GoogLeNet, ResNet, MobileNet and EfficientNet. Using a time–frequency domain representation of a short segment of an ECG signal around the R-peak, the proposed model achieved an accuracy of 99.90% for PTB, 98.20% for the ECG-ID mixed-session, and 94.18% for the ECG-ID multisession dataset. Using the pretrained ResNet, we obtained 97.28% accuracy for 0.5-second segments around the R-peaks for the ECG-ID multisession dataset, outperforming existing methods. It was found that the time–frequency domain representation of a short segment of an ECG signal can be feasible for biometric recognition, achieving better accuracy and acceptability of this modality.
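As an illustration of the kind of input this model consumes, the sketch below extracts a 0.5-second blind segment around an R-peak and converts it to a log-power spectrogram image. The sampling rate and window parameters are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import spectrogram

def segment_around_r_peak(ecg, r_idx, fs=500, seconds=0.5):
    """Extract a short blind segment of `seconds` length centred on an R-peak."""
    half = int(seconds * fs / 2)
    return ecg[max(0, r_idx - half): r_idx + half]

def tf_representation(segment, fs=500):
    """Log-power spectrogram image of the segment, used as the CNN input."""
    f, t, sxx = spectrogram(segment, fs=fs, nperseg=64, noverlap=48)
    return 10 * np.log10(sxx + 1e-12)

# toy example: one second of a synthetic heartbeat-like signal at 500 Hz
fs = 500
sig = np.sin(2 * np.pi * 8 * np.arange(fs) / fs)
seg = segment_around_r_peak(sig, r_idx=250, fs=fs)
img = tf_representation(seg, fs=fs)
```

The resulting 2-D image can then be fed to a small CNN in place of the raw one-dimensional waveform.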

Author(s):  
Dr. I. Jeena Jacob

Biometric recognition plays a significant and unique part in applications based on personal identification, owing to the stability, irreplaceability and uniqueness of human biometric traits. Deep learning techniques, which generalize strongly and learn features automatically with enhanced accuracy, are currently used to build efficient biometric systems. However, poor noise-removal ability and the accuracy degradation caused by very small perturbations make conventional deep learning based on the convolutional neural network (CNN) poorly suited to biometric recognition. The capsule neural network therefore replaces the CNN, owing to its high recognition and classification accuracy, its learning capacity, and its ability to be trained with a limited number of samples compared to the CNN. The framework put forward in this paper combines a capsule network with fuzzified image enhancement for retina-based biometric recognition, a highly secure and reliable basis for person identification, since the retina is layered behind the eye and cannot be counterfeited. The method was tested on the Face 95 database and CASIA-Iris-Thousand, and was found to be 99% accurate with an error-rate convergence of 0.3% to 0.5%.
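The abstract does not specify the fuzzification scheme; a classical Pal-King-style fuzzy enhancement (a sketch only, with illustrative parameters `fd` and `fe`, not necessarily the authors' exact method) maps grey levels to fuzzy memberships, intensifies contrast, and defuzzifies back:

```python
import numpy as np

def fuzzify(img, fd=128.0, fe=2.0):
    """Map grey levels to fuzzy membership values in [0, 1]."""
    return (1.0 + (img.max() - img) / fd) ** (-fe)

def intensify(mu):
    """Contrast intensification: push memberships away from 0.5."""
    return np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)

def defuzzify(mu, img, fd=128.0, fe=2.0):
    """Invert the membership mapping back to (clipped) grey levels."""
    xmax = img.max()
    return np.clip(xmax - fd * (mu ** (-1.0 / fe) - 1.0), 0.0, xmax)

# tiny 2x2 grey image: dark pixels get darker, bright pixels stay bright
img = np.array([[10.0, 200.0], [120.0, 255.0]])
enhanced = defuzzify(intensify(fuzzify(img)), img)
```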


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2648
Author(s):  
Muhammad Aamir ◽  
Tariq Ali ◽  
Muhammad Irfan ◽  
Ahmad Shaf ◽  
Muhammad Zeeshan Azam ◽  
...  

Natural disasters not only disturb the human ecological system but also destroy the properties and critical infrastructures of human societies, and can even lead to permanent change in the ecosystem. Disasters can be caused by naturally occurring events such as earthquakes, cyclones, floods, and wildfires. Many deep learning techniques have been applied by various researchers to detect and classify natural disasters to overcome losses in ecosystems, but detection of natural disasters still faces issues due to the complex and imbalanced structures of images. To tackle this problem, we propose a multilayered deep convolutional neural network. The proposed model works in two blocks: Block-I convolutional neural network (B-I CNN), for detection and occurrence of disasters, and Block-II convolutional neural network (B-II CNN), for classification of natural disaster intensity types with different filters and parameters. The model is tested on 4428 natural images and its performance is calculated and expressed as different statistical values: sensitivity (SE), 97.54%; specificity (SP), 98.22%; accuracy rate (AR), 99.92%; precision (PRE), 97.79%; and F1-score (F1), 97.97%. The overall accuracy for the whole model is 99.92%, which is competitive and comparable with state-of-the-art algorithms.
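The statistical values reported above follow directly from confusion-matrix counts; a minimal sketch (the counts below are toy numbers for illustration, not the paper's confusion matrix):

```python
def classification_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, accuracy rate, precision and F1-score
    computed from confusion-matrix counts."""
    se = tp / (tp + fn)                    # sensitivity (recall)
    sp = tn / (tn + fp)                    # specificity
    ar = (tp + tn) / (tp + tn + fp + fn)   # accuracy rate
    pre = tp / (tp + fp)                   # precision
    f1 = 2 * pre * se / (pre + se)         # F1-score
    return se, sp, ar, pre, f1

se, sp, ar, pre, f1 = classification_metrics(tp=90, tn=95, fp=5, fn=10)
```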


2020 ◽  
Vol 79 (47-48) ◽  
pp. 36063-36075 ◽  
Author(s):  
Valentina Franzoni ◽  
Giulio Biondi ◽  
Alfredo Milani

Crowds express emotions as a collective individual, which is evident from the sounds that a crowd produces in particular events, e.g., collective booing, laughing or cheering in sports matches, movies, theaters, concerts, political demonstrations, and riots. A critical question concerning the innovative concept of crowd emotions is whether the emotional content of crowd sounds can be characterized by frequency-amplitude features, using analysis techniques similar to those applied to individual voices, where deep learning classification is applied to spectrogram images derived from sound transformations. In this work, we present a technique based on the generation of sound spectrograms from fragments of fixed length, extracted from original audio clips recorded in high-attendance events, where the crowd acts as a collective individual. Transfer learning techniques are used on a convolutional neural network, pre-trained on low-level features using the well-known, extensive ImageNet dataset of visual knowledge. The original sound clips are filtered and normalized in amplitude for correct spectrogram generation, on which we fine-tune the domain-specific features. Experiments on the final trained convolutional neural network show promising performance of the proposed model in classifying the emotions of the crowd.
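The fragmentation and amplitude-normalization steps described above can be sketched as follows (fragment length and sampling rate are illustrative assumptions):

```python
import numpy as np

def fixed_length_fragments(audio, fs, seconds=2.0):
    """Split a clip into non-overlapping fragments of fixed length."""
    n = int(seconds * fs)
    return [audio[i:i + n] for i in range(0, len(audio) - n + 1, n)]

def normalize_amplitude(fragment):
    """Peak-normalize so spectrograms are comparable across recordings."""
    peak = np.abs(fragment).max()
    return fragment / peak if peak > 0 else fragment

# 5 s synthetic "crowd" clip at 8 kHz, split into 2 s fragments
fs = 8000
clip = np.random.default_rng(0).normal(size=5 * fs)
frags = [normalize_amplitude(f) for f in fixed_length_fragments(clip, fs)]
```

Each normalized fragment would then be converted to a spectrogram image and passed through the fine-tuned, ImageNet-pretrained network.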


Author(s):  
P. Marzuki ◽  
A. R. Syafeeza ◽  
Y. C. Wong ◽  
N. A. Hamid ◽  
A. Nur Alisa ◽  
...  

This paper proposes an improved Convolutional Neural Network (CNN) algorithm approach for a license plate recognition system. The main contribution of this work is the methodology for determining the best model for the four-layered CNN architecture used as the recognition method. This is achieved by validating the best parameters of the enhanced Stochastic Diagonal Levenberg-Marquardt (SDLM) learning algorithm and the network size of the CNN. Several preprocessing algorithms, such as Sobel edge detection, morphological operations and connected component analysis, are used to localize the license plate and to isolate and segment the characters before feeding the input to the CNN. It is found that the proposed model is superior when subjected to multi-scaling and variations of input patterns. As a result, the license plate preprocessing stage achieved 74.7% accuracy and the CNN recognition stage achieved 94.6% accuracy.
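The preprocessing chain named above (edge detection, morphology, connected components) can be sketched with `scipy.ndimage`; this is an illustrative pipeline on a synthetic image, not the paper's implementation:

```python
import numpy as np
from scipy import ndimage

def locate_candidates(gray):
    """Sobel edges, morphological closing and connected-component
    labelling to find candidate plate/character regions."""
    # Sobel gradient magnitude
    sx = ndimage.sobel(gray, axis=1)
    sy = ndimage.sobel(gray, axis=0)
    edges = np.hypot(sx, sy)
    # threshold, then close small gaps between strokes
    binary = edges > edges.mean()
    closed = ndimage.binary_closing(binary, structure=np.ones((3, 3)))
    # connected component analysis -> labelled candidate regions
    labels, n = ndimage.label(closed)
    return labels, n

# synthetic image: two bright "characters" on a dark background
img = np.zeros((40, 80))
img[10:30, 10:25] = 1.0
img[10:30, 40:55] = 1.0
labels, n = locate_candidates(img)
```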


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7871
Author(s):  
Zhongliang Deng ◽  
Hang Qi ◽  
Yanxu Liu ◽  
Enwen Hu

The traditional signal of opportunity (SOP) positioning system is equipped with a dedicated receiver for each type of signal to ensure continuous signal perception. However, this results in low utilization of equipment resources and wasted energy, and the problem grows more serious as the number of SOP types increases. This paper proposes a new signal perception unit for SOP positioning systems. By extracting the perception function from the positioning system and operating it independently, the system can flexibly schedule resources and reduce waste based on the perception results. Through joint time-frequency representation, a time-frequency image can be obtained that provides more information for signal recognition than traditional single time- or frequency-domain analysis. We also designed a convolutional neural network (CNN) for signal recognition and a negative learning method to correct overfitting to noisy data. Finally, a prototype system was built using USRP and LabVIEW for a 2.4 GHz frequency band test. The results show that the system can effectively identify Wi-Fi, Bluetooth, and ZigBee signals at the same time, verifying the effectiveness of the proposed signal perception architecture. The approach can be further extended to realize SOP perception across almost the full frequency domain and to improve the integration and resource utilization efficiency of the SOP positioning system.
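Negative learning, in its standard form, trains against a complementary label ("the input is NOT class y"), minimising -log(1 - p_y); this weakens the pull of possibly mislabelled samples. A minimal sketch of the loss, assuming softmax outputs over the three signal classes (the paper's exact formulation may differ):

```python
import numpy as np

def negative_learning_loss(probs, comp_labels):
    """Negative-learning loss: -log(1 - p_y) averaged over a batch,
    where y is a complementary ("not this class") label."""
    p = probs[np.arange(len(comp_labels)), comp_labels]
    return -np.log(1.0 - p + 1e-12).mean()

# three samples, softmax outputs over {Wi-Fi, Bluetooth, ZigBee}
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
comp = np.array([1, 0, 2])   # "not Bluetooth", "not Wi-Fi", "not ZigBee"
loss = negative_learning_loss(probs, comp)
```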


2021 ◽  
Vol 63 (4) ◽  
pp. 219-228
Author(s):  
Chuanyu Lu ◽  
Minghui Lu ◽  
Yiting Chen ◽  
Yongdong Pan

A helicopter propeller is a kind of multi-layered composite bonded structure. Ensuring that composite structures are free from defects can reduce the risk of in-service failure and hence improve safety. As a common non-destructive testing (NDT) technology, ultrasonic testing is often used in the inspection of composite structures. However, a composite structure made of multiple thin-layer materials bonded together can cause a serious aliasing problem for echo signals when inspected with ultrasound. In this study, the frequency-domain characteristics of an aliasing echo signal were analysed using the spectrum of the acoustic pressure reflection coefficient. Furthermore, the joint time-frequency analysis results of the echo signal were obtained using a continuous wavelet transform. Finally, the obtained time-frequency features of the echo signal were used for classification and imaging with a convolutional neural network (CNN). The results revealed that, compared with direct imaging of the time- and frequency-domain features, the time-frequency wavelet map of a thin-walled multi-layered structure classified and imaged with a CNN exhibited greater clarity and better defect recognition ability. In addition, the training time of the CNN was 17 s and the classification accuracy on the verification set was high, reaching 97.8%.
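A continuous wavelet transform of an echo signal can be sketched by direct convolution with a Morlet wavelet (the paper does not state which mother wavelet was used, so the choice here is an assumption); the magnitude map is the time-frequency "wavelet map" fed to the CNN:

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform by direct convolution with a Morlet
    wavelet; returns the scalogram (time-frequency magnitude map)."""
    out = np.empty((len(scales), len(signal)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        out[i] = np.convolve(signal, np.conj(wavelet)[::-1], mode="same")
    return np.abs(out)

# synthetic "echo": a 50 Hz burst centred at t = 0.5 s, sampled at 1 kHz
fs = 1000
t = np.arange(fs) / fs
echo = np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 0.5) ** 2) / 0.005)
scalogram = morlet_cwt(echo, scales=np.arange(5, 31))
```

The scalogram's energy concentrates at the time and scale of the burst, which is exactly the kind of localized feature an aliasing echo analysis needs.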


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1514
Author(s):  
Ali Aljofey ◽  
Qingshan Jiang ◽  
Qiang Qu ◽  
Mingqing Huang ◽  
Jean-Pierre Niyigena

Phishing is one of the easiest forms of cybercrime, aiming to entice people into giving away sensitive information such as account IDs, bank details, and passwords. This type of cyberattack is usually triggered by emails, instant messages, or phone calls. Existing anti-phishing techniques are mainly based on source-code features, which require scraping the content of web pages, and on third-party services, which slow down the classification of phishing URLs. Although machine learning techniques have lately been used to detect phishing, they require substantial manual feature engineering and are not adept at detecting emerging phishing offenses. Due to the recent rapid development of deep learning techniques, many deep learning-based methods have also been introduced to enhance classification performance. In this paper, a fast deep learning-based solution model, which uses a character-level convolutional neural network (CNN) for phishing detection based on the URL of the website, is proposed. The proposed model does not require the retrieval of target website content or the use of any third-party services. It captures information and sequential patterns of URL strings without requiring prior knowledge about phishing, and then uses the sequential pattern features for fast classification of the actual URL. For evaluation, comparisons are provided between different traditional machine learning models and deep learning models using various feature sets such as hand-crafted features, character embeddings, character-level TF-IDF, and character-level count vectors. According to the experiments, the proposed model achieved an accuracy of 95.02% on our dataset and accuracies of 98.58%, 95.46%, and 95.22% on benchmark datasets, outperforming the existing phishing URL models.
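A character-level model starts by turning the raw URL string into a fixed-length sequence of character indices; a minimal sketch (the alphabet and maximum length here are illustrative choices, not the paper's):

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-._~:/?#[]@!$&'()*+,;=%"
CHAR_TO_IDX = {c: i + 1 for i, c in enumerate(ALPHABET)}  # 0 = padding/unknown

def encode_url(url, max_len=200):
    """Character-level integer encoding of a URL, truncated or zero-padded
    to a fixed length, suitable as input to a CNN embedding layer."""
    ids = [CHAR_TO_IDX.get(c, 0) for c in url.lower()[:max_len]]
    return np.array(ids + [0] * (max_len - len(ids)), dtype=np.int64)

x = encode_url("http://paypa1-secure.example.com/login")
```

The encoded sequence preserves the sequential patterns (e.g. the digit substituted for a letter in "paypa1") that the convolutional filters can learn to flag.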


2021 ◽  
Author(s):  
Guofa Li ◽  
Yanbo Wang ◽  
Jialong He ◽  
Yongchao Huo

Tool wear during machining has a great influence on the quality of the machined surface and on dimensional accuracy. Tool wear monitoring is extremely important for improving machining efficiency and workpiece quality. Multidomain features (time domain, frequency domain and time-frequency domain) can accurately characterise the degree of tool wear. However, manual feature fusion is time consuming and prevents the improvement of monitoring accuracy. A new tool wear prediction method based on multidomain feature fusion by an attention-based depth-wise separable convolutional neural network is proposed to solve these problems. In this method, multidomain features of cutting force and vibration signals are extracted and recombined into feature tensors. The proposed hypercomplex position encoding and high-dimensional self-attention mechanism are used to calculate a new representation of the input feature tensor, which emphasizes tool-wear-sensitive information and suppresses large-area background noise. The designed depth-wise separable convolutional neural network is used to adaptively extract high-level features that characterize tool wear from the new representation, and the tool wear is predicted automatically. The proposed method is verified on three run-to-failure data sets of three-flute ball-nose cemented carbide tools in a machining centre. Experimental results show that the prediction accuracy of the proposed method is remarkably higher than that of other state-of-the-art methods. Therefore, the proposed tool wear prediction method helps to improve prediction accuracy and provides effective guidance for decision making in processing.
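A representative subset of multidomain features for one sensor channel can be sketched as below (the paper's full feature list is larger and not reproduced here; the frame length is an illustrative choice):

```python
import numpy as np

def multidomain_features(signal, fs):
    """Time-, frequency- and time-frequency-domain features of one
    sensor channel (a small representative subset)."""
    # time domain: RMS and kurtosis
    rms = np.sqrt(np.mean(signal ** 2))
    kurt = np.mean((signal - signal.mean()) ** 4) / signal.var() ** 2
    # frequency domain: spectral centroid
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    centroid = (freqs * spec).sum() / spec.sum()
    # time-frequency domain: short-time frame energies
    frames = signal[: len(signal) // 128 * 128].reshape(-1, 128)
    frame_energy = (frames ** 2).sum(axis=1)
    return rms, kurt, centroid, frame_energy

# pure 100 Hz tone sampled at 1024 Hz for one second
fs = 1024
sig = np.sin(2 * np.pi * 100 * np.arange(fs) / fs)
rms, kurt, centroid, energy = multidomain_features(sig, fs)
```

Features like these, computed per channel, are what the method recombines into the feature tensors consumed by the attention-based network.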

