ECG Biometrics Using Deep Learning and Relative Score Threshold Classification

Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4078
Author(s):  
David Belo ◽  
Nuno Bento ◽  
Hugo Silva ◽  
Ana Fred ◽  
Hugo Gamboa

The field of biometrics is a pattern recognition problem, where individual traits are coded, registered, and compared with other database records. Because Electrocardiograms (ECG) are difficult to reproduce, their usage has been emerging in the biometric field for more secure applications. Inspired by the high performance shown by Deep Neural Networks (DNN), and to mitigate the intra-variability challenges displayed by each individual's ECG, this work proposes two architectures to improve current results in both identification (finding the registered person from a sample) and authentication (verifying that a person is who they claim to be): a Temporal Convolutional Neural Network (TCNN) and a Recurrent Neural Network (RNN). Each architecture produces a similarity score, based on the prediction error of the former and the logits given by the latter, which is fed to the same classifier, the Relative Score Threshold Classifier (RSTC). The robustness and applicability of these architectures were trained and tested on public databases used in the literature in this context: the Fantasia, MIT-BIH, and CYBHi databases. Results show that overall the TCNN outperforms the RNN, achieving almost 100%, 96%, and 90% accuracy, respectively, for identification, and 0.0%, 0.1%, and 2.2% equal error rate (EER) for authentication. Compared with previous work, both architectures reached results beyond the state-of-the-art. Nevertheless, further refinement of these techniques, such as enriching training with additional varied data and transfer learning, may provide more robust systems with a reduced time required for validation.
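The abstract describes a classifier that accepts or rejects an identity from per-subject similarity scores. As a minimal sketch of the general idea, assuming a relative-margin rule (the paper's exact RSTC decision rule is not given in the abstract, so the threshold and comparison below are illustrative):

```python
def relative_score_decision(scores, rel_threshold=1.2):
    """Toy relative-score classifier: accept the best-matching identity
    only if its similarity score beats the runner-up by a relative margin.
    Illustrative only; the RSTC rule in the paper may differ."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    (best_id, best), (_, second) = ranked[0], ranked[1]
    if best >= rel_threshold * second:
        return best_id        # confident identification
    return None               # reject: top scores too close to call

# Subject "A" clearly dominates the other enrolled templates.
print(relative_score_decision({"A": 0.9, "B": 0.4, "C": 0.3}))  # → A
```

A relative threshold like this naturally yields an equal-error-rate trade-off: raising `rel_threshold` rejects more impostors but also more genuine users.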

2020 ◽  
Author(s):  
David Belo ◽  
Nuno Bento ◽  
Hugo Silva ◽  
Ana Fred ◽  
Hugo Gamboa

Abstract Background: Biometric Systems (BS) are based on a pattern recognition problem where the individual traits of a person are coded and compared. The Electrocardiogram (ECG) emerged as a biometric, as it fulfills the requirements of a BS. Methods: Inspired by the high performance shown by Deep Neural Networks (DNN), this work proposes two architectures to improve current results in both identification and authentication: a Temporal Convolutional Neural Network (TCNN) and a Recurrent Neural Network (RNN). The outputs of both were submitted to a simple classifier, which exploits the prediction error of the former and the scores given by the latter. Results: The robustness and applicability of these architectures were tested on the Fantasia, MIT-BIH and CYBHi databases. The TCNN outperforms the RNN, achieving 100%, 96% and 90% accuracy, respectively, for identification, and 0.0%, 0.1% and 2.2% equal error rate for authentication. Conclusions: When compared to previous work, both architectures reached results beyond the state-of-the-art. Even though this experiment was a success, the inclusion of these techniques may provide a system that could reduce the validation acquisition time.


2021 ◽  
Vol 11 (15) ◽  
pp. 7051
Author(s):  
Maximilian Siener ◽  
Irene Faber ◽  
Andreas Hohmann

(1) Background: The search for talented young athletes is an important element of top-class sport. While performance profiles and suitable test tasks for talent identification have already been extensively investigated, there are few studies on statistical prediction methods for talent identification. Therefore, this long-term study examined the prognostic validity of four talent prediction methods. (2) Methods: Tennis players (N = 174; n♀ = 62 and n♂ = 112) at the age of eight years (U9) were examined using five physical fitness tests and four motor competence tests. Based on the test results, four predictions regarding individual future performance were made for each participant using a linear recommendation score, a logistic regression, a discriminant analysis, and a neural network. These forecasts were then compared with the athletes’ achieved performance success at least four years later (U13‒U18). (3) Results: All four prediction methods showed medium-to-high prognostic validity with respect to their forecasts. Their values of relative improvement over chance ranged from 0.447 (logistic regression) to 0.654 (tennis recommendation score). (4) Conclusions: The best results, however, are obtained only by combining the non-linear method (neural network) with one of the linear methods. Nevertheless, 18.75% of later high-performance tennis players could not be predicted using any of the methods.
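The "linear recommendation score" mentioned above is, in spirit, a linear combination of standardised test results. A minimal sketch under that assumption (the study's actual weighting and reference norms are not given in the abstract, so the formula below is illustrative):

```python
import numpy as np

def recommendation_score(test_results, reference_mean, reference_std, weights=None):
    """Toy linear recommendation score: z-standardise each fitness/motor
    test against age-group reference values and take a (weighted) mean.
    Illustrative only; the study's actual scoring formula may differ."""
    z = (np.asarray(test_results, float) - reference_mean) / reference_std
    w = np.ones_like(z) if weights is None else np.asarray(weights, float)
    return float(np.average(z, weights=w))

# A child scoring one standard deviation above the reference on every test:
score = recommendation_score([11, 6, 21],
                             reference_mean=np.array([10.0, 5.0, 20.0]),
                             reference_std=np.array([1.0, 1.0, 1.0]))
print(score)  # → 1.0
```

Ranking children by such a score and applying a cut-off gives a binary talent forecast that can then be validated against performance years later.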


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 1939
Author(s):  
Jun Wei Chen ◽  
Xanno K. Sigalingging ◽  
Jenq-Shiou Leu ◽  
Jun-Ichi Takada

In recent years, Chinese has become one of the most popular languages globally, and the demand for automatic Chinese sentence correction has gradually increased. This research can be applied to Chinese language learning to reduce the cost of learning and feedback time, and to help writers check for wrong words. The traditional way to do Chinese sentence correction is to check whether each word exists in a predefined dictionary. However, this kind of method cannot deal with semantic errors. As deep learning has become popular, an artificial neural network can be applied to understand a sentence’s context and correct semantic errors. However, many issues still need to be addressed: both the accuracy and the computation time required to correct a sentence remain unsatisfactory, so deep-learning-based Chinese sentence correction is arguably not yet ready for large-scale commercial applications. Our goal is to obtain a model with better accuracy and computation time. Combining a recurrent neural network with Bidirectional Encoder Representations from Transformers (BERT), a recently popular model known for its high performance but slow inference speed, we introduce a hybrid model for Chinese sentence correction that improves both accuracy and inference speed. Among the results, BERT-GRU obtained the highest BLEU score in all experiments. The inference speed of the original transformer-based model can be improved by 1131% with beam search decoding in the 128-word experiment, and greedy decoding can also be improved by 452%. The longer the sequence, the larger the improvement.
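The speed gap between greedy and beam search decoding that the abstract quantifies comes from greedy decoding keeping only one hypothesis per step. A minimal sketch of a greedy decoding loop over a toy next-token scorer (the paper's model is not reproduced here; `toy_step` is a stand-in):

```python
def greedy_decode(step_fn, start_token, end_token, max_len=20):
    """Greedy decoding: at each step pick the single highest-scoring next
    token. One model call per step, versus one per live hypothesis in beam
    search, which is why it is cheaper. Toy scorer; not the paper's model."""
    seq = [start_token]
    for _ in range(max_len):
        scores = step_fn(seq)                 # dict: token -> score
        nxt = max(scores, key=scores.get)
        seq.append(nxt)
        if nxt == end_token:
            break
    return seq

# Toy "model" that always prefers the next token in a fixed order.
def toy_step(seq):
    order = ["<s>", "a", "b", "</s>"]
    i = order.index(seq[-1])
    return {t: (1.0 if j == i + 1 else 0.0) for j, t in enumerate(order)}

print(greedy_decode(toy_step, "<s>", "</s>"))  # → ['<s>', 'a', 'b', '</s>']
```

Beam search would instead keep the k best partial sequences at each step, multiplying the per-step cost by the beam width.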


2020 ◽  
Vol 96 (3s) ◽  
pp. 585-588
Author(s):  
С.Е. Фролова ◽  
Е.С. Янакова

Methods are proposed for building prototyping platforms for high-performance systems-on-chip (SoC) for artificial intelligence tasks. The requirements for platforms of this class and the principles for modifying the SoC design for implementation in a prototype are described, as well as methods of debugging projects on the prototyping platform. Results of computer vision algorithms using neural network technologies on an FPGA prototype of the ELcore semantic cores are presented.


Author(s):  
Mark Endrei ◽  
Chao Jin ◽  
Minh Ngoc Dinh ◽  
David Abramson ◽  
Heidi Poxon ◽  
...  

Rising power costs and constraints are driving a growing focus on the energy efficiency of high performance computing systems. The unique characteristics of a particular system and workload, and their effect on performance and energy efficiency, are typically difficult for application users to assess and to control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that only require a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics (LULESH). We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min.
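The Pareto-optimal trade-offs the abstract refers to are configurations that no other configuration beats on both runtime and energy at once. A minimal sketch of identifying such a front from sampled (runtime, energy) pairs (the paper predicts these fronts from models rather than enumerating measured runs; this only illustrates the concept):

```python
def pareto_front(points):
    """Return the Pareto-optimal (runtime, energy) points under
    minimisation: points for which no other point is at least as good
    in both objectives. Illustrative helper, not the paper's method."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

# Hypothetical (runtime in s, energy in kJ) samples for five configurations.
runs = [(10.0, 50.0), (12.0, 40.0), (15.0, 38.0), (11.0, 55.0), (20.0, 39.0)]
print(pareto_front(runs))  # → [(10.0, 50.0), (12.0, 40.0), (15.0, 38.0)]
```

A user would then pick a point on the front matching their priorities, e.g. accepting the 15 s runtime to save energy.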


2014 ◽  
Vol 907 ◽  
pp. 139-149 ◽  
Author(s):  
Eckart Uhlmann ◽  
Florian Heitmüller

In gas turbines and turbo jet engines, high performance materials such as nickel-based alloys are widely used for blades and vanes. In the case of repair, finishing of complex turbine blades made of high performance materials is carried out predominantly manually, so the repair process is quite time-consuming. The costs of presently available repair strategies, especially for integrated parts, are high due to individual process planning and the large number of manually performed work steps. Moreover, there are severe risks of partial damage during manually conducted repair. As a result, economies of scale remain largely unexploited for repair tasks, even though the number of components to be repaired is increasing significantly. In the future, a persistent automation of the repair process chain should be achieved by developing adaptive robot-assisted finishing strategies. The goal of this research is to exploit the automation potential for repair tasks by developing a technology that enables industrial robots to re-contour turbine blades via force-controlled belt grinding.


Author(s):  
Alice Scavarda ◽  
Giuseppe Costa ◽  
Franca Beccaria

Within the past several years, a considerable body of research on adherence to diabetes regimens has emerged in public health. However, the focus of the vast majority of these studies has been on the individual traits and attitudes affecting adherence. Still, little is known about the role of the social and physical context in supporting or hindering diabetes self-management, particularly from a qualitative standpoint. To address these limitations, this paper presents the findings of a Photovoice study on a sample of 10 older adults with type 2 diabetes living in a deprived neighbourhood of an Italian city. The findings reveal that the possibility to engage in diet, exercise and blood sugar monitoring seems to be more affected by physical and social elements of the respondents’ environment than by the interviewees’ beliefs and attitudes. Both environmental barriers and social isolation emerge as barriers to lifestyle changes and self-care activities related to blood sugar monitoring. The predominance of bonding social capital, the scant level of trust and the negative perception of local health services result in a low level of social cohesion, a limited circulation of health information on diabetes management and, consequently, in poor health outcomes.


2021 ◽  
Vol 47 (2) ◽  
pp. 1-28
Author(s):  
Goran Flegar ◽  
Hartwig Anzt ◽  
Terry Cojean ◽  
Enrique S. Quintana-Ortí

The use of mixed precision in numerical algorithms is a promising strategy for accelerating scientific applications. In particular, the adoption of specialized hardware and data formats for low-precision arithmetic in high-end GPUs (graphics processing units) has motivated numerous efforts aiming at carefully reducing the working precision in order to speed up the computations. For algorithms whose performance is bound by memory bandwidth, the idea of compressing their data before (and after) memory accesses has received considerable attention. One idea is to store an approximate operator, such as a preconditioner, in lower than working precision, ideally without impacting the algorithm's output. We realize the first high-performance implementation of an adaptive precision block-Jacobi preconditioner which selects the precision format used to store the preconditioner data on-the-fly, taking into account the numerical properties of the individual preconditioner blocks. We implement the adaptive block-Jacobi preconditioner as production-ready functionality in the Ginkgo linear algebra library, considering not only the precision formats that are part of the IEEE standard, but also customized formats which optimize the length of the exponent and significand to the characteristics of the preconditioner blocks. Experiments run on a state-of-the-art GPU accelerator show that our implementation offers attractive runtime savings.
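The per-block precision selection can be pictured as bounding the rounding error a block can tolerate by its conditioning. A minimal sketch, assuming a simple rule of the form "unit roundoff times condition number must stay below a tolerance" (Ginkgo's actual heuristic is more refined and also uses non-IEEE custom formats; the rule and tolerance below are illustrative):

```python
import numpy as np

def pick_block_precision(block, tol=1e-3):
    """Toy precision selector for a block-Jacobi preconditioner: store a
    well-conditioned block in low precision, a badly conditioned one in
    higher precision. Illustrative only; not Ginkgo's actual heuristic."""
    kappa = np.linalg.cond(block)
    # Rough error budget: unit roundoff * condition number below tol.
    if kappa * np.finfo(np.float16).eps < tol:
        return np.float16
    if kappa * np.finfo(np.float32).eps < tol:
        return np.float32
    return np.float64

well = np.eye(3)                   # condition number 1: half precision is fine
ill = np.diag([1.0, 1e-8, 1.0])    # condition number 1e8: keep full precision
print(pick_block_precision(well), pick_block_precision(ill))
```

Because block-Jacobi is memory-bandwidth bound, halving the bytes stored per block directly reduces the data moved per preconditioner application.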


2021 ◽  
Author(s):  
Jifa Zhang ◽  
Yuan Jiang ◽  
Leah F Easterling ◽  
Anton Anster ◽  
Wanru Li ◽  
...  

Organosolv treatment is an efficient and environmentally friendly process to degrade lignin into small compounds. The capability of characterizing the individual compounds in the complex mixtures formed upon organosolv treatment...


Agriculture ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 651
Author(s):  
Shengyi Zhao ◽  
Yun Peng ◽  
Jizhan Liu ◽  
Shuo Wu

Crop disease diagnosis is of great significance to crop yield and agricultural production, and deep learning methods have become the main research direction for diagnosing crop diseases. This paper proposes a deep convolutional neural network that integrates an attention mechanism, which can better adapt to the diagnosis of a variety of tomato leaf diseases. The network structure mainly includes residual blocks and attention extraction modules, and the model can accurately extract the complex features of various diseases. Extensive comparative experiments show that the proposed model achieves an average identification accuracy of 96.81% on the tomato leaf diseases dataset, and that it has significant advantages in terms of network complexity and real-time performance compared with other models. Moreover, in a model comparison experiment on the public grape leaf diseases dataset, the proposed model also achieves better results, with an average identification accuracy of 99.24%. This confirms that adding the attention module extracts the complex features of a variety of diseases more accurately while using fewer parameters. The proposed model provides a high-performance solution for crop diagnosis in real agricultural environments.
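The abstract does not specify the attention module's design, but a common lightweight choice in such networks is channel attention of the squeeze-and-excitation kind: pool each channel, pass the result through a small bottleneck network, and rescale the channels. A minimal NumPy sketch of that general idea (the weights and shapes here are arbitrary stand-ins, not the paper's architecture):

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Toy squeeze-and-excitation style channel attention: global-average-
    pool each channel, run a two-layer bottleneck, and gate the channels
    with sigmoid weights in (0, 1). Illustrative only."""
    c = feature_map.shape[0]
    squeeze = feature_map.reshape(c, -1).mean(axis=1)   # (C,) global pooling
    hidden = np.maximum(0.0, w1 @ squeeze)              # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))        # per-channel weights
    return feature_map * gates[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))      # C=4 channels, 8x8 feature map
w1 = rng.standard_normal((2, 4))        # squeeze 4 channels to 2 hidden units
w2 = rng.standard_normal((4, 2))        # expand back to 4 channel gates
y = channel_attention(x, w1, w2)
print(y.shape)  # → (4, 8, 8)
```

Because the gates lie in (0, 1), the module only reweights channels; it adds very few parameters relative to the convolutional backbone, consistent with the parameter-efficiency claim above.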

