A Study on User Recognition Using the Generated Synthetic Electrocardiogram Signal

Sensors, 2021, Vol 21 (5), pp. 1887
Author(s): Min-Gu Kim, Sung Bum Pan

Electrocardiogram (ECG) signals are time series data acquired over time. A practical problem is that comparison data of the same size as the registration data must be acquired at every authentication attempt. To resolve this data size inconsistency problem, a network model based on an auxiliary classifier generative adversarial network (AC-GAN) capable of generating synthetic ECG signals is proposed. After constructing comparison data from various combinations of real and generated synthetic ECG signal cycles, a user recognition experiment was performed by applying them to an ensemble network of parallel structure. A recognition performance of 98.5% was demonstrated when five cycles of real ECG signals were used. Moreover, accuracies of 98.7% and 97% were obtained when, in addition to four cycles of real ECG, the last cycle was filled with one cycle of synthetic ECG or with a repetition of the fourth real cycle, respectively. When two cycles of synthetic ECG signals were used with three cycles of real ECG signals, 97.2% accuracy was achieved. When the third real cycle was repeated together with the three cycles of real ECG signals, the accuracy was 96%, which is 1.2% lower than the performance obtained with the synthetic ECG. Therefore, even if the sizes of the registration data and the comparison data are inconsistent, the generated synthetic ECG signals can be applied in a real-life environment, because high recognition performance is demonstrated when they are applied to an ensemble network of parallel structure.
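The abstract does not give architectural details; as a rough, hedged illustration of the kind of auxiliary-classifier GAN it describes, the following PyTorch sketch defines a generator and a discriminator with an auxiliary user-classification head for fixed-length 1-D ECG cycles. The cycle length, number of users, and layer sizes are assumptions for illustration, not values from the paper.

```python
# Minimal AC-GAN sketch for 1-D ECG cycles (illustrative only; sizes are assumed).
import torch
import torch.nn as nn

CYCLE_LEN = 200      # assumed number of samples per ECG cycle
NUM_USERS = 10       # assumed number of enrolled users (class labels)
LATENT_DIM = 64

class Generator(nn.Module):
    """Maps (noise, user label) to a synthetic ECG cycle."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_USERS, LATENT_DIM)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM * 2, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, CYCLE_LEN), nn.Tanh(),
        )

    def forward(self, z, labels):
        x = torch.cat([z, self.embed(labels)], dim=1)
        return self.net(x)

class Discriminator(nn.Module):
    """Outputs a real/fake score and a user-class prediction (auxiliary classifier)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(CYCLE_LEN, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 128), nn.LeakyReLU(0.2),
        )
        self.adv_head = nn.Linear(128, 1)          # real vs. synthetic
        self.cls_head = nn.Linear(128, NUM_USERS)  # auxiliary user classifier

    def forward(self, x):
        h = self.features(x)
        return self.adv_head(h), self.cls_head(h)

# Example: generate one synthetic cycle for user 3.
gen = Generator()
z = torch.randn(1, LATENT_DIM)
fake_cycle = gen(z, torch.tensor([3]))
print(fake_cycle.shape)  # torch.Size([1, 200])
```

Training would alternate the usual adversarial loss with a cross-entropy loss on the auxiliary classifier head for both real and generated cycles.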

2011, Vol 14 (2), pp. 71-79
Author(s): Anh Tuan Duong

Time series data occur in many real-life applications, ranging from science and engineering to business. In many of these applications, it is often desirable to search a large time series database for sequences similar to a query sequence. Such similarity-based retrieval is also the basic subroutine in several advanced time series data mining tasks, such as clustering, classification, motif discovery, anomaly detection, rule discovery, and visualization. Although several different approaches have been developed, most are based on the common premise of dimensionality reduction combined with spatial access methods. This survey gives an overview of recent research and shows how the methods fit into a general framework of feature extraction.
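To make the dimensionality-reduction premise concrete, here is a minimal sketch (not from any specific surveyed method) that reduces each series with Piecewise Aggregate Approximation and retrieves the nearest neighbour of a query in the reduced space; the data and segment count are invented for illustration.

```python
# Minimal similarity search sketch: PAA reduction + Euclidean nearest neighbour.
import numpy as np

def paa(series, n_segments):
    """Piecewise Aggregate Approximation: mean of each (roughly) equal-length segment."""
    segments = np.array_split(series, n_segments)
    return np.array([seg.mean() for seg in segments])

rng = np.random.default_rng(0)
database = rng.standard_normal((1000, 256))   # 1000 toy series of length 256
query = database[42] + 0.1 * rng.standard_normal(256)

reduced_db = np.array([paa(s, 16) for s in database])  # 256 -> 16 dimensions
reduced_q = paa(query, 16)

# Search in the reduced space (in practice this index would be a spatial access
# method such as an R-tree; a linear scan keeps the sketch short).
best = np.argmin(np.linalg.norm(reduced_db - reduced_q, axis=1))
print("closest series:", best)  # likely 42, since the query is a noisy copy of series 42
```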


Today, with the enormous generation and availability of time series and streaming data, there is an increasing need for an automatic analysis architecture that yields fast interpretations and results. One significant potential of streaming analytics is to train a model for each stream with unsupervised machine learning (ML) algorithms to detect anomalous behaviors, fuzzy patterns, and accidents in real time. If executed reliably, such anomaly detection can be highly valuable for the application. In this paper, we propose a dynamic threshold setting system, denoted Thresh-Learner, mainly for Internet of Things (IoT) applications that require anomaly detection. The proposed model enables a wide range of real-life applications in which a dynamic threshold must be set over the streaming data to detect anomalies and accidents or to send alerts to distant monitoring stations. As a motivating case, we consider anomalies and accidents in coal mines caused by coal fires and explosions, which lead to loss of life because automated alarm systems are lacking. We propose Thresh-Learner as a general-purpose implementation for setting dynamic thresholds and illustrate it through a Smart Helmet for coal mine workers that seamlessly integrates monitoring, analysis, and dynamic thresholds using IoT and cloud-based analysis.
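The internals of Thresh-Learner are not reproduced here; the sketch below only illustrates the general idea of a dynamically updated threshold over a stream, using a sliding-window mean and standard deviation. The window size and the 3-sigma rule are assumptions.

```python
# Illustrative dynamic threshold over a stream: flag values above mean + 3*std
# of a sliding window (not the paper's Thresh-Learner algorithm).
from collections import deque
import math
import random

WINDOW = 100          # assumed sliding-window size
K = 3.0               # assumed number of standard deviations

window = deque(maxlen=WINDOW)

def update(value):
    """Return True if `value` exceeds the current dynamic threshold."""
    if len(window) >= 10:  # wait for a minimal history before flagging
        mean = sum(window) / len(window)
        var = sum((x - mean) ** 2 for x in window) / len(window)
        threshold = mean + K * math.sqrt(var)
        anomalous = value > threshold
    else:
        anomalous = False
    window.append(value)
    return anomalous

# Simulated sensor stream (e.g., gas concentration measured by a mining helmet).
random.seed(1)
for t in range(500):
    reading = random.gauss(20.0, 1.0) + (15.0 if t == 400 else 0.0)
    if update(reading):
        print(f"t={t}: anomaly, reading={reading:.1f}")
```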


2020, Vol 4 (2), pp. 173-194
Author(s): Yiyan Zhang

Abstract: While intermedia agenda-setting scholars have examined the process from a global perspective, trans-regional intermedia agenda setting, especially in non-western contexts, remains understudied. By analyzing time-series data of news coverage on air pollution, a non-political topic, from online news media in mainland China, Hong Kong, and Taiwan from 2015 to 2018, this study revealed a triangular first-level agenda-setting relationship among the three regions and identified changing agenda setters across years, which disproves the imperialistic stereotype of one-way control by mainland Chinese media. The study also revealed a significant yet unconventional moderating effect of the political stance of news organizations on the trans-regional information flow. This study contributes to the intermedia agenda-setting literature by introducing a method of controlling for the real-life situation in the Granger causality test and by showing that non-political issues can also be politicized in the salience-transfer process.
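For readers unfamiliar with the Granger causality test used in the study, the following sketch runs a generic two-series test with statsmodels on synthetic data; the lag order and the data are assumptions, and the study's control for the real-life situation is not reproduced.

```python
# Generic Granger causality sketch with statsmodels (synthetic data, assumed lag order).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 300
x = rng.standard_normal(n)                   # e.g., coverage volume in region A
y = np.zeros(n)                              # e.g., coverage volume in region B
for t in range(2, n):
    y[t] = 0.5 * x[t - 1] + 0.2 * y[t - 1] + rng.standard_normal()

data = pd.DataFrame({"y": y, "x": x})
# Tests whether the second column ("x") Granger-causes the first ("y").
results = grangercausalitytests(data[["y", "x"]], maxlag=2)
```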


2019, Vol 14, pp. 155892501988346
Author(s): Mine Seçkin, Ahmet Çağdaş Seçkin, Aysun Coşkun

Although textile production is heavily automation-based, it is still largely untouched territory with regard to Industry 4.0. When these developments are integrated into the textile sector, efficiency is expected to increase. A review of data mining and machine learning studies in the textile sector shows that enterprises rarely share production-process data because of commercial concerns and confidentiality. In this study, a method is presented for simulating a production process and performing regression on the resulting time series data with machine learning. The simulation was prepared for an annual production plan, and the corresponding faults were derived from information and production data obtained from a textile glove enterprise. The data set was applied to various supervised machine learning methods to compare their learning performance. The errors that occur in the production process were created using random parameters in the simulation. To verify the hypothesis that these errors can be forecast, various machine learning algorithms were trained on the data set in the form of time series. The variable representing the number of faulty products could be forecast very successfully, with the random forest algorithm demonstrating the highest success. Because these error values yielded high accuracy even in a simulation driven by uniformly distributed random parameters, highly accurate forecasts can be expected in real-life applications as well.
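As a minimal sketch of the kind of supervised regression described (forecasting a faulty-product count from time series), the example below trains scikit-learn's RandomForestRegressor on lagged values of a toy simulated series; the simulation and the lag window are assumptions, not the paper's setup.

```python
# Sketch: forecast a faulty-product count from its own lagged values
# with a random forest (toy simulated data, assumed 7-day lag window).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
days = 365
# Toy daily fault counts with a weekly pattern plus uniform noise.
faults = 50 + 10 * np.sin(2 * np.pi * np.arange(days) / 7) + rng.uniform(-5, 5, days)

LAGS = 7
X = np.array([faults[t - LAGS:t] for t in range(LAGS, days)])
y = faults[LAGS:]

split = int(0.8 * len(X))
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
print("MAE on held-out days:", mean_absolute_error(y[split:], pred))
```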


2021, pp. 1-20
Author(s): Fabian Kai-Dietrich Noering, Yannik Schroeder, Konstantin Jonas, Frank Klawonn

In technical systems, the analysis of similar situations is a promising technique to gain information about the system's state, health, or wear. Very often, situations cannot be defined in advance but need to be discovered as recurrent patterns within time series data of the system under consideration. This paper addresses the assessment of different approaches to discovering frequent variable-length patterns in time series. Because of the success of artificial neural networks (NNs) in various research fields, a particular focus of this work is the applicability of NNs to the problem of pattern discovery in time series. We therefore applied and adapted a convolutional autoencoder and compared it to classical non-learning approaches based on Dynamic Time Warping, on time series discretization, and on the Matrix Profile. These non-learning approaches were also adapted to fulfill our requirements, such as the discovery of potentially time-scaled patterns in noisy time series. We evaluated the performance (quality, computing time, effort of parametrization) of these approaches in an extensive test with synthetic data sets. Additionally, transferability to other data sets was tested using real-life vehicle data. We demonstrated the ability of convolutional autoencoders to discover patterns in an unsupervised way. Furthermore, the tests showed that the autoencoder is able to discover patterns with a quality similar to that of the classical non-learning approaches.
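The adapted architecture from the paper is not reproduced here; as a rough illustration of a plain 1-D convolutional autoencoder of the kind compared in the study, here is a PyTorch sketch. The window length, channel counts, and the clustering step mentioned in the comments are assumptions.

```python
# Minimal 1-D convolutional autoencoder sketch for fixed-length subsequences
# (illustrative architecture; all sizes are assumed).
import torch
import torch.nn as nn

WIN = 128  # assumed subsequence length

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):           # x: (batch, 1, WIN)
        z = self.encoder(x)         # compressed representation
        return self.decoder(z)      # reconstruction

model = ConvAutoencoder()
x = torch.randn(8, 1, WIN)
recon = model(x)
print(recon.shape)  # torch.Size([8, 1, 128])
# Windows whose latent codes cluster together (e.g., via k-means on z) are
# candidates for recurrent patterns; high reconstruction error marks outliers.
```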


Location-based user data are generated every day in vast quantities, whether as GPS data from online cab services or as weather time series; such data are valuable to users in many ways and have been applied to many real-life applications such as location-targeted advertising, recommendation systems, crime-rate detection, and home trajectory analysis. To analyze these data and put them to productive use, a wide range of prediction models has been proposed and utilized over the years. A next-location prediction model uses these data and can be designed as a combination of two or more models and techniques, each with its own pros and cons. The aim of this document is to analyze and compare the various machine learning models and related experiments that can be applied to build better location prediction algorithms in the near future. The paper is organized to give readers insights and other noteworthy points and inferences from the papers surveyed. A summary table is presented to give a glimpse of the methods in depth, together with our added inferences and the data sets analyzed.
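To make the notion of a next-location prediction model concrete, here is a minimal first-order Markov predictor over discretized locations; it is an illustrative sketch, not a model from any surveyed paper, and the trajectory data are invented.

```python
# Minimal first-order Markov next-location predictor (illustrative only).
from collections import Counter, defaultdict

# Toy trajectory of discretized locations (e.g., grid cells or points of interest).
trajectory = ["home", "work", "gym", "home", "work", "cafe",
              "home", "work", "gym", "home"]

# Count transitions between consecutive locations.
transitions = defaultdict(Counter)
for cur, nxt in zip(trajectory, trajectory[1:]):
    transitions[cur][nxt] += 1

def predict_next(location):
    """Return the most frequent successor of `location`, or None if unseen."""
    counts = transitions.get(location)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("work"))  # 'gym' (2 of the 3 observed transitions from 'work')
```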


2021, Vol 27 (5), pp. 1250-1279
Author(s): Yong Qin, Zeshui Xu, Xinxin Wang, Marinko Škare, Małgorzata Porada-Rochoń

This work explores the relationship between financial cycles in the economy and in economic research. To this end, we take China as an empirical example and first perform an intuitive bibliometric analysis of selected terms concerning financial cycles in economic research. Both in the economy and in economic research, we then conduct singular spectrum analysis to isolate and describe the specific length and amplitude of financial cycles for China based on quarterly time-series data. Finally, using the estimated cycles of the financial and bibliometric variables, detrended with the Hodrick-Prescott filter, a Granger causality test scrutinizes the results of the first two steps. Moreover, a time-varying parameter vector autoregression model is estimated to quantitatively investigate the time-varying interaction between the financial and bibliometric variables. Our study shows that financial cycles have a strong effect on developments in the financial-related literature. In particular, the impulse intensity of the 2008 global financial crisis is significantly higher than in other periods. Surprisingly, discussions of financial cycles in the literature also have an impact on financial activities in real life. These findings contribute to nascent work on patterns in financial cycles, providing a new and effective insight into the interpretation of financial activities.
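As an illustration of the detrending step that precedes the Granger test in such an analysis, the sketch below extracts the cyclical components of two synthetic quarterly series with the Hodrick-Prescott filter in statsmodels and then runs a Granger causality test on them; the data and the smoothing parameter (1600, the conventional quarterly value) are assumptions.

```python
# Sketch: HP-filter detrending of two quarterly series, then a Granger test
# on the cyclical components (synthetic data; lambda=1600 is the usual quarterly choice).
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
quarters = 120
trend = np.linspace(0, 10, quarters)
credit = trend + np.sin(np.arange(quarters) / 8) + 0.2 * rng.standard_normal(quarters)
papers = trend + np.roll(np.sin(np.arange(quarters) / 8), 2) + 0.2 * rng.standard_normal(quarters)

credit_cycle, _ = hpfilter(credit, lamb=1600)   # hpfilter returns (cycle, trend)
papers_cycle, _ = hpfilter(papers, lamb=1600)

data = pd.DataFrame({"papers_cycle": papers_cycle, "credit_cycle": credit_cycle})
# Does the financial cycle Granger-cause the bibliometric cycle?
grangercausalitytests(data[["papers_cycle", "credit_cycle"]], maxlag=4)
```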


Symmetry, 2019, Vol 11 (7), pp. 912
Author(s): Yi, Bu, Kim

The concept of trend in data and a novel neural network method for the forecasting of upcoming time-series data are proposed in this paper. The proposed method extracts two data sets, the trend and the remainder, resulting in two separate learning sets for training. This method works sufficiently well even when only a simple recurrent neural network (RNN) is used. The proposed scheme is demonstrated to achieve better performance in selected real-life examples, compared to other averaging-based statistical forecast methods and other recurrent methods such as long short-term memory (LSTM).
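The paper's exact decomposition is not spelled out in this abstract; the sketch below illustrates the general idea with a moving-average trend, a remainder series, and a plain PyTorch RNN forecasting one step ahead for each component. The moving-average trend, window sizes, and training loop are assumptions.

```python
# Sketch of trend/remainder decomposition feeding two simple RNN forecasters
# (moving-average trend and window sizes are assumptions, not the paper's method).
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
t = np.arange(600, dtype=np.float32)
series = 0.01 * t + np.sin(t / 10) + 0.1 * rng.standard_normal(600).astype(np.float32)

# Decompose: centered moving average as trend, remainder as residual.
W = 21
kernel = np.ones(W, dtype=np.float32) / W
trend = np.convolve(series, kernel, mode="same")
remainder = series - trend

class OneStepRNN(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, window, 1)
        _, h = self.rnn(x)
        return self.out(h[-1])       # one-step-ahead forecast

def make_windows(values, window=30):
    X = np.stack([values[i:i + window] for i in range(len(values) - window)])
    y = values[window:]
    return torch.from_numpy(X).unsqueeze(-1), torch.from_numpy(y).unsqueeze(-1)

# Train one RNN per component; the final forecast is the sum of both predictions.
for name, component in [("trend", trend), ("remainder", remainder)]:
    X, y = make_windows(component)
    model = OneStepRNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(50):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    print(name, "final training MSE:", float(loss))
```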


Author(s): Chandrachur Bhattacharya, Asok Ray

Abstract: One of the pertinent problems in the decision and control of dynamical systems is to identify the current operational regime of the physical process under consideration. To this end, there has been an upsurge in (data-driven) machine learning methods, such as symbolic time series analysis, hidden Markov modeling, and artificial neural networks, which often rely on some form of supervised learning based on preclassified data to construct the classifier. However, this approach may not be adequate for dynamical systems with a variety of operational regimes and possible anomalous/failure conditions. To address this issue, this technical brief proposes a methodology, built upon the concept of symbolic time series analysis, wherein the classifier learns to discover the patterns so that the algorithms can train themselves online while simultaneously functioning as a classifier. The efficacy of the methodology is demonstrated on time series of (i) synthetic data from an unforced Van der Pol equation and (ii) pressure oscillation data from an experimental Rijke tube apparatus that emulates the thermoacoustics in real-life combustors, where the process dynamics undergoes changes from the stable regime to an unstable regime and vice versa via transitions through transient regimes. The underlying algorithms are capable of accurately learning and capturing the various regimes online in a (primarily) unsupervised manner.
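To make the symbolic time series analysis step concrete, the sketch below simulates an unforced Van der Pol oscillator, partitions the observable into a small symbol alphabet, and estimates the symbol transition matrix on which such classifiers typically operate; the alphabet size and quantile partitioning are illustrative assumptions, not the authors' exact construction.

```python
# Sketch: symbolization of a Van der Pol time series and its transition matrix
# (alphabet size and quantile partitioning are assumed, not the paper's exact scheme).
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu=2.0):
    x, v = state
    return [v, mu * (1 - x**2) * v - x]

sol = solve_ivp(van_der_pol, (0, 100), [1.0, 0.0], max_step=0.01)
signal = sol.y[0]

# Partition the signal range into 4 symbols using quantile boundaries.
ALPHABET = 4
edges = np.quantile(signal, np.linspace(0, 1, ALPHABET + 1)[1:-1])
symbols = np.digitize(signal, edges)          # values in {0, 1, 2, 3}

# Estimate the symbol transition matrix (row-stochastic); its entries or its
# stationary distribution serve as low-dimensional features for regime classification.
P = np.zeros((ALPHABET, ALPHABET))
for a, b in zip(symbols[:-1], symbols[1:]):
    P[a, b] += 1
P = P / P.sum(axis=1, keepdims=True)
print(np.round(P, 3))
```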


2015, Vol 3 (1), pp. 37-57
Author(s): Ernst Wit, Antonino Abbruzzo

Abstract: Dynamic network models describe many important scientific processes, from cell biology and epidemiology to sociology and finance. Estimating dynamic networks from noisy time series data is a difficult task, since the number of components involved in the system is very large. As a result, the number of parameters to be estimated is typically larger than the number of observations. However, a characteristic of many real-life networks is that they are sparse. For example, the molecular structure of genes makes interactions with other components a highly structured and, therefore, sparse process. Until now, the literature has focused on static networks, which lack specific temporal interpretations. We propose a flexible collection of ANOVA-like dynamic network models, in which the user can select specific time dynamics, known presence or absence of links, and a particular autoregressive structure. We use undirected graphical models with block equality constraints on the parameters. This reduces the number of parameters, increases the accuracy of the estimates, and makes the interpretation of the results more relevant. We illustrate the flexibility of the method on both synthetic and real data.
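The authors' ANOVA-like dynamic model with block equality constraints cannot be reconstructed from the abstract alone; as a loosely related illustration of estimating a sparse undirected network from multivariate data, the sketch below uses scikit-learn's GraphicalLasso on a static synthetic example. The variable count, dependence structure, and regularization strength are assumptions.

```python
# Loosely related sketch: sparse undirected network via graphical lasso
# (static estimate on synthetic data; not the authors' ANOVA-like dynamic model).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_samples, n_vars = 200, 10

# Synthetic data with a simple chain dependence between consecutive variables.
X = rng.standard_normal((n_samples, n_vars))
for j in range(1, n_vars):
    X[:, j] += 0.6 * X[:, j - 1]

model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_

# Nonzero off-diagonal entries of the precision matrix define edges of the network.
edges = [(i, j) for i in range(n_vars) for j in range(i + 1, n_vars)
         if abs(precision[i, j]) > 1e-4]
print(edges)
```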

