Pacing Electrocardiogram Detection With Memory-Based Autoencoder and Metric Learning

2021 ◽  
Vol 12 ◽  
Author(s):  
Zhaoyang Ge ◽  
Huiqing Cheng ◽  
Zhuang Tong ◽  
Lihong Yang ◽  
Bing Zhou ◽  
...  

Remote ECG diagnosis has been widely used in the clinical ECG workflow. For patients with pacemakers in particular, where only limited information about the patient's medical history is available, doctors need to determine whether the patient has a pacemaker and also diagnose other abnormalities. An automatic pacing ECG detection method can help cardiologists reduce their workload and the rate of misdiagnosis. In this paper, we propose a novel autoencoder framework that detects pacing ECG among remote ECG recordings. First, we add a memory module to the traditional autoencoder. The memory module records and queries the typical features of the pacing ECG training data. The framework does not feed the encoder features directly into the decoder but instead uses them to retrieve the most relevant items in the memory module. During training, the memory items are updated to represent the latent features of the input pacing ECG. During detection, the decoder reconstructs the data from the fused features retrieved from the memory module, so the reconstruction tends to stay close to a pacing ECG. Meanwhile, we introduce an objective function based on the idea of metric learning. In the context of pacing ECG detection, the error of this objective function between the input data and the reconstructed data serves as the detection indicator: if the input does not belong to the pacing ECG class, the objective function yields a large error. Furthermore, we introduce a new pacing ECG database comprising 800 patients with a total of 8,000 heartbeats. Experimental results demonstrate that our method achieves an average F1-score of 0.918. To further validate the generalization of the proposed method, we also experiment on the widely used MIT-BIH arrhythmia database.
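
To make the memory-module idea concrete, the sketch below (NumPy) shows a soft-attention read over learned pacing-ECG prototypes and the reconstruction-error score used for detection. The function names, the softmax addressing, and the mean-squared error are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def memory_read(z, memory, temperature=1.0):
    """Soft-attention read over the memory module.

    z:      (d,) encoder feature of one heartbeat
    memory: (n_items, d) learned prototypes of pacing-ECG latent features
    Returns a fused feature that is passed to the decoder instead of z.
    """
    scores = memory @ z / temperature          # similarity to every memory item
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax addressing weights
    return weights @ memory                    # attention-weighted combination

def detection_score(x, encoder, decoder, memory):
    """Reconstruction error used as the detection indicator: a beat that is
    not a pacing ECG reconstructs poorly from pacing prototypes."""
    x_rec = decoder(memory_read(encoder(x), memory))
    return np.mean((x - x_rec) ** 2)
```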

2018 ◽  
Vol 140 (12) ◽  
Author(s):  
Khaldon T. Meselhy ◽  
G. Gary Wang

Reliability-based design optimization (RBDO) algorithms typically assume that the designer knows the objective function, including its explicit mathematical formula, and the probability distributions of the random design variables. These assumptions may not hold in many industrial cases where there is limited information on variable variability and the objective function is subjective, with no mathematical formula. A new methodology is developed in this research to model and solve problems with qualitative objective functions and limited information about random variables. Causal graphs and a design structure matrix are used to capture the designer's qualitative knowledge of the effects of design variables on the objective. Maximum entropy theory and Monte Carlo simulation are used to model the variability of the random variables and derive reliability constraint functions. A new optimization problem based on a meta-objective function and transformed deterministic constraints is formulated, whose solution lies close to the optimum of the original mathematical RBDO problem. The developed algorithm is tested and validated on the Golinski speed reducer design case. The results show that the algorithm finds a near-optimal reliable design with less initial information and less computational effort than other RBDO algorithms that assume full knowledge of the problem.
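
As a small illustration of the Monte Carlo step, the sketch below estimates a reliability constraint by sampling the random design variables and counting how often an assumed limit-state function is satisfied. The normal distributions, the `limit_state` convention, and the 0.99 target are placeholders, not the paper's settings.

```python
import numpy as np

def reliability(limit_state, means, stdevs, n_samples=100_000, rng=None):
    """Monte Carlo estimate of P(limit_state(x) <= 0), i.e. the probability
    that the design satisfies the constraint.

    limit_state: callable mapping samples of shape (n, d) to values where
                 g <= 0 means "safe" (placeholder convention).
    means, stdevs: assumed normal parameters for the random variables
                   (in the paper these come from maximum entropy fitting).
    """
    rng = rng or np.random.default_rng(0)
    x = rng.normal(means, stdevs, size=(n_samples, len(means)))
    return np.mean(limit_state(x) <= 0.0)

# Example: turn "reliability >= 0.99" into a deterministic constraint
# inside the transformed optimization problem.
ok = reliability(lambda x: x[:, 0] + 2 * x[:, 1] - 5.0,
                 means=[1.0, 1.5], stdevs=[0.1, 0.2]) >= 0.99
```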


2020 ◽  
Vol 34 (07) ◽  
pp. 12984-12992 ◽  
Author(s):  
Wentian Zhao ◽  
Xinxiao Wu ◽  
Xiaoxun Zhang

Generating stylized captions for images is a challenging task, since it requires not only describing the content of the image accurately but also expressing the desired linguistic style appropriately. In this paper, we propose MemCap, a novel stylized image captioning method that explicitly encodes knowledge about linguistic styles with a memory mechanism. Rather than relying heavily on a language model to capture style factors, as existing methods do, our method memorizes stylized elements learned from the training corpus. In particular, we design a memory module comprising a set of embedding vectors that encode style-related phrases in the training corpus. To acquire the style-related phrases, we develop a sentence decomposing algorithm that splits a stylized sentence into a style-related part, which reflects the linguistic style, and a content-related part, which contains the visual content. When generating captions, MemCap first extracts content-relevant style knowledge from the memory module via an attention mechanism and then incorporates the extracted knowledge into a language model. Extensive experiments on two stylized image captioning datasets (SentiCap and FlickrStyle10K) demonstrate the effectiveness of our method.
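
The sentence decomposing step can be pictured with a very naive stand-in: words that match detected visual concepts form the content-related part, and the remainder is treated as style-related. The heuristic and the `visual_concepts` input are assumptions for illustration only; the paper's algorithm is more involved.

```python
def decompose(sentence, visual_concepts):
    """Split a stylized sentence into a content-related part (words matching
    detected visual concepts) and a style-related part (everything else).
    `visual_concepts` is assumed to come from an object/attribute detector.
    """
    content, style = [], []
    for word in sentence.lower().split():
        token = word.strip(".,!?")
        (content if token in visual_concepts else style).append(token)
    return content, style

content, style = decompose("What a lovely dog playing in the park!",
                           visual_concepts={"dog", "park"})
# content -> ['dog', 'park']
# style   -> ['what', 'a', 'lovely', 'playing', 'in', 'the']
```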


Author(s):  
Zhedong Zheng ◽  
Yi Yang

This work focuses on the unsupervised scene adaptation problem of learning from both labeled source data and unlabeled target data. Existing approaches focus on minimizing the inter-domain gap between the source and target domains, while the intra-domain knowledge and the inherent uncertainty learned by the network remain under-explored. In this paper, we propose an orthogonal method, called memory regularization in vivo, to exploit the intra-domain knowledge and regularize the model training. Specifically, we treat the segmentation model itself as the memory module and minimize the discrepancy between its two classifiers, i.e., the primary classifier and the auxiliary classifier, to reduce prediction inconsistency. Without extra parameters, the proposed method is complementary to most existing domain adaptation methods and can generally improve their performance. Albeit simple, we verify the effectiveness of memory regularization on two synthetic-to-real benchmarks, GTA5 → Cityscapes and SYNTHIA → Cityscapes, yielding +11.1% and +11.3% mIoU improvement over the baseline model, respectively. A similar +12.0% mIoU improvement is observed on the cross-city benchmark Cityscapes → Oxford RobotCar.
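
A minimal sketch of the regularization term, assuming a symmetric KL divergence between the per-pixel predictions of the two classifier heads (the paper's exact discrepancy measure may differ):

```python
import numpy as np

def memory_regularization(p_primary, p_auxiliary, eps=1e-8):
    """Penalize prediction inconsistency between the primary and auxiliary
    classifiers on unlabeled target images.

    p_primary, p_auxiliary: (H, W, C) per-pixel class probabilities.
    Returns a scalar added to the segmentation loss during training.
    """
    p, q = p_primary + eps, p_auxiliary + eps
    kl_pq = np.sum(p * np.log(p / q), axis=-1)   # KL(p || q) per pixel
    kl_qp = np.sum(q * np.log(q / p), axis=-1)   # KL(q || p) per pixel
    return np.mean(kl_pq + kl_qp)
```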


2019 ◽  
Author(s):  
Bernd Porr ◽  
Luis Howell

The R peak detection of an ECG signal is the basis of virtually any further processing, and any error in this detection propagates to later processing stages. Despite this, R peak detection algorithms and annotated databases often allow large error tolerances of around 10%, masking any error introduced. In this paper we revisit popular ECG R peak detection algorithms and apply sample-precision error margins. For this purpose we have created a new open-access ECG database with sample-precision labelling of both the standard Einthoven I, II, III leads and a chest strap. Twenty-five subjects were recorded and filmed while sitting, solving a maths test, operating a handbike, walking and jogging. Our results show that, with a sample-precision error margin, common R peak detection algorithms perform much worse than previously reported. In addition, there are significant performance differences between detectors, which can have detrimental effects on applications such as heart rate variability and lead to meaningless results.
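
The kind of sample-precision evaluation argued for here can be sketched as nearest-match scoring with a tolerance of a few samples; the matching rule and tolerance below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def match_peaks(detected, reference, tolerance_samples=3):
    """Score a detector against sample-precision annotations: a detection is
    a true positive only if it lies within a few samples of an unmatched
    reference R peak."""
    detected = np.asarray(detected)
    reference = np.asarray(reference)
    used = np.zeros(len(reference), dtype=bool)
    tp = 0
    for d in detected:
        if len(reference) == 0:
            break
        idx = np.argmin(np.abs(reference - d))          # closest annotation
        if not used[idx] and abs(int(reference[idx]) - int(d)) <= tolerance_samples:
            used[idx] = True
            tp += 1
    fp, fn = len(detected) - tp, len(reference) - tp
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, ppv
```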


Author(s):  
Александр Вячеславович Пролубников

This paper surveys approaches to solving discrete optimization problems with an interval objective function. These approaches are considered in the general context of research on optimization problems with uncertainties in their formulation. Variants of optimality concepts for discrete optimization problems with an interval objective function are presented: robust solutions, Pareto-optimal solution sets, weak and strong optimal solutions, united solution sets, and others. The preferability of choosing one or another optimality concept is assessed, and limitations on the applicability of the approaches that use them are noted.

Optimization problems with uncertainties in their input data have been investigated by many researchers in different directions. There are many sources of uncertainty in the input data of applied problems; inaccurate measurements and the variability of parameters over time are among them. For a wide range of applied problems, an interval of possible values of an uncertain parameter is the natural, and often the only possible, way to represent the uncertainty. We consider discrete optimization problems with interval uncertainties in their objective functions. The purpose of the paper is to provide an overview of the investigations in this field, given in the overall context of research on optimization problems with uncertainties. We review the interval approaches for discrete optimization problems with an interval objective function. The approaches we consider operate on interval values and focus on obtaining possible solutions, or certain sets of solutions, that are optimal according to the optimality concepts those approaches use. We consider different concepts of optimality: robust solutions, Pareto sets, weak and strong solutions, united solution sets, and sets of possible approximate solutions corresponding to possible values of the uncertain parameters. All the approaches we consider allow the absence of information on the probability distribution over the intervals of possible parameter values, though some of them may use such information to evaluate the probabilities of possible solutions, the distribution over the interval of possible objective function values for a solution, and so on. We assess the possibilities and limitations of the considered approaches.
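
For minimization with interval objective values, the weak and strong notions can be illustrated by simple interval comparisons; the function names below are illustrative, not standard terminology from the surveyed works.

```python
def strongly_better(a, b):
    """Interval a = (lo, hi) is certainly no worse than interval b
    (minimization): a's worst case beats b's best case."""
    return a[1] <= b[0]

def possibly_better(a, b):
    """a may be better than b for some realization of the uncertain data:
    a's best case beats b's worst case."""
    return a[0] <= b[1]

# Example: f(x1) in [3, 5], f(x2) in [6, 9] -> x1 strongly dominates x2.
print(strongly_better((3, 5), (6, 9)), possibly_better((6, 9), (3, 5)))
# True False
```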


1981 ◽  
Vol 45 (3) ◽  
pp. 17-37 ◽  
Author(s):  
Paul E. Green ◽  
J. Douglas Carroll ◽  
Stephen M. Goldberg

This paper describes some of the features of POSSE (Product Optimization and Selected Segment Evaluation), a general procedure for optimizing product/service designs in marketing research. The approach uses input data based on conjoint analysis methods. The output of consumer choice simulators is modeled by means of response surface techniques and optimized by different sets of procedures, depending upon the nature of the objective function.


Author(s):  
Baida Hamdan ◽  
Davood Zabihzadeh

Similarity/distance measures play a key role in many machine learning, pattern recognition, and data mining algorithms, which has led to the emergence of the metric learning field. Many metric learning algorithms learn a global distance function from data that satisfies the constraints of the problem. However, in many real-world datasets, where the discriminative power of features varies across different regions of the input space, a global metric is often unable to capture the complexity of the task. To address this challenge, local metric learning methods have been proposed that learn multiple metrics across different regions of the input space. These methods offer high flexibility and can learn a nonlinear mapping, but typically at the expense of higher time requirements and a greater risk of overfitting. To overcome these challenges, this research presents an online multiple metric learning framework. Each metric in the proposed framework is composed of a global and a local component learned simultaneously. Adding a global component to a local metric effectively reduces overfitting. The proposed framework is also scalable in both the sample size and the dimension of the input data. To the best of our knowledge, this is the first local online similarity/distance learning framework based on Passive/Aggressive (PA) learning. In addition, for scalability with the dimension of the input data, Dual Random Projection (DRP) is extended to local online learning in the present work. This enables our methods to run efficiently on high-dimensional datasets while maintaining their predictive performance. The proposed framework provides a straightforward local extension of any global online similarity/distance learning algorithm based on PA. Experimental results on several challenging datasets from the machine vision community confirm that the extended methods considerably enhance the performance of the corresponding global ones without increasing the time complexity.
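
A rough sketch of one Passive/Aggressive step for a bilinear similarity composed of a global and a local component is given below; the triplet hinge loss and the choice to update only the local part are assumptions for illustration, not the exact update of the proposed framework.

```python
import numpy as np

def pa_similarity_update(W_global, W_local, x, x_pos, x_neg, C=1.0):
    """One PA step for the similarity s(a, b) = a^T (W_global + W_local) b.
    Enforces s(x, x_pos) >= s(x, x_neg) + 1 by correcting the local metric
    of the region that x falls into."""
    W = W_global + W_local
    loss = max(0.0, 1.0 - x @ W @ x_pos + x @ W @ x_neg)   # triplet hinge loss
    if loss > 0.0:
        grad = np.outer(x, x_pos - x_neg)                  # ascent direction for s
        tau = min(C, loss / (np.linalg.norm(grad) ** 2 + 1e-12))
        W_local = W_local + tau * grad                     # aggressive correction
    return W_local
```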


Geophysics ◽  
1999 ◽  
Vol 64 (2) ◽  
pp. 552-563
Author(s):  
Scott C. Hornbostel

Predictive deconvolution filters are designed to remove as much predictable energy as possible from the input data. It is generally understood that temporally correlated geology can cause problems for these filters. It is perhaps less well appreciated that uncorrelated random noise can also severely affect filter performance. The root of these problems is in the objective function being minimized; in addition to minimizing predictable multiple energy, the filter is attempting to simultaneously minimize the temporally correlated geology and the random‐noise energy. Instead of minimizing the input trace energy, an alternative objective function for minimization can be defined that is the result of a linear operator acting on the input data. Ideally this alternative objective function contains only the targeted noise (e.g., multiples). The linear operator that creates this objective function is designated as the “noise‐optimized objective” (NOO) operator. The filter that minimizes this new objective function is the NOO filter. Useful NOO operators for multiple suppression are those that maximize multiple energy and/or minimize primary or random noise energy in the data. Examples of such linear operators include stacking, bandpass filtering, dip filtering, and muting or scaling. Simply scaling down the primary‐containing portion of the objective function can address the problematic removal of correlated geology. Stacking can also be a useful NOO operator. By minimizing the predictable energy on a stacked trace, the prestack filters are less affected by random noise. The NOO stacking method differs from a standard poststack filter design because the filters are designed to be applied prestack. Further, this method differs from a standard prestack prediction filter because it minimizes the predictable energy on the stacked trace. The standard prestack filter has reduced multiple suppression because the filter must compromise between minimizing the multiple energy and minimizing the random noise energy. Minimizing the impact of random noise can be quite important in prediction filtering. At a signal‐to‐random‐noise ratio of one, for example, half the multiple remains after filtering. This random noise‐related degradation might help to explain the common observation that prediction filters tend to leave multiple energy in the data. A time‐varying gap implementation of a stacking NOO filter addresses these random noise effects while also addressing data aperiodicity issues.
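
A hedged sketch of the stacking variant: the gapped prediction filter is designed from the stacked trace (where random noise is attenuated) and then applied prestack. The parameter values, prewhitening, and Toeplitz solve below are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def noo_stack_filter(gather, gap=8, nfilt=32):
    """Design a gapped prediction filter from the stacked trace (the stacking
    NOO operator) and subtract the predicted, multiple-dominated energy from
    each prestack trace.

    gather: (n_traces, n_samples) NMO-corrected CMP gather, assumed long
            enough that n_samples >= gap + nfilt.
    """
    stacked = gather.mean(axis=0)                          # stacking = NOO operator
    n = len(stacked)
    r = np.correlate(stacked, stacked, mode="full")[n - 1:]  # autocorrelation, lags 0..n-1
    c = r[:nfilt].copy()
    c[0] *= 1.001                                          # light prewhitening
    f = solve_toeplitz(c, r[gap:gap + nfilt])              # normal equations for the filter
    out = np.empty_like(gather)
    for i, trace in enumerate(gather):
        pred = np.convolve(trace, f)[:n - gap]             # predictable (multiple) part
        out[i] = trace - np.concatenate([np.zeros(gap), pred])
    return out
```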

