The Hilbert Transform in Analysis of Uterine Contraction Activity

2015 ◽  
Vol 43 (1) ◽  
pp. 61-72 ◽  
Author(s):  
Marta Borowska ◽  
Ewelina Brzozowska ◽  
Edward Oczeretko

Abstract Prevention and early diagnosis of forthcoming preterm labor is of vital importance in reducing child mortality. To date, our understanding of the coordination of uterine contractions is incomplete. Among the many methods of recording uterine contractility, electrohysterography (EHG) – the recording of changes in electrical potential associated with contraction of the uterine muscle – seems to be the most important from a diagnostic point of view. There is some controversy regarding whether EHG can identify patients at high risk of preterm delivery, so various digital signal processing techniques need to be evaluated for describing the recorded signals. The study of synchronization of multivariate signals is important from both a theoretical and a practical point of view. Application of the Hilbert transform seems very promising.

2018 ◽  
Vol 8 (12) ◽  
pp. 2569 ◽  
Author(s):  
David Luengo ◽  
David Meltzer ◽  
Tom Trigano

The electrocardiogram (ECG) was the first biomedical signal for which digital signal processing techniques were extensively applied. By its own nature, the ECG is typically a sparse signal, composed of regular activations (QRS complexes and other waveforms, such as the P and T waves) and periods of inactivity (corresponding to isoelectric intervals, such as the PQ or ST segments), plus noise and interferences. In this work, we describe an efficient method to construct an overcomplete and multi-scale dictionary for sparse ECG representation using waveforms recorded from real-world patients. Unlike most existing methods (which require multiple alternating iterations of the dictionary-learning and sparse-representation stages), the proposed approach learns the dictionary first, and then applies a fast sparse inference algorithm to model the signal using the constructed dictionary. As a result, our method is much more efficient from a computational point of view than other existing algorithms, thus becoming amenable to dealing with long recordings from multiple patients. Regarding the dictionary construction, we first located all the QRS complexes in the training database, then computed a single average waveform per patient, and finally selected the most representative waveforms (using a correlation-based approach) as the basic atoms that were resampled to construct the multi-scale dictionary. Simulations on real-world records from Physionet’s PTB database show the good performance of the proposed approach.
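The final dictionary-construction step described above, resampling per-patient average waveforms to several scales, can be sketched as follows. This is a hypothetical illustration with synthetic Gaussian atoms standing in for real averaged QRS complexes; the scales, atom length, and helper names are invented, not the authors' code:

```python
# Hypothetical sketch: build a multi-scale dictionary by resampling
# each (here synthetic) average waveform to several lengths.
import numpy as np

def resample(atom, new_len):
    """Linear-interpolation resampling of a 1-D atom."""
    old = np.linspace(0.0, 1.0, len(atom))
    new = np.linspace(0.0, 1.0, new_len)
    return np.interp(new, old, atom)

def build_dictionary(atoms, scales, target_len):
    """Resample every atom to each scale, zero-pad to target_len,
    and L2-normalize each column."""
    cols = []
    for a in atoms:
        for s in scales:
            col = np.zeros(target_len)
            col[:s] = resample(a, s)
            col /= np.linalg.norm(col)
            cols.append(col)
    return np.stack(cols, axis=1)   # shape: (target_len, n_atoms * n_scales)

# One synthetic "average QRS" per patient.
t = np.linspace(-1, 1, 64)
patient_atoms = [np.exp(-t**2 / w) for w in (0.05, 0.1)]
D = build_dictionary(patient_atoms, scales=[32, 48, 64], target_len=64)
print(D.shape)   # (64, 6)
```

With the dictionary fixed, any standard sparse-coding routine (e.g. matching pursuit or LASSO) can then model a recording against its columns, which is what makes the one-pass scheme cheap compared to alternating dictionary-learning methods.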



1967 ◽  
Vol 7 (3) ◽  
pp. 416-420
Author(s):  
Arthur MacEwan

These books are numbers 4 and 5, respectively, in the series "Studies in the Economic Development of India". The two books are interesting complements to one another, both being concerned with the analysis of projects within national plan formulation. However, they treat different sorts of problems and do so on very different levels. Marglin's Public Investment Criteria is a short treatise on the problems of cost-benefit analysis in an Indian-type economy, i.e., a mixed economy in which the government accepts a large planning responsibility. The book, which is wholly theoretical, explains the many criteria needed for the evaluation of projects. The work is aimed at beginning students and government officials with some training in economics. It is a clear and interesting "introduction to the special branch of economics that concerns itself with systematic analysis of investment alternatives from the point of view of a government".


Morphology ◽  
2021 ◽  
Author(s):  
Rossella Varvara ◽  
Gabriella Lapesa ◽  
Sebastian Padó

Abstract We present the results of a large-scale corpus-based comparison of two German event nominalization patterns: deverbal nouns in -ung (e.g., die Evaluierung, ‘the evaluation’) and nominal infinitives (e.g., das Evaluieren, ‘the evaluating’). Among the many available event nominalization patterns for German, we selected these two because they are both highly productive and challenging from the semantic point of view. Both patterns are known to keep a tight relation with the event denoted by the base verb, but with different nuances. Our study targets a better understanding of the differences in their semantic import. The key notion of our comparison is that of semantic transparency, and we propose a usage-based characterization of the relationship between derived nominals and their bases. Using methods from distributional semantics, we bring to bear two concrete measures of transparency which highlight different nuances: the first, cosine, detects nominalizations which are semantically similar to their bases; the second, distributional inclusion, detects nominalizations which are used in a subset of the contexts of the base verb. We find that only the inclusion measure helps in characterizing the difference between the two types of nominalizations, in relation to the traditionally considered variable of relative frequency (Hay, 2001). Finally, the distributional analysis allows us to frame our comparison in the broader coordinates of the inflection vs. derivation cline.
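The two transparency measures contrasted above can be illustrated on toy co-occurrence vectors. A minimal sketch, assuming invented count vectors over six contexts and a simple inclusion score (the share of the base verb's feature mass falling in contexts the nominalization also appears in, in the spirit of precision-style inclusion measures; the exact measure used in the study may differ):

```python
# Hypothetical sketch: cosine similarity vs. a distributional-inclusion
# score, computed on invented co-occurrence counts.
import numpy as np

def cosine(u, v):
    """Standard cosine similarity between two count vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def inclusion(noun, verb):
    """Fraction of the base verb's context mass found in contexts
    where the nominalization also occurs (asymmetric by design)."""
    shared = verb[noun > 0].sum()
    return float(shared / verb.sum())

# Toy co-occurrence counts over 6 context words.
verb = np.array([4.0, 3.0, 2.0, 1.0, 1.0, 1.0])   # base verb
noun = np.array([2.0, 1.5, 1.0, 0.0, 0.0, 0.0])   # derived nominal

print(round(cosine(noun, verb), 3))
print(round(inclusion(noun, verb), 3))   # 0.75: noun covers 9/12 of verb mass
```

Note the asymmetry: cosine treats the two vectors symmetrically, while inclusion asks specifically whether the nominal's contexts form a subset of the verb's, which is why the two measures can disagree.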


2001 ◽  
Vol 11 (4) ◽  
pp. 311-321
Author(s):  
DN Carmichael ◽  
Michael Lye

Heart failure has been defined in many ways, and definitions change over time. The multiplicity of definitions reflects the paucity of our understanding of the primary underlying physiology of heart failure and the many diseases for which heart failure is the common end-point. Fundamentally, heart failure represents a failure of the heart to meet the body’s requirement for blood supply, for whatever reason. It is thus a clinical syndrome with characteristic features – not a single disease in its own right. The syndrome includes symptoms and signs of organ underperfusion, fluid retention and neuroendocrine activation. It arises from a range of possible causes, of which ischaemic heart disease is the commonest. From the point of view of a clinician, the underlying pathology will determine treatment options and prognosis. The extensive range of possible aetiologies presents a diagnostic challenge: both to correctly identify the syndrome amongst all other causes of dyspnoea and to identify the aetiology, allowing optimization of treatment.


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Zhehuang Huang ◽  
Yidong Chen

Exon recognition – the identification of the exon regions of a DNA sequence – is a fundamental task in bioinformatics. Currently, exon recognition algorithms based on digital signal processing techniques are widely used. Unfortunately, these methods require many calculations, resulting in low recognition efficiency. To overcome this limitation, a two-stage exon recognition model is proposed and implemented in this paper, with three main contributions. Firstly, we use a synergetic neural network to rapidly determine initial exon intervals. Secondly, an adaptive sliding window is used to accurately discriminate the final exon intervals. Finally, parameter optimization based on the artificial fish swarm algorithm is used to determine species-specific thresholds and the corresponding adjustment parameters of the adaptive windows. Experimental results show that the proposed model performs well for exon recognition and offers a practical solution that may extend to other recognition tasks.
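The DSP-based baseline this abstract builds on is typically the period-3 property: coding regions show elevated spectral energy at frequency 1/3 in the four base-indicator sequences, scanned with a sliding window. A minimal sketch of that classic measure (the window size, step, and test sequence are illustrative; this is not the paper's two-stage model):

```python
# Hypothetical sketch: sliding-window period-3 energy, the classic DSP
# measure for exon-like regions in a DNA sequence.
import numpy as np

def period3_energy(seq, window=33, step=3):
    """For each window position, sum the spectral energy at frequency 1/3
    over the four A/C/G/T binary indicator sequences."""
    indicators = np.array([[1.0 if b == base else 0.0 for b in seq]
                           for base in "ACGT"])
    # Complex exponential at frequency 1/3 over one window.
    k = np.exp(-2j * np.pi * (1.0 / 3.0) * np.arange(window))
    scores = []
    for start in range(0, len(seq) - window + 1, step):
        win = indicators[:, start:start + window]
        scores.append(sum(abs(win[i] @ k) ** 2 for i in range(4)))
    return np.array(scores)

# A run with strong period-3 structure followed by a flat homopolymer run.
seq = "ATG" * 20 + "A" * 60
scores = period3_energy(seq)
print(scores[0] > scores[-1])   # period-3 region scores far higher
```

Scanning every window this way is exactly the kind of exhaustive computation the proposed two-stage model seeks to avoid: a fast first stage narrows down candidate intervals so the finer (and costlier) discrimination only runs where needed.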

