A De-Noising Method for Non-Stationary Signal

2011 ◽  
Vol 148-149 ◽  
pp. 309-312
Author(s):  
Yan Hai Shang

The paper proposed a de-noising method for non-stationary signals received by a small-size antenna array. By taking into consideration the correlation among the signals received by the antennae that compose the array, the discovery probability of signal coefficients in the time-frequency domain could be greatly improved by using an integration method for signal detection. Meanwhile, under the assumption of a Gaussian noise environment, the probability density functions of noise and signal after integration were also presented. In order to extract the signal coefficients and further reduce the Type I error, an algorithm based on the False Discovery Rate (FDR) was put forward. Finally, a comparison between the detection performances before and after integration was made: at the same Type I error rate, the detection performance after integration was improved significantly. The effectiveness of the method was also shown by experimental results.
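The abstract does not spell out the FDR procedure; the sketch below assumes a standard Benjamini-Hochberg step-up rule applied to per-coefficient p-values computed from a zero-mean Gaussian noise model for the post-integration coefficients. The noise model, function names, and toy data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy import stats

def fdr_select_coefficients(tf_coeffs, noise_sigma, alpha=0.05):
    """Select time-frequency coefficients via Benjamini-Hochberg FDR control.

    tf_coeffs   : 2-D array of (integrated) time-frequency coefficients
    noise_sigma : assumed standard deviation of the Gaussian noise after integration
    alpha       : target false discovery rate
    Returns a boolean mask of the same shape marking coefficients kept as signal.
    """
    flat = np.abs(tf_coeffs).ravel()
    # Two-sided p-value under the zero-mean Gaussian noise-only hypothesis.
    pvals = 2.0 * stats.norm.sf(flat / noise_sigma)

    m = pvals.size
    order = np.argsort(pvals)
    ranked = pvals[order]
    # Benjamini-Hochberg step-up: largest k with p_(k) <= (k / m) * alpha.
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        mask[order[:k + 1]] = True
    return mask.reshape(tf_coeffs.shape)

# Toy usage: a noisy time-frequency map with a small block of true signal.
rng = np.random.default_rng(0)
tf = rng.normal(0.0, 1.0, size=(64, 64))
tf[20:24, 30:34] += 5.0          # embedded signal coefficients
kept = fdr_select_coefficients(tf, noise_sigma=1.0, alpha=0.05)
print(kept.sum(), "coefficients retained")
```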

2021 ◽  
Vol 34 (1) ◽  
pp. 79-88
Author(s):  
Dean Radin ◽  
Helané Wahbeh ◽  
Leena Michel ◽  
Arnaud Delorme

An experiment we conducted from 2012 to 2013, which had not been previously reported, was designed to explore possible psychophysical effects resulting from the interaction of a human mind with a quantum system. Participants focused their attention toward or away from the slits in a double-slit optical system to see if the interference pattern would be affected. Data were collected from 25 people in individual half-hour sessions; each person repeated the test ten times for a total of 250 planned sessions. “Sham” sessions designed to mimic the experimental sessions without observers present were run immediately before and after as controls. Based on the planned analysis, no evidence for a psychophysical effect was found. Because this experiment differed in two essential ways from similar, previously reported double-slit experiments, two exploratory analyses were developed, one based on a simple spectral analysis of the interference pattern and the other based on fringe visibility. For the experimental data, the outcome supported a pattern of results predicted by a causal psychophysical effect, with the spectral metric resulting in a 3.4 sigma effect (p = 0.0003), and the fringe visibility metric resulting in 7 of 22 fringes tested above 2.3 sigma after adjustment for type I error inflation, with one of those fringes at 4.3 sigma above chance (p = 0.00001). The same analyses applied to the sham data showed uniformly null outcomes. Other analyses exploring the potential that these results were due to mundane artifacts, such as fluctuations in temperature or vibration, showed no evidence of such influences. Future studies using the same protocols and analytical methods will be required to determine if these exploratory results are idiosyncratic or reflect a genuine psychophysical influence.
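For reference, fringe visibility is the standard interferometric contrast measure V = (I_max - I_min) / (I_max + I_min). The short sketch below only illustrates that definition on a simulated interference pattern; it is not the authors' analysis pipeline, and the contrast value is an arbitrary assumption.

```python
import numpy as np

def fringe_visibility(intensity):
    """Classical fringe visibility V = (I_max - I_min) / (I_max + I_min)."""
    i_max, i_min = np.max(intensity), np.min(intensity)
    return (i_max - i_min) / (i_max + i_min)

# Simulated double-slit interference pattern with partial coherence.
x = np.linspace(-5, 5, 2000)              # detector coordinate (arbitrary units)
contrast = 0.8                            # assumed degree of coherence
pattern = 1.0 + contrast * np.cos(2 * np.pi * x)
print(f"visibility = {fringe_visibility(pattern):.3f}")
```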


2021 ◽  
Author(s):  
Behnaz Ghoraani

Most real-world signals are non-stationary, i.e., their statistics are time-variant. Extracting the time-varying frequency characteristics of a signal is very important for understanding the signal better, which can be of immense use in applications such as pattern recognition and automated decision-making systems. In order to extract meaningful time-frequency (TF) features, a joint TF analysis is required. The proposed work is an attempt to develop a generalized TF analysis methodology that exploits the benefits of the TF distribution (TFD) in pattern classification systems, as related to discriminant feature detection and classification. Our objective is to introduce a unique and efficient way of performing non-stationary signal analysis using adaptive and discriminant TF techniques. To fulfill this objective, we first build a novel TF matrix (TFM) decomposition that increases the effectiveness of segmentation in real-world signals. Instantaneous and unique features are extracted from each segment such that they successfully represent the joint TF structure of the signal. Second, based on this technique, two novel discriminant TF analysis methods are proposed to perform improved, discriminant feature selection for non-stationary signals. The first approach is a new machine learning method that identifies clusters of discriminant features to compute the presence of the discriminative pattern in a given signal and classify it accordingly. The second approach is a discriminant TFM (DTFM) framework, which combines TFM decomposition with discriminant clustering techniques. The developed DTFM analysis automatically identifies the differences between classes as the distinguishing structure and uses the identified structure to accurately classify and locate the discriminant structure in the signal. The theoretical properties of the proposed approaches pertaining to pattern recognition and detection are examined in this dissertation. The extracted TF features provide strong and successful characterization and classification of real and synthetic non-stationary signals. The proposed TF techniques facilitate the adaptation of TF quantification to any feature detection technique in automating the identification of discriminatory TF features, and can find applications in many different fields, including biomedical and multimedia signal processing.
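The abstract does not specify the TFM decomposition itself; the sketch below illustrates one common realisation of the idea, factorising a spectrogram magnitude matrix with non-negative matrix factorisation so that each component pairs a spectral signature with a temporal activation from which segment-level features can be read off. The function names, window settings, and choice of NMF are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import NMF

def tfm_decompose(signal, fs, n_components=3):
    """Decompose a time-frequency matrix into spectral bases and temporal activations."""
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=192)
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    W = model.fit_transform(Sxx)      # frequency bases (n_freq x n_components)
    H = model.components_             # temporal activations (n_components x n_time)
    return f, t, W, H

# Toy non-stationary signal: two tones active in different time segments.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
sig = np.where(t < 1.0, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 200 * t))
f, times, W, H = tfm_decompose(sig, fs)
print("dominant frequency of each component:", f[np.argmax(W, axis=0)])
```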


2021 ◽  
Author(s):  
Marko Njirjak ◽  
Erik Otović ◽  
Dario Jozinović ◽  
Jonatan Lerga ◽  
Goran Mauša ◽  
...  

The analysis of non-stationary signals is often performed on raw waveform data or on Fourier transformations of those data, i.e., spectrograms. However, the possibility that alternative time-frequency representations are more informative than spectrograms or the original data remains unstudied. In this study, we tested whether alternative time-frequency representations could be more informative for machine learning classification of seismic signals. This hypothesis was assessed by training three well-established convolutional neural networks, using nine different time-frequency representations, to classify seismic waveforms as earthquake or noise. The results were compared to the base model, which was trained on the raw waveform data. The signals used in the experiment were seismogram instances from the LEN-DB seismological dataset (Magrini et al. 2020). The results demonstrate that the Pseudo Wigner-Ville and Wigner-Ville time-frequency representations yield significantly better results than the base model, while Margenau-Hill performs significantly worse (P < .01). Interestingly, the spectrogram, which is often used in non-stationary signal analysis, did not yield statistically significant improvements. This research could have a notable impact in the field of seismology because signals that were previously hidden in the seismic noise can now be classified more accurately. Moreover, the results suggest that alternative time-frequency representations could be used in other fields that rely on non-stationary time series to extract more valuable information from the original data. Potential areas include various branches of geophysics, speech recognition, EEG and ECG signals, gravitational waves, and so on. This, however, requires further research.
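None of the nine representations is reproduced in the abstract; as a rough illustration of what a Wigner-Ville computation involves, the following is a minimal discrete (pseudo) Wigner-Ville sketch in plain NumPy. It is a conceptual example, not the toolbox or settings used in the study, and the frequency-axis scaling is left implicit.

```python
import numpy as np
from scipy.signal import chirp, hilbert

def wigner_ville(x):
    """Minimal discrete (pseudo) Wigner-Ville distribution of an analytic signal x.

    Returns an (N x N) real array: rows are time samples, columns frequency bins.
    """
    x = np.asarray(x, dtype=complex)
    n_samples = len(x)
    wvd = np.zeros((n_samples, n_samples))
    for n in range(n_samples):
        # Maximum lag is limited by the distance to either end of the signal.
        tau_max = min(n, n_samples - 1 - n, n_samples // 2 - 1)
        tau = np.arange(-tau_max, tau_max + 1)
        kernel = np.zeros(n_samples, dtype=complex)
        kernel[tau % n_samples] = x[n + tau] * np.conj(x[n - tau])
        # The instantaneous autocorrelation kernel is conjugate symmetric,
        # so its FFT is (numerically) real.
        wvd[n] = np.real(np.fft.fft(kernel))
    return wvd

# Toy example: a linear chirp, whose WVD concentrates along the
# instantaneous-frequency line.
fs = 256
t = np.arange(0, 1.0, 1 / fs)
sig = hilbert(chirp(t, f0=10, f1=100, t1=1.0))   # analytic signal
tfr = wigner_ville(sig)
print(tfr.shape)   # (256, 256)
```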


Author(s):  
Ewa Świercz

Classification in the Gabor time-frequency domain of non-stationary signals embedded in heavy noise with unknown statistical distribution

A new supervised classification algorithm for a heavily distorted pattern (shape) obtained from noisy observations of non-stationary signals is proposed in the paper. Based on the Gabor transform of 1-D non-stationary signals, 2-D shapes of the signals are formulated, and the classification formula is developed using the pattern matching idea, which is the simplest case of a pattern recognition task. In the pattern matching problem, where a set of known patterns creates predefined classes, classification relies on assigning the examined pattern to one of the classes. The classical formulation of a Bayes decision rule requires a priori knowledge of the statistical features characterising each class, which are rarely known in practice. In the proposed algorithm, the necessity of the statistical approach is avoided, especially since the probability distribution of the noise is unknown. In the algorithm, the concept of discriminant functions, represented by Frobenius inner products, is used. The classification rule relies on choosing the class corresponding to the maximum discriminant function. Computer simulation results are given to demonstrate the effectiveness of the new classification algorithm. It is shown that the proposed approach is able to correctly classify signals embedded in noise with a very low SNR. One of the goals here is to develop a pattern recognition algorithm as the best possible way to automatically make decisions. All simulations have been performed in Matlab. The proposed algorithm can be applied to non-stationary frequency-modulated signal classification and non-stationary signal recognition.
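The paper's simulations are in Matlab and its Gabor transform and class templates are not reproduced in the abstract; the sketch below stands in an STFT magnitude for the Gabor-domain 2-D shape and applies the max-of-Frobenius-inner-products decision rule described above. The signal classes, window parameters, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, chirp

def tf_pattern(x, fs):
    """2-D time-frequency 'shape' of a 1-D signal (STFT magnitude as a Gabor-like transform)."""
    _, _, Z = stft(x, fs=fs, nperseg=64, noverlap=48)
    mag = np.abs(Z)
    return mag / np.linalg.norm(mag)        # unit Frobenius norm

def classify(x, fs, templates):
    """Assign x to the class whose template maximises the Frobenius inner product."""
    p = tf_pattern(x, fs)
    scores = {label: np.sum(p * tmpl) for label, tmpl in templates.items()}
    return max(scores, key=scores.get), scores

# Two known classes: an up-chirp and a fixed tone; templates from clean signals.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
classes = {
    "chirp": chirp(t, f0=20, f1=200, t1=1.0),
    "tone":  np.sin(2 * np.pi * 100 * t),
}
templates = {k: tf_pattern(v, fs) for k, v in classes.items()}

# A heavily noisy observation of the chirp (low SNR).
rng = np.random.default_rng(1)
noisy = classes["chirp"] + 3.0 * rng.standard_normal(len(t))
label, scores = classify(noisy, fs, templates)
print(label, {k: round(v, 3) for k, v in scores.items()})
```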


2015 ◽  
Vol 32 (6) ◽  
pp. 850-858 ◽  
Author(s):  
Sangjin Kim ◽  
Paul Schliekelman

Abstract Motivation: The advent of high-throughput data has led to a massive increase in the number of hypothesis tests conducted in many types of biological studies and a concomitant increase in the stringency of significance thresholds. Filtering methods, which use independent information to eliminate less promising tests and thus reduce multiple testing, have been widely and successfully applied. However, key questions remain about how best to apply them: When is filtering beneficial and when is it detrimental? How good does the independent information need to be in order for filtering to be effective? How should one choose the filter cutoff that separates tests that pass the filter from those that do not? Results: We quantify the effect of the quality of the filter information, the filter cutoff and other factors on the effectiveness of the filter and show a number of results: If the filter has a high probability (e.g. 70%) of ranking true positive features highly (e.g. top 10%), then filtering can lead to a dramatic increase (e.g. 10-fold) in discovery probability when there is high redundancy in information between hypothesis tests. Filtering is less effective when there is low redundancy between hypothesis tests, and its benefit decreases rapidly as the quality of the filter information decreases. Furthermore, the outcome is highly dependent on the choice of filter cutoff. Choosing the cutoff without reference to the data will often lead to a large loss in discovery probability. However, naïve optimization of the cutoff using the data will lead to inflated type I error. We introduce a data-based method for choosing the cutoff that maintains control of the family-wise error rate via a correction factor to the significance threshold. Application of this approach offers as much as a several-fold advantage in discovery probability relative to no filtering, while maintaining type I error control. We also introduce a closely related method of P-value weighting that further improves performance. Availability and implementation: R code for calculating the correction factor is available at http://www.stat.uga.edu/people/faculty/paul-schliekelman. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
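The paper's correction factor and R implementation are not reproduced here; the sketch below only illustrates the basic filtering idea, removing tests with a low independent filter statistic and applying Bonferroni to the survivors, under an assumed toy data-generating model with redundant information between the filter and the primary tests.

```python
import numpy as np
from scipy import stats

def filtered_bonferroni(pvals, filter_stat, cutoff_quantile, alpha=0.05):
    """Keep only tests whose independent filter statistic exceeds a cutoff,
    then apply Bonferroni to the reduced set.

    pvals           : p-values of the primary hypothesis tests
    filter_stat     : independent filter statistic, higher = more promising
    cutoff_quantile : fraction of tests removed by the filter (e.g. 0.9 keeps top 10%)
    """
    cutoff = np.quantile(filter_stat, cutoff_quantile)
    passed = filter_stat >= cutoff
    m_passed = passed.sum()
    rejected = passed & (pvals <= alpha / max(m_passed, 1))
    return rejected

# Toy example: 10,000 tests, 50 true positives with both a signal in the
# primary test and a correlated (redundant) filter statistic.
rng = np.random.default_rng(2)
m, m1 = 10_000, 50
effect = np.zeros(m)
effect[:m1] = 4.0
z = rng.standard_normal(m) + effect
pvals = stats.norm.sf(z)                        # one-sided p-values
filter_stat = effect + rng.standard_normal(m)   # noisy but informative filter

hits = filtered_bonferroni(pvals, filter_stat, cutoff_quantile=0.9)
print("discoveries:", hits.sum(), "true positives among them:", hits[:m1].sum())
```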


2021 ◽  
Author(s):  
Ye Yue ◽  
Yijuan Hu

Abstract Background: Understanding whether, and which, microbes play a mediating role between an exposure and a disease outcome is essential for researchers to develop clinical interventions that treat the disease by modulating those microbes. Existing methods for mediation analysis of the microbiome are often limited to a global test of community-level mediation or to selection of mediating microbes without control of the false discovery rate (FDR). Further, while the null hypothesis of no mediation at each microbe is a composite null that consists of three types of null (no exposure-microbe association, no microbe-outcome association given the exposure, or neither association), most existing methods for the global test, such as MedTest and MODIMA, treat the microbes as if they were all under the same type of null. Results: We propose a new approach based on inverse regression that regresses the (possibly transformed) relative abundance of each taxon on the exposure and the exposure-adjusted outcome to assess the exposure-taxon and taxon-outcome associations simultaneously. The association p-values are then used to test mediation at both the community and individual taxon levels. This approach fits nicely into our Linear Decomposition Model (LDM) framework, so the new method is implemented in the LDM and enjoys all the features of the LDM, i.e., allowing an arbitrary number of taxa to be tested, supporting continuous, discrete, or multivariate exposures and outcomes as well as adjustment for confounding covariates, accommodating clustered data, and offering analysis at the relative abundance or presence-absence scale. We refer to this new method as LDM-med. Using extensive simulations, we showed that LDM-med always controlled the type I error of the global test and had compelling power over existing methods; LDM-med always preserved the FDR when testing individual taxa and had much better sensitivity than alternative approaches. In contrast, MedTest and MODIMA had severely inflated type I error when different taxa were under different types of null. The flexibility of LDM-med for a variety of mediation analyses is illustrated by an application to a murine microbiome dataset, which identified a plausible mediator. Conclusions: Inverse regression coupled with the LDM is a strategy that performs well and is capable of handling mediation analysis in a wide variety of microbiome studies.
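LDM-med itself is an R implementation; the following is only a conceptual sketch of the inverse-regression idea, testing each taxon against the exposure and against an exposure-adjusted outcome and taking the larger of the two p-values as a composite-null mediation p-value that could then be fed to any FDR procedure. The residualisation, test choices, and toy data are assumptions, not the LDM-med algorithm.

```python
import numpy as np
from scipy import stats

def taxon_mediation_pvalues(abundance, exposure, outcome):
    """Per-taxon composite mediation p-values via inverse regression (conceptual sketch).

    For each taxon, test (i) taxon ~ exposure and (ii) taxon ~ exposure-adjusted
    outcome; since mediation requires both associations, the larger of the two
    p-values is used as the taxon's mediation p-value.
    """
    # Residualise the outcome on the exposure ("exposure-adjusted outcome").
    slope, intercept = np.polyfit(exposure, outcome, 1)
    outcome_adj = outcome - (slope * exposure + intercept)

    pvals = []
    for taxon in abundance.T:
        _, _, _, p_exp, _ = stats.linregress(exposure, taxon)
        _, _, _, p_out, _ = stats.linregress(outcome_adj, taxon)
        pvals.append(max(p_exp, p_out))
    return np.array(pvals)

# Toy data: 100 samples, 200 taxa; taxa 0-4 lie on the exposure -> outcome path.
rng = np.random.default_rng(3)
n, p = 100, 200
exposure = rng.standard_normal(n)
abundance = rng.standard_normal((n, p))
abundance[:, :5] += exposure[:, None]                 # exposure affects taxa 0-4
outcome = abundance[:, :5].sum(axis=1) + rng.standard_normal(n)

pvals = taxon_mediation_pvalues(abundance, exposure, outcome)
print("five smallest mediation p-values at taxa:", np.argsort(pvals)[:5])
```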


10.14311/1654 ◽  
2012 ◽  
Vol 52 (5) ◽  
Author(s):  
Václav Turoň

This paper deals with the new time-frequency Short-Time Approximated Discrete Zolotarev Transform (STADZT), which is based on symmetrical Zolotarev polynomials. Due to the special properties of these polynomials, STADZT can be used for spectral analysis of stationary and non-stationary signals with better time and frequency resolution than the widely used Short-Time Fourier Transform (STFT). The paper describes the parameters of STADZT that have the main influence on its properties and behaviour. The selected parameters include the shape and length of the segmentation window and the segmentation overlap. Because STADZT is very similar to STFT, the paper includes a comparison of the spectral analysis of a non-stationary signal produced by STADZT and by STFT under various parameter settings.
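STADZT itself is not available in standard signal-processing libraries; the sketch below instead uses the STFT, the paper's comparison baseline, to illustrate how the segmentation window shape, length, and overlap trade time resolution against frequency resolution. The specific settings and test signal are arbitrary examples.

```python
import numpy as np
from scipy.signal import stft, chirp

# Non-stationary test signal: linear chirp from 50 Hz to 400 Hz.
fs = 2000
t = np.arange(0, 1.0, 1 / fs)
sig = chirp(t, f0=50, f1=400, t1=1.0)

# Compare STFT resolutions for several window shapes, lengths, and overlaps.
settings = [
    ("hann",     128,  96),   # shorter window: better time, worse frequency resolution
    ("hann",     512, 384),   # longer window: better frequency, worse time resolution
    ("blackman", 256, 192),   # different window shape changes leakage/sidelobes
]
for window, nperseg, noverlap in settings:
    f, tt, Z = stft(sig, fs=fs, window=window, nperseg=nperseg, noverlap=noverlap)
    df, dt = f[1] - f[0], tt[1] - tt[0]
    print(f"{window:>8}, nperseg={nperseg:4d}, overlap={noverlap:4d}: "
          f"freq bin = {df:6.2f} Hz, time step = {dt * 1e3:5.1f} ms")
```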


2001 ◽  
Vol 28 (6) ◽  
pp. 666-679 ◽  
Author(s):  
David M. Murray ◽  
Glenn A. Phillips ◽  
Amanda S. Birnbaum ◽  
Leslie A. Lytle

This article presents the first estimates of school-level intraclass correlation for dietary measures, based on data from the Teens Eating for Energy and Nutrition at School study. The study involves 3,878 seventh graders from 16 middle schools in Minneapolis–St. Paul, Minnesota. The sample was 66.8% White, 11.2% Black, and 7.0% Asian; 48.8% of the sample was female. Typical fruit and vegetable intake was assessed with a modified version of the Behavior Risk Factor Surveillance System questionnaire. Twenty-four-hour dietary recalls were conducted by nutritionists using the Minnesota Nutrition Data System. Mixed-model regression methods were used to estimate variance components for school and residual error, both before and after adjustment for demographic factors. School-level intraclass correlations were large enough that, if ignored, they would substantially inflate the Type I error rate in an analysis of treatment effects. The authors show how to use the estimates to determine sample size requirements for future studies.
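The article reports the actual variance-component estimates; the sketch below only shows, with made-up numbers, how an intraclass correlation computed from school-level and residual variance components feeds the design effect 1 + (m - 1) * ICC commonly used to inflate sample size requirements for group-randomized designs.

```python
def icc_from_variance_components(var_between, var_within):
    """Intraclass correlation: share of total variance attributable to schools."""
    return var_between / (var_between + var_within)

def required_members_per_condition(icc, members_per_group, n_per_condition_srs):
    """Inflate a simple-random-sample size by the design effect 1 + (m - 1) * ICC."""
    design_effect = 1.0 + (members_per_group - 1.0) * icc
    return n_per_condition_srs * design_effect

# Illustrative numbers (not the article's estimates): a school-level ICC of 0.02
# with roughly 240 students measured per school.
icc = icc_from_variance_components(var_between=0.02, var_within=0.98)
n_needed = required_members_per_condition(icc, members_per_group=240, n_per_condition_srs=400)
print(f"ICC = {icc:.3f}, design-effect-adjusted n per condition = {n_needed:.0f}")
```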


2019 ◽  
Vol 16 (2) ◽  
pp. 73-82
Author(s):  
I. A. ADELEKE ◽  
A. O. ADEYEMI ◽  
E. E.E. AKARAWAK

Multiple testing is associated with the simultaneous testing of many hypotheses and frequently calls for adjusting the level of significance in such a way that the probability of observing at least one significant result due to chance remains below the desired significance level. This study developed a Binomial Model Approximations (BMA) method as an alternative approach to addressing the multiplicity problem associated with testing more than one hypothesis at a time. The proposed method demonstrated the capacity to control the Type I error rate as sample size increases when compared with the existing Bonferroni and False Discovery Rate (FDR) procedures.
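The BMA construction is not detailed in the abstract; the sketch below only illustrates the two comparators it is benchmarked against, Bonferroni family-wise error control and the Benjamini-Hochberg FDR procedure, on a toy family of p-values.

```python
import numpy as np

def bonferroni_rejections(pvals, alpha=0.05):
    """Reject p-values below alpha / m (strict family-wise error control)."""
    return pvals < alpha / len(pvals)

def bh_rejections(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure controlling the false discovery rate."""
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= (np.arange(1, m + 1) / m) * alpha
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        rejected[order[:np.max(np.nonzero(below)[0]) + 1]] = True
    return rejected

# Toy family of 20 simultaneous tests.
pvals = np.array([0.0002, 0.001, 0.004, 0.012, 0.03, 0.045] + [0.2] * 14)
print("Bonferroni rejects:", bonferroni_rejections(pvals).sum())
print("BH (FDR) rejects:  ", bh_rejections(pvals).sum())
```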

