EXACT WAVE‐SHAPING WITH A TIME‐DOMAIN DIGITAL FILTER OF FINITE LENGTH

Geophysics ◽  
1976 ◽  
Vol 41 (4) ◽  
pp. 659-672 ◽  
Author(s):  
R. F. Mereu

When the signs of alternate terms of a symmetric discrete time series are reversed and the newly created series is then convolved with the original series, the resultant time series has alternate values equal to zero. This property of symmetric functions may be exploited to design a new deconvolution and wave-shaping time-domain filter which is capable of transforming a given wavelet into an output made up of a sequence of spikes separated by zeros, or a sequence of wavelets whose shapes are identical to that of any desired wavelet. In its design, no Z-transform polynomials are factored or divided and no equations are solved. The weights are derived entirely in the time domain from a series of successively derived subfilters ([Formula: see text], [Formula: see text], [Formula: see text] ⋯ [Formula: see text]) which, when convolved with the original wavelet, create the spike-sequence output. These subfilters may be conveniently grouped into a symmetric component derived from the autocorrelation function, a component which depends upon the characteristics of the original wavelet, and a component which depends upon the desired wavelet. The number of zeros separating the spike outputs may be controlled by increasing the number of subfilters N according to the formula [Formula: see text]. The Wiener filter is optimum in the least-squares sense, but its errors are spread across the output. The new filter is optimum in an "error-distribution" sense: its errors are in reality the noncentral spikes of the spike sequence. By choosing the filter length properly, the errors may be moved away from the region of interest, leaving that region effectively "error-free". A limitation of this procedure is the computational round-off error, which increases as the filter length is increased. In a series of experiments with various types of wavelets, it was found that the spike position always occurs at the center of the filter, with the anticipation and memory components automatically falling into place. A very important property of the filter is that the input parameters required for its design are identical to those needed for the normal equations of the Wiener filter. Initial tests with a noisy time series showed that the new filter could be employed effectively using the statistical properties of the noise in the same manner that the Wiener filter is applied.
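
The zero-alternation property that underpins the filter design can be checked directly. The following is a minimal numerical illustration (not the author's code): flip the sign of every other sample of a symmetric wavelet, convolve with the original, and every other output sample vanishes.

```python
import numpy as np

# Symmetric example wavelet (illustrative values, not from the paper)
w = np.array([1.0, 3.0, 5.0, 3.0, 1.0])

# Reverse the signs of alternate terms, then convolve with the original series
w_alt = w * (-1.0) ** np.arange(len(w))
c = np.convolve(w, w_alt)

print(np.round(c, 10))          # e.g. [1. 0. 1. 0. 9. 0. 1. 0. 1.]
assert np.allclose(c[1::2], 0)  # alternate values of the result are zero
```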

2008 ◽  
Vol 25 (4) ◽  
pp. 534-546 ◽  
Author(s):  
Anthony Arguez ◽  
Peng Yu ◽  
James J. O’Brien

Abstract Time series filtering (e.g., smoothing) can be done in the spectral domain without loss of endpoints. However, filtering is commonly performed in the time domain using convolutions, resulting in lost points near the series endpoints. Multiple incarnations of a least squares minimization approach are developed that retain the endpoint intervals normally discarded when filtering with convolutions in the time domain. The techniques minimize the errors between the predetermined frequency response function (FRF) of interior points, a fundamental property of all filters, and the FRFs that are to be determined for each position in the endpoint zone. The least squares techniques are differentiated by their constraints: 1) unconstrained, 2) an equal-mean constraint, and 3) an equal-variance constraint. The equal-mean constraint forces the new weights to sum to the same value as the predetermined weights. The equal-variance constraint forces the new weights to be such that, after they are convolved with the input values, the expected time series variance is preserved. The three least squares methods are each tested under three separate filtering scenarios [involving Arctic Oscillation (AO), Madden–Julian oscillation (MJO), and El Niño–Southern Oscillation (ENSO) time series] and compared to each other as well as to the spectral filtering method, the standard of comparison. The results indicate that all four methods (including the spectral method) possess skill at determining suitable endpoint estimates. However, both the unconstrained and equal-mean schemes exhibit bias toward zero near the terminal ends because they do not preserve the series variance. The equal-variance method does not show this behavior and was never the worst performer. The equal-variance method showed great promise in the ENSO scenario involving a 5-month running mean filter, and performed at least on par with the other realistic methods for almost all time series positions in all three filtering scenarios.
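
As a concrete reading of the equal-mean scheme described above, the sketch below fits endpoint weights whose FRF matches the interior filter's FRF in a least squares sense over a frequency grid, subject to the new weights summing to the same value as the interior weights. The function name, lag convention, and frequency grid are my own illustrative choices, not the authors' implementation.

```python
import numpy as np

def endpoint_weights(w_full, lags_full, lags_avail, n_freq=512):
    """Least-squares endpoint weights with an equal-mean (equal-sum) constraint."""
    freqs = np.linspace(0.0, 0.5, n_freq)               # cycles per sample
    # FRF(f) = sum_k w_k * exp(-2*pi*i*f*k)
    E_full = np.exp(-2j * np.pi * np.outer(freqs, lags_full))
    E_av = np.exp(-2j * np.pi * np.outer(freqs, lags_avail))
    target = E_full @ w_full                             # interior-filter FRF

    # Stack real and imaginary parts to get a real least squares problem
    A = np.vstack([E_av.real, E_av.imag])
    b = np.concatenate([target.real, target.imag])

    # Equal-mean constraint sum(v) = sum(w_full), enforced via a KKT system
    c = np.ones(len(lags_avail))
    K = np.block([[A.T @ A, c[:, None]],
                  [c[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([A.T @ b, [w_full.sum()]])
    return np.linalg.solve(K, rhs)[:-1]

# Example: a 5-point running mean at the final point of a series.
# Lag convention: y_t = sum_k v_k * x_{t-k}, so negative lags reference
# future samples, which are unavailable at the series end.
w = np.full(5, 0.2)
lags = np.arange(-2, 3)
v_end = endpoint_weights(w, lags, lags_avail=np.array([0, 1, 2]))
print(v_end, v_end.sum())   # the new weights still sum to 1.0
```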


Geophysics ◽  
2007 ◽  
Vol 72 (3) ◽  
pp. S149-S154 ◽  
Author(s):  
Antoine Guitton ◽  
Alejandro Valenciano ◽  
Dimitri Bevc ◽  
Jon Claerbout

Amplitudes in shot-profile migration can be improved if the imaging condition incorporates a division (a deconvolution in the time domain) of the upgoing wavefield by the downgoing wavefield. This division can be stabilized by introducing an optimal Wiener filter, which assumes that the noise present in the data has a white spectrum. This assumption requires a damping parameter, related to the signal-to-noise ratio, that is often chosen by trial and error. In practice, the damping parameter replaces the small values of the spectrum of the downgoing wavefield and avoids division by zero. The migration results can be quite sensitive to the damping parameter, and in most applications the upgoing and downgoing wavefields are simply multiplied. Alternatively, the division can be made stable by filling the small values of the spectrum with an average of the neighboring points. This averaging is obtained by running a smoothing operator on the spectrum of the downgoing wavefield; we call this operation the smoothing imaging condition. Our results show that where the spectrum of the downgoing wavefield is high, the imaging conditions with damping and with smoothing yield similar results, thus correcting for illumination effects. Where the spectrum is low, the smoothing imaging condition tends to be more robust to the noise level present in the data, thus giving better images than the imaging condition with damping. In addition, our experiments indicate that the parameterization of the smoothing imaging condition, i.e., the choice of window size for the smoothing operator, is easy and repeatable from one data set to another, making it a valuable addition to our imaging toolbox.
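
For concreteness, here is a hedged sketch of the two stabilized divisions discussed above, written as a frequency-domain division at a single image point. The damping fraction, window size, and the choice to smooth along frequency only are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def image_damped(U, D, eps_frac=1e-2):
    """Deconvolution imaging condition with damping: a fraction of the peak
    spectrum replaces small values of |D|^2 to avoid division by zero."""
    spec = np.abs(D) ** 2
    return np.sum(U * np.conj(D) / (spec + eps_frac * spec.max()))

def image_smoothed(U, D, window=11):
    """Smoothing imaging condition: divide by a locally averaged |D|^2
    (smoothed along frequency only here, for simplicity)."""
    spec = uniform_filter1d(np.abs(D) ** 2, size=window)
    return np.sum(U * np.conj(D) / np.maximum(spec, 1e-12 * spec.max()))

# Sanity check: if U is the downgoing wavefield scaled by a reflectivity of
# 0.5, both conditions should recover a value close to 0.5 per frequency.
rng = np.random.default_rng(0)
D = rng.standard_normal(128) + 1j * rng.standard_normal(128)
U = 0.5 * D
print(image_damped(U, D).real / 128, image_smoothed(U, D).real / 128)
```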


Author(s):  
Z. Charlie Zheng ◽  
Guoyi Ke

Conventional time-domain schemes have limited capability in modeling long-range acoustic propagation because of the vast computer resources needed to cover the entire region of interest with a computational domain. Many of the long-range acoustic propagation problems need to consider propagation distances of hundreds or thousands of meters. It is thus very difficult to maintain adequate grid resolution for such a large computational domain, even with the state-of-the-art capacity in computer memory and computing speed. In order to overcome this barrier, a moving zonal-domain approach is developed. This concept uses a moving computational domain that follows an acoustic wave. The size and interval of motion of the domain are problem dependent. In this paper, an Euler-type moving domain in a stationary coordinate frame is first tested. Size effects and boundary conditions for the moving domain are considered. The results are compared and verified with both analytical solutions and results from the non-zonal domain. Issues of using the moving zonal-domain with perfectly-matched layers for the free-space boundary are also discussed.
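
The following one-dimensional toy example (my own simplification, not the paper's Euler-type scheme, and without perfectly matched layers) illustrates the moving zonal-domain idea: the grid covers only a window around the acoustic pulse and is shifted forward in the stationary coordinate frame whenever the pulse approaches the window's leading edge.

```python
import numpy as np

c, dx = 340.0, 0.1                  # sound speed (m/s) and grid spacing (m)
dt = 0.5 * dx / c                   # CFL-stable step for the upwind scheme
n_win = 400                         # grid points in the moving window
x0 = 0.0                            # window's left edge in the stationary frame
x = x0 + dx * np.arange(n_win)
u = np.exp(-((x - 5.0) / 0.5) ** 2) # initial Gaussian pulse

for step in range(20000):
    # One-way wave equation u_t + c u_x = 0, first-order upwind update
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])
    u[0] = 0.0                      # quiet condition at the trailing edge
    # Shift the window forward once the pulse nears the leading edge
    if np.argmax(np.abs(u)) > int(0.8 * n_win):
        shift = n_win // 4
        u = np.concatenate([u[shift:], np.zeros(shift)])
        x0 += shift * dx

print("pulse tracked to about x =", round(x0 + dx * np.argmax(np.abs(u)), 1),
      "m using only", n_win, "grid points")
```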


1999 ◽  
Vol 3 (1) ◽  
pp. 69-83 ◽  
Author(s):  
Hui Boon Tan ◽  
Richard Ashley

A simple technique for directly testing the parameters of a time-series regression model for instability across frequencies is presented. The method can be implemented easily in the time domain, so that parameter instability across frequency bands can be conveniently detected and modeled in conjunction with other econometric features of the problem at hand, such as simultaneity, cointegration, missing observations, and cross-equation restrictions. The usefulness of the new technique is illustrated with an application to a cointegrated consumption-income regression model, yielding a straightforward test of the permanent income hypothesis.
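
A minimal illustration of the idea follows, using my own crude band decomposition (a moving-average low-pass and its complement, rather than the authors' procedure): regress on frequency-band components of the regressor that sum back to the original series, then test whether the band coefficients are equal.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
x = np.cumsum(rng.standard_normal(n))                 # persistent regressor
y = 1.0 * x + rng.standard_normal(n)                  # frequency-stable relationship

low = np.convolve(x, np.full(9, 1 / 9), mode="same")  # crude low-frequency band
high = x - low                                        # complementary high band

X = sm.add_constant(np.column_stack([low, high]))
fit = sm.OLS(y, X).fit()
# H0: the low- and high-frequency coefficients are equal (parameter stability
# across frequency bands); a rejection signals frequency dependence.
print(fit.f_test("x1 = x2"))
```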


Author(s):  
Simon Vaughan

Progress in astronomy comes from interpreting the signals encoded in the light received from distant objects: the distribution of light over the sky (images), over photon wavelength (spectrum), over polarization angle and over time (usually called light curves by astronomers). In the time domain, we see transient events such as supernovae, gamma-ray bursts and other powerful explosions; we see periodic phenomena such as the orbits of planets around nearby stars, radio pulsars and pulsations of stars in nearby galaxies; and we see persistent aperiodic variations (‘noise’) from powerful systems such as accreting black holes. I review just a few of the recent and future challenges in the burgeoning area of time domain astrophysics, with particular attention to persistently variable sources, the recovery of reliable noise power spectra from sparsely sampled time series, higher order properties of accreting black holes, and time delays and correlations in multi-variate time series.


2020 ◽  
Vol 24 (11) ◽  
pp. 5473-5489 ◽  
Author(s):  
Justin Schulte ◽  
Frederick Policelli ◽  
Benjamin Zaitchik

Abstract. Wavelet coherence is a method that is commonly used in hydrology to extract scale-dependent, nonstationary relationships between time series. However, we show that the method cannot always determine why the time-domain correlation between two time series changes in time. We show that, even for stationary coherence, the time-domain correlation between two time series weakens if at least one of the time series has changing skewness. To overcome this drawback, a nonlinear coherence method is proposed to quantify the cross-correlation between nonlinear modes embedded in the time series. It is shown that nonlinear coherence and auto-bicoherence spectra can provide additional insight into changing time-domain correlations. The new method is applied to the El Niño–Southern Oscillation (ENSO) and all-India rainfall (AIR), which is intricately linked to hydrological processes across the Indian subcontinent. The nonlinear coherence analysis showed that the skewness of AIR is weakly correlated with that of two ENSO time series after the 1970s, indicating that increases in ENSO skewness after the 1970s at least partially contributed to the weakening ENSO–AIR relationship in recent decades. The implication of this result is that the intensity of skewed El Niño events is likely to overestimate India's drought severity, which was the case in the 1997 monsoon season, a time point when the nonlinear wavelet coherence between AIR and ENSO reached its lowest value in the 1871–2016 period. We determined that the association between the weakening ENSO–AIR relationship and ENSO nonlinearity could reflect the contribution of different nonlinear ENSO modes to ENSO diversity.
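
For reference, auto-bicoherence (one ingredient named above) can be estimated from Fourier segments as sketched below. The paper itself works with wavelet-based, time-localized quantities, so this Fourier version is only meant to make the higher-order statistic concrete; segment length and the test signal are my own choices.

```python
import numpy as np

def auto_bicoherence(x, seg_len=256):
    """Segment-averaged auto-bicoherence estimate b^2(f1, f2)."""
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]
    X = np.array([np.fft.rfft(s * np.hanning(seg_len)) for s in segs])
    nf = X.shape[1] // 2                      # keep f1 + f2 within the spectrum
    num = np.zeros((nf, nf), dtype=complex)
    den1 = np.zeros((nf, nf))
    den2 = np.zeros((nf, nf))
    for Xi in X:
        for j in range(nf):
            prod = Xi[j] * Xi[:nf]            # X(f1) * X(f2)
            sumf = Xi[j + np.arange(nf)]      # X(f1 + f2)
            num[j] += prod * np.conj(sumf)
            den1[j] += np.abs(prod) ** 2
            den2[j] += np.abs(sumf) ** 2
    return np.abs(num) ** 2 / (den1 * den2 + 1e-30)

# A quadratically phase-coupled triad (f3 = f1 + f2) gives high bicoherence.
t = np.arange(8192)
f1, f2 = 20 / 256, 45 / 256
x = (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
     + 0.5 * np.cos(2 * np.pi * (f1 + f2) * t)
     + 0.5 * np.random.default_rng(2).standard_normal(t.size))
print(auto_bicoherence(x)[20, 45])            # high at the coupled frequency pair
```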


2009 ◽  
Vol 6 (2) ◽  
pp. 2451-2498 ◽  
Author(s):  
B. Schaefli ◽  
E. Zehe

Abstract. This paper proposes a method for rainfall-runoff model calibration and performance analysis in the wavelet domain, based on fitting the estimated wavelet power spectrum (a representation of the time-varying frequency content of a time series) of a simulated discharge series to that of the corresponding observed series. As discussed in this paper, calibrating hydrological models so as to reproduce the time-varying frequency content of the observed signal can lead to different results than parameter estimation in the time domain. Therefore, wavelet-domain parameter estimation has the potential to give new insights into model performance and to reveal model structural deficiencies. We apply the proposed method to synthetic case studies and to a real-world discharge modeling case study, and discuss how model diagnosis can benefit from an analysis in the wavelet domain. The results show that for the real-world case study of precipitation-runoff modeling for a high alpine catchment, the calibrated discharge simulation captures the dynamics of the observed time series better than the results obtained through calibration in the time domain. In addition, the wavelet-domain performance assessment of this case study highlights which frequencies are not well reproduced by the model, giving specific indications of how to improve the model structure.
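
A hedged sketch of what such a wavelet-domain objective can look like is given below, using PyWavelets' Morlet CWT. The scales, wavelet, and mean-squared-misfit choices are assumptions for illustration, not necessarily those of the paper.

```python
import numpy as np
import pywt

def wavelet_power(q, scales, wavelet="morl"):
    """Time-varying wavelet power spectrum of a discharge series."""
    coefs, _ = pywt.cwt(q, scales, wavelet)
    return np.abs(coefs) ** 2

def wavelet_domain_objective(q_obs, q_sim, scales=np.arange(1, 65)):
    """Misfit between simulated and observed wavelet power spectra,
    to be minimized by the calibration algorithm of choice."""
    return np.mean((wavelet_power(q_sim, scales) - wavelet_power(q_obs, scales)) ** 2)

# Usage: hand wavelet_domain_objective(q_obs, model(params)) to an optimizer
# (e.g., scipy.optimize.differential_evolution) in place of a time-domain
# error measure such as the mean squared error on the discharge itself.
```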


Electronics ◽  
2019 ◽  
Vol 8 (9) ◽  
pp. 1046
Author(s):  
Changyou Suo ◽  
Zhonghua Li ◽  
Yunlong Sun ◽  
Yongsen Han

Current time domain spectroscopy of dielectrics provides important information for the analysis of dielectric properties and mechanisms. However, there is always interference during the testing process, which seriously affects the analysis of the test results, so effective filtering of the current time domain spectra is particularly necessary. L1 trend filtering can accurately estimate the trend component of a time series and has been widely used in economics and sociology. This paper therefore applies L1 trend filtering to current time domain spectroscopy. First, polarization and depolarization currents are measured in the laboratory. The test results are then filtered by L1 trend filtering, and the filtering effects are compared with those of several common filtering algorithms, such as the sliding-mean filter and the Savitzky–Golay smoothing filter. Finally, the robustness and time complexity of L1 trend filtering are analyzed. The filtering results show that because the polarization currents vary over a wide range (about 2–3 orders of magnitude) in the time domain, smooth and undistorted curves over the whole test time range can hardly be obtained with the common filtering algorithms, whereas they can be obtained with L1 trend filtering. The robustness and time complexity analyses show that L1 trend filtering can accurately extract the trend in the time series at the different noise levels considered, and its execution time remains below 176.67 s when the number of tested points is no more than 20,000. These results show that L1 trend filtering can be applied to the current time domain spectroscopy of dielectrics.
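
As a pointer to the technique, the generic l1 trend filtering problem (quadratic fidelity plus an l1 penalty on second differences) can be solved in a few lines with a convex solver. The sketch below uses illustrative synthetic data and a hand-picked lambda, not the paper's measured currents or settings.

```python
import numpy as np
import cvxpy as cp

def l1_trend_filter(y, lam=10.0):
    """Minimize 0.5*||y - x||^2 + lam*||D x||_1, D = second-difference matrix."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # penalizes changes in slope
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - x) + lam * cp.norm1(D @ x))).solve()
    return x.value

# Example: recover a piecewise-linear trend buried in noise
rng = np.random.default_rng(0)
t = np.arange(200)
trend = np.where(t < 100, 0.05 * t, 5.0 - 0.03 * (t - 100))
y = trend + 0.5 * rng.standard_normal(200)
print(np.max(np.abs(l1_trend_filter(y) - trend)))  # modest compared with the noise
```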

