FAST: An extension of the Wavelet Synchrosqueezed Transform

2021 ◽  
Author(s):  
Amey Desai ◽  
Thomas Richards ◽  
Samit Chakrabarty

<p>Extracting frequency domain information from signals usually requires conversion from the time domain using methods such as Fourier, wavelet, or Hilbert transforms. Each method of transformation is subject to a theoretical limit on resolution due to Heisenberg’s uncertainty principle. Different methods of transformation approach this limit through different trade-offs in resolution along the frequency and time axes of the frequency domain representation. One of the better and more versatile methods of transformation is the wavelet transform, which makes a closer approach to the limit of resolution using a technique called synchrosqueezing. While this produces clearer results than conventional wavelet transforms, it does not address a few critical areas. In complex signals composed of multiple independent components, frequency domain representation via the synchrosqueezed wavelet transform may show artifacts at the instants where components are not well separated in frequency. These artifacts significantly obscure the frequency distribution. In this paper, we present a technique that improves upon this aspect of the wavelet synchrosqueezed transform and sharpens the resolution of the transformation. This is achieved by bypassing the limit on resolution using multiple sources of information rather than a single transform.</p>
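The paper's actual algorithm is not reproduced here, but the core idea of fusing multiple sources of time-frequency information can be sketched with a toy rule: combine two magnitude maps by a pointwise geometric mean, so that artifacts appearing in only one representation are suppressed while components present in both survive. The fusion rule below is an illustrative assumption, not the authors' method.

```python
import numpy as np

def combine_tfrs(tfr_a, tfr_b, eps=1e-12):
    """Fuse two time-frequency magnitude maps (e.g. from two different
    transforms of the same signal) by a pointwise geometric mean.
    Energy present in only one map is attenuated toward zero; energy
    present in both is retained."""
    a = np.abs(tfr_a) / (np.abs(tfr_a).max() + eps)
    b = np.abs(tfr_b) / (np.abs(tfr_b).max() + eps)
    return np.sqrt(a * b)
```

Any artifact confined to a single representation is multiplied by a near-zero value from the other map, which is the intuition behind using complementary transforms to sidestep a single transform's resolution limit.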


Author(s):  
Richard Kurle ◽  
Stephan Günnemann ◽  
Patrick Van der Smagt

Learning from multiple sources of information is an important problem in machine-learning research. The key challenges are learning representations and formulating inference methods that take into account the complementarity and redundancy of various information sources. In this paper we formulate a variational autoencoder based multi-source learning framework in which each encoder is conditioned on a different information source. This allows us to relate the sources via the shared latent variables by computing divergence measures between the individual sources’ posterior approximations. We explore a variety of options to learn these encoders and to integrate the beliefs they compute into a consistent posterior approximation. We visualise learned beliefs on a toy dataset and evaluate our methods for learning shared representations and structured output prediction, showing trade-offs of learning separate encoders for each information source. Furthermore, we demonstrate how conflict detection and redundancy can increase robustness of inference in a multi-source setting.
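One standard way to integrate per-source beliefs into a consistent posterior, when each is a Gaussian approximation over the shared latents, is a precision-weighted product of experts. The paper explores several integration variants; the sketch below shows only this common baseline rule.

```python
import numpy as np

def fuse_gaussian_beliefs(mus, variances):
    """Product-of-experts fusion of independent Gaussian posterior
    approximations, one per information source. Each source contributes
    in proportion to its precision (inverse variance); the fused belief
    is never less certain than the most certain source."""
    mus = np.asarray(mus, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    var = 1.0 / precisions.sum(axis=0)
    mu = var * (precisions * mus).sum(axis=0)
    return mu, var
```

Two equally confident sources reporting means 0 and 2 fuse to mean 1 with halved variance; a large divergence between the individual beliefs, as the paper notes, can also serve as a conflict signal.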


Author(s):  
Hiroshi Toda ◽  
Zhong Zhang ◽  
Takashi Imamura

The theorems giving the conditions under which discrete wavelet transforms (DWTs) achieve perfect translation invariance (PTI) have already been proven, and based on these theorems, the dual-tree complex DWT and the complex wavelet packet transform, both achieving PTI, have already been proposed. However, these transforms allow little flexibility in wavelet density. In the frequency domain, the wavelet density is fixed by octave filter banks, and in the time domain, each wavelet is arrayed on a fixed coordinate. Moreover, the wavelet packet density in the frequency domain can be designed only by dividing an octave frequency band equally in linear scale, and its density in the time domain is constrained by the division number of the octave band. In this paper, a novel complex DWT is proposed that creates variable wavelet density in both the frequency and time domains: an octave frequency band can be divided into N filter banks in logarithmic scale, where N is an integer greater than or equal to 3, and in the time domain, the distance between wavelets can be varied at each level. The resulting transform achieves PTI.
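Dividing an octave into N bands equally in logarithmic scale means the band edges form a geometric, not arithmetic, progression. The helper below illustrates that spacing; the edge formula is a generic log-scale division, not the paper's specific filter design.

```python
import math

def octave_band_edges(f0, n):
    """Edges of n filter banks dividing the octave [f0, 2*f0] equally
    in logarithmic scale: f_k = f0 * 2**(k/n), k = 0..n. Adjacent
    edges then share a constant ratio 2**(1/n)."""
    if n < 3:
        raise ValueError("the construction assumes n >= 3")
    return [f0 * 2.0 ** (k / n) for k in range(n + 1)]
```

For n = 4 and f0 = 1, the edges are 1, 2^(1/4), 2^(1/2), 2^(3/4), 2, so each band is the same width on a log-frequency axis even though the linear widths grow toward the top of the octave.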


Geophysics ◽  
2013 ◽  
Vol 78 (4) ◽  
pp. E161-E171 ◽  
Author(s):  
M. Zaslavsky ◽  
V. Druskin ◽  
A. Abubakar ◽  
T. Habashy ◽  
V. Simoncini

Transient controlled-source electromagnetic measurements are usually interpreted by extracting a few frequencies and solving the corresponding inverse frequency-domain problem. Coarse frequency sampling may result in loss of information and affect the quality of interpretation; however, refined sampling increases computational cost. Fitting data directly in the time domain has similar drawbacks, i.e., a large computational cost, in particular when the Gauss-Newton (GN) algorithm is used for the misfit minimization. That cost consists mainly of the multiple solutions of the forward problem and the linear algebraic operations on the Jacobian matrix needed to calculate the GN step. For large-scale 2.5D and 3D problems with multiple sources and receivers, the corresponding cost grows enormously for inversion algorithms using conventional finite-difference time-domain (FDTD) algorithms. A fast 3D forward solver based on the rational Krylov subspace (RKS) reduction algorithm using an optimal subspace selection was proposed earlier to partially mitigate this problem. We applied the same approach to reduce the size of the time-domain Jacobian matrix. The reduced-order model (ROM) is obtained by projecting a discretized large-scale Maxwell system onto an RKS with optimized poles. The RKS expansion replaces the time discretization for forward and inverse problems; however, for the same or better accuracy, its subspace dimension is much smaller than the number of time steps of the conventional FDTD. The crucial new development of this work is the space-time data compression of the ROM forward operator and the decomposition of the ROM’s time-domain Jacobian matrix via the chain rule, as a product of time- and space-dependent terms, thus effectively decoupling the discretizations in the time and parameter spaces. The developed technique can be equivalently applied to finely sampled frequency-domain data. We tested our approach using synthetic 2.5D examples of hydrocarbon reservoirs in the marine environment.
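The projection at the heart of an RKS reduction can be sketched in a few lines: each pole contributes one shifted linear solve, and the resulting vectors span the subspace onto which the large system is projected. This is a dense, illustrative sketch; the paper works with large sparse Maxwell systems and an optimized pole selection, neither of which is modeled here.

```python
import numpy as np

def rational_krylov_basis(A, b, poles):
    """Orthonormal basis for the rational Krylov subspace
    span{(A - s_k I)^{-1} b}, one shifted solve per pole s_k.
    A reduced-order model is obtained by projecting A onto the
    returned basis Q, i.e. A_rom = Q.T @ A @ Q."""
    n = len(b)
    vecs = [np.linalg.solve(A - s * np.eye(n), b) for s in poles]
    Q, _ = np.linalg.qr(np.column_stack(vecs))
    return Q  # shape (n, len(poles)), orthonormal columns
```

The ROM dimension equals the number of poles, which is the sense in which a well-chosen RKS can be far smaller than the number of FDTD time steps for comparable accuracy.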


Geophysics ◽  
1990 ◽  
Vol 55 (5) ◽  
pp. 626-632 ◽  
Author(s):  
R. Gerhard Pratt

The migration, imaging, or inversion of wide‐aperture cross‐hole data depends on the ability to model wave propagation in complex media for multiple source positions. Computational costs can be considerably reduced in frequency‐domain imaging by modeling the frequency‐domain steady‐state equations, rather than the time‐domain equations of motion. I develop a frequency‐domain approach in this note that is competitive with time‐domain modeling when solutions for multiple sources are required or when only a limited number of frequency components of the solution are required.
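The computational saving Pratt describes comes from the structure of the frequency-domain problem: at a fixed frequency, one matrix factorization serves every source position, whereas time stepping must be repeated per source. The sketch below uses a diagonally dominant random matrix as a stand-in for the discretized steady-state operator, not an actual wave-equation discretization.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# At one frequency w, the steady-state system S(w) u = f is solved for
# many right-hand sides f (one per source position). Factoring S once
# and reusing the LU factors for all sources is the economy that makes
# frequency-domain modeling competitive with time-domain modeling.
rng = np.random.default_rng(0)
n = 100
S = rng.standard_normal((n, n)) + n * np.eye(n)   # stand-in for S(w)
lu, piv = lu_factor(S)                            # one O(n^3) factorization
F = rng.standard_normal((n, 8))                   # eight source vectors
U = lu_solve((lu, piv), F)                        # cheap O(n^2) solve per source
```

With the factors in hand, each additional source costs only a back-substitution, which is why the approach pays off precisely when solutions for multiple sources, or only a few frequency components, are required.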


2020 ◽  
Vol 12 (4) ◽  
pp. 599 ◽  
Author(s):  
Akash Koppa ◽  
Mekonnen Gebremichael

Food, energy, and water (FEW) nexus studies require reliable estimates of water availability, use, and demand. In this regard, spatially distributed hydrologic models are widely used to estimate not only streamflow (SF) but also different components of the water balance such as evapotranspiration (ET), soil moisture (SM), and groundwater. For such studies, the traditional calibration approach of using SF observations is inadequate. To address this, we use state-of-the-art global remote sensing-based estimates of ET and SM with a multivariate calibration methodology to improve the applicability of a widely used spatially distributed hydrologic model (Noah-MP) for FEW nexus studies. Specifically, we conduct univariate and multivariate calibration experiments in the Mississippi river basin with ET, SM, and SF to understand the trade-offs in accurately simulating ET, SM, and SF simultaneously. Results from univariate calibration with SF alone reveal that increased accuracy in SF comes at the cost of degraded spatio-temporal accuracy in ET and SM, which is essential for FEW nexus studies. We show that multivariate calibration helps preserve the accuracy of all the components involved in calibration. The study emphasizes the importance of multiple sources of information, especially from satellite remote sensing, for improving FEW nexus studies.
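A multivariate calibration can be driven by a single objective that normalizes each variable's error before combining them, so that streamflow (with its large magnitudes) cannot dominate ET and SM. The weighting and normalization below are an illustrative choice, not the exact formulation used in the study.

```python
import numpy as np

def multivariate_objective(sim, obs, weights):
    """Weighted multivariate calibration loss. For each variable
    (e.g. 'ET', 'SM', 'SF'), the RMSE is normalized by the standard
    deviation of the observations so the terms are dimensionless and
    comparable before the weighted sum."""
    total = 0.0
    for key, w in weights.items():
        s = np.asarray(sim[key], dtype=float)
        o = np.asarray(obs[key], dtype=float)
        rmse = np.sqrt(np.mean((s - o) ** 2))
        total += w * rmse / (o.std() + 1e-12)
    return total
```

Setting one weight to zero recovers a univariate calibration; the trade-off the paper reports appears as the minimum of this combined loss moving away from the SF-only optimum.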


1991 ◽  
Vol 113 (2) ◽  
pp. 195-205 ◽  
Author(s):  
S. Jayasuriya ◽  
M. A. Franchek

A frequency domain methodology for synthesizing controllers for SISO systems under persistent bounded disturbances is presented. The control objective is to maximize the disturbance magnitude without violating prespecified state, control and bandwidth constraints. These constraints are treated explicitly in the design process. State and control constraints expressed in the time domain are first mapped into a set of equivalent frequency domain design specifications. The latter specifications define a set of frequency domain constraints on admissible loop transfer functions. These constraints are then displayed on a Nichols chart highlighting the dependency of the loop gains on phase and frequency. The final step in the process is to follow a loop shaping procedure to satisfy the frequency domain constraints. In the proposed methodology, the structure of the controller emerges naturally as a consequence of loop shaping and is not preconceived. The design procedure is semi-graphical and clearly demonstrates the design trade-offs at each frequency of interest. The effectiveness of the design method is illustrated by synthesizing a controller for a third order boiler-turbine set.
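The data displayed on a Nichols chart is simply the loop gain in dB plotted against the loop phase in degrees over a grid of frequencies. The snippet below computes that data for an arbitrary third-order loop transfer function chosen for illustration; it is not the boiler-turbine design from the paper.

```python
import numpy as np

# Gain (dB) and phase (deg) of an example loop transfer function
# L(s) = 10 / (s^3 + 3 s^2 + 3 s + 1) along s = j*w -- the curve a
# designer shapes on the Nichols chart against the frequency-domain
# constraints derived from the time-domain state/control bounds.
num = np.poly1d([10.0])
den = np.poly1d([1.0, 3.0, 3.0, 1.0])
w = np.logspace(-2, 2, 400)                  # rad/s
L = num(1j * w) / den(1j * w)
gain_db = 20.0 * np.log10(np.abs(L))
phase_deg = np.degrees(np.unwrap(np.angle(L)))
```

Loop shaping then amounts to modifying `num` and `den` until the (phase, gain) curve clears every constraint boundary at its frequency, which is why the controller structure can emerge from the procedure rather than being preconceived.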


2018 ◽  
Vol 12 (7-8) ◽  
pp. 76-83 ◽  
Author(s):  
E. V. KARSHAKOV ◽  
J. MOILANEN

The advantage of combined processing of the frequency-domain and time-domain data provided by the EQUATOR system is discussed. The heliborne complex has a towed transmitter and, raised above it on the same cable, a towed receiver. The excitation signal contains both pulsed and harmonic components. In effect, two independent transmitters operate in the system: one is a normal time-domain pulsed transmitter, with a half-sinusoidal pulse and a small "cut" on the falling edge, and the other is a classical frequency-domain transmitter at several specially selected frequencies. The received signal is first processed by a direct Fourier transform with high-Q detection at all significant frequencies. Then, in the spectral domain, the spectra of the two sounding signals are converted to the single spectrum of an ideal transmitter, and an inverse Fourier transform returns the data to the time domain. Because the detection of spectral components is done in a frequency band of only a few Hz, the receiver can strongly suppress all out-of-band noise. The detection bandwidth is several dozen times smaller than the frequency interval between the harmonics; it turns out that to achieve the same measurement quality of ground response without out-of-band suppression, the moment of the airborne transmitting system would need to be several dozen times higher. Data obtained from a homogeneous half-space model, a two-layered model, and a horizontally layered medium model are considered. Time-domain data make it easier to detect a conductor within a relative insulator at greater depths, while frequency-domain data give more detailed information about the subsurface. These conclusions are illustrated with survey data acquired in the Republic of Rwanda in 2017. Simultaneous inversion of frequency-domain and time-domain data can significantly improve the quality of interpretation.
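Converting the spectra of two sounding signals to the spectrum of an ideal transmitter is, in essence, a source-signature deconvolution in the frequency domain: divide out the actual excitation spectrum, multiply in the ideal one, and transform back. The function below is a generic sketch of that operation with a simple regularization term; the EQUATOR processing chain's details (high-Q detection, pole handling) are not modeled.

```python
import numpy as np

def to_ideal_transmitter(received, actual_src, ideal_src, eps=1e-8):
    """Re-reference a received waveform to an 'ideal' transmitter:
    deconvolve the actual excitation spectrum and convolve the ideal
    one, entirely in the frequency domain. `eps` guards against
    division by near-zero spectral values."""
    R = np.fft.rfft(received)
    S = np.fft.rfft(actual_src)
    T = np.fft.rfft(ideal_src)
    out = R * T / (S + eps)          # spectral division and re-shaping
    return np.fft.irfft(out, n=len(received))
```

Applied to both the pulsed and the harmonic channels, such a conversion places them on a common footing so they can be stacked or jointly inverted in either domain.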


2019 ◽  
Vol 40 (03) ◽  
pp. 151-161 ◽  
Author(s):  
Sebastian Doeltgen ◽  
Stacie Attrill ◽  
Joanne Murray

Proficient clinical reasoning is a critical skill in high-quality, evidence-based management of swallowing impairment (dysphagia). Clinical reasoning in this area of practice is a cognitively complex process, as it requires synthesis of multiple sources of information that are generated during a thorough, evidence-based assessment process and which are moderated by the patient's individual situation, including their social and demographic circumstances, comorbidities, or other health concerns. A growing body of health and medical literature demonstrates that clinical reasoning skills develop with increasing exposure to clinical cases and that the approaches to clinical reasoning differ between novices and experts. It appears that it is not the amount of knowledge held, but the way it is used, that distinguishes a novice from an experienced clinician. In this article, we review the roles of explicit and implicit processing as well as illness scripts in clinical decision making across the continuum of medical expertise and discuss how they relate to the clinical management of swallowing impairment. We also reflect on how this literature may inform educational curricula that support SLP students in developing preclinical reasoning skills that facilitate their transition to early clinical practice. Specifically, we discuss the role of case-based curricula in assisting students to develop a meta-cognitive awareness of the different approaches to clinical reasoning, their own capabilities and preferences, and how and when to apply these in dysphagia management practice.

