Numerical comparison of time-, frequency-, and wavelet-domain methods for coda wave interferometry

Author(s):  
Congcong Yuan ◽  
Jared Bryan ◽  
Marine Denolle

Summary: Temporal changes in subsurface properties, such as seismic wavespeeds, can be monitored by measuring phase shifts in the coda of two seismic waveforms that share a similar source-receiver path but are recorded at different times. These nearly identical waveforms are usually obtained either from repeated earthquake recordings or from repeated ambient-noise cross-correlations. The five most popular algorithms for measuring phase shifts in coda waves are Windowed Cross Correlation (WCC), Trace Stretching (TS), Dynamic Time Warping (DTW), Moving Window Cross Spectrum (MWCS), and Wavelet Cross Spectrum (WCS). The seismic wavespeed perturbation is then obtained from a linear regression of phase shifts against their respective lag times, under the assumption that the velocity perturbation is homogeneous between the (virtual or active) source and the receiver. We categorize these methods into the time domain (WCC, TS, DTW), the frequency domain (MWCS), and the wavelet domain (WCS). This study complements the suite with two additional wavelet-domain methods, which we call Wavelet Transform Stretching (WTS) and Wavelet Transform Dynamic Time Warping (WTDTW), in which we apply the traditional stretching and dynamic time warping techniques to the wavelet transform. This work aims to verify, validate, and test the accuracy and performance of all methods through numerical experiments in which elastic wavefields are solved in various 2D heterogeneous halfspace geometries. Through this work, we confirm that the assumed linear increase in phase shifts with lag time holds for fully homogeneous and laterally homogeneous velocity changes. Additionally, we investigate the sensitivity of coda waves at various seismic frequencies to the depth of the velocity perturbation.
Overall, we conclude that seismic wavefields generated and recorded at the surface lose sensitivity rapidly with increasing depth of the velocity change for all source-receiver offsets. However, measurements made over a spectrum of seismic frequencies exhibit a pattern such that wavelet methods, and especially WTS, provide useful information to infer the depth of the velocity changes.
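As an illustration of the Trace Stretching (TS) measurement discussed above, the sketch below grid-searches the stretch factor that best maps a "current" trace onto a reference; under a homogeneous velocity change, the stretch factor equals dt/t = -dv/v. The synthetic signal, grid, and function name are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def trace_stretching(ref, cur, t, eps_grid):
    """Grid-search the stretch factor eps that best maps the current
    trace onto the reference; for a homogeneous change, eps = dt/t = -dv/v."""
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        # Resample the current trace on the stretched time axis t * (1 + eps).
        stretched = np.interp(t * (1.0 + eps), t, cur)
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return best_eps, best_cc

# Synthetic decaying coda and a copy whose arrival times are dilated by 1%,
# mimicking a 1% velocity decrease.
t = np.linspace(0.0, 10.0, 2001)
ref = np.sin(2.0 * np.pi * 2.0 * t) * np.exp(-0.3 * t)
true_eps = 0.01
cur = np.interp(t / (1.0 + true_eps), t, ref)   # ref evaluated at dilated times
eps_grid = np.linspace(-0.03, 0.03, 601)
eps, cc = trace_stretching(ref, cur, t, eps_grid)
```

Because coda phase delays grow linearly with lag time under a homogeneous perturbation, a single stretch factor fits the whole window, which is exactly the linearity assumption the study validates.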

Author(s):  
Sylvio Barbon Junior ◽  
Rodrigo Capobianco Guido ◽  
Shi-Huang Chen ◽  
Lucimar Sasso Vieira ◽  
Fabricio Lopes Sanchez

Geophysics ◽  
2018 ◽  
Vol 83 (1) ◽  
pp. V27-V37 ◽  
Author(s):  
Shuangquan Chen ◽  
Song Jin ◽  
Xiang-Yang Li ◽  
Wuyang Yang

Normal-moveout (NMO) correction is one of the most important routines in seismic processing. NMO is usually implemented sample by sample; unfortunately, such an implementation not only decreases the frequency content but also distorts the amplitude of seismic waveforms as a result of the well-known stretch. The degree of stretch increases with offset. To minimize the severe stretch associated with far offsets, we use a dynamic time warping (DTW) algorithm to achieve an automatic, dynamically matched, nonstretch NMO correction; the method does not handle crossing events or convolved events such as thin layers. Our algorithm minimizes the stretch through an automatic static temporal correction of seismic wavelets. The local static time shifts are obtained using a DTW algorithm, which is a nonlinear optimization method. To mitigate the influence of noise, we evaluate a multitrace window strategy that improves the signal-to-noise ratio of the seismic data and yields a more precise moveout correction at far-offset traces. To illustrate the effectiveness of our algorithm, we first apply our method to synthetic data and then to field seismic data. Both tests show that our algorithm minimizes the stretch associated with far offsets while preserving amplitude fidelity.
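To make the DTW step the abstract relies on concrete, here is a minimal pure-NumPy sketch that accumulates squared-difference costs and backtracks the minimum-cost alignment; the local shifts j - i along the path play the role of the static temporal corrections. The quadratic cost, the synthetic Gaussian wavelets, and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dtw_path(a, b):
    """Classic dynamic time warping: accumulate squared-difference costs,
    then backtrack the minimum-cost alignment between samples of a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # Backtrack from the end of both traces, preferring diagonal moves on ties.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# A Gaussian wavelet and a copy delayed by 3 samples: along the optimal
# path, the local shift j - i recovers the delay where the wavelet lives.
t = np.arange(200)
a = np.exp(-0.5 * ((t - 80) / 8.0) ** 2)
b = np.exp(-0.5 * ((t - 83) / 8.0) ** 2)
path = dtw_path(a, b)
shifts = [j - i for i, j in path if 70 <= i <= 90]   # local shifts near the wavelet
```

Applying such per-sample shifts realigns the wavelet without resampling it, which is why a DTW-based correction avoids the frequency loss that conventional stretch introduces.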


2007 ◽  
Vol 01 (03) ◽  
pp. 347-357 ◽  
Author(s):  
Rodrigo Capobianco Guido ◽  
Sylvio Barbon Junior ◽  
Lucimar Sasso Vieira ◽  
Fabrício Lopes Sanchez ◽  
Carlos Dias Maciel ◽  
...  

This work presents a spoken document summarization (SDS) scheme based on an improved version of the Dynamic Time Warping (DTW) algorithm and on the Discrete Wavelet Transform (DWT). Tests with sentences extracted from the TIMIT speech corpus show the efficacy of the proposed technique.
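The DWT side of such a scheme can be illustrated with a single level of the Haar transform, the simplest wavelet filter pair; the Haar choice and the toy signal below are generic illustrations, not the paper's configuration.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation (low-pass) and
    detail (high-pass) coefficients from adjacent sample pairs."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level; reconstructs the even/odd samples exactly."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)   # a: smoothed trend, d: local differences
```

Repeating the analysis step on the approximation coefficients yields the usual multilevel decomposition, with each level halving the time resolution and the bandwidth.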




2020 ◽  
Vol 13 (12) ◽  
pp. 6237-6251
Author(s):  
Ebrahim Eslami ◽  
Yunsoo Choi ◽  
Yannic Lops ◽  
Alqamah Sayeed ◽  
Ahmed Khan Salman

Abstract. As deep learning has become a popular data analysis technique, atmospheric scientists should have a balanced perception of its strengths and limitations so that they can provide a powerful analysis of complex data with well-established procedures. Despite the enormous success of these methods in numerous applications, certain issues related to their use in air quality forecasting (AQF) require further analysis and discussion. This study addresses significant limitations of an advanced deep learning algorithm, the convolutional neural network (CNN), in two common applications: (i) a real-time AQF model and (ii) a post-processing tool in a dynamical AQF model, the Community Multi-scale Air Quality Model (CMAQ). In both cases, the CNN model shows promising accuracy for ozone prediction 24 h in advance in both the United States of America and South Korea (with an overall index of agreement exceeding 0.8). For the first case, we use the wavelet transform to determine the reasons behind the poor performance of the CNN during the nighttime, cold months, and high-ozone episodes. We find that when fine wavelet modes (hourly and daily) are relatively weak or when coarse wavelet modes (weekly) are strong, the CNN model produces less accurate forecasts. For the second case, we use the dynamic time warping (DTW) distance analysis to compare post-processed results with their CMAQ counterparts (as a base model). For CMAQ results that show a consistent DTW distance from the observation, the post-processing approach properly addresses the modeling bias, with predicted indexes of agreement exceeding 0.85. When the DTW distance of CMAQ versus observation is irregular, the post-processing approach is unlikely to perform satisfactorily. Awareness of the limitations of CNN models will enable scientists to develop more accurate regional or local air quality forecasting systems by identifying the influencing factors in high-concentration episodes.
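The index of agreement reported above can be computed with Willmott's standard formulation, sketched below; the hourly ozone values are hypothetical placeholders, not the study's data.

```python
import numpy as np

def index_of_agreement(pred, obs):
    """Willmott's index of agreement: 1 for a perfect forecast, near 0 for no skill."""
    pred = np.asarray(pred, dtype=float)
    obs = np.asarray(obs, dtype=float)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

# Hypothetical hourly ozone values (ppb): observations vs. a forecast.
obs = np.array([30.0, 42.0, 55.0, 61.0, 48.0, 35.0])
pred = np.array([33.0, 40.0, 58.0, 57.0, 50.0, 38.0])
ioa = index_of_agreement(pred, obs)
```

Unlike the DTW distance, which tolerates temporal misalignment between forecast and observation, the index of agreement penalizes pointwise error, so the two metrics probe complementary aspects of forecast quality.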

