A benchmark case study for seismic event relative location

2020 ◽  
Vol 223 (2) ◽  
pp. 1313-1326
Author(s):  
S J Gibbons ◽  
T Kværna ◽  
T Tiira ◽  
E Kozlovskaya

Summary ‘Precision seismology’ encompasses a set of methods which use differential measurements of time-delays to estimate the relative locations of earthquakes and explosions. Delay-times estimated from signal correlations often allow far more accurate estimates of one event location relative to another than is possible using classical hypocentre determination techniques. Many different algorithms and software implementations have been developed and different assumptions and procedures can often result in significant variability between different relative event location estimates. We present a Ground Truth (GT) dataset of 55 military surface explosions in northern Finland in 2007 that all took place within 300 m of each other. The explosions were recorded with a high signal-to-noise ratio to distances of about 2°, and the exceptional waveform similarity between the signals from the different explosions allows for accurate correlation-based time-delay measurements. With exact coordinates for the explosions, we are able to assess the fidelity of relative location estimates made using any location algorithm or implementation. Applying double-difference calculations using two different 1-D velocity models for the region results in hypocentre-to-hypocentre distances which are too short and it is clear that the wavefield leaving the source region is more complicated than predicted by the models. Using the GT event coordinates, we are able to measure the slowness vectors associated with each outgoing ray from the source region. We demonstrate that, had such corrections been available, a significant improvement in the relative location estimates would have resulted. In practice we would of course need to solve for event hypocentres and slowness corrections simultaneously, and significant work will be needed to upgrade relative location algorithms to accommodate uncertainty in the form of the outgoing wavefield. 
We present this data set, together with GT coordinates, raw waveforms for all events on six regional stations, and tables of time-delay measurements, as a reference benchmark by which relative location algorithms and software can be evaluated.
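The core measurement behind such a benchmark, a correlation-based delay estimate between two similar waveforms, can be sketched as follows. This is a generic Python illustration, not the authors' software; the parabolic refinement of the correlation peak is one common choice for sub-sample precision:

```python
import numpy as np

def correlation_delay(x, y, dt):
    """Estimate the delay of y relative to x (in seconds) from the full
    cross-correlation, with parabolic sub-sample refinement of the peak."""
    n = len(x)
    cc = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    k = int(np.argmax(cc))                      # integer-lag peak
    # Parabolic interpolation around the peak for sub-sample precision.
    if 0 < k < len(cc) - 1:
        a, b, c = cc[k - 1], cc[k], cc[k + 1]
        denom = a - 2.0 * b + c
        frac = 0.5 * (a - c) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    return ((k + frac) - (n - 1)) * dt

# Two noisy copies of the same wavelet, the second delayed by 0.03 s.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
wavelet = lambda tau: (np.exp(-((t - 0.5 - tau) / 0.05) ** 2)
                       * np.sin(2 * np.pi * 8 * (t - tau)))
rng = np.random.default_rng(0)
x = wavelet(0.0) + 0.01 * rng.standard_normal(t.size)
y = wavelet(0.03) + 0.01 * rng.standard_normal(t.size)
print(round(correlation_delay(x, y, dt), 3))  # close to 0.03
```

In practice such delays would be measured for every event pair and station and passed to a relative location algorithm.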


Author(s):  
Quan Sun ◽  
Zhen Guo ◽  
Shunping Pei ◽  
Yuanyuan V. Fu ◽  
Yongshun John Chen

Abstract On 21 May 2021 a magnitude Mw 6.1 earthquake occurred in the Yangbi region, Yunnan, China; it was widely felt and caused heavy casualties. We imaged the source region using our improved double-difference tomography method on the large data set recorded by 107 temporary stations of ChinArray-I and 62 permanent stations. The inversion reveals pronounced structural heterogeneities across the rupture source region and significantly improves the hypocenter locations of the Yangbi earthquake sequence. The relocated sequence is distributed along an unmapped fault that is roughly parallel and adjacent (∼15 km away) to the Tongdian–Weishan fault (TWF) at the northern end of the Red River fault zone. Our high-resolution 3D velocity models show significantly high velocities and low VP/VS ratios in the upper crust of the rupture zone, suggesting the existence of an asperity for the event. More importantly, low-VS and high-VP/VS anomalies are imaged below 10 km depth underlying the source region, indicating the existence of fluids and potential melts at those depths. Upward migration of these fluids and melts into the rupture zone could have weakened the locked asperity and triggered the Yangbi earthquake. Such fluid-driven triggering could explain why the Yangbi earthquake did not occur on the adjacent TWF, where high stress accumulation was expected. We speculate that the fluids and potential melts in the mid-to-lower crust originated either from crustal channel flow from southeastern Tibet or from local upwelling related to subduction of the Indian slab to the west.


2018 ◽  
Vol 615 ◽  
pp. A145 ◽  
Author(s):  
M. Mol Lous ◽  
E. Weenk ◽  
M. A. Kenworthy ◽  
K. Zwintz ◽  
R. Kuschnig

Context. Transiting exoplanets provide an opportunity for the characterization of their atmospheres, and finding the brightest star in the sky with a transiting planet enables high signal-to-noise ratio observations. The Kepler satellite has detected over 365 multiple transiting exoplanet systems, a large fraction of which have nearly coplanar orbits. If one planet is seen to transit the star, then it is likely that other planets in the system will transit the star too. The bright (V = 3.86) star β Pictoris is a nearby young star with a debris disk and gas giant exoplanet, β Pictoris b, in a multi-decade orbit around it. Both the planet’s orbit and disk are almost edge-on to our line of sight. Aims. We carry out a search for any transiting planets in the β Pictoris system with orbits of less than 30 days that are coplanar with the planet β Pictoris b. Methods. We search for a planetary transit using data from the BRITE-Constellation nanosatellite BRITE-Heweliusz, analyzing the photometry using the Box-Fitting Least Squares Algorithm (BLS). The sensitivity of the method is verified by injection of artificial planetary transit signals using the Bad-Ass Transit Model cAlculatioN (BATMAN) code. Results. No planet was found in the BRITE-Constellation data set. We rule out planets larger than 0.6 RJ for periods of less than 5 days, larger than 0.75 RJ for periods of less than 10 days, and larger than 1.05 RJ for periods of less than 20 days.
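The box-search idea behind BLS can be illustrated with a deliberately minimal from-scratch sketch (not the BLS implementation used in the study): phase-fold the light curve at each trial period, slide a box of fixed duration across phase, and score the depth of the best-fitting box.

```python
import numpy as np

def bls_period_search(t, flux, periods, duration):
    """Minimal box search over trial periods: phase-fold, slide a box of
    the given duration, and score the best box depth at each period."""
    scores = []
    for P in periods:
        phase = (t % P) / P
        d = duration / P                        # box width in phase units
        best = 0.0
        for p0 in np.arange(0.0, 1.0 - d, d / 2):
            inb = (phase >= p0) & (phase < p0 + d)
            nin = int(inb.sum())
            if nin < 3 or nin == phase.size:
                continue
            depth = flux[~inb].mean() - flux[inb].mean()
            best = max(best, depth * np.sqrt(nin))  # crude depth significance
        scores.append(best)
    return periods[int(np.argmax(scores))]

# Synthetic 27-day light curve: 0.01-deep, 0.06-day transits, 3.0-day period.
rng = np.random.default_rng(1)
t = np.arange(0.0, 27.0, 0.02)
flux = 1.0 + 0.002 * rng.standard_normal(t.size)
flux[((t % 3.0) / 3.0) < 0.02] -= 0.01          # inject the box transit
periods = np.arange(2.0, 5.0, 0.01)
best_period = bls_period_search(t, flux, periods, 0.06)
print(round(best_period, 2))  # recovers a period near 3.0
```

Injection tests like the BATMAN-based ones in the paper work the same way: insert a synthetic transit, rerun the search, and check whether it is recovered.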


2020 ◽  
Vol 636 ◽  
pp. A74 ◽  
Author(s):  
Trifon Trifonov ◽  
Lev Tal-Or ◽  
Mathias Zechmeister ◽  
Adrian Kaminski ◽  
Shay Zucker ◽  
...  

Context. The High Accuracy Radial velocity Planet Searcher (HARPS) spectrograph has been mounted since 2003 at the ESO 3.6 m telescope in La Silla and provides state-of-the-art stellar radial velocity (RV) measurements with a precision down to ∼1 m s⁻¹. The spectra are extracted with dedicated data-reduction software (DRS), and the RVs are computed by cross-correlating with a numerical mask. Aims. This study has three main aims: (i) Create easy access to the public HARPS RV data set. (ii) Apply the new public SpEctrum Radial Velocity AnaLyser (SERVAL) pipeline to the spectra, and produce a more precise RV data set. (iii) Determine whether the precision of the RVs can be further improved by correcting for small nightly systematic effects. Methods. For each star observed with HARPS, we downloaded the publicly available spectra from the ESO archive and recomputed the RVs with SERVAL. This was based on fitting each observed spectrum with a high signal-to-noise ratio template created by coadding all the available spectra of that star. We then computed nightly zero-points (NZPs) by averaging the RVs of quiet stars. Results. By analyzing the RVs of the most RV-quiet stars, whose RV scatter is <5 m s⁻¹, we find that SERVAL RVs are on average more precise than DRS RVs by a few percent. By investigating the NZP time series, we find three significant systematic effects whose magnitude is independent of the software used to derive the RVs: (i) stochastic variations with a magnitude of ∼1 m s⁻¹; (ii) long-term variations, with a magnitude of ∼1 m s⁻¹ and a typical timescale of a few weeks; and (iii) 20–30 NZPs that significantly deviate by a few m s⁻¹. In addition, we find small (≲1 m s⁻¹) but significant intra-night drifts in DRS RVs before the 2015 intervention, and in SERVAL RVs after it. 
We confirm that the fibre exchange in 2015 caused a discontinuous RV jump that strongly depends on the spectral type of the observed star: from ∼14 m s⁻¹ for late F-type stars to ∼−3 m s⁻¹ for M dwarfs. The combined effect of extracting the RVs with SERVAL and correcting them for the systematics we find is an improved average RV precision: an improvement of ∼5% for spectra taken before the 2015 intervention, and an improvement of ∼15% for spectra taken after it. To demonstrate the quality of the new RV data set, we present an updated orbital solution of the GJ 253 two-planet system. Conclusions. Our NZP-corrected SERVAL RVs can be retrieved from a user-friendly public database. It provides more than 212 000 RVs for about 3000 stars along with much auxiliary information, such as the NZP corrections, various activity indices, and DRS-CCF products.
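The NZP correction itself is conceptually simple; a minimal sketch follows, with an illustrative `nzp_correct` helper and toy numbers that are not taken from the HARPS data: subtract each quiet star's own mean RV, average the residuals per night to get the NZP, and subtract that NZP from every RV taken that night.

```python
import numpy as np

def nzp_correct(star, night, rv, quiet):
    """Estimate nightly zero-points (NZPs) from the RV-quiet stars and
    subtract them from every RV. star/night are per-observation labels,
    rv is in m/s, quiet is the set of labels treated as quiet standards."""
    star, night, rv = (np.asarray(a) for a in (star, night, rv))
    resid = rv.astype(float).copy()
    for s in quiet:                      # remove each quiet star's own mean
        m = star == s                    # so only nightly systematics remain
        resid[m] -= rv[m].mean()
    nzp = {}
    for n in np.unique(night):
        m = (night == n) & np.isin(star, list(quiet))
        nzp[n] = resid[m].mean() if m.any() else 0.0
    corrected = rv - np.array([nzp[n] for n in night])
    return corrected, nzp

# Toy example (illustrative numbers): two quiet stars on three nights,
# with a +2 m/s instrumental offset on night 1 and one non-quiet star.
star  = [0, 1, 0, 1, 2, 0, 1]
night = [0, 0, 1, 1, 1, 2, 2]
rv    = [0.0, 5.0, 2.0, 7.0, 12.0, 0.0, 5.0]
corrected, nzp = nzp_correct(star, night, rv, quiet={0, 1})
print(round(nzp[1] - nzp[0], 6))  # recovers the injected ~2 m/s night-1 offset
```

In the real pipeline the per-night averages are of course weighted and screened for outliers; this sketch only shows the zero-point logic.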


2019 ◽  
Vol 38 (11) ◽  
pp. 872a1-872a9 ◽  
Author(s):  
Mauricio Araya-Polo ◽  
Stuart Farris ◽  
Manuel Florez

Exploration seismic data are heavily manipulated before human interpreters can extract meaningful information regarding subsurface structures. This manipulation adds modeling and human biases and is limited by methodological shortcomings. Alternatively, using seismic data directly is becoming possible thanks to deep learning (DL) techniques. A DL-based workflow is introduced that uses analog velocity models and realistic raw seismic waveforms as input and produces subsurface velocity models as output. When insufficient data are used for training, DL algorithms tend to overfit or fail. Gathering large amounts of labeled and standardized seismic data sets is not straightforward. This shortage of quality data is addressed by building a generative adversarial network (GAN) to augment the original training data set, which is then used as input by DL-driven seismic tomography. The DL tomographic operator predicts velocity models with high statistical and structural accuracy after being trained with GAN-generated velocity models. Beyond the field of exploration geophysics, the use of machine learning in Earth science is challenged by the lack of labeled data or properly interpreted ground truth, since we seldom know what truly exists beneath the Earth's surface. The unsupervised approach (using GANs to generate labeled data) illustrates a way to mitigate this problem and opens geology, geophysics, and planetary sciences to more DL applications.


2020 ◽  
Vol 21 (S1) ◽  
Author(s):  
Daniel Ruiz-Perez ◽  
Haibin Guan ◽  
Purnima Madhivanan ◽  
Kalai Mathee ◽  
Giri Narasimhan

Abstract Background Partial Least-Squares Discriminant Analysis (PLS-DA) is a popular machine learning tool that is gaining increasing attention as a useful feature selector and classifier. In an effort to understand its strengths and weaknesses, we performed a series of experiments with synthetic data and compared its performance to that of its close relative, Principal Component Analysis (PCA), from which it was originally derived. Results We demonstrate that even though PCA ignores the information regarding the class labels of the samples, this unsupervised tool can be remarkably effective as a feature selector. In some cases, it outperforms PLS-DA, which is made aware of the class labels in its input. Our experiments range from examining the signal-to-noise ratio in the feature selection task to considering many practical distributions and models encountered when analyzing bioinformatics and clinical data. Other methods were also evaluated. Finally, we analyzed an interesting data set of 396 vaginal microbiome samples where the ground truth for the feature selection was available. All the 3D figures shown in this paper, as well as the supplementary ones, can be viewed interactively at http://biorg.cs.fiu.edu/plsda. Conclusions Our results highlight the strengths and weaknesses of PLS-DA in comparison with PCA for different underlying data models.
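Using PCA as an unsupervised feature selector can be sketched along these lines (a generic illustration, not the paper's experimental setup): compute the principal components of the centered data and rank features by their absolute loadings on the leading components, weighted by explained variance.

```python
import numpy as np

def pca_feature_ranking(X, n_pcs=2):
    """Rank features by their absolute loadings on the leading principal
    components, weighted by each component's explained variance."""
    Xc = X - X.mean(axis=0)                  # center only, so variance
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # differences matter
    w = (s[:n_pcs] ** 2) / np.sum(s ** 2)    # explained-variance weights
    score = np.abs(Vt[:n_pcs]).T @ w         # one score per feature
    return np.argsort(score)[::-1]           # best feature first

# Toy data: feature 0 carries the class signal, features 1-4 are noise.
rng = np.random.default_rng(2)
labels = np.repeat([0, 1], 50)
X = rng.standard_normal((100, 5))
X[:, 0] += 3.0 * labels                      # between-class mean shift
print(pca_feature_ranking(X)[0])             # feature 0 ranks first
```

Note that the ranking never sees `labels`: the between-class shift simply inflates the variance of feature 0, which PCA picks up unsupervised, which is the effect the paper examines.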


2007 ◽  
Vol 7 (3) ◽  
pp. 7907-7932 ◽  
Author(s):  
P.-F. Coheur ◽  
H. Herbin ◽  
C. Clerbaux ◽  
D. Hurtmans ◽  
C. Wespes ◽  
...  

Abstract. In the course of our study of the upper-tropospheric composition with the infrared Atmospheric Chemistry Experiment – Fourier Transform Spectrometer (ACE–FTS), we found an occultation sequence that, on 8 October 2005, sampled a remarkable plume near the east coast of Tanzania. Model simulations of the CO distribution in the Southern Hemisphere are performed for this period, and they demonstrate that the emissions for this event originated from a nearby forest fire, after which the plume was transported from the source region to the upper troposphere. Taking advantage of the very high signal-to-noise ratio of the ACE–FTS spectra over a wide wavenumber range (750–4400 cm⁻¹), we present in-depth analyses of the chemical composition of this plume in the middle and upper troposphere, focusing on measurements of weakly absorbing pollutants. For this specific biomass burning event, we report simultaneous observations of an unprecedented number of organic species. The measurements of C₂H₄ (ethene), C₃H₄ (propyne), H₂CO (formaldehyde), C₃H₆O (acetone) and CH₃COO₂NO₂ (peroxyacetyl nitrate, abbreviated as PAN) are the first reported detections using infrared occultation spectroscopy from satellites. Based on the lifetimes of the emitted species, we discuss the photochemical age of the plume and also report, whenever possible, the enhancement ratios relative to CO.


2021 ◽  
Vol 11 (1) ◽  
pp. 21-32
Author(s):  
Cristian Alexis Murillo Martínez ◽  
William Mauricio Agudelo

The accuracy of earthquake location methods depends on the quality of the input data. In the real world, several sources of uncertainty, such as incorrect velocity models, low signal-to-noise ratio (SNR), and poor station coverage, affect the solution. Furthermore, some complex seismic signals lack distinguishable phases, so conventional location methods are not applicable to them. In this work, we conducted a sensitivity analysis of Back-Projection Imaging (BPI), a technique suitable for locating conventional seismicity, induced seismicity, and tremor-like signals. We performed a study in which synthetic data are modelled as fixed-spectrum explosive sources. The purpose of using such simplified signals is to fully understand the mechanics of the location method in controlled scenarios, where each parameter can be freely perturbed so that its individual effect on the outcome is shown separately. The results suggest the need for data conditioning, such as noise removal, to improve image resolution and minimize artifacts. Processing lower-frequency signals increases stability, while higher frequencies improve accuracy. In addition, good azimuthal coverage reduces the spatial location error of seismic events; according to our findings, depth is the spatial coordinate most sensitive to velocity and geometry changes.
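The shift-and-stack principle behind BPI can be sketched as follows, under simplifying assumptions not made in the study itself (constant velocity, known origin time, 2-D surface geometry): for every candidate grid point, sum each station's trace at the predicted travel time and take the grid point with the largest stack.

```python
import numpy as np

def back_project(traces, t, stations, grid, v):
    """Shift-and-stack back-projection: for every candidate grid point,
    sum each station's envelope at the predicted travel time and return
    the point with the largest stack (constant velocity v, origin time
    fixed at zero for simplicity)."""
    dt = t[1] - t[0]
    stack = np.zeros(len(grid))
    for i, g in enumerate(grid):
        for s, trace in zip(stations, traces):
            j = int(round(np.linalg.norm(g - s) / v / dt))  # travel time -> sample
            if j < trace.size:
                stack[i] += trace[j]
    return grid[int(np.argmax(stack))]

# Synthetic test: six stations around a source at (2, 3) km, v = 3 km/s.
rng = np.random.default_rng(3)
stations = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, -2], [-2, 5]], float)
src = np.array([2.0, 3.0])
t = np.arange(0.0, 8.0, 0.01)
traces = [np.exp(-((t - np.linalg.norm(src - s) / 3.0) / 0.05) ** 2)
          + 0.05 * rng.standard_normal(t.size) for s in stations]
xs, ys = np.meshgrid(np.arange(0.0, 10.5, 0.5), np.arange(0.0, 10.5, 0.5))
grid = np.column_stack([xs.ravel(), ys.ravel()])
loc = back_project(traces, t, stations, grid, 3.0)
print(loc)  # recovers a point near (2.0, 3.0)
```

Perturbing the velocity, station geometry, or noise level in this toy setup reproduces, in miniature, the kind of sensitivity experiment the paper carries out.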


Geophysics ◽  
2016 ◽  
Vol 81 (2) ◽  
pp. KS71-KS91 ◽  
Author(s):  
Jubran Akram ◽  
David W. Eaton

We have evaluated arrival-time picking algorithms for downhole microseismic data. The picking algorithms that we considered may be classified as window-based single-level methods (e.g., energy-ratio [ER] methods), nonwindow-based single-level methods (e.g., Akaike information criterion), multilevel- or array-based methods (e.g., crosscorrelation approaches), and hybrid methods that combine a number of single-level methods (e.g., Akazawa’s method). We have determined the key parameters for each algorithm and developed recommendations for optimal parameter selection based on our analysis and experience. We evaluated the performance of these algorithms with the use of field examples from a downhole microseismic data set recorded in western Canada as well as with pseudo-synthetic microseismic data generated by adding 100 realizations of Gaussian noise to high signal-to-noise ratio microseismic waveforms. ER-based algorithms were found to be more efficient in terms of computational speed and were therefore recommended for real-time microseismic data processing. Based on the performance on pseudo-synthetic and field data sets, we found statistical, hybrid, and multilevel crosscorrelation methods to be more efficient in terms of accuracy and precision. Pick errors for S-waves are reduced significantly when data are preconditioned by applying a transformation into ray-centered coordinates.
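A window-based energy-ratio picker of the kind evaluated here can be sketched generically (the window length and synthetic trace below are illustrative, not the paper's parameterization): compare the signal energy in a window after each sample to the energy in a window before it, and pick where the ratio peaks.

```python
import numpy as np

def energy_ratio_pick(x, ne):
    """Energy-ratio picker: at each sample, the ratio of the energy in a
    window after the sample to the energy in a window before it; the pick
    is where this ratio peaks (ne = window length in samples)."""
    c = np.concatenate([[0.0], np.cumsum(x.astype(float) ** 2)])
    er = np.zeros(x.size)
    for i in range(ne, x.size - ne):
        pre = c[i] - c[i - ne]          # energy in x[i-ne:i]
        post = c[i + ne] - c[i]         # energy in x[i:i+ne]
        er[i] = post / (pre + 1e-12)
    return int(np.argmax(er))

# Synthetic trace: background noise, then an arrival at sample 500.
rng = np.random.default_rng(4)
x = 0.1 * rng.standard_normal(1000)
x[500:] += np.sin(0.3 * np.arange(500)) * np.exp(-np.arange(500) / 150.0)
pick = energy_ratio_pick(x, ne=50)
print(pick)  # close to sample 500
```

This is the simplest member of the ER family; the variants compared in the paper differ mainly in how the windows are shaped and how the ratio is post-processed.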


2018 ◽  
Vol 616 ◽  
pp. A174 ◽  
Author(s):  
L. Pentericci ◽  
R. J. McLure ◽  
B. Garilli ◽  
O. Cucciati ◽  
P. Franzetti ◽  
...  

This paper describes the observations and the first data release (DR1) of the ESO public spectroscopic survey “VANDELS, a deep VIMOS survey of the CANDELS CDFS and UDS fields”. The main targets of VANDELS are star-forming galaxies at redshift 2.4 < z < 5.5, an epoch when the Universe had not yet reached 20% of its current age, and massive passive galaxies in the range 1 < z < 2.5. By adopting a strategy of ultra-long exposure times, ranging from a minimum of 20 h to a maximum of 80 h per source, VANDELS is specifically designed to be the deepest-ever spectroscopic survey of the high-redshift Universe. Exploiting the red sensitivity of the refurbished VIMOS spectrograph, the survey is obtaining ultra-deep optical spectroscopy covering the wavelength range 4800–10 000 Å with a sufficiently high signal-to-noise ratio to investigate the astrophysics of high-redshift galaxy evolution via detailed absorption line studies of well-defined samples of high-redshift galaxies. VANDELS-DR1 is the release of all medium-resolution spectroscopic data obtained during the first season of observations, on a 0.2 square degree area centered around the CANDELS-CDFS (Chandra deep-field south) and CANDELS-UDS (ultra-deep survey) areas. It includes data for all galaxies for which the total (or half of the total) scheduled integration time was completed. The DR1 contains 879 individual objects, approximately half in each of the two fields, that have a measured redshift, with the highest reliable redshifts reaching zspec ~ 6. In DR1 we include fully wavelength-calibrated and flux-calibrated 1D spectra, the associated error spectrum and sky spectrum, and the associated wavelength-calibrated 2D spectra. We also provide a catalog with the essential galaxy parameters, including spectroscopic redshifts and redshift quality flags measured by the collaboration. 
We present the survey layout and observations, the data reduction and redshift measurement procedure, and the general properties of the VANDELS-DR1 sample. In particular, we discuss the spectroscopic redshift distribution and the accuracy of the photometric redshifts for each individual target category, and we provide some examples of data products for the various target types and the different quality flags. All VANDELS-DR1 data are publicly available and can be retrieved from the ESO archive. Two further data releases are foreseen in the next two years, and a final data release is currently scheduled for June 2020, which will include an improved rereduction of the entire spectroscopic data set.

