Fast acquisition of focal mechanism based on statistical analysis

Author(s):  
Marisol Monterrubio-Velasco ◽  
José Carlos Carrasco-Jimenez ◽  
Otilio Rojas ◽  
Juan Esteban Rodríguez ◽  
Josep de la Puente

Earthquake and tsunami early warning systems and post-event urgent computing simulations require fast and accurate quantification of earthquake parameters such as magnitude, location, and Focal Mechanism (FM). Methodologies to estimate earthquake location and magnitude are well established and in place. However, automatic FM solutions are not always provided by operational institutions and are, in some cases, available only after a time-consuming inversion of the waveforms needed to determine the moment tensor components. This precludes urgent seismic simulations, which aim to provide ground-shaking maps under severe time constraints. We propose a new strategy for fast (<60 s) determination of FMs based on historical data sets, which we tested in five active seismic regions: Japan, New Zealand, California, Iceland, and Italy. The methodology applies the k-nearest neighbors algorithm in the spatial domain to search the data set for the most similar FMs. In our research, we focus on moderate to large earthquakes. The comparison algorithm considers the four closest events, plus a hypothetical event built from the median strike, dip, and rake values of the k neighbors. The validation stage uses the minimum rotated angle to measure the similarity between pairs of FMs. We identify three model parameters that control the statistical similarity results: the minimum number of neighbors, the threshold radius that defines the neighboring sphere, and the magnitude threshold. Our fast methodology agrees with traditional inversion mechanisms in 75%-90% of cases, depending on the particular tectonic region and data set size. Our work is a key component of an urgent computing workflow, where the FM information will be used as input for ground motion simulations. Future work will assess the sensitivity of the resulting ground-shaking maps to FM uncertainty.
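
As a rough illustration of the neighbour-search step, the sketch below retrieves the k closest historical mechanisms with a k-d tree and forms the hypothetical median event. The function name, the local Cartesian coordinates, and the 50 km default radius are assumptions, and the paper's minimum-rotated-angle validation is not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_fm(catalog_xyz, catalog_sdr, query_xyz, k=4, r_max=50.0):
    """Return the k nearest historical FMs and a hypothetical 'median event'.

    catalog_xyz : (N, 3) event coordinates in km (local Cartesian, assumed)
    catalog_sdr : (N, 3) strike, dip, rake in degrees
    query_xyz   : (3,) coordinates of the new event
    r_max       : neighbouring-sphere radius in km (a tunable model parameter)
    """
    tree = cKDTree(catalog_xyz)
    dist, idx = tree.query(query_xyz, k=k, distance_upper_bound=r_max)
    valid = np.isfinite(dist)            # query() pads misses with inf
    if valid.sum() < k:                  # too few neighbours inside the sphere
        return None
    neighbours = catalog_sdr[idx[valid]]
    # Note: a plain median ignores angle wrap-around; circular statistics
    # would be safer for strike and rake.
    median_event = np.median(neighbours, axis=0)
    return neighbours, median_event
```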

2021 ◽  
Author(s):  
Alberto Armigliato ◽  
Martina Zanetti ◽  
Stefano Tinti ◽  
Filippo Zaniboni ◽  
Glauco Gallotti ◽  
...  

It is well known that, for earthquake-generated tsunamis impacting near-field coastlines, the focal mechanism, the position of the fault with respect to the coastline, and the on-fault slip distribution are key factors in determining the efficiency of the generation process and the distribution of the maximum run-up and inundation along the nearby coasts. The time needed to obtain this information from the analysis of seismic records is usually too long compared to the time required to issue a timely tsunami warning/alert for the nearest coastlines. In the context of tsunami early warning systems, a big challenge is hence to define 1) the relative position of the hypocenter and the fault and 2) the earthquake focal mechanism, based only on the preliminary earthquake localization and magnitude estimation made available by seismic networks soon after the earthquake occurs.

In this study, the intrinsic unpredictability of the position of the hypocenter on the fault plane is studied through a probabilistic approach based on the analysis of two finite-fault model datasets (SRCMOD and USGS), limiting the analysis to moderate-to-large shallow earthquakes (Mw ≥ 6 and depth ≤ 50 km). After a homogenization procedure needed to define a common geometry for all samples in the two datasets, the hypocentral positions are fitted with different probability density functions (PDFs) separately in the along-dip and along-strike directions.

Regarding the focal mechanism determination, different approaches have been tested; the most successful is restricted to subduction-type earthquakes. It defines average values and uncertainties for the strike, dip and rake angles by combining a zonation of the main tsunamigenic subduction areas worldwide with subduction zone geometries available from public databases.

The general workflow that we propose can be schematically outlined as follows. Once an earthquake occurs and the magnitude and hypocentral solutions are made available by seismic networks, the focal mechanism can be assigned by selecting the characteristic values of strike, dip and rake of the zone into which the hypocenter falls. Fault length and width, as well as the slip distribution on the fault plane, are computed through regression laws against magnitude proposed by previous studies. The resulting rectangular fault plane can be discretized into a matrix of subfaults: the position of the center of each subfault can be considered a “realization” of the hypocenter position, which can then be assigned a probability. In this way, we can define a number of earthquake fault scenarios, each with an assigned probability, and run tsunami numerical simulations for each scenario to quantify the classical observables, such as water-elevation time series at selected offshore/coastal tide gauges, flow depth, run-up, and inundation distance. The final results can be provided as probabilistic distributions of the different observables.

The general approach, which is still at a proof-of-concept stage, is applied to the 16 September 2015 Illapel (Chile) tsunamigenic earthquake (Mw = 8.2). The comparison with the available tsunami observations is discussed, with special attention devoted to the early-warning perspective.
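
The subfault discretization and probability assignment can be sketched as follows. The beta PDFs stand in for the actual distributions fitted to the SRCMOD and USGS data, and the independence of along-strike and along-dip positions is an assumption.

```python
import numpy as np
from scipy import stats

def hypocentre_scenarios(L, W, nx, nz,
                         pdf_strike=stats.beta(2.0, 2.0),   # placeholder fits
                         pdf_dip=stats.beta(2.5, 1.8)):
    """Discretise an L x W (km) fault into nx x nz subfaults and assign each
    centre a probability of being the hypocentre 'realization', using
    independent along-strike and along-dip PDFs on normalised [0, 1]."""
    xs = (np.arange(nx) + 0.5) / nx           # normalised subfault centres
    zs = (np.arange(nz) + 0.5) / nz
    px = pdf_strike.pdf(xs); px /= px.sum()   # discretised marginal weights
    pz = pdf_dip.pdf(zs);    pz /= pz.sum()
    prob = np.outer(pz, px)                   # (nz, nx), sums to 1
    return xs * L, zs * W, prob
```

Each scenario's simulated observables can then be weighted by `prob` to build the probabilistic distributions of run-up, flow depth, and inundation distance.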


Geophysics ◽  
2020 ◽  
pp. 1-74
Author(s):  
Han Li ◽  
Xu Chang ◽  
Xiao-Bi Xie ◽  
Yibo Wang

Through the study of microseismic focal mechanisms, information such as fracture orientation, event magnitude, and in-situ stress status can be quantitatively obtained, thus providing a reliable basis for unconventional oil and gas exploration. Most source inversion methods assume that the medium is isotropic. However, hydraulic fracturing is usually conducted in sedimentary rocks, which often exhibit strong anisotropy. Neglecting this anisotropy may introduce errors into focal mechanism inversion results. We propose a microseismic focal mechanism inversion method that accounts for velocity anisotropy in a vertically transversely isotropic (VTI) medium. To generate synthetic data, we adopt the moment-tensor model to represent microearthquake sources and use a staggered-grid finite-difference (SGFD) method to calculate synthetic seismograms in anisotropic media. We perform seismic moment-tensor (SMT) inversion with P-waves only, by matching synthetic and observed waveforms. Both synthetic and field datasets are used to test the inversion method. For the field dataset, we investigate the inversion stability using randomly selected subsets of the data. We pay special attention to the sensitivity of the inversion, testing and evaluating the impact of noise in the data and of errors in the model parameters (VP0, ε, and δ) on the SMT inversion using synthetic datasets. The results indicate that, for a surface acquisition system, the proposed method can tolerate moderate noise in the data, while deviations in the anisotropy parameters can cause errors in the SMT inversion, especially for dip-slip events and the inverted percentages of non-double-couple components. According to our study, including anisotropy in the model is important for obtaining reliable non-double-couple components of moment tensors for hydraulic-fracturing-induced microearthquakes.
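
Once the Green's functions have been computed in the anisotropic model, matching P waveforms for the six independent moment-tensor components reduces to linear least squares. A minimal sketch under that assumption (the matrix layout and function names are illustrative, and the SGFD forward modelling is not reproduced):

```python
import numpy as np

def invert_smt(G, d):
    """Least-squares seismic moment-tensor (SMT) inversion.

    G : (n_samples, 6) Green's-function responses, one column per independent
        moment-tensor component (computed with an anisotropy-aware solver)
    d : (n_samples,) concatenated observed P waveforms
    """
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m              # Mxx, Myy, Mzz, Mxy, Mxz, Myz

def variance_reduction(d_obs, d_syn):
    """Waveform-fit quality (1 = perfect match), a common acceptance measure."""
    return 1.0 - np.sum((d_obs - d_syn) ** 2) / np.sum(d_obs ** 2)
```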


2020 ◽  
Vol 36 (2) ◽  
pp. 700-717 ◽  
Author(s):  
Damian N Grant

A parametric mathematical form of vulnerability function is developed that gives a full probabilistic description of losses as a function of earthquake ground-shaking intensity. The model is intended to be used with any loss measure that can take values between 0% and 100% inclusive, such as normalized financial losses (damage ratios), human casualty rates, or debris cover. It is a mixed discrete-continuous probability distribution, in that it assigns a discrete probability mass to experiencing exactly 0% or 100% loss, and a continuous probability density to values in between. The model can be used with empirical or analytical loss data. Two possible regression approaches are presented, and Monte Carlo analysis is used to demonstrate that the regressions give unbiased estimates of the model parameters. Finally, the model is applied to a data set of debris cover percentages estimated from detailed finite element analysis of Dutch unreinforced masonry buildings.
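
A minimal sketch of such a mixed discrete-continuous loss distribution, using a beta density for the continuous part (one plausible choice, not necessarily the paper's); in the full vulnerability function these four parameters would themselves vary with ground-shaking intensity:

```python
import numpy as np
from scipy import stats

class MixedLoss:
    """Point masses at exactly 0% and 100% loss plus a beta density between.

    p0, p1 : probabilities of exactly zero and exactly total loss
    a, b   : beta shape parameters for the continuous part
    """
    def __init__(self, p0, p1, a, b):
        assert p0 >= 0 and p1 >= 0 and p0 + p1 <= 1
        self.p0, self.p1, self.beta = p0, p1, stats.beta(a, b)

    def sample(self, n, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        u = rng.uniform(size=n)
        x = self.beta.rvs(size=n, random_state=rng)
        x[u < self.p0] = 0.0              # exactly 0% loss
        x[u > 1.0 - self.p1] = 1.0        # exactly 100% loss
        return x

    def mean(self):
        # E[L] = p1 * 1 + (1 - p0 - p1) * E[beta part]
        return self.p1 + (1.0 - self.p0 - self.p1) * self.beta.mean()
```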


2021 ◽  
Author(s):  
Enrico Baglione ◽  
Alessandro Amato ◽  
Beatriz Brizuela ◽  
Hafize Basak Bayraktar ◽  
Stefano Lorito ◽  
...  

We present a tsunami source solution for the 2 May 2020 Mw 6.6 earthquake that occurred about 80 km offshore, south of Crete, on the shallow portion of the Hellenic Arc Subduction Zone (HASZ). This earthquake generated a small local tsunami recorded by the Ierapetra tide gauge on the southern coast of Crete. We used this single marigram to constrain the main features of the causative rupture. We modelled synthetic tsunami waveforms and measured their misfit against the observed data for each set of source parameters, scanning systematically around the values constrained by some of the available moment tensors.

In attempting to discriminate between the two auxiliary fault planes of the moment tensor solutions, our results identify a shallow, steeply dipping back-thrust fault as the lowest-misfit source of this earthquake. However, a rupture on a lower-angle fault, possibly a splay fault of the subduction interface, with a sinistral component due to the oblique convergence on this segment of the HASZ, cannot be ruled out.

These results are relevant to tsunami hazard assessment and Tsunami Early Warning Systems. In these frameworks, in addition to the subduction interface and possible ruptures on splay faults, other rupture types, such as those on secondary structures of the subduction system, cannot be excluded a priori. This bears important consequences because, like splay faulting, back-thrust faulting might enhance the tsunamigenic potential where the subduction interface itself is less tsunamigenic due to oblique convergence.
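
The systematic scan over source parameters amounts to a grid search that minimizes the misfit between observed and synthetic marigrams. A hedged sketch, where `forward` stands in for the actual tsunami simulation:

```python
import itertools
import numpy as np

def best_source(strikes, dips, rakes, depths, observed, forward):
    """Scan source-parameter combinations and keep the lowest-misfit one.

    `forward` stands in for the tsunami simulation returning a synthetic
    marigram (same sampling as `observed`) for one parameter set.
    """
    best, best_misfit = None, np.inf
    for s, d, r, z in itertools.product(strikes, dips, rakes, depths):
        synthetic = forward(strike=s, dip=d, rake=r, depth=z)
        misfit = np.sqrt(np.mean((observed - synthetic) ** 2))  # RMS misfit
        if misfit < best_misfit:
            best, best_misfit = (s, d, r, z), misfit
    return best, best_misfit
```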


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, Lorenz, Bonferroni and Zenga curves, probability-weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters are estimated by maximum likelihood, and the behaviour of these estimates is examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
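
As a hedged illustration, the sketch below assumes the exponentiated half-logistic-G construction F(x) = [(1 − Ḡ(x))/(1 + Ḡ(x))]^λ with a Lomax baseline survival Ḡ(x) = (1 + x/β)^(−α), which should be checked against the paper's definition, and fits the three parameters by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

def ehl_lomax_pdf(x, lam, alpha, beta):
    """Density under the assumed construction (verify against the paper)."""
    g = (1.0 + x / beta) ** (-alpha)                           # Lomax survival
    dg = (alpha / beta) * (1.0 + x / beta) ** (-alpha - 1.0)   # -dg/dx
    return lam * ((1.0 - g) / (1.0 + g)) ** (lam - 1.0) * 2.0 * dg / (1.0 + g) ** 2

def fit_mle(x):
    """Maximum-likelihood fit; log-parameterisation enforces positivity."""
    def nll(p):
        lam, alpha, beta = np.exp(p)
        return -np.sum(np.log(ehl_lomax_pdf(x, lam, alpha, beta) + 1e-300))
    res = minimize(nll, x0=np.zeros(3), method="Nelder-Mead")
    return dict(zip(("lambda", "alpha", "beta"), np.exp(res.x)))
```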


Author(s):  
D Spallarossa ◽  
M Cattaneo ◽  
D Scafidi ◽  
M Michele ◽  
L Chiaraluce ◽  
...  

Summary The 2016–17 central Italy earthquake sequence began with the first mainshock near the town of Amatrice on August 24 (MW 6.0), and was followed by two subsequent large events near Visso on October 26 (MW 5.9) and Norcia on October 30 (MW 6.5), plus a cluster of four events with MW > 5.0 within a few hours on January 18, 2017. The affected area had been monitored before the sequence started by the permanent Italian National Seismic Network (RSNC), which was enhanced during the sequence by temporary stations deployed by the National Institute of Geophysics and Volcanology and the British Geological Survey. By the middle of September, there was a dense network of 155 stations, with a mean separation in the epicentral area of 6–10 km, comparable to the most likely earthquake depth range in the region. This network configuration was kept stable for an entire year, producing 2.5 TB of continuous waveform recordings. Here we describe how these data were used to develop a large and comprehensive earthquake catalogue using the Complete Automatic Seismic Processor (CASP) procedure. This procedure detected more than 450,000 events in the year following the first mainshock and determined their phase arrival times through an advanced picker engine (RSNI-Picker2), producing a set of about 7 million P- and 10 million S-wave arrival times. These were then used to locate the events with a non-linear location (NLL) algorithm, a 1D velocity model calibrated for the area, and station corrections, and then to compute their local magnitudes (ML). The procedure was validated by comparing the derived phase picks and earthquake parameters with a handpicked reference catalogue (hereinafter referred to as ‘RefCat’). The automated procedure takes less than 12 hours on an Intel Core-i7 workstation to analyse the primary waveform data and to detect and locate 3,000 events on the most seismically active day of the sequence. This proves the concept that the CASP algorithm can provide effectively real-time data for input into daily operational earthquake forecasts. The results show significant improvements over RefCat, which was obtained for the same period using manual phase picks. The number of detected and located events is higher (from 84,401 to 450,000), the magnitude of completeness is lower (from ML 1.4 to 0.6), and the number of phase picks is greater, with an average of 72 picked arrivals for an ML = 1.4 event compared with 30 phases for RefCat using manual picking. These propagate into formal uncertainties of ±0.9 km in epicentral location and ±1.5 km in depth for the vast majority of events in the enhanced catalogue. Together, these provide a significant improvement in the resolution of fine structures such as local planar structures and clusters, in particular the identification of shallow events occurring in parts of the crust previously thought to be inactive. The lower completeness magnitude provides a rich data set for developing and testing techniques for analysing the evolution of seismic sequences, including real-time operational monitoring of the b-value, time-dependent hazard evaluation, and aftershock forecasting.
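
Two of the catalogue-quality measures mentioned above, the completeness magnitude and the b-value, have standard estimators that are easy to sketch (the CASP internals are not reproduced here):

```python
import numpy as np

def completeness_mc(mags, dm=0.1):
    """Magnitude of completeness by maximum curvature: the modal magnitude
    bin (simple and common, though it tends to underestimate Mc slightly)."""
    edges = np.arange(mags.min(), mags.max() + dm, dm)
    counts, edges = np.histogram(mags, bins=edges)
    return edges[np.argmax(counts)]

def b_value_mle(mags, mc, dm=0.1):
    """Aki's maximum-likelihood b-value above Mc, with Utsu's dm/2 binning
    correction; returns the estimate and the number of events used."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0)), m.size
```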


2017 ◽  
Vol 37 (1) ◽  
pp. 1-12 ◽  
Author(s):  
Haluk Ay ◽  
Anthony Luscher ◽  
Carolyn Sommerich

Purpose: The purpose of this study is to design and develop a testing device to simulate the interaction between human hand–arm dynamics, right-angle (RA) computer-controlled power torque tools, and joint-tightening task-related variables.

Design/methodology/approach: The testing rig can simulate a variety of tools, tasks and operator conditions. The device includes custom data-acquisition electronics and graphical user interface-based software. The simulation of the human hand–arm dynamics is based on the rig's four-bar-mechanism design and mechanical components that provide adjustable stiffness (via a pneumatic cylinder) and mass (via plates) and non-adjustable damping. The stiffness and mass values used are based on an experimentally validated hand–arm model that includes a database of model parameters covering gender and working posture, derived from experienced tool operators in a prior study.

Findings: The rig measures tool handle force and displacement responses simultaneously. Peak force and displacement coefficients of determination (R²) between rig estimations and human testing measurements were 0.98 and 0.85, respectively, for the same set of tools, tasks and operator conditions. The rig also provides predicted tool-operator acceptability ratings, using a data set from a prior study of discomfort in experienced operators during torque tool use.

Research limitations/implications: Deviations from linearity may influence handle force and displacement measurements. Stiction (Coulomb friction) in the overall rig, as well as in the air-cylinder piston, is neglected. The rig's mechanical damping is not adjustable, despite the fact that human hand–arm damping varies with gender and working posture. Deviations from these assumptions may affect the correlation of the handle force and displacement measurements with those of human testing for the same tool, task and operator conditions.

Practical implications: This test rig will allow rapid assessment of the ergonomic performance of DC torque tools, saving considerable time in lineside applications and reducing the risk of worker injury. DC torque tools are an extremely effective way of increasing production rate and improving torque accuracy. Being a complex dynamic system, however, the performance of DC torque tools varies with each application. Changes in worker mass, damping and stiffness, as well as joint stiffness and tool program, make each application unique. This test rig models all of these factors and allows quick assessment.

Social implications: The use of this test rig will help to identify and understand risk factors that contribute to musculoskeletal disorders (MSDs) associated with the use of torque tools. Tool operators are subjected to large impulsive handle reaction forces as joint torque builds up while tightening a fastener. Repeated exposure to such forces is associated with muscle soreness, fatigue and physical stress, which are also risk factors for upper-extremity injuries (MSDs; e.g. tendinosis, myofascial pain). Eccentric exertions are known to cause damage to muscle tissue in untrained individuals and to affect subsequent performance.

Originality/value: The rig provides a novel means for quantitative, repeatable dynamic evaluation of RA powered torque tools and objective selection of tightening programs. Compared to current static tool-assessment methods, dynamic testing provides a more realistic assessment relative to the tool operator's experience. This may lead to improvements in tool or controller design and a reduction in associated musculoskeletal discomfort in operators.
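
The rig's adjustable mass-stiffness-damping emulation of the hand–arm can be caricatured as a single-degree-of-freedom oscillator driven by the handle reaction force. The parameter values and pulse shape below are illustrative, not taken from the validated model database:

```python
import numpy as np
from scipy.integrate import solve_ivp

def handle_response(m, c, k, force, t_end=0.5):
    """Single-DOF surrogate of the rig: m*x'' + c*x' + k*x = force(t),
    with adjustable mass m (plates) and stiffness k (pneumatic cylinder),
    and fixed damping c. Returns time and handle displacement."""
    def rhs(t, y):
        x, v = y
        return [v, (force(t) - c * v - k * x) / m]
    sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], max_step=1e-3)
    return sol.t, sol.y[0]

# Illustrative torque build-up: reaction force ramps to 100 N over 50 ms,
# then releases at 0.2 s (values are hypothetical).
t, x = handle_response(m=5.0, c=50.0, k=2.0e4,
                       force=lambda t: 100.0 * min(t / 0.05, 1.0) * (t < 0.2))
print(f"peak handle displacement: {1000.0 * x.max():.1f} mm")
```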


2020 ◽  
Vol 70 (1) ◽  
pp. 145-161 ◽  
Author(s):  
Marnus Stoltz ◽  
Boris Baeumer ◽  
Remco Bouckaert ◽  
Colin Fox ◽  
Gordon Hiscott ◽  
...  

Abstract We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with novel numerical algorithms. The diffusion approach allows for the analysis of data sets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the BEAST2 package. We conducted simulation experiments to assess numerical error, computational requirements, and accuracy in recovering known model parameters. A reanalysis of soybean SNP data demonstrates that the models implemented in Snapp and Snapper can be difficult to distinguish in practice, a characteristic which we tested with further simulations. We demonstrate the scale of analysis possible using a SNP data set sampled from 399 freshwater turtles in 41 populations. [Bayesian inference; diffusion models; multi-species coalescent; SNP data; species trees; spectral methods.]
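
For intuition about the underlying model, the sketch below simulates the neutral Wright-Fisher allele-frequency diffusion whose transition densities such methods approximate; the simulator itself is illustrative and not part of Snapper:

```python
import numpy as np

def wright_fisher_diffusion(x0, n_e, generations, dt=0.1, seed=1):
    """Euler-Maruyama simulation of the neutral Wright-Fisher diffusion,
    dx = sqrt(x(1 - x) / (2 * n_e)) dW, time measured in generations."""
    rng = np.random.default_rng(seed)
    steps = int(generations / dt)
    x = np.empty(steps + 1)
    x[0] = x0
    for i in range(steps):
        sigma = np.sqrt(max(x[i] * (1.0 - x[i]), 0.0) / (2.0 * n_e))
        x[i + 1] = np.clip(x[i] + sigma * np.sqrt(dt) * rng.normal(), 0.0, 1.0)
    return x
```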


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Helena Mouriño ◽  
Maria Isabel Barão

Missing-data problems are extremely common in practice. To achieve reliable inferential results, this feature of the data must be taken into account. Suppose that the univariate data set under analysis has missing observations. This paper examines the impact of selecting an auxiliary complete data set, whose underlying stochastic process is to some extent interdependent with the first, to improve the efficiency of the estimators of the relevant model parameters. The Vector AutoRegressive (VAR) model has proven to be an extremely useful tool for capturing the dynamics of bivariate time series. We propose maximum likelihood estimators for the parameters of the VAR(1) model under a monotone missing-data pattern, and derive the precision of these estimators. Afterwards, we compare the bivariate modelling scheme with its univariate counterpart: the univariate data set with missing observations is modelled by an AutoRegressive Moving Average model, ARMA(2,1). We also analyse the behaviour of the AutoRegressive model of order one, AR(1), due to its practical importance. We focus on the mean value of the main stochastic process. Through simulation studies, we conclude that the estimator based on the VAR(1) model is preferable to those derived in the univariate context.
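
For orientation, the sketch below simulates a zero-mean bivariate VAR(1) and recovers its coefficient matrix by least squares, which coincides with the MLE for complete Gaussian data; the paper's monotone-missing-data estimators generalise this complete-data case:

```python
import numpy as np

def simulate_var1(A, Sigma, n, seed=0):
    """Simulate a zero-mean bivariate VAR(1): y_t = A @ y_{t-1} + e_t."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)       # innovation covariance factor
    y = np.zeros((n, 2))
    for t in range(1, n):
        y[t] = A @ y[t - 1] + L @ rng.normal(size=2)
    return y

def estimate_var1(y):
    """Least-squares estimate of A (the Gaussian MLE with complete data)."""
    Y, X = y[1:], y[:-1]
    return np.linalg.solve(X.T @ X, X.T @ Y).T
```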


Geophysics ◽  
2016 ◽  
Vol 81 (4) ◽  
pp. U25-U38 ◽  
Author(s):  
Nuno V. da Silva ◽  
Andrew Ratcliffe ◽  
Vetle Vinje ◽  
Graham Conroy

Parameterization lies at the center of anisotropic full-waveform inversion (FWI) with multiparameter updates, because FWI aims to update both the long and short wavelengths of the model perturbations, and the parameterization must accommodate this. Recently, there has been an intensive effort to determine the optimal parameterization, with the discussion centered mainly on the analysis of radiation patterns, aiming to determine which parameterization is best suited for multiparameter inversion. We have developed a new parameterization in the scope of FWI, based on the concept of kinematically equivalent media, as originally proposed in other areas of seismic data analysis. Our analysis is also based on radiation patterns, as well as on the relation between perturbations of this set of parameters and perturbations in traveltime. The radiation pattern reveals that this parameterization combines some of the characteristics of parameterizations with one velocity and two Thomsen parameters and of parameterizations with two velocities and one Thomsen parameter. The study of traveltime perturbation with respect to model-parameter perturbation shows that the new parameterization relates these quantities less ambiguously than other, more commonly used parameterizations. We have concluded that our new parameterization is well suited for inverting diving waves, which are of paramount importance for carrying out practical FWI successfully. We have demonstrated that the new parameterization produces good inversion results with synthetic and real data examples. In the real-data example from the Central North Sea, the inverted models show good agreement with the geologic structures, leading to an improvement of the seismic image and of the flatness of the common image gathers.
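
For context, Thomsen's weak-anisotropy approximation shows how vP0, ε, and δ jointly control P-wave kinematics in a VTI medium, which is the trade-off any FWI parameterization must manage; the snippet below is a standard textbook formula, not the authors' new parameterization:

```python
import numpy as np

def vp_weak_vti(theta, vp0, eps, delta):
    """Thomsen's weak-anisotropy P-wave phase velocity in a VTI medium:
    v(theta) ~ vp0 * (1 + delta sin^2 cos^2 + eps sin^4),
    with theta measured from the vertical symmetry axis."""
    s2 = np.sin(theta) ** 2
    return vp0 * (1.0 + delta * s2 * (1.0 - s2) + eps * s2 ** 2)

# Vertical propagation feels vp0 alone, horizontal feels vp0 * (1 + eps):
# the kinematic ambiguity a parameterization must disentangle.
print(vp_weak_vti(np.deg2rad(np.array([0.0, 45.0, 90.0])), 3000.0, 0.2, 0.1))
```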

