Robust Empirical Time–Frequency Relations for Seismic Spectral Amplitudes, Part 2: Model Uncertainty and Optimal Parameterization

Author(s): Maryam Safarshahi, Igor B. Morozov

ABSTRACT In a companion article, Safarshahi and Morozov (2020) argued that the construction of distance- and frequency-dependent models for seismic-wave amplitudes should include four general elements: (1) a sufficiently detailed (parametric or nonparametric) model of frequency-independent spreading, capturing all essential features of the observations; (2) model parameters with well-defined and nonoverlapping physical meanings; (3) joint inversion for multiple parameters, including the geometrical spreading, Q, κ, and source and receiver couplings; and (4) the use of additional dataset-specific criteria of model quality while fitting the logarithms of seismic amplitudes. Some of these elements are present in existing models, but, taken together, they are poorly understood and require an integrated approach. Such an approach was illustrated by a detailed analysis of an S-wave amplitude dataset from southern Iran. The resulting model is based on a frequency-independent Q and matches the data more closely than conventional models across the entire epicentral-distance range. Here, we complete the analysis of this model by evaluating the uncertainties and trade-offs of its parameters. Two types of trade-offs are differentiated: one caused by a (possibly) limited model parameterization and the other due to statistical data errors. Data bootstrapping shows that, with adequate parameterization, the attenuation properties Q and κ and the geometrical-spreading parameters are resolved well and show only moderate trade-offs due to measurement errors. Using principal component analysis of these trade-offs, an optimal (trade-off-free) parameterization of seismic amplitudes is obtained. By contrast, when theoretical values are assumed for certain model parameters and multistep inversion procedures are used (as is commonly done), parameter trade-offs increase dramatically and become difficult to assess. In particular, the frequency-dependent Q correlates with the distribution of the source and receiver-site factors, and also with biases in the resulting median data residuals. In the new model, these trade-offs are removed using an improved parameterization of geometrical spreading, a constant Q, and model-quality constraints.
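As an illustration of the bootstrap-plus-PCA workflow described above, the following sketch (with entirely hypothetical numbers for the parameter means and bootstrap covariance) shows how principal components of bootstrapped parameter estimates yield an approximately trade-off-free re-parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bootstrap estimates of (spreading exponent, 1000/Q, kappa):
# the correlated scatter stands in for trade-offs caused by measurement errors.
n_boot = 500
mean = np.array([1.8, 0.5, 0.04])           # assumed central values
cov = np.array([[0.010, 0.004, 0.0006],
                [0.004, 0.009, 0.0005],
                [0.0006, 0.0005, 0.0001]])  # assumed bootstrap covariance
samples = rng.multivariate_normal(mean, cov, size=n_boot)

# Parameter uncertainties and correlations (trade-offs) from the bootstrap.
std = samples.std(axis=0)
corr = np.corrcoef(samples, rowvar=False)

# Principal component analysis: eigenvectors of the bootstrap covariance define
# linear parameter combinations that are statistically uncorrelated, i.e. an
# approximately trade-off-free re-parameterization.
evals, evecs = np.linalg.eigh(np.cov(samples, rowvar=False))
print("standard deviations:", std)
print("correlation matrix:\n", np.round(corr, 2))
print("PCA variances:", evals)
print("PCA axes (columns):\n", np.round(evecs, 2))
```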

Author(s): Maryam Safarshahi, Igor B. Morozov

ABSTRACT Empirical models of geometrical-, Q-, t-star, and kappa-type attenuation of seismic waves and ground-motion prediction equations (GMPEs) are viewed as cases of a common empirical standard model describing the variation of wave amplitudes with time and frequency. Compared with existing parametric and nonparametric approaches, several new features are included in this model: (1) flexible empirical parameterization with possibly nonmonotonic time or distance dependencies; (2) joint inversion for time or distance and frequency dependencies, source spectra, site responses, kappas, and Q; (3) additional constraints removing spurious correlations of model parameters and data residuals with source–receiver distances and frequencies; (4) possible kappa terms for sources as well as for receivers; (5) orientation-independent horizontal- and three-component amplitudes; and (6) adaptive filtering to reduce noise effects. The approach is applied to local and regional S-wave amplitudes in southeastern Iran. Comparisons with previous studies show that conventional attenuation models often contain method-specific biases caused by limited parameterizations of frequency-independent amplitude decays and by assumptions about the models, such as smoothness of amplitude variations. Without such assumptions, the frequency-independent spreading of S waves is much faster than inferred by conventional modeling. For example, transverse-component amplitudes decrease with travel time t approximately as t^−1.8 at distances closer than 90 km and as t^−2.5 beyond 115 km. The rapid amplitude decay at larger distances could be caused by scattering within the near surface. From about 90 to 115 km distance, the amplitude increases by a factor of about 3, which could be due to reflections from the Moho and within the crust. With more accurate geometrical-spreading and kappa models, the Q factor for the study area is frequency independent and exceeds 2000. The frequency-independent and Q-type attenuation for vertical-component and multicomponent amplitudes is somewhat weaker than for the horizontal components. These observations appear to be general and likely apply to other areas.
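A rough numerical illustration of an amplitude model of this form is sketched below. The spreading exponents, the factor-of-3 step between 90 and 115 km, and the Q value (set to 2000, the lower bound quoted above) follow the abstract; the S velocity, the κ value, and the smooth handling of the step are assumptions made only for this example.

```python
import numpy as np

# Illustrative S-wave amplitude model of the "empirical standard" form
#   A(t, f) = G(t) * exp(-pi * f * t / Q) * exp(-pi * kappa * f),
# with a piecewise frequency-independent spreading G(t).
VS = 3.5       # km/s, assumed S velocity for converting distance to travel time
Q = 2000.0     # lower bound quoted in the abstract
KAPPA = 0.04   # s, assumed illustrative value

def spreading(r_km):
    t = r_km / VS
    t90, t115 = 90.0 / VS, 115.0 / VS
    if r_km <= 90.0:
        return t ** -1.8
    g90 = t90 ** -1.8
    if r_km <= 115.0:
        # smooth factor-of-3 increase attributed to crustal and Moho reflections
        frac = (r_km - 90.0) / 25.0
        return g90 * (1.0 + 2.0 * frac)
    return 3.0 * g90 * (t / t115) ** -2.5

def amplitude(r_km, f_hz):
    t = r_km / VS
    return spreading(r_km) * np.exp(-np.pi * f_hz * t / Q) * np.exp(-np.pi * KAPPA * f_hz)

for r in (30, 60, 90, 115, 200):
    print(r, "km:", amplitude(r, f_hz=5.0))
```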


Author(s): Geir Evensen

Abstract. It is common to formulate the history-matching problem using Bayes' theorem. From Bayes' theorem, the conditional probability density function (pdf) of the uncertain model parameters is proportional to the prior pdf of the model parameters multiplied by the likelihood of the measurements. The static model parameters are random variables characterizing the reservoir model, while the observations include, e.g., historical rates of oil, gas, and water produced from the wells. The reservoir prediction model is assumed perfect, and there are no errors besides those in the static parameters. However, this formulation is flawed. The historical rate data only approximately represent the real production of the reservoir and contain errors. History-matching methods usually take these errors into account in the conditioning but neglect them when forcing the simulation model with the observed rates during the historical integration. Thus, the model prediction depends on some of the same data used in the conditioning. The paper presents a formulation of Bayes' theorem that accounts for this data dependency of the simulation model. In the new formulation, one must update both the poorly known model parameters and the rate-data errors. The result is an improved posterior ensemble of prediction models that better covers the observations, with larger and more realistic uncertainty. The implementation accounts correctly for correlated measurement errors and demonstrates the critical role of these correlations in reducing the magnitude of the update. The paper also shows the consistency of the subspace inversion scheme by Evensen (Ocean Dyn. 54, 539–560, 2004) in the case with correlated measurement errors and demonstrates its accuracy when using a "larger" ensemble of perturbations to represent the measurement error covariance matrix.
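The sketch below is a generic ensemble-smoother analysis step with a correlated measurement-error covariance, included only to show where such correlations enter the update; it is not Evensen's subspace implementation, and all dimensions and covariances are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble smoother analysis step with *correlated* measurement errors.
n_par, n_obs, n_ens = 5, 8, 200

X = rng.normal(size=(n_par, n_ens))      # prior parameter ensemble
Y = rng.normal(size=(n_obs, n_ens))      # predicted observations (forward model output)
d_obs = rng.normal(size=n_obs)           # observed rates

# Correlated measurement-error covariance C_dd (exponential correlation model).
idx = np.arange(n_obs)
C_dd = 0.3 ** 2 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3.0)

# Perturbed observations drawn with the *same* correlated covariance.
D = d_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), C_dd, size=n_ens).T

# Ensemble covariances and Kalman-type update of the parameters.
Xp = X - X.mean(axis=1, keepdims=True)
Yp = Y - Y.mean(axis=1, keepdims=True)
C_xy = Xp @ Yp.T / (n_ens - 1)
C_yy = Yp @ Yp.T / (n_ens - 1)
K = C_xy @ np.linalg.solve(C_yy + C_dd, np.eye(n_obs))
X_post = X + K @ (D - Y)

print("prior spread:    ", X.std(axis=1))
print("posterior spread:", X_post.std(axis=1))
```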


Author(s): Fabio Sabetta, Antonio Pugliese, Gabriele Fiorentino, Giovanni Lanzano, Lucia Luzi

Abstract. This work presents an up-to-date model for the simulation of non-stationary ground motions, including several novelties compared to the original study of Sabetta and Pugliese (Bull Seism Soc Am 86:337–352, 1996). The selection of the input motion in the framework of earthquake engineering has become progressively more important with the growing use of nonlinear dynamic analyses. Despite the increasing availability of large strong-motion databases, ground-motion records are not always available for a given earthquake scenario and site condition, requiring the adoption of simulated time series. Among the different techniques for the generation of ground-motion records, we focused on methods based on stochastic simulations that consider the time–frequency decomposition of the seismic ground motion. We updated the non-stationary stochastic model initially developed in Sabetta and Pugliese (Bull Seism Soc Am 86:337–352, 1996) and later modified by Pousse et al. (Bull Seism Soc Am 96:2103–2117, 2006) and Laurendeau et al. (Nonstationary stochastic simulation of strong ground-motion time histories: application to the Japanese database. 15 WCEE, Lisbon, 2012). The model is based on the S-transform, which implicitly considers both amplitude and frequency modulation. The four model parameters required for the simulation are Arias intensity, significant duration, central frequency, and frequency bandwidth. They were obtained from an empirical ground-motion model calibrated using the accelerometric records included in the updated Italian strong-motion database ITACA. The simulated accelerograms show a good match with the ground-motion model prediction of several amplitude and frequency measures, such as Arias intensity, peak acceleration, peak velocity, Fourier spectra, and response spectra.
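The following is a deliberately simplified stochastic-simulation sketch driven by the four parameters named above (Arias intensity, significant duration, central frequency, bandwidth). Unlike the S-transform-based model of the paper, it uses a fixed spectral shape with a simple amplitude envelope, and all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# Very simplified stochastic accelerogram: band-limited Gaussian noise with a
# time envelope, rescaled to a target Arias intensity. This is an illustration
# only, not the S-transform-based procedure of the paper.
dt, T = 0.005, 40.0
t = np.arange(0.0, T, dt)
Ia, D595, Fc, Fb = 0.3, 10.0, 3.0, 2.0   # assumed values (m/s, s, Hz, Hz)

# Gaussian spectral shape centered on Fc with bandwidth Fb.
n = t.size
freqs = np.fft.rfftfreq(n, dt)
spec = np.exp(-0.5 * ((freqs - Fc) / Fb) ** 2)
noise = np.fft.irfft(spec * np.fft.rfft(rng.normal(size=n)), n)

# Lognormal-shaped amplitude envelope whose bulk lasts roughly D595 seconds.
t0 = 0.1 * T
env = np.exp(-0.5 * ((np.log(np.maximum(t, dt)) - np.log(t0 + D595 / 2)) / 0.5) ** 2)
acc = env * noise

# Rescale so the simulated Arias intensity, pi/(2g) * integral(a^2 dt), matches Ia.
g = 9.81
Ia_sim = np.pi / (2 * g) * np.trapz(acc ** 2, dx=dt)
acc *= np.sqrt(Ia / Ia_sim)
print("target Ia:", Ia, " simulated Ia:", np.pi / (2 * g) * np.trapz(acc ** 2, dx=dt))
```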


Author(s): Mohammad-Reza Ashory, Farhad Talebi, Heydar R Ghadikolaei, Morad Karimpour

This study investigated the vibrational behaviour of a rotating two-blade propeller at different rotational speeds by using self-tracking laser Doppler vibrometry. Because the self-tracking method requires accurate adjustment of the test setup to reduce measurement errors, a test table with sufficient rigidity was designed and built to enable the adjustment and repair of test components. The results of the self-tracking test on the rotating propeller indicated an increase in natural frequency and a decrease in the amplitude of the normalized mode shapes as rotational speed increased. To assess the test results, a numerical model created in ABAQUS was used. The model parameters were tuned so that the natural frequencies and associated mode shapes were in good agreement with those obtained from a hammer test on a stationary propeller. The mode shapes obtained from the hammer test and from the numerical (ABAQUS) model were compared using the modal assurance criterion. The comparison indicated a strong resemblance between the hammer-test results and the numerical findings. Hence, the model can be employed to determine other mechanical properties of two-blade propellers in test scenarios.
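For reference, the modal assurance criterion (MAC) used to compare the hammer-test and ABAQUS mode shapes reduces to a short computation; the mode-shape vectors below are synthetic placeholders, not data from this study.

```python
import numpy as np

def mac(phi_a, phi_x):
    """Modal assurance criterion between two mode-shape vectors (0..1)."""
    num = np.abs(np.vdot(phi_a, phi_x)) ** 2
    return num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_x, phi_x).real)

# Placeholder mode shapes: columns are two modes sampled at 6 points on a blade.
phi_test = np.array([[0.1, 0.5], [0.3, 0.9], [0.6, 0.8],
                     [0.8, 0.1], [0.9, -0.6], [1.0, -1.0]])
phi_fem = phi_test + 0.05 * np.random.default_rng(3).normal(size=phi_test.shape)

mac_matrix = np.array([[mac(phi_test[:, i], phi_fem[:, j])
                        for j in range(phi_fem.shape[1])]
                       for i in range(phi_test.shape[1])])
print(np.round(mac_matrix, 3))  # values near 1 on the diagonal indicate matching modes
```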


Author(s): Giuseppe Vannini, Manish R. Thorat, Dara W. Childs, Mirko Libraschi

A numerical model developed by Thorat & Childs [1] has indicated that the conventional frequency-independent model for labyrinth seals is invalid for rotor surface velocities reaching a significant fraction of Mach 1. A theoretical one-control-volume (1CV) model, based on a leakage equation that yields reasonably good agreement with experimental results, is considered in the present analysis. The numerical model yields frequency-dependent rotordynamic coefficients for the seal. Three real centrifugal compressors are analyzed to compare stability predictions with and without the frequency-dependent labyrinth-seal model. Three different compressor services are selected to provide a comprehensive scenario in terms of pressure and molecular weight (MW). The molecular weight is very important for the Mach number calculation and, consequently, for the frequency-dependent nature of the coefficients. A hydrogen recycle application with MW around 8, a natural gas application with MW around 18, and finally a propane application with MW around 44 are selected for this comparison. Useful indications on the applicability range of frequency-dependent coefficients are given.
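A back-of-envelope sketch of why MW matters is given below: the rotor surface Mach number scales with the speed of sound, which varies as the inverse square root of MW. The rotor speed, diameter, temperature, and specific-heat ratios are assumed round numbers, not values from the paper.

```python
import numpy as np

R_UNIVERSAL = 8314.5  # J/(kmol*K)

def surface_mach(rpm, rotor_diameter_m, mw, gamma, temp_k):
    """Rotor surface velocity divided by the gas speed of sound."""
    u = np.pi * rotor_diameter_m * rpm / 60.0          # surface speed, m/s
    c = np.sqrt(gamma * (R_UNIVERSAL / mw) * temp_k)   # speed of sound, m/s
    return u / c

# Assumed, illustrative operating points for the three services compared in the paper.
cases = {
    "hydrogen recycle (MW ~ 8)": dict(mw=8.0,  gamma=1.40),
    "natural gas (MW ~ 18)":     dict(mw=18.0, gamma=1.30),
    "propane (MW ~ 44)":         dict(mw=44.0, gamma=1.13),
}
for name, gas in cases.items():
    m = surface_mach(rpm=12000, rotor_diameter_m=0.3, temp_k=320.0, **gas)
    print(f"{name}: surface Mach ~ {m:.2f}")
```

For the same rotor speed, the heavier gas gives a lower speed of sound and hence a surface Mach number several times higher, which is where frequency-dependent coefficients become important.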


2008, Vol. 5 (3), pp. 1641–1675
Author(s): A. Bárdossy, S. K. Singh

Abstract. The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of them yields a unique, definitively best parameter vector. The parameters of hydrological models depend upon the input data, whose quality cannot be assured because both input and state variables may contain measurement errors. In this study, a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational errors on the parameters, stochastically generated synthetic measurement errors were applied to the observed discharge and temperature data. The model was calibrated with these modified data, and the effect of the measurement errors on the parameters was analysed. It was found that the measurement errors have a significant effect on the best-performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on the half-space depth was used. The depth of each of the N randomly generated parameter vectors was calculated with respect to the set with the best model performance (Nash–Sutcliffe efficiency was used for this study). Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to this criterion have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany, using the conceptual HBV model.
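A compact sketch of the robustness idea follows: score random parameter vectors with the Nash–Sutcliffe efficiency, keep the best-performing set, and prefer vectors lying deep inside that set. Half-space (Tukey) depth is approximated here with random projections, and the two-parameter toy model merely stands in for HBV.

```python
import numpy as np

rng = np.random.default_rng(4)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (1 = perfect, <= 0 = no better than the mean)."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def tukey_depth(point, cloud, n_dir=500):
    """Approximate half-space depth of `point` within `cloud` via random projections."""
    dirs = rng.normal(size=(n_dir, cloud.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj_cloud = cloud @ dirs.T              # (n_points, n_dir)
    proj_point = point @ dirs.T              # (n_dir,)
    frac_above = (proj_cloud >= proj_point).mean(axis=0)
    return np.minimum(frac_above, 1.0 - frac_above).min()

# Placeholder 2-parameter "model" and synthetic observations.
obs = np.sin(np.linspace(0, 6, 200)) + 0.1 * rng.normal(size=200)
def toy_model(p):
    return p[0] * np.sin(np.linspace(0, 6, 200)) + p[1]

params = rng.uniform([-0.5, -1.0], [2.0, 1.0], size=(2000, 2))  # N random vectors
scores = np.array([nse(toy_model(p), obs) for p in params])
good = params[scores > np.quantile(scores, 0.9)]                # best-performing set

# Robust parameters: good performers that also lie deep inside the good set.
depths = np.array([tukey_depth(p, good) for p in good])
robust = good[depths >= np.quantile(depths, 0.75)]
print(len(robust), "robust vectors; deepest:", good[depths.argmax()])
```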


2021, Vol. 21 (8), pp. 2447–2460
Author(s): Stuart R. Mead, Jonathan Procter, Gabor Kereszturi

Abstract. The use of mass flow simulations in volcanic hazard zonation and mapping is often limited by model complexity (i.e. uncertainty in correct values of model parameters), a lack of model uncertainty quantification, and limited approaches to incorporate this uncertainty into hazard maps. When quantified, mass flow simulation errors are typically evaluated on a pixel-pair basis, using the difference between simulated and observed (“actual”) map-cell values to evaluate the performance of a model. However, these comparisons conflate location and quantification errors, neglecting possible spatial autocorrelation of evaluated errors. As a result, model performance assessments typically yield moderate accuracy values. In this paper, similarly moderate accuracy values were found in a performance assessment of three depth-averaged numerical models using the 2012 debris avalanche from the Upper Te Maari crater, Tongariro Volcano, as a benchmark. To provide a fairer assessment of performance and evaluate spatial covariance of errors, we use a fuzzy set approach to indicate the proximity of similarly valued map cells. This “fuzzification” of simulated results yields improvements in targeted performance metrics relative to a length scale parameter at the expense of decreases in opposing metrics (e.g. fewer false negatives result in more false positives) and a reduction in resolution. The use of this approach to generate hazard zones incorporating the identified uncertainty and associated trade-offs is demonstrated and indicates a potential use for informed stakeholders by reducing the complexity of uncertainty estimation and supporting decision-making from simulated data.
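A minimal grid-based sketch of the fuzzy comparison idea is given below: a simulated cell counts as a fuzzy hit if an observed cell of the same class lies within a length-scale neighbourhood. The footprints are synthetic placeholders, not the Te Maari benchmark data.

```python
import numpy as np

def fuzzy_hits(sim, obs, length_scale):
    """Fraction of simulated-positive cells with an observed-positive cell
    within `length_scale` cells (a simple fuzzy true-positive rate)."""
    sim_idx = np.argwhere(sim)
    obs_idx = np.argwhere(obs)
    if len(sim_idx) == 0 or len(obs_idx) == 0:
        return 0.0
    # distance from each simulated cell to its nearest observed cell
    d = np.sqrt(((sim_idx[:, None, :] - obs_idx[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return float((d <= length_scale).mean())

# Synthetic "observed" and "simulated" inundation footprints on a 50x50 grid;
# the simulated footprint is shifted to mimic a pure location error.
yy, xx = np.mgrid[0:50, 0:50]
obs = (xx - 25) ** 2 + (yy - 25) ** 2 < 10 ** 2
sim = (xx - 28) ** 2 + (yy - 27) ** 2 < 10 ** 2

for L in (0, 2, 5):
    print(f"length scale {L}: fuzzy hit rate = {fuzzy_hits(sim, obs, L):.2f}")
```

Increasing the length scale raises the targeted hit rate at the cost of spatial resolution, mirroring the trade-off discussed in the abstract.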


Author(s): Tomáš Gedeon, Lisa Davis, Katelyn Weber, Jennifer Thorenson

In this paper, we study the limitations imposed on the transcription process by the presence of short ubiquitous pauses and crowding. These effects are especially pronounced in highly transcribed genes such as ribosomal genes (rrn) in fast-growing bacteria. Our model indicates that the quantity and duration of pauses reported for protein-coding genes are incompatible with the average elongation rate observed in rrn genes. When the maximal elongation rate is high, pause-induced traffic jams occur, increasing promoter occlusion and thereby lowering the initiation rate. This lowers the average transcription rate and increases the average transcription time. Increasing the maximal elongation rate in the model is insufficient to match the experimentally observed average elongation rate in rrn genes. This suggests that there may be rrn-specific modifications to RNAP, which then experiences fewer pauses, or pauses of shorter duration, than in protein-coding genes. We identify model parameter triples (maximal elongation rate, mean pause duration, number of pauses) that are compatible with experimentally observed elongation rates. Average transcription time and average transcription rate are the model outputs investigated as proxies for cell fitness. These fitness functions are optimized for different parameter choices, opening up the possibility of differential control of these aspects of the elongation process, with potential evolutionary consequences. As an example, a gene's average transcription time may be crucial to fitness when the surrounding medium is prone to abrupt changes. This paper demonstrates that a functional relationship among the model parameters can be estimated using a standard statistical analysis, and that this relationship describes the various trade-offs that must be made in order for the gene to control the elongation process and achieve a desired average transcription time. It also demonstrates the robustness of the system, in that a range of maximal elongation rates can be balanced against transcriptional pause data in order to maintain a desired fitness.
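A back-of-envelope version of how the parameter triple maps to an average transcription time is sketched below; it ignores the RNAP traffic-jam interactions central to the full model, and the gene length and example values are assumptions.

```python
# Back-of-envelope transcription time for a gene of length L (nucleotides),
# ignoring RNAP-RNAP interactions (traffic jams), which the full model includes.
def avg_transcription_time(length_nt, v_max_nt_per_s, n_pauses, mean_pause_s):
    return length_nt / v_max_nt_per_s + n_pauses * mean_pause_s

def avg_elongation_rate(length_nt, v_max_nt_per_s, n_pauses, mean_pause_s):
    return length_nt / avg_transcription_time(length_nt, v_max_nt_per_s,
                                              n_pauses, mean_pause_s)

L_RRN = 5000  # assumed rrn operon length, nt
# Illustrative triples (maximal elongation rate, number of pauses, mean pause duration):
for label, (v_max, n_p, tau) in {
    "protein-coding-like pauses": (50.0, 10, 2.0),
    "fewer / shorter pauses":     (50.0, 3, 0.5),
    "no pauses":                  (90.0, 0, 0.0),
}.items():
    rate = avg_elongation_rate(L_RRN, v_max, n_p, tau)
    print(f"{label}: effective elongation rate ~ {rate:.0f} nt/s")
```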


2021
Author(s): Matteo Berti, Alessandro Simoni

Rainfall is the most significant factor in debris-flow triggering. Water is needed to saturate the soil, initiate sediment motion (regardless of the mobilization mechanism), and transform the solid debris into a fluid mass that can move rapidly downslope. This water is commonly provided by rainfall, or by rainfall and snowmelt. Consequently, most warning systems rely on rainfall thresholds to predict debris-flow occurrence. Debris-flow thresholds are usually derived empirically from the rainfall records that caused past debris flows in a given area, using a combination of selected precipitation measures (such as event rainfall P, duration D, or average intensity I) that describe critical rainfall conditions. Recent years have also seen a growing interest in the use of coupled hydrological and slope-stability models to derive physically based thresholds for shallow landslide initiation.

In both cases, rainfall thresholds are affected by significant uncertainty. Sources of uncertainty include measurement errors; spatial variability of the rainfall field; incomplete or uncertain debris-flow inventories; subjective definition of the “rainfall event”; use of subjective criteria to define the critical conditions; and uncertainty in model parameters (for physically based approaches). Rainfall measurement is widely recognized as a main source of uncertainty, owing to the extreme time-space variability that characterizes intense rainfall events in mountain areas. However, significant errors can also arise from inaccurate information on the timing of debris flows reported in landslide inventories, or from the criterion used to define triggering intensities.

This study analyzes the common sources of uncertainty associated with rainfall thresholds for debris-flow occurrence and discusses different methods to quantify them. First, we give an overview of the various approaches used in the literature to measure the uncertainty caused by random errors or procedural defects. These approaches are then applied to debris flows using real data collected in the Dolomites (Northern Alps, Italy), in order to estimate the variability of each individual factor (precipitation, triggering timing, triggering intensity, etc.). The individual uncertainties are then combined to obtain the overall uncertainty of the rainfall threshold, which can be calculated using the classical method of “summation in quadrature” or a more effective approach based on Monte Carlo simulations. The uncertainty budget allows identification of the biggest contributors to the final variability and also helps determine whether this variability can be reduced to make the thresholds more precise.
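The two ways of combining the individual uncertainty contributions mentioned above can be sketched as follows; the relative uncertainties and the nominal threshold intensity are placeholders, not the Dolomites data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Placeholder relative standard uncertainties of the individual factors
# (rainfall measurement, trigger timing, trigger-intensity definition).
u_rel = {"rainfall": 0.15, "timing": 0.10, "intensity": 0.20}

# 1) Classical "summation in quadrature" of independent relative uncertainties.
u_quad = np.sqrt(sum(u ** 2 for u in u_rel.values()))

# 2) Monte Carlo propagation: perturb a nominal threshold intensity by each
#    factor and take the spread of the product (also handles non-Gaussian cases).
I_nominal = 10.0   # mm/h, placeholder critical intensity
n = 100_000
factors = np.prod([rng.normal(1.0, u, n) for u in u_rel.values()], axis=0)
I_mc = I_nominal * factors
u_mc = I_mc.std() / I_mc.mean()

print(f"quadrature combined relative uncertainty:  {u_quad:.2f}")
print(f"Monte Carlo combined relative uncertainty: {u_mc:.2f}")
```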

