Linear inversion of body‐wave data—Part III: Model parameterization

Geophysics
1984
Vol 49 (12)
pp. 2088-2093
Author(s):  
M. Bée ◽  
R. S. Jacobson

A velocity gradient model parameterized with the tau‐zeta inversion for seismic refraction data is examined with respect to a synthetic traveltime data set. The velocity‐depth model consists of a stack of laterally homogeneous layers, each with a constant velocity gradient. The free model parameters are the velocities of the layer bounds and the number of layers. The best velocity gradient solutions, i.e., with the least deviation from the true model, were obtained from “constrained” models in which the velocities of the layer bounds are the velocities of the observed refracted waves. An arbitrary selection of layer bound velocities was found to be a suboptimal choice of model parameterization for the tau‐zeta inversion. A trade‐off curve between model resolution and solution variance was constructed with the constrained model parameterization from examination of numerous solutions with a diverse number of layers. A constrained model with as many layers as observed data points represents a satisfactory compromise between model resolution and solution variance. Constrained models with more layers than observed data points, however, can increase the resolution of the velocity gradient model. If model resolution is favored over solution variance, a constrained model with many more layers than observed data points is therefore the best model parameterization with the tau‐zeta inversion technique.
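For readers who want to experiment with this style of parameterization, the sketch below evaluates the delay time tau(p) of a refracted arrival through a stack of laterally homogeneous, constant-gradient layers, which is the forward relation the tau-zeta inversion works against. It is a minimal illustration with assumed units and layer values, not the authors' inversion code.

```python
import numpy as np

def layer_tau(p, v1, v2, dz):
    """Delay-time contribution tau = integral of sqrt(1/v^2 - p^2) dz for one
    layer whose velocity increases linearly from v1 to v2 over thickness dz.
    Assumes v2 > v1 (nonzero gradient) and p*v1 < 1 (ray enters the layer)."""
    g = (v2 - v1) / dz                  # constant velocity gradient
    v_hi = min(v2, 1.0 / p)             # stop at the turning depth, where v = 1/p
    def F(v):                           # antiderivative of sqrt(1/v^2 - p^2) in v
        q = np.sqrt(1.0 - (p * v) ** 2)
        return q - np.log((1.0 + q) / (p * v))
    return (F(v_hi) - F(v1)) / g

# Example: tau(p) for a ray turning in a 3-layer gradient stack (km, km/s)
v_bounds = [1.5, 2.0, 3.0, 4.5]         # layer-bound velocities (free parameters)
dz = [0.5, 1.0, 2.0]                    # layer thicknesses
p = 1.0 / 3.5                           # ray parameter: turns inside the third layer
tau = sum(layer_tau(p, v_bounds[i], v_bounds[i + 1], dz[i])
          for i in range(3) if p * v_bounds[i] < 1.0)
print(f"tau(p={p:.3f} s/km) = {tau:.4f} s")
```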

2019
Vol 31 (3)
pp. 596-612
Author(s):  
DJ Strouse ◽  
David J. Schwab

The information bottleneck (IB) approach to clustering takes a joint distribution p(X, Y) and maps the data X to cluster labels T, which retain maximal information about Y (Tishby, Pereira, & Bialek, 1999). This objective results in an algorithm that clusters data points based on the similarity of their conditional distributions p(Y | X). This is in contrast to classic geometric clustering algorithms such as k-means and gaussian mixture models (GMMs), which take a set of observed data points {x_i} and cluster them based on their geometric (typically Euclidean) distance from one another. Here, we show how to use the deterministic information bottleneck (DIB) (Strouse & Schwab, 2017), a variant of IB, to perform geometric clustering by choosing cluster labels that preserve information about data point location on a smoothed data set. We also introduce a novel intuitive method to choose the number of clusters via kinks in the information curve. We apply this approach to a variety of simple clustering problems, showing that DIB with our model selection procedure recovers the generative cluster labels. We also show that, in particular limits of our model parameters, clustering with DIB and IB is equivalent to k-means and EM fitting of a GMM with hard and soft assignments, respectively. Thus, clustering with (D)IB generalizes and provides an information-theoretic perspective on these classic algorithms.
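A minimal sketch of DIB-based geometric clustering as described here: Y indexes data point locations, p(y|x) comes from Gaussian smoothing of the points, and the fit alternates a deterministic assignment with re-estimation of the cluster marginals and conditionals. The beta and sigma values are assumed hyperparameters, not the paper's settings.

```python
import numpy as np

def dib_geometric(X, n_clusters, beta=20.0, sigma=1.0, n_iter=50, seed=0):
    """Sketch of geometric clustering with the deterministic information
    bottleneck. X: (N, d) array of points. Y is taken to be 'which data
    point', with p(y|x) a Gaussian smoothing of point locations."""
    rng = np.random.default_rng(seed)
    N = len(X)
    # p(y|x): smoothed location information; each row sums to 1
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    p_y_x = np.exp(-d2 / (2 * sigma ** 2))
    p_y_x /= p_y_x.sum(axis=1, keepdims=True)

    f = rng.integers(0, n_clusters, size=N)      # random hard assignments
    for _ in range(n_iter):
        q_t = np.array([(f == t).mean() for t in range(n_clusters)])
        # q(y|t): average conditional distribution of the cluster's members
        q_y_t = np.array([p_y_x[f == t].mean(0) if (f == t).any()
                          else np.full(N, 1.0 / N) for t in range(n_clusters)])
        # DIB update: assign x to argmax_t log q(t) - beta * KL(p(y|x) || q(y|t))
        kl = (p_y_x[:, None, :] *
              (np.log(p_y_x[:, None, :] + 1e-12) -
               np.log(q_y_t[None, :, :] + 1e-12))).sum(-1)
        f_new = np.argmax(np.log(q_t + 1e-12)[None, :] - beta * kl, axis=1)
        if np.array_equal(f_new, f):
            break
        f = f_new
    return f
```

Consistent with the limits discussed in the abstract, raising beta while shrinking sigma pushes the hard assignments toward k-means-like behavior.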


2021
Author(s):  
Shekoofeh Hedayati ◽  
Brad Wyble

Previous research has shown that attentional blink (AB) data can differ between tasks or subjects, and it can be challenging to interpret these differences. In this paper, we provide a ready-to-use tool that allows researchers to map their data onto the episodic Simultaneous Type, Serial Token (eSTST) model. This tool uses a Markov chain Monte Carlo algorithm to find the best set of three model parameters to simulate a given AB pattern. These three parameters have cognitive interpretations, such that differences in them between paradigms can support inferences about the timing of attentional deployment or the encoding of memory. Additionally, our tool allows a combination of quantitative fitting against the overall pattern of data points and qualitative fitting of theoretically important features. We demonstrate the algorithm on several data sets, showing that it finds cognitively interpretable parameter sets for some of them but fails to find a good fit for one data set, which indicates an explanatory boundary of the eSTST model. Finally, we provide a feature to avoid overfitting individual data points with high uncertainty, as in the case of individual participant data.
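The fitting loop such a tool is built around can be pictured as a random-walk Metropolis sampler over a three-parameter simulator. The toy lag-curve model below merely stands in for the eSTST model, and the Gaussian error model and step sizes are assumptions, not the tool's actual settings.

```python
import numpy as np

def metropolis_fit(simulate, observed, theta0, step, n_samples=5000, seed=0):
    """Random-walk Metropolis sketch: `simulate(theta)` stands in for running
    the eSTST model with 3 parameters and returning predicted accuracy per
    lag; a Gaussian error model with fixed sigma is an assumption here."""
    rng = np.random.default_rng(seed)
    sigma = 0.05                                  # assumed observation noise
    def log_like(theta):
        return -0.5 * np.sum((observed - simulate(theta)) ** 2) / sigma ** 2
    theta, ll = np.array(theta0, float), log_like(theta0)
    chain = []
    for _ in range(n_samples):
        prop = theta + rng.normal(0, step, size=theta.size)
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # Metropolis acceptance rule
            theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy stand-in for the eSTST simulator: 3 parameters shaping a lag curve
lags = np.arange(1, 9)
def toy_model(theta):
    depth, width, center = theta
    return 1.0 - depth * np.exp(-((lags - center) / width) ** 2)

observed = toy_model([0.4, 1.5, 3.0]) + 0.02 * np.random.default_rng(1).normal(size=lags.size)
chain = metropolis_fit(toy_model, observed, theta0=[0.3, 1.0, 2.5], step=0.05)
print(chain[-1000:].mean(axis=0))                 # posterior-mean estimates
```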


SPE Journal
2019
Vol 24 (06)
pp. 2911-2928
Author(s):  
Hewei Tang ◽  
William J. Bailey ◽  
Terry Stone ◽  
John Killough

Summary Implementation of a drift-flux (DF) multiphase-flow model within a fully coupled wellbore/reservoir simulator is nontrivial because it must adhere to a number of strict requirements to ensure numerical robustness and convergence. The existing DF model that meets these requirements is only fully posed from 2° (from the horizontal) to upward vertical. Our work attempts to extend the current DF model such that it is numerically robust, accurate, and applicable to all well inclinations. To gauge accuracy, model parameterization used 5,805 experimental data points from a well-established data set, along with a second data set comprising 13,440 data points extracted from the OLGA-S library (Schlumberger 2017b). Forecast accuracy of the proposed model is compared with that of two state-of-the-art DF models (applicable to all inclinations but unsuited for coupled simulation), and it exhibits equivalent or better performance. More significantly, the model is shown to be numerically smooth, continuous, and stable for cocurrent flow when implemented in a fully implicit and coupled wellbore/reservoir simulator.
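At its core, a DF model closes the slip between phases with the relation v_g = C0*v_m + v_d. The snippet below shows that closure with placeholder profile and drift parameters; the paper's contribution lies in making C0 and v_d well behaved across all inclinations, which this sketch does not attempt.

```python
def gas_holdup_drift_flux(v_sg, v_sl, c0=1.2, v_d=0.35):
    """Core drift-flux closure: in-situ gas velocity is modeled as
    v_g = C0 * v_m + v_d, where v_m = v_sg + v_sl is the mixture
    superficial velocity, and the gas volume fraction follows from
    alpha = v_sg / v_g. The C0 and v_d values here are placeholders;
    in practice both depend on flow regime and well inclination."""
    v_m = v_sg + v_sl
    v_g = c0 * v_m + v_d
    return v_sg / v_g

# Example: 1 m/s gas and 0.5 m/s liquid superficial velocities
print(gas_holdup_drift_flux(1.0, 0.5))   # ~0.465
```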


2019
Vol XVI (2)
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments; skewness; kurtosis; generating functions; Rényi entropy; the Lorenz, Bonferroni, and Zenga curves; probability weighted moments; the stress-strength model; order statistics; and record statistics. The model parameters are estimated by the maximum likelihood criterion, and the behaviour of these estimates is examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
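A hedged sketch of the maximum likelihood step: assuming the common exponentiated half-logistic-G construction F(x) = [(1 - Gbar(x)) / (1 + Gbar(x))]^lambda with Lomax survival function Gbar(x) = (1 + x/beta)^(-alpha) (the paper's exact parameterization may differ), the three parameters can be estimated by numerically maximizing the log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x):
    """Negative log-likelihood for an assumed exponentiated half-logistic
    Lomax model with parameters (lambda, alpha, beta)."""
    lam, alpha, beta = params
    if min(lam, alpha, beta) <= 0:
        return np.inf
    gbar = (1.0 + x / beta) ** (-alpha)                       # Lomax survival
    g = (alpha / beta) * (1.0 + x / beta) ** (-alpha - 1.0)   # Lomax pdf
    ratio = (1.0 - gbar) / (1.0 + gbar)
    # density: f(x) = lam * ratio^(lam-1) * 2*g / (1 + gbar)^2
    logf = (np.log(lam) + (lam - 1.0) * np.log(ratio)
            + np.log(2.0 * g) - 2.0 * np.log(1.0 + gbar))
    return -np.sum(logf)

x = np.random.default_rng(1).pareto(3.0, 500) + 0.01          # stand-in data
res = minimize(neg_log_lik, x0=[1.0, 2.0, 1.0], args=(x,), method="Nelder-Mead")
print(res.x)   # MLEs of (lambda, alpha, beta)
```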


Author(s):  
Simona Babiceanu ◽  
Sanhita Lahiri ◽  
Mena Lockwood

This study uses a suite of performance measures, developed by taking into consideration various aspects of congestion and reliability, to assess the impacts of safety projects on congestion. Safety projects are necessary to help move Virginia’s roadways toward safer operation, but they can contribute to congestion and unreliability during execution and can affect operations after execution. However, safety projects are assessed primarily for safety improvements, not for congestion. This study identifies an appropriate suite of measures and quantifies and compares the congestion and reliability impacts of safety projects on roadways for the periods before, during, and after project execution. The paper presents the performance measures, examines their sensitivity under different operating conditions, defines thresholds for congestion and reliability, and demonstrates the measures using a set of Virginia safety projects. The data set consists of 10 projects totalling 92 mi and more than 1M data points. The study found that, overall, safety projects tended to have a positive impact on congestion and reliability after completion, and that the congestion variability measures were sensitive to the reliability threshold. The study concludes with practical recommendations for primary measures that may be used to gauge the overall impacts of safety projects: percent vehicle miles traveled (VMT) reliable, with a customized threshold for Virginia; percent VMT delayed; and time to travel 10 mi. However, caution should be used when applying the results directly to other situations because of the limited number of projects used in the study.
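As an illustration of how one of the recommended measures can be computed, the snippet below estimates percent VMT reliable from segment travel time indices and VMT weights. The definition and the threshold value are plausible placeholders, not Virginia's customized measure.

```python
import numpy as np

def percent_vmt_reliable(tti, vmt, threshold=1.5):
    """Illustrative version of one measure named in the study: the share of
    vehicle miles traveled on segments whose travel time index (observed /
    free-flow travel time) stays under a reliability threshold. The 1.5
    default is a placeholder, not the study's customized threshold."""
    tti, vmt = np.asarray(tti), np.asarray(vmt)
    reliable = tti < threshold
    return 100.0 * vmt[reliable].sum() / vmt.sum()

# Example: five segments with their TTIs and VMT weights
print(percent_vmt_reliable([1.1, 1.3, 1.8, 1.2, 2.0],
                           [1e5, 2e5, 5e4, 3e5, 2e4]))   # ~89.6%
```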


Algorithms
2021
Vol 14 (2)
pp. 37
Author(s):  
Shixun Wang ◽  
Qiang Chen

Boosting of ensemble learning models has made great progress, but most methods boost only a single modality. For this reason, a simple multiclass boosting framework that uses local similarity as its weak learner is extended here to multimodal multiclass boosting. First, with local similarity as the weak learner, the loss function is used to establish a baseline loss, and the logarithmic data points are binarized. Then the optimal local similarity and its corresponding loss are found; whichever of this loss and the baseline loss is smaller is the best so far. Second, the local similarity between pairs of points is calculated, and the loss is then computed from this pairwise local similarity. Finally, text and images are retrieved from each other, and the retrieval accuracy is obtained for text and for images, respectively. The experimental results show that the multimodal multiclass boosting framework with local similarity as the weak learner, evaluated on standard data sets and compared against other state-of-the-art methods, performs competitively.


2017
Vol 37 (1)
pp. 1-12
Author(s):  
Haluk Ay ◽  
Anthony Luscher ◽  
Carolyn Sommerich

Purpose: The purpose of this study is to design and develop a testing device to simulate interaction between human hand-arm dynamics, right-angle (RA) computer-controlled power torque tools and joint-tightening task-related variables.

Design/methodology/approach: The testing rig can simulate a variety of tools, tasks and operator conditions. The device includes custom data-acquisition electronics and graphical user interface-based software. The simulation of the human hand-arm dynamics is based on the rig's four-bar mechanism-based design and mechanical components that provide adjustable stiffness (via pneumatic cylinder) and mass (via plates) and non-adjustable damping. The stiffness and mass values used are based on an experimentally validated hand-arm model that includes a database of model parameters with respect to gender and working posture, corresponding to experienced tool operators from a prior study.

Findings: The rig measures tool handle force and displacement responses simultaneously. Peak force and displacement coefficients of determination (R2) between rig estimations and human testing measurements were 0.98 and 0.85, respectively, for the same set of tools, tasks and operator conditions. The rig also provides predicted tool operator acceptability ratings, using a data set from a prior study of discomfort in experienced operators during torque tool use.

Research limitations/implications: Deviations from linearity may influence handle force and displacement measurements. Stiction (Coulomb friction) in the overall rig, as well as in the air cylinder piston, is neglected. The rig's mechanical damping is not adjustable, despite the fact that human hand-arm damping varies with gender and working posture. Deviations from these assumptions may affect the correlation of the handle force and displacement measurements with those of human testing for the same tool, task and operator conditions.

Practical implications: This test rig allows rapid assessment of the ergonomic performance of DC torque tools, saving considerable time in lineside applications and reducing the risk of worker injury. DC torque tools are an extremely effective way of increasing production rate and improving torque accuracy. Being complex dynamic systems, however, DC torque tools perform differently in each application. Changes in worker mass, damping and stiffness, as well as joint stiffness and tool program, make each application unique. This test rig models all of these factors and allows quick assessment.

Social implications: The use of this tool test rig will help to identify and understand risk factors that contribute to musculoskeletal disorders (MSDs) associated with the use of torque tools. Tool operators are subjected to large impulsive handle reaction forces as joint torque builds up while tightening a fastener. Repeated exposure to such forces is associated with muscle soreness, fatigue and physical stress, which are also risk factors for upper extremity injuries (MSDs; e.g., tendinosis, myofascial pain). Eccentric exercise exertions are known to cause damage to muscle tissue in untrained individuals and affect subsequent performance.

Originality/value: The rig provides a novel means for quantitative, repeatable dynamic evaluation of RA powered torque tools and objective selection of tightening programs. Compared with current static tool assessment methods, dynamic testing provides a more realistic tool assessment relative to the tool operator's experience. This may lead to improvements in tool or controller design and a reduction in associated musculoskeletal discomfort in operators.
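The hand-arm abstraction behind the rig (adjustable stiffness and mass, fixed damping) can be pictured as a single-degree-of-freedom mass-spring-damper driven by an impulsive handle reaction force. The sketch below uses illustrative values, not the paper's experimentally validated model database.

```python
import numpy as np

def handle_response(m=2.0, c=50.0, k=4000.0, f_peak=100.0, t_pulse=0.15,
                    dt=1e-4, t_end=0.5):
    """Single-DOF sketch of the hand-arm abstraction: effective mass m (kg),
    damping c (N*s/m), and stiffness k (N/m) driven by an impulsive
    torque-reaction force at the handle. All numbers are illustrative."""
    n = int(t_end / dt)
    x = v = 0.0
    xs = np.empty(n)
    for i in range(n):
        t = i * dt
        # half-sine force pulse approximating fastener torque build-up
        f = f_peak * np.sin(np.pi * t / t_pulse) if t < t_pulse else 0.0
        a = (f - c * v - k * x) / m      # Newton's second law
        v += a * dt                      # semi-implicit Euler step
        x += v * dt
        xs[i] = x
    return xs

disp = handle_response()
print(f"peak handle displacement: {1000 * disp.max():.1f} mm")
```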


2020
Vol 70 (1)
pp. 145-161
Author(s):  
Marnus Stoltz ◽  
Boris Baeumer ◽  
Remco Bouckaert ◽  
Colin Fox ◽  
Gordon Hiscott ◽  
...  

Abstract We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with novel numerical algorithms. The diffusion approach allows for analysis of data sets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the BEAST2 package. We conducted simulation experiments to assess numerical error, computational requirements, and accuracy in recovering known model parameters. A reanalysis of soybean SNP data demonstrates that the models implemented in Snapp and Snapper can be difficult to distinguish in practice, a characteristic which we tested with further simulations. We demonstrate the scale of analysis possible using a SNP data set sampled from 399 freshwater turtles in 41 populations. [Bayesian inference; diffusion models; multi-species coalescent; SNP data; species trees; spectral methods.]
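For intuition about the diffusion approximation that methods like Snapper build likelihoods on, the sketch below simulates a neutral Wright-Fisher allele-frequency diffusion by Euler-Maruyama. Snapper itself solves the diffusion with spectral methods rather than simulating sample paths, so this is illustrative only.

```python
import numpy as np

def wright_fisher_diffusion(x0=0.3, n_e=1000, n_gen=500, dt=1.0, seed=0):
    """Euler-Maruyama simulation of the neutral Wright-Fisher diffusion:
    the allele frequency x fluctuates with variance x(1-x)/(2*Ne) per
    generation (no selection or mutation in this sketch)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_gen + 1)
    x[0] = x0
    for t in range(n_gen):
        var = max(x[t] * (1.0 - x[t]), 0.0) * dt / (2 * n_e)
        x[t + 1] = np.clip(x[t] + rng.normal(0.0, np.sqrt(var)), 0.0, 1.0)
    return x

path = wright_fisher_diffusion()
print(f"frequency after 500 generations: {path[-1]:.3f}")
```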


2013
Vol 2013
pp. 1-13
Author(s):  
Helena Mouriño ◽  
Maria Isabel Barão

Missing-data problems are extremely common in practice. To achieve reliable inferential results, we need to take this feature of the data into account. Suppose that the univariate data set under analysis has missing observations. This paper examines the impact of selecting an auxiliary complete data set—whose underlying stochastic process is to some extent interdependent with the former—to improve the efficiency of the estimators for the relevant parameters of the model. The Vector AutoRegressive (VAR) model has proved to be an extremely useful tool for capturing the dynamics of bivariate time series. We propose maximum likelihood estimators for the parameters of the VAR(1) model under a monotone missing-data pattern. The estimators’ precision is also derived. Afterwards, we compare the bivariate modelling scheme with its univariate counterpart. More precisely, the univariate data set with missing observations is modelled by an AutoRegressive Moving Average, ARMA(2,1), model. We also analyse the behaviour of the first-order AutoRegressive model, AR(1), owing to its practical importance. We focus on the mean value of the main stochastic process. Through simulation studies, we conclude that the estimator based on the VAR(1) model is preferable to those derived from the univariate context.
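The gain from the auxiliary series can be illustrated with a small simulation: a bivariate VAR(1) is generated, a monotone block of one series is deleted, and its mean is estimated with and without help from the fully observed series. The regression fill-in below is a simple stand-in for the paper's maximum likelihood estimator, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a bivariate VAR(1): z_t = mu + A (z_{t-1} - mu) + e_t
mu = np.array([2.0, -1.0])
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])
n = 600
z = np.empty((n, 2))
z[0] = mu
for t in range(1, n):
    z[t] = mu + A @ (z[t - 1] - mu) + rng.normal(0, 0.5, 2)

# Monotone missing pattern: the last 200 values of series 1 are unobserved,
# while the auxiliary series 2 is fully observed.
m = 200
x_obs = z[:n - m, 0]
print("complete-case mean:", x_obs.mean())

# Regressing X1 on the complete auxiliary series (a simple stand-in for the
# paper's ML estimator) recovers information from the missing period via X2.
beta = np.polyfit(z[:n - m, 1], x_obs, 1)
x_fill = np.polyval(beta, z[n - m:, 1])
print("regression-augmented mean:", np.concatenate([x_obs, x_fill]).mean())
```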


Geophysics
2016
Vol 81 (4)
pp. U25-U38
Author(s):  
Nuno V. da Silva ◽  
Andrew Ratcliffe ◽  
Vetle Vinje ◽  
Graham Conroy

Parameterization lies at the center of anisotropic full-waveform inversion (FWI) with multiparameter updates, because FWI aims to update both the long and short wavelengths of the perturbations, and the parameterization must accommodate this. Recently, there has been an intensive effort to determine the optimal parameterization, centering the discussion mainly on the analysis of radiation patterns for each candidate parameterization and aiming to determine which is best suited for multiparameter inversion. We have developed a new parameterization in the scope of FWI, based on the concept of kinematically equivalent media, as originally proposed in other areas of seismic data analysis. Our analysis is likewise based on radiation patterns, as well as on the relation between perturbations of this set of parameters and perturbations in traveltime. The radiation pattern reveals that this parameterization combines some of the characteristics of parameterizations with one velocity and two Thomsen parameters and of parameterizations with two velocities and one Thomsen parameter. The study of traveltime perturbation with respect to model-parameter perturbation shows that the new parameterization relates these quantities less ambiguously than other, more commonly used parameterizations. We conclude that the new parameterization is well suited for inverting diving waves, which are of paramount importance for carrying out practical FWI successfully. We demonstrate that it produces good inversion results on synthetic and real data examples. In the real data example from the Central North Sea, the inverted models show good agreement with the geologic structures, leading to an improvement of the seismic image and of the flatness of the common image gathers.
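To make the parameterization question concrete, the helper below converts the common (vp0, epsilon, delta) Thomsen set into the kinematic quantities (NMO velocity, horizontal velocity, eta) for a VTI medium. These are standard relations; the specific kinematically equivalent parameter set proposed in the paper is not reproduced here.

```python
import numpy as np

def kinematic_velocities(vp0, epsilon, delta):
    """Standard kinematic quantities for a VTI medium in Thomsen notation:
    NMO velocity vnmo = vp0*sqrt(1 + 2*delta), horizontal velocity
    vh = vp0*sqrt(1 + 2*epsilon), and anellipticity
    eta = (epsilon - delta) / (1 + 2*delta). Choosing between sets like
    (vp0, epsilon, delta) and (vnmo, eta, ...) is the kind of
    reparameterization trade-off the paper analyzes."""
    vnmo = vp0 * np.sqrt(1.0 + 2.0 * delta)
    vh = vp0 * np.sqrt(1.0 + 2.0 * epsilon)
    eta = (epsilon - delta) / (1.0 + 2.0 * delta)
    return vnmo, vh, eta

# Example: a mildly anisotropic medium (velocities in m/s)
print(kinematic_velocities(3000.0, 0.2, 0.1))
```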

