Mathematical Standard-Parameters Dual Optimization for Metal Hip Arthroplasty Wear Modelling with Medical Physics Applications

Standards ◽  
2021 ◽  
Vol 1 (1) ◽  
pp. 53-66
Author(s):  
Francisco Casesnoves

Total hip metal arthroplasty (THA) constitutes an important proportion of standard clinical hip implant usage in Medical Physics and Biomedical Engineering. A computational nonlinear optimization is performed for two metal materials commonly used in Metal-on-Metal (MoM) THA, namely Cast Co-Cr Alloy and Titanium. The principal result is the numerical determination of the dimensionless constant parameter K of the model. Results from a new, more powerful algorithm than in previous contributions show significant improvements. Numerical standard figures for dual optimization give acceptable model-parameter values with low residuals. These results are also demonstrated with 2D and 3D Graphical/Interior Optimization. According to the findings/calculations, the standard optimized metal-model parameters are mathematically proven and verified. Mathematical consequences are obtained for model improvements and in vitro simulation methodology. The wear magnitude for in vitro determinations with these model-parameter data constitutes the innovation of the method. In consequence, the erosion prediction for laboratory experimental testing in THA adds valuable information to the literature. Applications lead to medical physics improvements for material/metal-THA designs.

Total hip metal arthroplasty (THA) model parameters for a group of commonly used metals are optimized and numerically studied. Building on previous ceramic-THA optimization software contributions, an improved multiobjective programming method/algorithm is implemented for wear modeling in THA. This computational nonlinear multifunctional optimization is performed for a number of THA metals with different hardnesses and in vitro experimental erosion rates. The new software was created/designed on two systems, Matlab and GNU Octave. Numerical results prove improved/acceptable for in vitro simulations. These findings are verified with 2D Graphical Optimization and 3D Interior Optimization methods, giving low residual norms. The solutions for the model largely match the literature's in vitro standards for experimental simulations. Numerical figures for multifunctional optimization give acceptable model-parameter values with low residual norms. Useful mathematical consequences/calculations are obtained for wear predictions, model advancements, and simulation methodology. The wear magnitude for in vitro determinations with these model-parameter data constitutes the advance of the method. In consequence, the erosion prediction for laboratory experimental testing in THA adds an efficacious usage improvement to the literature. Results, additionally, are extrapolated to efficient Medical Physics applications and metal-THA Bioengineering designs.
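
Neither abstract spells out the wear law, but in MoM wear modelling a dimensionless coefficient K conventionally appears in an Archard-type relation V = K·F·s/H (wear volume from load, sliding distance, and hardness). Assuming that form, and using Python rather than the Matlab/GNU Octave systems named above, a minimal fitting sketch with invented stand-in data could look like this:

```python
# Minimal sketch of fitting the dimensionless wear coefficient K,
# assuming an Archard-type law V = K * F * s / H (the abstracts do not
# state the model). All numbers are illustrative, not the paper's data.
import numpy as np
from scipy.optimize import least_squares

F = np.array([2.0e3, 2.5e3, 3.0e3])           # applied load [N], hypothetical
s = np.array([1.0e5, 1.2e5, 1.5e5])           # sliding distance [m], hypothetical
H = 3.4e9                                     # hardness [Pa], Co-Cr alloy order
V_meas = np.array([7.1e-8, 1.05e-7, 1.6e-7])  # measured wear volume [m^3]

def residuals(k):
    """Difference between modelled and measured wear volumes."""
    return k * F * s / H - V_meas

fit = least_squares(residuals, x0=[1e-6], bounds=(0, 1))
K = fit.x[0]
print(f"optimized K = {K:.3e}, residual norm = {np.linalg.norm(fit.fun):.2e}")
```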


2011 ◽  
Vol 24 (5) ◽  
pp. 1480-1498 ◽  
Author(s):  
Andrew H. MacDougall ◽  
Gwenn E. Flowers

Abstract Modeling melt from glaciers is crucial to assessing regional hydrology and eustatic sea level rise. The transferability of such models in space and time has been widely assumed but rarely tested. To investigate melt model transferability, a distributed energy-balance melt model (DEBM) is applied to two small glaciers of opposing aspects that are 10 km apart in the Donjek Range of the St. Elias Mountains, Yukon Territory, Canada. An analysis is conducted in four stages to assess the transferability of the DEBM in space and time: 1) locally derived model parameter values and meteorological forcing variables are used to assess model skill; 2) model parameter values are transferred between glacier sites and between years of study; 3) measured meteorological forcing variables are transferred between glaciers using locally derived parameter values; 4) both model parameter values and measured meteorological forcing variables are transferred from one glacier site to the other, treating the second glacier site as an extension of the first. The model parameters are transferable in time to within a <10% uncertainty in the calculated surface ablation over most or all of a melt season. Transferring model parameters or meteorological forcing variables in space creates large errors in modeled ablation. If select quantities (ice albedo, initial snow depth, and summer snowfall) are retained at their locally measured values, model transferability can be improved to achieve ≤15% uncertainty in the calculated surface ablation.
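
As a rough illustration of the stage-4 transfer experiment (not the authors' DEBM), the hypothetical sketch below computes melt from a toy surface energy balance and transfers one glacier's forcing to the other, first wholesale and then while retaining the locally measured albedo, one of the select quantities the study finds should stay local. All forcing values and albedos are invented.

```python
# Toy illustration of the transfer experiment, not the authors' DEBM.
import numpy as np

RHO_W, L_F = 1000.0, 3.34e5   # water density [kg m^-3], latent heat of fusion [J kg^-1]

def melt_rate(sw_in, lw_net, turbulent, albedo):
    """Melt [m w.e. s^-1] from net surface energy flux (clipped at zero)."""
    q_net = sw_in * (1.0 - albedo) + lw_net + turbulent
    return np.maximum(q_net, 0.0) / (RHO_W * L_F)

# Hourly forcing for two glaciers (hypothetical arrays).
forcing_a = dict(sw_in=np.array([600., 450.]), lw_net=np.array([-60., -40.]),
                 turbulent=np.array([80., 50.]))
forcing_b = dict(sw_in=np.array([500., 400.]), lw_net=np.array([-70., -50.]),
                 turbulent=np.array([60., 40.]))
albedo_a, albedo_b = 0.25, 0.35   # locally measured ice albedos

# Stage 4: treat glacier B as an extension of A (A's forcing and parameters).
melt_full_transfer = melt_rate(**forcing_a, albedo=albedo_a)      # everything from A
melt_with_local_albedo = melt_rate(**forcing_a, albedo=albedo_b)  # retain B's albedo
melt_local = melt_rate(**forcing_b, albedo=albedo_b)              # B's own run

def rel_err(est):
    return abs(est.sum() - melt_local.sum()) / melt_local.sum()

print(f"full transfer: {rel_err(melt_full_transfer):.1%}, "
      f"with local albedo retained: {rel_err(melt_with_local_albedo):.1%}")
```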


2021 ◽  
Author(s):  
Rodrigo FO Pena ◽  
Horacio G. Rotstein

Neuronal systems are subject to rapid fluctuations, both intrinsic and external. In mathematical models, these fluctuations are typically incorporated as stochastic noise (e.g., Gaussian white or colored noise). Noise can be both disruptive and constructive, for example, by creating irregularities and variability in otherwise regular patterns or by creating oscillatory patterns and increasing the signal coherence, respectively. The dynamic mechanisms underlying the interactions between rapidly fluctuating signals and the intrinsic properties of the target cells to produce variable and/or coherent responses are not fully understood. In particular, it is not clear what properties of the target cell's intrinsic dynamics control these interactions, and whether the generation of these phenomena requires stochasticity of the input signal and, if so, to what degree. In this paper we investigate these issues using linearized and nonlinear conductance-based models and piecewise constant (PWC) inputs with short-duration pieces and variable amplitudes, which are arbitrarily, but not necessarily stochastically, distributed. The amplitude distributions of the constant pieces consist of arbitrary permutations of a baseline PWC function with monotonically increasing amplitudes. In each trial within a given protocol we use one of these permutations, and each protocol consists of a subset of all possible permutations, which is the only source of uncertainty in the protocol. We show that sustained oscillatory behavior can be generated in response to additive and multiplicative PWC inputs in both linear and nonlinear systems, independently of whether the stable equilibria of the corresponding unperturbed systems are foci (exhibiting damped oscillations) or nodes (exhibiting overshoots). The oscillatory responses are amplified by the model nonlinearities and attenuated for conductance-based PWC inputs as compared to current-based PWC inputs, consistent with previous theoretical and experimental work. In addition, the responses to PWC inputs exhibited variability across trials, which is reminiscent of the variability generated by stochastic noise (e.g., Gaussian white noise). This variability was modulated by the model parameters and the type of cellular intrinsic dynamics. Our analysis demonstrates that both the oscillations and the variability are the result of the interaction between the PWC input and the autonomous transient dynamics, with little to no contribution from the dynamics around the steady state. The generation of oscillations and variability does not require input stochasticity, but rather the sequential activation of the transient responses to abrupt changes in constant inputs. Each piece with the same amplitude evokes different responses across trials due to the differences in initial conditions in the corresponding regime. These initial conditions are determined by the value of the voltage at the end of the previous regime, which differs across trials. The predictions made in this paper are amenable to experimental testing both in vitro and in vivo.
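
A minimal sketch of the PWC protocol as described: a baseline sequence of monotonically increasing amplitudes is permuted per trial and applied additively to a linearized two-dimensional system whose stable equilibrium is a focus. The matrix and all values are illustrative, not the paper's conductance-based models.

```python
# Sketch of the PWC-input protocol: each trial is a random permutation of a
# baseline piecewise-constant amplitude sequence, fed additively into a
# linearized 2D model. Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
base_amps = np.linspace(0.0, 1.0, 20)   # monotonically increasing baseline
dt, t_piece = 0.01, 1.0                 # step size and piece duration
n_steps = int(t_piece / dt)

# Linearized dynamics x' = A x + [I(t), 0]; eigenvalues of A set focus vs node.
A = np.array([[-0.5, -1.0],
              [ 1.0, -0.5]])            # eigenvalues -0.5 +/- i -> stable focus

def run_trial(amps):
    x = np.zeros(2)
    v_trace = []
    for a in amps:                      # one constant piece per amplitude
        for _ in range(n_steps):
            x = x + dt * (A @ x + np.array([a, 0.0]))  # forward Euler step
            v_trace.append(x[0])
    return np.array(v_trace)

trials = [run_trial(rng.permutation(base_amps)) for _ in range(10)]
variability = np.var(np.stack(trials), axis=0).mean()
print(f"mean across-trial variance of the voltage variable: {variability:.4f}")
```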


1998 ◽  
Vol 14 (3) ◽  
pp. 276-291 ◽  
Author(s):  
James C. Martin ◽  
Douglas L. Milliken ◽  
John E. Cobb ◽  
Kevin L. McFadden ◽  
Andrew R. Coggan

This investigation sought to determine whether cycling power could be accurately modeled. A mathematical model of cycling power was derived, and values for each model parameter were determined. A bicycle-mounted power measurement system was validated by comparison with a laboratory ergometer. Power was measured during road cycling, and the measured values were compared with the values predicted by the model. The measured values for power were highly correlated (R² = .97) with, and did not differ from, the modeled values. The standard error between the modeled and measured power (2.7 W) was very small. The model was also used to estimate the effects of changes in several model parameters on cycling velocity. Over the range of parameter values evaluated, velocity varied linearly (R² > .99). The results demonstrated that cycling power can be accurately predicted by a mathematical model.
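
The abstract does not reproduce the derived equation or its fitted coefficients; the sketch below assembles the terms such road-cycling power models typically include (aerodynamic drag with wind, rolling resistance, slope, acceleration, drivetrain efficiency), with illustrative coefficient values rather than the paper's.

```python
# Sketch of a road-cycling power balance with the terms such models
# typically include. Coefficients are illustrative, not the paper's.
import math

def cycling_power(v, v_wind=0.0, grade=0.0, accel=0.0,
                  mass=85.0, rho=1.2, cda=0.26, crr=0.004, eta=0.976):
    """Power at the pedals [W] for ground speed v [m/s]."""
    theta = math.atan(grade)
    p_aero = 0.5 * rho * cda * (v + v_wind) ** 2 * v   # drag uses air speed
    p_roll = crr * mass * 9.81 * math.cos(theta) * v   # rolling resistance
    p_grav = mass * 9.81 * math.sin(theta) * v         # potential energy change
    p_kin = mass * accel * v                           # kinetic energy change
    return (p_aero + p_roll + p_grav + p_kin) / eta    # drivetrain loss

print(f"{cycling_power(10.0):.0f} W at 10 m/s on the flat")  # ~190 W
```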


2021 ◽  
Author(s):  
Baki Harish ◽  
Sandeep Chinta ◽  
Chakravarthy Balaji ◽  
Balaji Srinivasan

The Indian subcontinent is prone to tropical cyclones that originate in the Indian Ocean and cause widespread destruction to life and property. Accurate prediction of cyclone track, landfall, wind, and precipitation is critical in minimizing damage. The Weather Research and Forecasting (WRF) model is widely used to predict tropical cyclones. The accuracy of the model prediction depends on initial conditions, physics schemes, and model parameters. The parameter values are selected empirically by scheme developers by trial and error, implying that they are sensitive to climatological conditions and regions. The WRF model has several hundred tunable parameters, and calibrating all of them is practically impossible, since it would require thousands of simulations. Therefore, sensitivity analysis is critical to screen out the parameters that significantly impact the meteorological variables. The Sobol' sensitivity analysis method is used to identify the sensitive WRF model parameters. As this method requires a considerable number of samples to evaluate sensitivity adequately, machine learning algorithms are used to construct surrogate models trained with a limited number of samples, which can then generate the vast number of required pseudo-samples. Five machine learning algorithms, namely Gaussian Process Regression (GPR), Support Vector Machine, Regression Tree, Random Forest, and K-Nearest Neighbor, are considered in this study. Ten-fold cross-validation is used to evaluate the surrogate models constructed with the five algorithms and to identify the most robust among them. The samples generated from this surrogate model are then used by the Sobol' method to evaluate the WRF model parameter sensitivity.
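
A compact sketch of the described workflow, assuming a scalar output and placeholder parameter names/bounds: a GPR surrogate is trained on a small design of (here simulated) WRF outputs, then supplies the many pseudo-samples the Sobol' method requires via SALib.

```python
# Surrogate-assisted Sobol' workflow: train a GPR emulator on a small set
# of model runs, then let the emulator supply the pseudo-samples.
# Parameter names/bounds are placeholders, and `wrf_outputs` stands in
# for actual WRF model output.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["param_1", "param_2", "param_3"],   # placeholder WRF parameters
    "bounds": [[0.0, 1.0]] * 3,
}

# A limited design of real WRF simulations (here: random stand-ins).
rng = np.random.default_rng(1)
X_train = rng.uniform(size=(60, 3))
wrf_outputs = X_train @ np.array([0.7, 0.2, 0.1]) + 0.01 * rng.normal(size=60)

surrogate = GaussianProcessRegressor().fit(X_train, wrf_outputs)

# Saltelli sampling needs N*(2D+2) evaluations, cheap on the emulator.
X_pseudo = saltelli.sample(problem, 1024)
Y_pseudo = surrogate.predict(X_pseudo)
Si = sobol.analyze(problem, Y_pseudo)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))  # first-order indices
```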


2013 ◽  
Vol 16 (2) ◽  
pp. 392-406 ◽  
Author(s):  
Gift Dumedah ◽  
Paulin Coulibaly

Data assimilation has allowed hydrologists to account for imperfections in observations and uncertainties in model estimates. Typically, updated members are determined as a compromise merger between observations and model predictions. The merging procedure is conducted in decision space before model parameters are updated to reflect the assimilation. However, given the dynamics between states and model parameters, there is limited guarantee that when the updated parameters are applied in measurement models, the resulting estimate will match the updated estimate. To address these challenges, this study uses evolutionary data assimilation (EDA) to estimate streamflow in gauged and ungauged watersheds. EDA assimilates daily streamflow into a Sacramento soil moisture accounting model to determine updated members for eight watersheds in southern Ontario, Canada. The updated members are combined to estimate streamflow in ungauged watersheds, where the results show high estimation accuracy for gauged and ungauged watersheds. An evaluation of the commonalities in model parameter values across and between gauged and ungauged watersheds underscores the critical contribution of consistent model parameter values. The findings show a high degree of commonality in model parameter values, such that members of a given gauged/ungauged watershed can be estimated using members from another watershed.
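
The sketch below illustrates only the generic evolutionary ingredient, namely selection and recombination of parameter-set members scored against observed streamflow, not the authors' EDA scheme or the Sacramento model; run_model and all data are invented stand-ins.

```python
# Generic evolutionary parameter update, not the authors' EDA algorithm.
import numpy as np

rng = np.random.default_rng(2)

def run_model(params, forcing):
    """Stand-in hydrological model: streamflow from two parameters."""
    return params[0] * forcing + params[1]

forcing = rng.uniform(1.0, 5.0, size=100)
q_obs = 0.8 * forcing + 0.5 + 0.05 * rng.normal(size=100)  # synthetic truth

def rmse(p):
    return np.sqrt(np.mean((run_model(p, forcing) - q_obs) ** 2))

pop = rng.uniform(0.0, 2.0, size=(40, 2))          # initial member population
for generation in range(50):
    elite = pop[np.argsort([rmse(p) for p in pop])[:10]]   # keep best members
    # Recombine pairs of elites and mutate to form the next generation.
    parents = elite[rng.integers(0, 10, size=(40, 2))]
    pop = parents.mean(axis=1) + 0.02 * rng.normal(size=(40, 2))

best = pop[np.argmin([rmse(p) for p in pop])]
print(f"recovered parameters: {best.round(2)}")    # ideally close to (0.8, 0.5)
```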


2017 ◽  
Vol 21 (11) ◽  
pp. 5663-5679 ◽  
Author(s):  
Björn Guse ◽  
Matthias Pfannerstill ◽  
Abror Gafurov ◽  
Jens Kiesel ◽  
Christian Lehr ◽  
...  

Abstract. In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity of the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on Latin hypercube sampling. Ten performance criteria, including the Nash–Sutcliffe efficiency (NSE), the Kling–Gupta efficiency (KGE) and its three components (alpha, beta and r), as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC), are calculated. With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. First, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Second, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter. In this study, a high bijective connective strength between model parameters and performance criteria is found for low- and mid-flow conditions. Moreover, the RT analyses emphasise the benefit of an individual analysis of the three components of KGE and of the FDC segments. Furthermore, the RT analyses highlight under which conditions these performance criteria provide insights into precise parameter identification. Our results show that separate performance criteria are required to identify dominant parameters for low- and mid-flow conditions, whilst the number of required performance criteria for high flows increases with increasing process complexity in the catchment. Overall, the analysis of the connective strength between model parameters and performance criteria using RTs contributes to a more realistic handling of parameters and performance criteria in hydrological modelling.
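
A toy version of the two-directional RT analysis, with random stand-ins for the Latin hypercube runs and two hypothetical criteria: trees grown with a criterion as target reveal its relevant parameters, and trees grown with a parameter as target reveal the criteria it impacts.

```python
# Two-directional regression-tree (RT) sketch on synthetic stand-in data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
params = rng.uniform(size=(500, 4))                 # LHS-style parameter sets
# Hypothetical criteria: make "NSE" depend on param 0, "KGE beta" on param 2.
criteria = np.column_stack([
    params[:, 0] + 0.1 * rng.normal(size=500),      # stand-in "NSE"
    params[:, 2] + 0.1 * rng.normal(size=500),      # stand-in "KGE beta"
])

# Direction 1: criterion as target -> which parameters matter for it?
for j, name in enumerate(["NSE", "KGE_beta"]):
    rt = DecisionTreeRegressor(max_depth=3).fit(params, criteria[:, j])
    print(name, "<-", rt.feature_importances_.round(2))

# Direction 2: parameter as target -> which criteria respond to it?
for i in range(params.shape[1]):
    rt = DecisionTreeRegressor(max_depth=3).fit(criteria, params[:, i])
    print(f"param_{i}", "<-", rt.feature_importances_.round(2))
```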


2010 ◽  
Vol 20 (03) ◽  
pp. 193-207 ◽  
Author(s):  
IVAN TYUKIN ◽  
ERIK STEUR ◽  
HENK NIJMEIJER ◽  
DAVID FAIRHURST ◽  
INSEON SONG ◽  
...  

We consider the problem of how to recover the state and parameter values of typical model neurons, such as Hindmarsh-Rose, FitzHugh-Nagumo, and Morris-Lecar, from in vitro measurements of membrane potentials. In control theory, in terms of observer design, model neurons qualify as locally observable. However, unlike most models traditionally addressed in control theory, no parameter-independent diffeomorphism exists such that the original model equations can be transformed into adaptive canonic observer form. For a large class of model neurons, however, state and parameter reconstruction is nevertheless possible. We propose a method which, subject to mild conditions on the richness of the measured signal, allows model parameters and state variables to be reconstructed up to an equivalence class.
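
The paper's adaptive-observer construction is not reproduced here; as a much cruder identifiability illustration, the sketch below fits FitzHugh-Nagumo parameters to a noisy simulated voltage trace by brute-force least squares, with illustrative true values and initial guesses.

```python
# Not the observer of the paper: a least-squares sketch showing that a
# sufficiently rich voltage trace constrains FitzHugh-Nagumo parameters.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_eval = np.linspace(0, 50, 500)

def fhn(t, y, a, b, tau, i_ext):
    """FitzHugh-Nagumo right-hand side."""
    v, w = y
    return [v - v**3 / 3 - w + i_ext, (v + a - b * w) / tau]

def voltage(theta):
    """Membrane-potential trace for parameter vector theta."""
    sol = solve_ivp(fhn, (0, 50), [0.0, 0.0], t_eval=t_eval, args=tuple(theta))
    return sol.y[0]

theta_true = [0.7, 0.8, 12.5, 0.5]                 # illustrative "true" values
v_meas = voltage(theta_true) + 0.01 * np.random.default_rng(4).normal(size=t_eval.size)

fit = least_squares(lambda th: voltage(th) - v_meas,
                    x0=[0.6, 0.7, 11.0, 0.45])     # initial guess near truth
print("recovered:", fit.x.round(2))                # ideally close to theta_true
```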


Dependability ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 9-17 ◽  
Author(s):  
A. V. Antonov ◽  
V. A. Chepurko ◽  
A. N. Cherniaev

Aim. Common cause failures (CCFs) are dependent failures of groups of certain elements that occur simultaneously or within a short period of time (i.e. almost simultaneously) due to a single common cause (e.g. a sudden change of climatic operating conditions, flooding of premises, etc.). A dependent failure is a multiple failure of several elements of a system whose probability cannot be expressed as a simple product of the probabilities of unconditional failures of the individual elements. CCF probability calculations use a number of common models, i.e. the Greek letter model, the alpha-factor and beta-factor models, and their variants. The beta-factor model is the simplest in terms of simulation of dependent failures and further dependability calculations. Other models, when used in simulation, involve combinatorial enumeration of dependent events in a group of n events, which becomes labour-intensive if n is high. For the selected structure diagrams of dependability, the paper analyzes the calculation method of system failure probability with CCFs taken into account under the beta-factor model. The aim of the paper is to thoroughly analyze the beta-factor method for three structure diagrams of dependability, research the effects of the model parameters on the final result, and find the limitations of the beta-factor model's applicability. Methods. The calculations were performed using numerical methods of solution of equations and analytical methods of function studies. Conclusions. The paper features an in-depth study of the method of undependability calculation for three structure diagrams that accounts for CCFs and uses the beta-factor model. In the first example, for the selected structure diagram of n parallel elements with identical dependability, it is analytically shown that accounting for CCFs does not necessarily cause increased undependability. In the second example, of a primary junction of n elements with identical dependability, it is shown that accounting for CCFs, subject to parameter values, causes both increased and decreased undependability. A number of beta-factor model parameter values were identified that cause unacceptable values of system failure probability. These sets of values correspond to relatively high model parameter values and are hardly attainable in practice as part of the engineering of real systems with highly dependable components. In the third example, the conventional bridge diagram with two groups of CCFs is considered. The complex, ambivalent effect of the beta-factor model parameters on the probability of failure is shown. As in the second example, limitations of the applicability of the beta-factor model are identified.
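
Using the conventional beta-factor split (the paper's exact formulation may differ), the sketch below computes the failure probability of a parallel group of n identical elements whose individual failure probability q is divided into an independent part (1 - β)q and a common-cause part βq; the group fails if the common cause strikes or all independent failures coincide. All numbers are illustrative.

```python
# Standard beta-factor split for a parallel group of n identical elements.
# A sketch under the common convention, not necessarily the paper's exact model.
def parallel_failure_prob(q, n, beta):
    """System failure probability for n redundant identical elements."""
    q_ind = (1.0 - beta) * q          # independent failure part
    q_ccf = beta * q                  # common-cause failure part
    return q_ccf + (1.0 - q_ccf) * q_ind ** n

q, n = 1e-3, 3
for beta in (0.0, 0.01, 0.1):
    print(f"beta={beta:4.2f}: Q_sys = {parallel_failure_prob(q, n, beta):.3e}")
# Under this conventional split the common-cause term beta*q dominates the
# n-fold redundancy benefit for parallel structures.
```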


2018 ◽  
Vol 20 (1) ◽  
pp. 33
Author(s):  
A. Mirzayeva ◽  
N.A. Slavinskaya ◽  
M. Abbasi ◽  
J.H. Starcke ◽  
W. Li ◽  
...  

A module of the PrIMe automated data-centric infrastructure, Bound-to-Bound Data Collaboration (B2BDC), was used for the analysis of systematic uncertainty and data consistency of the H2/CO reaction model (73/17). For this purpose, a dataset of 167 experimental targets (ignition delay time and laminar flame speed) and 55 active model parameters (pre-exponential factors in the Arrhenius form of the reaction rate coefficients) was constructed. Consistency analysis of experimental data from the composed dataset revealed disagreement between models and data. Two consistency measures were applied to assess the quality of the experimental targets (Quantities of Interest, QoIs): the scalar consistency measure, which quantifies the tightening index of the constraints while still ensuring the existence of a set of model parameter values whose associated modeling output predicts the experimental QoIs within the uncertainty bounds; and a newly developed method of computing the vector consistency measure (VCM), which determines the minimal bound changes for QoIs initially identified as inconsistent, each bound by its own extent, while still ensuring the existence of a set of model parameter values whose associated modeling output predicts the experimental QoIs within the uncertainty bounds. The consistency analysis suggested that elimination of 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. After that, the feasible parameter set was constructed by decreasing the uncertainties of several reaction rate coefficients. This dataset was then subjected to model optimization and analysis within the B2BDC framework. Four methods of parameter optimization were applied, including those unique to the B2BDC framework. The optimized models showed improved agreement with experimental values compared to the initially assembled model. Moreover, predictions for experiments not included in the initial dataset were investigated. The results demonstrate the benefits of applying the B2BDC methodology to the development of predictive kinetic models.
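
A toy linear-surrogate version of the scalar consistency measure: find the largest uniform tightening gamma of all QoI bounds for which some parameter vector remains feasible; with linear surrogates this is a small LP. All slopes, offsets and bounds below are invented, not the H2/CO dataset.

```python
# Toy scalar consistency measure: maximize the uniform bound tightening
# gamma while some parameter vector x stays feasible. Linear surrogates
# M_e(x) = a_e . x + b_e make the problem a small LP; data are made up.
import numpy as np
from scipy.optimize import linprog

a = np.array([[1.0, 0.5], [0.3, -1.0], [-0.6, 0.8]])   # surrogate slopes
b = np.array([0.1, 0.0, -0.2])                          # surrogate offsets
L = np.array([-0.5, -1.2, -1.0])                        # lower QoI bounds
U = np.array([1.5, 0.8, 1.0])                           # upper QoI bounds

# Variables z = (x1, x2, gamma); maximize gamma <=> minimize -gamma.
#   a_e.x + b_e <= U_e - gamma   ->   a_e.x + gamma <= U_e - b_e
#   a_e.x + b_e >= L_e + gamma   ->  -a_e.x + gamma <= b_e - L_e
A_ub = np.vstack([np.hstack([a, np.ones((3, 1))]),
                  np.hstack([-a, np.ones((3, 1))])])
b_ub = np.concatenate([U - b, b - L])
res = linprog(c=[0, 0, -1], A_ub=A_ub, b_ub=b_ub,
              bounds=[(-1, 1), (-1, 1), (None, None)])  # prior box on x
gamma = -res.fun
print(f"consistency measure gamma = {gamma:.3f} "
      f"({'consistent' if gamma >= 0 else 'inconsistent'} dataset)")
```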

