Simulating Contact Using the Elastic Foundation Algorithm in OpenSim

2018 ◽  
Author(s):  
Michael W. Hast ◽  
Brett G. Hanson ◽  
Josh R. Baxter

Abstract
Modeling joint contact is necessary to test many questions using simulation paradigms, but this portion of OpenSim is not well understood. The purpose of this study was to provide a guide for implementing a validated elastic foundation contact model in OpenSim. First, the load-displacement properties of a stainless steel ball bearing and an ultra-high molecular weight polyethylene (UHMWPE) slab were recorded during a controlled physical experiment. These geometries were imported into OpenSim and contact mechanics were modeled with the on-board elastic foundation algorithm. Particle swarm optimization was performed to determine the elastic foundation model stiffness (2.14×10^11 ± 6.81×10^9 N/m) and dissipation constant (0.999 ± 0.003). Estimated contact forces compared favorably with blinded experimental data (root mean square error: 87.58 ± 1.57 N). Last, total knee replacement geometry was used to perform a sensitivity analysis of material stiffness and mesh density with regard to penetration depth and computational time. These simulations demonstrated that material stiffnesses between 10^11 and 10^12 N/m resulted in realistic penetrations (< 0.15 mm) when subjected to 981 N loads. Material stiffnesses between 10^13 and 10^15 N/m increased computation time by factors of 12–23. This study shows the utility of performing a simple physical experiment to tune model parameters when physical components of orthopaedic implants are not available to the researcher. It also demonstrates the efficacy of employing the on-board elastic foundation algorithm to create realistic simulations of contact between orthopaedic implants.
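The parameter-tuning step can be illustrated with a minimal particle swarm optimization sketch. The contact model below is a hypothetical stand-in for an OpenSim elastic foundation simulation run (a linear stiffness with velocity-dependent dissipation); the synthetic data, bounds, and swarm settings are all assumptions for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical contact model standing in for an OpenSim simulation run:
# force = stiffness * depth * (1 + dissipation * velocity).
def contact_force(depth, vel, stiffness, dissipation):
    return stiffness * depth * (1.0 + dissipation * vel)

# Synthetic "experimental" load-displacement data with known parameters.
true_k, true_c = 2.0e11, 0.9
depth = np.linspace(1e-6, 5e-5, 50)          # penetration depth, m
vel = np.full_like(depth, 1e-3)              # penetration velocity, m/s
f_meas = contact_force(depth, vel, true_k, true_c)

def rmse(params):
    k, c = params
    return np.sqrt(np.mean((contact_force(depth, vel, k, c) - f_meas) ** 2))

# Minimal particle swarm optimization over (stiffness, dissipation).
lo, hi = np.array([1e10, 0.0]), np.array([1e12, 1.0])
n_particles, n_iters = 30, 200
pos = lo + rng.random((n_particles, 2)) * (hi - lo)
velo = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([rmse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    # Inertia plus cognitive and social pulls toward personal/global bests.
    velo = 0.7 * velo + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + velo, lo, hi)
    vals = np.array([rmse(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

In the actual study each objective evaluation would be a full forward simulation, which is why the swarm size and iteration budget matter for wall-clock time.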

Author(s):  
Aalok Trivedi ◽  
Dereje Agonafer ◽  
Deepak Sivanandan ◽  
Mark Hendrix ◽  
Akbar Sahrapour

Computational Fluid Dynamics (CFD) is widely used in the telecommunication industry to validate experimental data and obtain both qualitative and quantitative results during product development. A typical outdoor telecommunications cabinet requires the modeling of a large number of components in order to perform the required air flow and thermal design. Among these components, the heat exchanger is the most critical to thermal performance. The cabinet heat exchanger and other thermal components make up a complex thermal system. This thermal system must be characterized and optimized in a short time frame to support time-to-market requirements. CFD techniques allow for completing system thermal optimization long before product test data can be available. However, the computational model of the complex thermal system leads to a large mesh count and corresponding lengthy computational times. The objective of this paper is to present an overview of techniques to minimize the computational time for complex designs such as a heat exchanger used in telecommunication cabinets. The discussion herein presents the concepts which lead to developing a compact model of the heat exchanger, reducing the mesh count and thereby the computation time, without compromising the acceptability of the results. The model can be further simplified by identifying the components significantly affecting the physics of the problem and eliminating components that will not adversely affect either the fluid mechanics or heat transfer. This will further reduce the mesh density. Compact modeling, selective meshing, and replacing sub-components with simplified equivalent models all help reduce the overall model size. The model thus developed is compared to a benchmark case without the compact model. Given that the validity of compact models is not generalized, it is expected that this methodology can address this particular class of problems in telecommunications systems. 
The CFD code FLOTHERM™ by Flomerics is used to carry out the analysis.
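One common way to build the kind of compact heat exchanger model described above is to replace the meshed core with a lumped effectiveness-NTU representation that returns outlet temperatures from a single UA value. The sketch below assumes a counterflow arrangement; the flow rates, UA, and temperatures are illustrative assumptions, not values from the paper.

```python
import math

def compact_hx_outlets(m_hot, m_cold, cp, UA, T_hot_in, T_cold_in):
    """Counterflow effectiveness-NTU compact model of a heat exchanger.
    m_hot, m_cold: mass flow rates (kg/s); cp: specific heat (J/kg-K);
    UA: overall conductance (W/K). Returns (T_hot_out, T_cold_out) in C."""
    C_hot, C_cold = m_hot * cp, m_cold * cp
    C_min, C_max = min(C_hot, C_cold), max(C_hot, C_cold)
    Cr = C_min / C_max
    NTU = UA / C_min
    if abs(Cr - 1.0) < 1e-12:            # balanced counterflow limit
        eff = NTU / (1.0 + NTU)
    else:
        e = math.exp(-NTU * (1.0 - Cr))
        eff = (1.0 - e) / (1.0 - Cr * e)
    q = eff * C_min * (T_hot_in - T_cold_in)   # heat duty, W
    return T_hot_in - q / C_hot, T_cold_in + q / C_cold

# Illustrative cabinet-air loop: equal air streams, assumed UA of 150 W/K.
T_hot_out, T_cold_out = compact_hx_outlets(0.2, 0.2, 1005.0, 150.0, 55.0, 35.0)
```

In a CFD tool such a lumped model typically replaces the finely meshed core with a single volume carrying this heat duty, which is where the mesh-count savings come from.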


PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e2960 ◽  
Author(s):  
Ross H. Miller ◽  
Rebecca L. Krupenevich ◽  
Alison L. Pruziner ◽  
Erik J. Wolf ◽  
Barri L. Schnall

Background: Individuals with unilateral lower limb amputation have a high risk of developing knee osteoarthritis (OA) in their intact limb as they age. This risk may be related to joint loading experienced earlier in life. We hypothesized that loading during walking would be greater in the intact limb of young US military service members with limb loss than in controls with no limb loss.
Methods: Cross-sectional instrumented gait analysis at self-selected walking speeds with a limb loss group (N = 10, age 27 ± 5 years, 170 ± 36 days since last surgery) including five service members with transtibial limb loss and five with transfemoral limb loss, all walking independently with their first prosthesis for approximately two months. Controls (N = 10, age 30 ± 4 years) were service members with no overt demographic risk factors for knee OA. 3D inverse dynamics modeling was performed to calculate joint moments, and medial knee joint contact forces (JCF) were calculated using a reduction-based musculoskeletal modeling method and expressed relative to body weight (BW).
Results: Peak JCF and maximum JCF loading rate were significantly greater in limb loss (184% BW, 2,469% BW/s) vs. controls (157% BW, 1,985% BW/s), with large effect sizes. Results were robust to probabilistic perturbations to the knee model parameters.
Discussion: Assuming these data are reflective of joint loading experienced in daily life, they support a "mechanical overloading" hypothesis for the risk of developing knee OA in the intact limb of limb loss subjects. Examination of the evolution of gait mechanics, joint loading, and joint health over time, as well as interventions to reduce load or strengthen the ability of the joint to withstand loads, is warranted.
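The idea of a medial contact force estimate can be illustrated with a simplified two-contact-point frontal-plane balance: the axial knee force is split between medial and lateral condyles so that their moments about the joint centre match the external adduction moment. This is a hypothetical sketch, not the authors' reduction-based musculoskeletal model (which accounts for muscle forces); the condylar spacing and input loads are assumed values.

```python
def medial_contact_force(F_axial, M_add, d=0.045):
    """Split the axial knee force across two condylar contact points so the
    frontal-plane moment balance matches the external adduction moment.
    F_axial: total axial compressive force at the knee (N)
    M_add:   external knee adduction moment (N*m)
    d:       medial-lateral spacing of the contact points (m, assumed)"""
    F_med = F_axial / 2.0 + M_add / d
    F_lat = F_axial - F_med
    return F_med, F_lat

# Example: 2.5 body weights of axial force on an 800 N subject with a
# 40 N*m adduction moment (illustrative numbers, not study data).
F_med, F_lat = medial_contact_force(2.5 * 800.0, 40.0)
medial_jcf_bw = 100.0 * F_med / 800.0   # medial JCF in % body weight
```

The balance makes explicit why a larger adduction moment shifts load medially even when total axial force is unchanged.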


2021 ◽  
Author(s):  
Mikhail Sviridov ◽  
Anton Mosin ◽  
Sergey Lebedev ◽  
Ron Thompson ◽  
...  

During proactive geosteering, special inversion algorithms are used to process the readings of logging-while-drilling resistivity tools in real time and provide oil field operators with formation models to make informed steering decisions. Currently, there is no industry standard for inversion deliverables and corresponding quality indicators because major tool vendors develop their own device-specific algorithms and use them internally. This paper presents the first implementation of a vendor-neutral inversion approach applicable to any induction resistivity tool, enabling operators to standardize the efficiency of various geosteering services. The necessity of such a universal inversion approach was inspired by the activity of the LWD Deep Azimuthal Resistivity Services Standardization Workgroup initiated by the SPWLA Resistivity Special Interest Group in 2016. The proposed inversion algorithm utilizes a 1D layer-cake formation model and is performed interval-by-interval. The following model parameters can be determined: horizontal and vertical resistivities of each layer, positions of layer boundaries, and formation dip. The inversion can support an arbitrary deep azimuthal induction resistivity tool with coaxial, tilted, or orthogonal transmitting and receiving antennas. The inversion is purely data-driven; it works in automatic mode and provides fully unbiased results obtained from tool readings only. The algorithm is based on the statistical reversible-jump Markov chain Monte Carlo method, which does not require any predefined assumptions about the formation structure and enables the search for models explaining the data even when the number of layers in the model is unknown. To globalize the search, the algorithm runs several Markov chains capable of exchanging their states with one another to move from the vicinity of a local minimum to a more promising domain of the model parameter space.
During execution, the inversion keeps all models it evaluates in order to estimate the resolution accuracy of formation parameters and generate several quality indicators. Eventually, these indicators are delivered together with the recovered resistivity models to help operators evaluate the reliability of inversion results. To ensure high performance of the inversion, a fast and accurate semi-analytical forward solver is employed to compute the required responses of a tool with specific geometry and their derivatives with respect to any parameter of the multi-layered model. Moreover, the reliance on the simultaneous evolution of multiple Markov chains makes the algorithm suitable for parallel execution, which significantly decreases the computational time. Application of the proposed inversion is shown on a series of synthetic examples and field case studies, such as navigating the well along the reservoir roof or near the oil-water contact in oil sands. Inversion results for all scenarios confirm that the proposed algorithm can successfully evaluate formation model complexity, recover model parameters, and quantify their uncertainty within a reasonable computational time. The presented vendor-neutral stochastic approach to data processing leads to the standardization of the inversion output, including the resistivity model and its quality indicators, which helps operators better understand the capabilities of tools from different vendors and eventually make more confident geosteering decisions.
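The chain-exchange mechanism described above can be sketched in isolation. The toy below runs a ladder of Metropolis chains at increasing temperatures on a deliberately bimodal one-dimensional misfit (a stand-in for a multimodal inversion objective) and proposes replica swaps between adjacent temperatures. Everything here (the target, the temperature ladder, the step size) is an illustrative assumption, not the paper's reversible-jump sampler over layered resistivity models.

```python
import math
import random

random.seed(1)

# Bimodal misfit standing in for a multimodal inversion objective:
# global minimum near x = +2, a slightly worse local minimum near x = -2.
def energy(x):
    return min((x - 2.0) ** 2, (x + 2.0) ** 2 + 0.5) / 0.2

temps = [1.0, 2.0, 4.0, 8.0]                 # temperature ladder, coldest first
chains = [random.uniform(-4.0, 4.0) for _ in temps]

def mh_step(x, T):
    # One Metropolis step targeting exp(-energy/T).
    prop = x + random.gauss(0.0, 0.5)
    if random.random() < math.exp(min(0.0, (energy(x) - energy(prop)) / T)):
        return prop
    return x

samples = []
for it in range(20000):
    chains = [mh_step(x, T) for x, T in zip(chains, temps)]
    # Replica-exchange move: propose swapping states of adjacent temperatures.
    j = random.randrange(len(temps) - 1)
    delta = (energy(chains[j]) - energy(chains[j + 1])) * (
        1.0 / temps[j] - 1.0 / temps[j + 1])
    if random.random() < math.exp(min(0.0, delta)):
        chains[j], chains[j + 1] = chains[j + 1], chains[j]
    if it >= 2000:
        samples.append(chains[0])            # keep only the cold chain
```

The hot chains cross the barrier between minima easily, and the swap moves hand those discoveries down to the cold chain, which is the globalization effect the abstract refers to.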


2022 ◽  
Vol 16 (1) ◽  
pp. 0-0

A secure and efficient authentication mechanism is a major concern in cloud computing due to data sharing between cloud servers and users over the internet. This paper proposes an efficient Hashing, Encryption and Chebyshev (HEC)-based authentication scheme in order to provide security for data communication. Formal and informal security analyses demonstrate that the proposed HEC-based authentication approach provides data security in the cloud more efficiently. The proposed approach addresses the security issues and ensures the privacy and data security of the cloud user. Moreover, the proposed HEC-based authentication approach makes the system more robust and secure, and has been verified under multiple scenarios. In addition, the proposed authentication approach requires less computational time and memory than existing authentication techniques: for a 100 KB data size, its computation time and memory usage are 26 ms and 1,878 bytes, respectively.
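The Chebyshev component of such schemes typically rests on the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_rs(x) = T_s(T_r(x)), which enables a Diffie-Hellman-style key agreement. The sketch below demonstrates only that property with toy-sized secret degrees; it is not the paper's HEC protocol, and practical schemes use enhanced maps over large domains to resist known attacks on the basic real-valued version.

```python
import math

def chebyshev(n, x):
    # T_n(x) = cos(n * arccos(x)) for x in [-1, 1].
    return math.cos(n * math.acos(x))

x = 0.53          # public seed in [-1, 1]
r, s = 37, 59     # secret degrees (toy sizes; real schemes use large values)

A = chebyshev(r, x)          # Alice publishes T_r(x)
B = chebyshev(s, x)          # Bob publishes T_s(x)

key_alice = chebyshev(r, B)  # T_r(T_s(x))
key_bob = chebyshev(s, A)    # T_s(T_r(x)) -- equal by the semigroup property
```

Both sides arrive at T_rs(x) without ever transmitting r or s, mirroring classic Diffie-Hellman with polynomial composition in place of modular exponentiation.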


Author(s):  
Francesco Braghin ◽  
Federico Cheli ◽  
Edoardo Sabbioni

Individual tire model parameters are traditionally derived from expensive component indoor laboratory tests as a result of an identification procedure minimizing the error with respect to force and slip measurements. These parameters are then transferred to vehicle models used at a design stage to simulate the vehicle handling behavior. A methodology aimed at identifying the Magic Formula-Tyre (MF-Tyre) model coefficients of each individual tire for pure cornering conditions, based only on measurements carried out on board the vehicle (vehicle sideslip angle, yaw rate, lateral acceleration, speed, and steer angle) during standard handling maneuvers (step-steers), is instead presented in this paper. The resulting tire model thus includes vertical load dependency and implicitly compensates for suspension geometry and compliance (i.e., scaling factors are included in the identified MF coefficients). The global number of tests (indoor and outdoor) needed for characterizing a tire for handling simulation purposes can thus be reduced. The proposed methodology consists of three subsequent steps. During the first phase, the average MF coefficients of the tires of an axle and the relaxation lengths are identified through an extended Kalman filter. Then the vertical loads and the slip angles at each tire are estimated. The results of these two steps are used as inputs to the last phase, where the MF-Tyre model coefficients for each individual tire are identified through a constrained minimization approach. Results of the identification procedure have been compared with experimental data collected on a sport vehicle equipped with different tires on the front and rear axles and instrumented with dynamometric hubs for tire contact force measurement. Thus, a direct matching between the measured and the estimated contact forces could be performed, showing a successful tire model identification.
As a further verification of the obtained results, the identified tire model has also been compared with laboratory tests on the same tire. A good agreement has been observed for the rear tire where suspension compliance is negligible, while front tire data are comparable only after including a suspension compliance compensation term into the identification procedure.
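For reference, the pure-cornering Magic Formula has the standard Pacejka form Fy = D sin(C arctan(B·alpha − E(B·alpha − arctan(B·alpha)))). The sketch below generates synthetic lateral-force data from assumed coefficients and recovers two of them by a crude grid search; this is a simplified stand-in for the paper's Kalman-filter plus constrained-minimization pipeline, and all coefficient values are illustrative.

```python
import numpy as np

def magic_formula(alpha, B, C, D, E):
    # Pacejka Magic Formula, pure lateral slip (alpha in rad, Fy in N).
    Ba = B * alpha
    return D * np.sin(C * np.arctan(Ba - E * (Ba - np.arctan(Ba))))

# Synthetic "measured" lateral force from assumed coefficients.
alpha = np.deg2rad(np.linspace(-12.0, 12.0, 121))    # slip angle sweep
true_B, true_C, true_D, true_E = 8.5, 1.4, 4500.0, -0.5
fy_meas = magic_formula(alpha, true_B, true_C, true_D, true_E)

# Crude identification of B (stiffness factor) and D (peak force, N) by
# grid search, with C and E held at nominal values.
best, best_err = (None, None), np.inf
for B in np.linspace(6.0, 11.0, 51):
    for D in np.linspace(3000.0, 6000.0, 61):
        err = float(np.sum((magic_formula(alpha, B, 1.4, D, -0.5) - fy_meas) ** 2))
        if err < best_err:
            best, best_err = (B, D), err
```

In the paper's setting the "measurements" come from estimated slip angles and vertical loads rather than a bench rig, which is exactly why the identified coefficients absorb suspension effects.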


Spine ◽  
2008 ◽  
Vol 33 (1) ◽  
pp. 19-26 ◽  
Author(s):  
Christina A. Niosi ◽  
Derek C. Wilson ◽  
Qingan Zhu ◽  
Ory Keynan ◽  
David R. Wilson ◽  
...  

2010 ◽  
Vol 3 (6) ◽  
pp. 1555-1568 ◽  
Author(s):  
B. Mijling ◽  
O. N. E. Tuinder ◽  
R. F. van Oss ◽  
R. J. van der A

Abstract. The Ozone Profile Algorithm (OPERA), developed at KNMI, retrieves the vertical ozone distribution from nadir spectral satellite measurements of backscattered sunlight in the ultraviolet and visible wavelength range. To produce consistent global datasets the algorithm needs good global performance, while a short computation time facilitates its use in near-real-time applications. To test the global performance of the algorithm we look at the convergence behaviour as a diagnostic tool of the ozone profile retrievals from the GOME instrument (on board ERS-2) for February and October 1998. In this way, we uncover different classes of retrieval problems, related to the South Atlantic Anomaly, low cloud fractions over deserts, desert dust outflow over the ocean, and the intertropical convergence zone. The influence of the first guess and the external input data, including the ozone cross-sections and the ozone climatologies, on the retrieval performance is also investigated. By using a priori ozone profiles selected on the expected total ozone column, retrieval problems due to anomalous ozone distributions (such as in the ozone hole) can be avoided. By applying the algorithm adaptations the convergence statistics improve considerably, not only increasing the number of successful retrievals, but also reducing the average computation time due to fewer iteration steps per retrieval. For February 1998, non-convergence was brought down from 10.7% to 2.1%, while the mean number of iteration steps (which dominates the computational time) dropped 26%, from 5.11 to 3.79.


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. V1-V9 ◽  
Author(s):  
Zhonghuan Chen ◽  
Sergey Fomel ◽  
Wenkai Lu

When plane-wave destruction (PWD) is implemented by implicit finite differences, the local slope is estimated by an iterative algorithm. We propose an analytical estimator of the local slope that is based on convergence analysis of the iterative algorithm. Using the analytical estimator, we design a noniterative method to estimate slopes by a three-point PWD filter. Compared with the iterative estimation, the proposed method needs only one regularization step, which reduces computation time significantly. With directional decoupling of the plane-wave filter, the proposed algorithm is also applicable to 3D slope estimation. We present synthetic and field experiments to demonstrate that the proposed algorithm can yield a correct estimation result with shorter computational time.
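The idea of choosing the slope that best destroys a local plane wave can be sketched with a simple slope scan: for each candidate slope, shift the neighbouring trace by that slope (linear interpolation) and measure the destruction residual. This is an illustrative stand-in for the concept, not the implicit finite-difference PWD or the analytical estimator of the paper; the synthetic event and scan range are assumptions.

```python
import numpy as np

# Synthetic gather: one dipping linear event with slope true_p samples/trace.
nt, nx, true_p = 200, 20, 2.0
t = np.arange(nt, dtype=float)
data = np.stack([np.exp(-0.5 * ((t - 80.0 - true_p * ix) / 3.0) ** 2)
                 for ix in range(nx)], axis=1)       # shape (nt, nx)

def pwd_residual(d, p):
    # "Destroy" the plane wave: shift trace x+1 back by p samples
    # (linear interpolation) and subtract it from trace x.
    tt = np.arange(d.shape[0], dtype=float)
    res = 0.0
    for ix in range(d.shape[1] - 1):
        shifted = np.interp(tt, tt - p, d[:, ix + 1])
        res += float(np.sum((d[:, ix] - shifted) ** 2))
    return res

slopes = np.linspace(0.0, 4.0, 81)
best_slope = slopes[int(np.argmin([pwd_residual(data, p) for p in slopes]))]
```

The residual vanishes when the shift matches the true dip, which is the property both the iterative and the analytical PWD estimators exploit, without the cost of scanning.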


Author(s):  
Yi Zhu ◽  
Evgueni T. Filipov

Origami-inspired structures provide novel solutions to many engineering applications. The presence of self-contact within origami patterns has been difficult to simulate, yet it has significant implications for the foldability, kinematics and resulting mechanical properties of the final origami system. To open up the full potential of origami engineering, this paper presents an efficient numerical approach that simulates the panel contact in a generalized origami framework. The proposed panel contact model is based on the principle of stationary potential energy and assumes that the contact forces are conserved. The contact potential is formulated such that both the internal force vector and the stiffness matrix approach infinity as the distance between the contacting panel and node approaches zero. We use benchmark simulations to show that the model can correctly capture the kinematics and mechanics induced by contact. By tuning the model parameters accordingly, this methodology can simulate the thickness in origami. Practical examples are used to demonstrate the validity, efficiency and the broad applicability of the proposed model.
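The stated limit behaviour, internal force and stiffness growing without bound as the panel-node distance approaches zero, can be reproduced with a simple barrier potential. The form below is an illustrative choice with that property, not the authors' exact expression; the stiffness constant k and activation distance d0 are assumed.

```python
def contact_potential(d, k=1.0, d0=1.0):
    """Barrier potential active for panel-node distance d < d0.
    Energy E, repulsive force (-dE/dd), and stiffness (d2E/dd2) all grow
    without bound as d -> 0, as required for panel contact."""
    if d >= d0:
        return 0.0, 0.0, 0.0
    u = d0 / d
    energy = k * (u - 1.0) ** 2
    force = 2.0 * k * (u - 1.0) * d0 / d ** 2
    stiffness = 2.0 * k * (d0 ** 2 / d ** 4 + 2.0 * d0 * (u - 1.0) / d ** 3)
    return energy, force, stiffness
```

Because the potential is conservative, the force and stiffness are exact first and second derivatives of the energy, so they slot directly into the internal force vector and tangent stiffness matrix of an energy-based origami solver.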

