The Influence of Muscle Model Complexity in Musculoskeletal Motion Modeling

1985 ◽  
Vol 107 (2) ◽  
pp. 147-157 ◽  
Author(s):  
M. L. Audu ◽  
D. T. Davy

A comparative study of four different muscle models in a musculoskeletal motion problem is presented. The models vary in complexity from a simple input-output model to the more complex model of Hatze [1]. These models are used to solve a minimum-time kicking problem with an optimal control algorithm. The results demonstrate the strong influence of model choice on the predicted kinematic and kinetic parameters. The study illustrates some of the advantages and disadvantages involved in trade-offs between model complexity and practicability in musculoskeletal motion studies. The results also illustrate the importance of appropriately detailed parameter estimation studies in mathematical modeling of the musculoskeletal system.
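
The complexity trade-off can be illustrated with a toy simulation (all values and both models below are illustrative assumptions, not the paper's): a bare input-output model maps excitation directly to torque, while a slightly richer model adds first-order activation dynamics, delaying the torque response — a delay that matters in a time-critical task such as kicking.

```python
# Contrast a hypothetical "input-output" muscle model (torque follows
# excitation instantly) with one adding first-order activation dynamics.
def simulate(tau_act=None, t_max=0.5, dt=0.01, max_torque=100.0):
    torques, a = [], 0.0
    for _ in range(int(t_max / dt)):
        u = 1.0                              # full excitation (a kick)
        if tau_act is None:
            a = u                            # input-output: instantaneous
        else:
            a += (u - a) / tau_act * dt      # first-order activation lag
        torques.append(max_torque * a)
    return torques

instant = simulate()
lagged = simulate(tau_act=0.05)
print(instant[1], round(lagged[1], 1))  # the lagged model builds up torque gradually
```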

2021 ◽  
Vol 21 (8) ◽  
pp. 2447-2460
Author(s):  
Stuart R. Mead ◽  
Jonathan Procter ◽  
Gabor Kereszturi

Abstract. The use of mass flow simulations in volcanic hazard zonation and mapping is often limited by model complexity (i.e. uncertainty in correct values of model parameters), a lack of model uncertainty quantification, and limited approaches to incorporate this uncertainty into hazard maps. When quantified, mass flow simulation errors are typically evaluated on a pixel-pair basis, using the difference between simulated and observed (“actual”) map-cell values to evaluate the performance of a model. However, these comparisons conflate location and quantification errors, neglecting possible spatial autocorrelation of evaluated errors. As a result, model performance assessments typically yield moderate accuracy values. In this paper, similarly moderate accuracy values were found in a performance assessment of three depth-averaged numerical models using the 2012 debris avalanche from the Upper Te Maari crater, Tongariro Volcano, as a benchmark. To provide a fairer assessment of performance and evaluate spatial covariance of errors, we use a fuzzy set approach to indicate the proximity of similarly valued map cells. This “fuzzification” of simulated results yields improvements in targeted performance metrics relative to a length scale parameter at the expense of decreases in opposing metrics (e.g. fewer false negatives result in more false positives) and a reduction in resolution. The use of this approach to generate hazard zones incorporating the identified uncertainty and associated trade-offs is demonstrated and indicates a potential use for informed stakeholders by reducing the complexity of uncertainty estimation and supporting decision-making from simulated data.
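
The difference between strict pixel-pair comparison and the fuzzy, proximity-based comparison can be sketched in a few lines (the grids and length scale below are invented for illustration, not the paper's data): a simulated flow cell counts as a "fuzzy hit" if an observed flow cell lies within a length-scale window, rather than requiring an exact cell match.

```python
def within(grid, r, i, j):
    """True if any positive cell of `grid` lies within Chebyshev radius r of (i, j)."""
    n, m = len(grid), len(grid[0])
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < m and grid[ii][jj]:
                return True
    return False

def fuzzy_hit_rate(simulated, observed, r):
    """Fraction of simulated positives matched by an observed positive within radius r."""
    hits = total = 0
    for i, row in enumerate(simulated):
        for j, v in enumerate(row):
            if v:
                total += 1
                hits += within(observed, r, i, j)
    return hits / total if total else 1.0

simulated = [[0, 1, 1],
             [0, 1, 0],
             [0, 0, 0]]
observed  = [[1, 1, 0],
             [1, 0, 0],
             [0, 0, 0]]

print(fuzzy_hit_rate(simulated, observed, 0))  # strict pixel-pair comparison
print(fuzzy_hit_rate(simulated, observed, 1))  # fuzzified, length scale of one cell
```

Increasing the length scale `r` raises the targeted hit rate, mirroring the trade-off the abstract describes: fewer false negatives at the cost of resolution.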


2017 ◽  
Vol 21 (1-2) ◽  
pp. 152-175
Author(s):  
Lila Wade

Financing mechanisms are central to the operational efficacy of peace operations, yet current analysis of peacebuilding finance is atomistic, focusing on a single domain such as coordination or financing. To address the need for a deeper understanding of how financing modalities affect peacebuilding outcomes, this paper identifies the trade-offs and opportunities of different financing schemes across the lifespan of a peace operation. To parse the linkages between financing and outcomes, this paper examines: (1) control of donor funds within a transitional state; (2) budgeting for coordination and alignment; (3) promoting partnerships and participation through funding modalities; and (4) funding ‘quick impact’ projects to bridge the periods of immediate relief and long-term development. With reference to peacebuilding operations in Liberia after the 2003 Accra Comprehensive Peace Agreement, this analysis highlights numerous innovations and experiments in the financing of peace operations, examining the advantages and disadvantages inherent in different approaches.


2018 ◽  
Vol 11 (1) ◽  
pp. 85-100 ◽  
Author(s):  
Ellen Taylor ◽  
Alan J. Card ◽  
Melissa Piatkowski

Aim: Our review evaluated both the effects of single-occupancy patient rooms (SPRs) on patient outcomes for hospitalized adults and user opinion related to SPRs. Background: In 2006, a requirement for SPRs in hospitals was instituted in the United States. This systematic literature review evaluates research published since that time to assess the impact of SPRs. Methods: The review adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. Databases searched included MEDLINE, CINAHL, and Scopus. Supplemental searches were performed. We included studies reporting patient outcomes or user opinion related to SPRs. Appraisal was conducted using a dual appraisal system of evidence levels and methodological quality. Results: Forty-three studies qualified for appraisal. Three were excluded due to methodological quality (no appraisal score). One study was appraised for three individual outcomes (i.e., falls, infections, and user opinion). Eleven studies with low methodological quality scores were not included in the narrative synthesis. Overall, 87% of studies reported advantages associated with SPRs (some a combination of advantages and disadvantages or a combination of advantages and neutral results). Outcomes with the best evidence of benefit include communication, infection control, noise reduction/perceived sleep quality, and preference/perception. Conclusion: SPRs seem to result in more advantages than disadvantages. However, healthcare is a complex adaptive system, and decisions for 100% SPRs should be reviewed alongside related issues, such as necessary workflow modifications, unit configuration and other room layout decisions, patient populations, staffing models, and inherent trade-offs (e.g., the advantage of privacy versus the disadvantage of isolation).


Author(s):  
Daniele Bortoluzzi ◽  
Francesco Biral ◽  
Enrico Bertolazzi ◽  
Paolo Bosetti ◽  
Fabrizio Zendri

In this paper, the effectiveness of an optimal reference manoeuvre is analysed with respect to the complexity of the vehicle model used within the optimal control algorithm. The optimal reference manoeuvre is computed by means of a nonlinear receding horizon planning (NRHP) strategy based on a simplified vehicle model, and is tracked by a controller implemented in a faster low-level loop. The system can autonomously perform lane-change and obstacle-avoidance manoeuvres by tracking the computed reference. The quality of the performed manoeuvres depends on the reference manoeuvre and, consequently, on the vehicle model used by the NRHP. For manoeuvres with low or mild lateral accelerations, reduced-order models might yield realistic and reliable reference manoeuvres. However, critical conditions (e.g., an evasive manoeuvre) require a manoeuvre planner able to capture the highly nonlinear vehicle dynamics that characterize such situations. On the other hand, because the computational cost of the NRHP is generally high and grows with the number of equations in the mathematical model, a trade-off between computational efficiency and model complexity is required. This work analyses the reference manoeuvres produced by two vehicle models of increasing complexity used within the NRHP. The planner's performance on an evasive manoeuvre in critical conditions is evaluated through simulation results.
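
The receding-horizon idea can be sketched with a deliberately crude one-dimensional planner (the point-mass model, the three candidate accelerations, and all constants are assumptions, not the paper's vehicle models): at each step the planner optimizes over a short horizon using a simplified model and applies only the first control.

```python
# Toy receding-horizon planner: a point mass tracks a lateral target.
def plan_step(x, v, target, horizon=5, dt=0.1, accels=(-2.0, 0.0, 2.0)):
    best_a, best_cost = 0.0, float("inf")
    for a0 in accels:
        # roll the simplified model forward: apply a0 once, then coast
        xs, vs, cost = x, v, 0.0
        for k in range(horizon):
            a = a0 if k == 0 else 0.0
            vs += a * dt
            xs += vs * dt
            cost += (xs - target) ** 2 + 0.01 * a * a   # tracking + effort cost
        if cost < best_cost:
            best_a, best_cost = a0, cost
    return best_a                                        # apply only the first control

x, v, target = 0.0, 0.0, 1.0
for _ in range(100):
    a = plan_step(x, v, target)
    v += a * 0.1
    x += v * 0.1
print(round(x, 2))  # the mass settles near the target
```

A richer (e.g., nonlinear tyre) model inside `plan_step` would cost more per evaluation, which is the computational-efficiency trade-off the abstract describes.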


2012 ◽  
Vol 16 (3) ◽  
pp. 254-276 ◽  
Author(s):  
Fernando A. F. Ferreira ◽  
Ronald W. Spahr ◽  
Sérgio P. Santos ◽  
Paulo M. M. Rodrigues

Remarkable progress has occurred over the years in the performance evaluation of bank branches. Even though financial measures are usually considered the most important in assessing branch viability, we posit that insufficient attention has been given to other factors that affect a branch's potential profitability and attractiveness, such as location features, trade-area characteristics, and facilities management. Based on the integrated use of cognitive maps and multiple criteria decision analysis, we propose a framework that adds value to the way potential-attractiveness criteria for bank branches are selected and to the way trade-offs among those criteria are obtained. This framework is the result of a process involving several directors from the five largest banks operating in Portugal and follows a constructivist approach. Our findings suggest that the use of cognitive maps systematically identifies previously omitted criteria that may assess potential attractiveness. The use of multiple criteria techniques clarifies and adds transparency to the way trade-offs are dealt with. Advantages and disadvantages of the proposed framework are also discussed.
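
The multiple-criteria side of such a framework can be illustrated with a minimal additive value model (the criteria names, weights, and scores below are invented for illustration; the paper elicits such trade-off weights from decision makers through cognitive maps and MCDA sessions):

```python
# Additive multi-criteria score: trade-off weights express the relative
# importance of each attractiveness criterion (weights sum to 1).
criteria = ["location", "trade_area", "facilities"]
weights = {"location": 0.5, "trade_area": 0.3, "facilities": 0.2}

def attractiveness(scores):
    """Weighted sum of partial scores (each on a 0-100 scale)."""
    return sum(weights[c] * scores[c] for c in criteria)

branch_a = {"location": 80, "trade_area": 60, "facilities": 70}
branch_b = {"location": 60, "trade_area": 90, "facilities": 80}

print(attractiveness(branch_a))  # 72.0
print(attractiveness(branch_b))  # 73.0
```

Changing the weights reverses the ranking, which is why the transparency of how trade-offs are obtained matters.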


2005 ◽  
Vol 128 (1) ◽  
pp. 14-25 ◽  
Author(s):  
H. Kazerooni ◽  
R. Steger

The first functional load-carrying and energetically autonomous exoskeleton was demonstrated at the University of California, Berkeley, walking at the average speed of 1.3 m/s (2.9 mph) while carrying a 34 kg (75 lb) payload. Four fundamental technologies associated with the Berkeley lower extremity exoskeleton were tackled during the course of this project. These four core technologies include the design of the exoskeleton architecture, control schemes, a body local area network to host the control algorithm, and a series of on-board power units to power the actuators, sensors, and the computers. This paper gives an overview of one of the control schemes. The analysis here is an extension of the classical definition of the sensitivity function of a system: the ability of a system to reject disturbances or the measure of system robustness. The control algorithm developed here increases the closed-loop system sensitivity to its wearer’s forces and torques without any measurement from the wearer (such as force, position, or electromyogram signal). The control method has little robustness to parameter variations and therefore requires a relatively good dynamic model of the system. The trade-offs between having sensors to measure human variables and the lack of robustness to parameter variation are described.
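
The flavour of such a scheme can be sketched on a hypothetical one-degree-of-freedom load (the inertia values and amplification factor are assumptions, not BLEEX parameters): the controller feeds back the torque its internal model predicts is needed, so the wearer supplies only a small share, and a model error directly erodes that benefit.

```python
# Toy sensitivity-amplification sketch: without sensing the wearer, the
# actuator supplies (1 - 1/alpha) of the torque the *model* says the
# motion requires; the human supplies the remainder.
def required_human_torque(accel, inertia=10.0, model_inertia=10.0, alpha=5.0):
    u = (1.0 - 1.0 / alpha) * model_inertia * accel   # actuator torque
    return inertia * accel - u                        # torque left to the human

print(required_human_torque(2.0, alpha=1.0))   # no amplification: 20.0
print(required_human_torque(2.0, alpha=5.0))   # amplified: 4.0
# an inaccurate model erodes the benefit, illustrating the robustness trade-off:
print(round(required_human_torque(2.0, model_inertia=8.0, alpha=5.0), 1))
```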


2000 ◽  
Vol 1699 (1) ◽  
pp. 101-106 ◽  
Author(s):  
A. Raja Shekharan

Pavement deterioration models are indispensable for many purposes; as a result, a number of models are in use. Models with simple equation forms are easier to use, but such models frequently do not suffice; consequently, more complex nonlinear model forms must be considered. However, determining the solution to a complex model form is not an easy task. There are various methods of obtaining solutions to such models, each with its own advantages and disadvantages. This study examines the use of genetic algorithms for model development. A very brief description of genetic algorithms is included, and their application to the development of a model is illustrated. Five models of varied complexity, extracted from the literature, are employed to create databases in which the relationship between the response and the predictor variables is known. The solutions to the models are developed using genetic algorithms. The results indicate a high degree of accuracy, which suggests that genetic algorithms are a useful tool for developing solutions to pavement deterioration models.
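
A minimal genetic algorithm of the kind described can be sketched as follows (the deterioration curve, its parameters, and all GA settings are illustrative assumptions, not the study's models): synthetic data are generated from a known relationship, and the GA recovers its parameters by selection, crossover, and mutation.

```python
# Toy GA fitting parameters (a, b) of a hypothetical deterioration
# curve y = a * age**b to synthetic data by minimizing squared error.
import random

random.seed(0)
ages = [2, 4, 6, 8, 10]
observed = [1.2 * t ** 1.5 for t in ages]   # synthetic "known" relationship

def error(ind):
    a, b = ind
    return sum((a * t ** b - y) ** 2 for t, y in zip(ages, observed))

def evolve(pop_size=60, generations=120):
    pop = [(random.uniform(0, 3), random.uniform(0, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        parents = pop[: pop_size // 4]          # selection: keep the fittest quarter
        children = []
        while len(children) < pop_size - len(parents):
            pa, pb = random.sample(parents, 2)
            child = tuple((x + y) / 2 + random.gauss(0, 0.05)   # crossover + mutation
                          for x, y in zip(pa, pb))
            children.append(child)
        pop = parents + children
    return min(pop, key=error)

a, b = evolve()
print(round(a, 2), round(b, 2))  # should approach a ≈ 1.2, b ≈ 1.5
```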


2013 ◽  
Vol 5 (4) ◽  
pp. 237-241
Author(s):  
Henry De-Graft Acquah

This paper introduces and applies the bootstrap method to compare the power of the test for asymmetry in the Granger and Lee (1989) and von Cramon-Taubadel and Loy (1996) models. The results of the bootstrap simulations indicate that the power of the test for asymmetry depends on various conditions, such as the bootstrap sample size, model complexity, the difference in adjustment speeds, and the amount of noise in the data-generating process used in the application. The true model achieves greater power than the complex model. With a small bootstrap sample size or large noise, both models display low power in rejecting the (false) null hypothesis of symmetry.
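
The power calculation by resampling can be sketched generically (the data-generating process and test below are simplified stand-ins, not the Granger and Lee or von Cramon-Taubadel and Loy specifications): data are simulated with a faster response to positive shocks than to negative ones, resampling of group labels gives the null distribution, and power is the fraction of simulated samples in which symmetry is rejected.

```python
# Toy power study for an asymmetry test via resampling.
import random

random.seed(1)

def simulate(n, up=0.8, down=0.4, noise=0.5):
    resp_pos = [up + random.gauss(0, noise) for _ in range(n)]    # responses to + shocks
    resp_neg = [down + random.gauss(0, noise) for _ in range(n)]  # responses to - shocks
    return resp_pos, resp_neg

def mean(xs):
    return sum(xs) / len(xs)

def p_value(pos, neg, reps=200):
    stat = abs(mean(pos) - mean(neg))          # observed asymmetry
    pooled = pos + neg
    count = 0
    for _ in range(reps):                      # resample under the symmetric null
        random.shuffle(pooled)
        s = abs(mean(pooled[:len(pos)]) - mean(pooled[len(pos):]))
        count += s >= stat
    return count / reps

def power(n, trials=100, alpha=0.05):
    return mean([p_value(*simulate(n)) <= alpha for _ in range(trials)])

print(power(10), power(40))  # power rises with sample size
```

With a small sample or larger `noise`, the rejection rate drops, mirroring the low-power conditions the abstract reports.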


2015 ◽  
Vol 12 (4) ◽  
pp. 3945-4004 ◽  
Author(s):  
S. Pande ◽  
L. Arkesteijn ◽  
H. Savenije ◽  
L. A. Bastidas

Abstract. This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus, it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.
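
The core idea — measuring complexity as the spread of a model's simulations under resampled input forcings — can be sketched with a one-parameter linear reservoir standing in for SIXPAR (the model, parameter values, and spread metric are illustrative assumptions, not the paper's algorithms):

```python
# Complexity as output spread under bootstrap-resampled forcings.
import random

random.seed(2)

def reservoir(rain, k, s0=10.0):
    """Linear reservoir: storage s, outflow q = k * s at each step."""
    s, flows = s0, []
    for r in rain:
        s += r
        q = k * s
        s -= q
        flows.append(q)
    return flows

def complexity(k, n_resamples=50, n_steps=30):
    base = [random.expovariate(1.0) for _ in range(n_steps)]
    sims = []
    for _ in range(n_resamples):
        resampled = [random.choice(base) for _ in range(n_steps)]  # bootstrap the forcing
        sims.append(reservoir(resampled, k))
    # spread: mean per-step range of simulated flow across resampled forcings
    return sum(max(col) - min(col) for col in zip(*sims)) / n_steps

print(round(complexity(0.2), 2), round(complexity(0.8), 2))
# a higher recession coefficient yields a larger spread, i.e. higher complexity
```

Here the parameter magnitude, not the parameter count, drives the spread — the point the abstract makes about parameter dimensionality being an incomplete indicator.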

