High Fidelity Microsurgical Simulation: The Thiel Model and Evaluation Instrument

2018 ◽  
Vol 27 (2) ◽  
pp. 118-124 ◽  
Author(s):  
Andrei Odobescu ◽  
Isak Goodwin ◽  
Djamal Berbiche ◽  
Joseph BouMerhi ◽  
Patrick G. Harris ◽  
...  

Background: The Thiel embalming method has recently been used in a number of medical simulation fields. The authors investigate the use of Thiel vessels as a high-fidelity model for microvascular simulation and propose a new checklist-based evaluation instrument for microsurgical training. Methods: Thirteen residents and 2 attending microsurgeons performed video-recorded microvascular anastomoses on Thiel-embalmed arteries that were evaluated using a new evaluation instrument (Microvascular Evaluation Scale) by 4 fellowship-trained microsurgeons. The internal validity was assessed using the Cronbach coefficient. The external validity was verified using regression models. Results: The reliability assessment revealed an excellent intra-class correlation of 0.89. When comparing scores obtained by participants from different levels of training, attending surgeons and senior residents (Post Graduate Year [PGY] 4-5) scored significantly better than junior residents (PGY 1-3). The difference between senior residents and attending surgeons was not significant. When considering microsurgical experience, the differences were significant between the advanced group and the minimal and moderate experience groups. The differences between the minimal and moderate experience groups were not significant. Based on the data obtained, a score of 8 would translate into a level of microsurgical competence appropriate for clinical microsurgery. Conclusions: Thiel cadaveric vessels are a high-fidelity model for microsurgical simulation. Excellent internal and external validity measures were obtained using the Microvascular Evaluation Scale (MVES).
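The reliability statistics reported above (a Cronbach coefficient for internal consistency and an intra-class correlation of 0.89 across the four raters) are computed from a subjects-by-raters score matrix. The sketch below is a minimal illustration with invented MVES scores rather than the study's data; it implements the standard Cronbach's alpha and the two-way random-effects, single-rater ICC(2,1).

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_raters) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1)."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Invented example: 5 anastomoses, each scored by 4 raters.
ratings = np.array([
    [8, 7, 8, 8],
    [5, 6, 5, 5],
    [9, 9, 8, 9],
    [4, 4, 5, 4],
    [7, 7, 7, 8],
], dtype=float)
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
print(f"ICC(2,1): {icc_2_1(ratings):.2f}")
```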

Trials ◽  
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Samina Ali ◽  
◽  
Gareth Hopkin ◽  
Naveen Poonai ◽  
Lawrence Richer ◽  
...  

Abstract Background Patients and their families often have preferences for medical care that relate to wider considerations beyond the clinical effectiveness of the proposed interventions. Traditionally, these preferences have not been adequately considered in research. Research questions where patients and families have strong preferences may not be appropriate for traditional randomized controlled trials (RCTs) due to threats to internal and external validity, as there may be high levels of drop-out and non-adherence or recruitment of a sample that is not representative of the treatment population. Several preference-informed designs have been developed to address problems with traditional RCTs, but these designs have their own limitations and may not be suitable for many research questions where strong preferences and opinions are present. Methods In this paper, we propose a novel and innovative preference-informed complementary trial (PICT) design which addresses key weaknesses of both traditional RCTs and available preference-informed designs. In the PICT design, complementary trials would be operated within a single study, and patients and/or families would be given the opportunity to choose between a trial with all treatment options available and a trial with treatment options that exclude the option which is subject to strong preferences. This approach would allow those with strong preferences to take part in research and would improve both external validity, through the recruitment of more representative populations, and internal validity. Here we discuss the strengths and limitations of the PICT design and considerations for analysis and present a motivating example for the design based on the use of opioids for pain management for children with musculoskeletal injuries. Conclusions PICTs provide a novel and innovative design for clinical trials with more than two arms, which can address problems with existing preference-informed trial designs, enhance the ability of researchers to reflect shared decision-making in research, and improve the validity of trials on topics where strong preferences exist.
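As an illustration of the allocation logic described above, the sketch below simulates a PICT-style enrolment in which families who decline the contentious option (here, hypothetically, opioids) are randomized within a complementary trial that excludes that arm, while everyone else is randomized across all arms. The arm names and the 30% decline rate are assumptions for illustration only, not details from the trial protocol.

```python
import random

FULL_ARMS = ["opioid", "nsaid", "acetaminophen"]   # hypothetical arms
COMPLEMENTARY_ARMS = ["nsaid", "acetaminophen"]    # contentious arm excluded

def allocate(declines_opioid: bool, rng: random.Random) -> tuple[str, str]:
    """Return (trial, arm) for one participant under a PICT-style design."""
    if declines_opioid:
        # Strong preference against the contentious option: join the
        # complementary trial and randomize among the remaining arms.
        return "complementary", rng.choice(COMPLEMENTARY_ARMS)
    # No strong preference: randomize across all arms in the full trial.
    return "full", rng.choice(FULL_ARMS)

rng = random.Random(42)
cohort = [allocate(rng.random() < 0.30, rng) for _ in range(12)]
for trial, arm in cohort:
    print(f"{trial:13s} -> {arm}")
```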


2020 ◽  
Vol 27 (6) ◽  
pp. 946-956 ◽  
Author(s):  
Yilin Yoshida ◽  
Sonal J Patil ◽  
Ross C Brownson ◽  
Suzanne A Boren ◽  
Min Kim ◽  
...  

Abstract Objective We evaluated the extent to which studies that tested short message service (SMS)– and application (app)-based interventions for diabetes self-management education and support (DSMES) report on factors that inform both internal and external validity as measured by the RE-AIM (Reach, Efficacy/Effectiveness, Adoption, Implementation, and Maintenance) framework. Materials and Methods We systematically searched PubMed, Embase, Web of Science, CINAHL (Cumulative Index of Nursing and Allied Health Literature), and IEEE Xplore Digital Library for articles from January 1, 2009, to February 28, 2019. We carried out a multistage screening process followed by email communications with study authors for missing or discrepant information. Two independent coders coded eligible articles using a 23-item validated data extraction tool based on the RE-AIM framework. Results Twenty studies (21 articles) were included in the analysis. The comprehensiveness of reporting on the RE-AIM criteria across the SMS- and app-based DSMES studies was low. With respect to internal validity, most interventions were well described and primary clinical or behavioral outcomes were measured and reported. However, gaps exist in the areas of attrition, measures of potential negative outcomes, the extent to which the protocol was delivered as intended, and descriptions of the delivery agents. Likewise, we found limited information on external validity indicators across the adoption, implementation, and maintenance domains. Conclusions Reporting gaps were found in internal validity but more so in external validity in the current SMS- and app-based DSMES literature. Because most studies in this review were efficacy studies, the generalizability of these interventions cannot be determined. Future research should adopt the RE-AIM dimensions to improve the quality of reporting and enhance the likelihood of translating research to practice.
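Once the two coders' extractions are reconciled, the comprehensiveness scoring described above reduces to tabulating reporting rates. The sketch below uses a handful of invented RE-AIM item names and three hypothetical studies (the validated tool has 23 items) to show per-study completeness and per-item reporting rates.

```python
import pandas as pd

# Hypothetical extraction results: rows are studies, columns are RE-AIM
# items, True means the item was reported (the real tool has 23 items).
coded = pd.DataFrame(
    {
        "reach_participation_rate": [True, False, True],
        "effectiveness_primary_outcome": [True, True, True],
        "adoption_setting_described": [False, False, True],
        "implementation_fidelity": [False, False, False],
        "maintenance_followup": [False, True, False],
    },
    index=["study_A", "study_B", "study_C"],
)

print("Per-study reporting completeness (%):")
print((coded.mean(axis=1) * 100).round(1))
print("\nPer-item reporting rate across studies (%):")
print((coded.mean(axis=0) * 100).round(1))
```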


2021 ◽  
pp. 40-61
Author(s):  
James Wilson

A particular approach to ethical reasoning has come to dominate much Anglo-American philosophy, one which assumes that the most rigorous method is to proceed by analysis of thought experiments. In thought experiments, features such as context and history are stripped away, and all factors other than those of ethical interest are stipulated to be equal. This chapter argues that even if a thought experiment produces results that are internally valid—in that it provides a genuine ethical insight about the highly controlled and simplified experimental scenario under discussion—this does not imply external validity. Just as in empirical experiments, there is a yawning gap between succeeding in the relatively easy project of establishing internal validity in a controlled and simplified context, and the more difficult one of establishing external validity in the messier and more complex real world.


2003 ◽  
Vol 33 (2) ◽  
pp. 351-356 ◽  
Author(s):  
L. R. OLSEN ◽  
D. V. JENSEN ◽  
V. NOERHOLM ◽  
K. MARTINY ◽  
P. BECH

Background. We have developed the Major Depression Inventory (MDI), consisting of 10 items, covering the DSM-IV as well as the ICD-10 symptoms of depressive illness. We aimed to evaluate this as a scale measuring severity of depressive states with reference to both internal and external validity. Method. Patients representing the score range from no depression to marked depression on the Hamilton Depression Scale (HAM-D) completed the MDI. Both classical and modern psychometric methods were applied for the evaluation of validity, including the Rasch analysis. Results. In total, 91 patients were included. The results showed that the MDI had an adequate internal validity in being a unidimensional scale (the total score an appropriate or sufficient statistic). The external validity of the MDI was also confirmed as the total score of the MDI correlated significantly with the HAM-D (Pearson's coefficient 0.86, p ≤ 0.01; Spearman 0.80, p ≤ 0.01). Conclusion. When used in a sample of patients with different states of depression the MDI has an adequate internal and external validity.
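External validity here rests on correlating MDI totals with HAM-D totals. The sketch below shows how such Pearson and Spearman coefficients would typically be computed with SciPy; the paired scores are simulated for illustration and are not the study's 91 patients.

```python
import numpy as np
from scipy import stats

# Simulated paired totals; the study reports Pearson 0.86 and
# Spearman 0.80 on 91 patients.
rng = np.random.default_rng(0)
ham_d = rng.integers(0, 35, size=91)
mdi = np.clip(np.round(ham_d * 1.2 + rng.normal(0, 4, size=91)), 0, 50)

pearson_r, pearson_p = stats.pearsonr(ham_d, mdi)
spearman_rho, spearman_p = stats.spearmanr(ham_d, mdi)
print(f"Pearson r = {pearson_r:.2f} (p = {pearson_p:.3g})")
print(f"Spearman rho = {spearman_rho:.2f} (p = {spearman_p:.3g})")
```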


Author(s):  
Rajeev Dehejia

Abstract This paper surveys six widely used non-experimental methods for estimating treatment effects (instrumental variables, regression discontinuity, direct matching, propensity score matching, linear regression and non-parametric methods, and difference-in-differences), and assesses their internal and external validity relative both to each other and to randomized controlled trials. While randomized controlled trials can achieve the highest degree of internal validity when cleanly implemented in the field, the availability of large, nationally representative data sets offers the opportunity for a high degree of external validity using non-experimental methods. We argue that each method has merits in some contexts and that the methods are complements rather than substitutes.
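Of the six estimators surveyed, difference-in-differences is perhaps the easiest to demonstrate in a few lines. The sketch below recovers a known treatment effect from simulated two-period panel data; the data-generating process and effect size are invented for illustration and are not drawn from the paper.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500
treated = rng.integers(0, 2, n)        # group indicator per unit
post = np.repeat([0, 1], n)            # period 0 then period 1 for all units
group = np.tile(treated, 2)
true_effect = 2.0
y = (1.0 * group                       # fixed group difference
     + 0.5 * post                      # common time trend
     + true_effect * group * post      # treatment effect in the post period
     + rng.normal(0, 1, 2 * n))
df = pd.DataFrame({"y": y, "treated": group, "post": post})

means = df.groupby(["treated", "post"])["y"].mean()
did = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
print(f"Difference-in-differences estimate: {did:.2f} (true effect {true_effect})")
```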


2020 ◽  
Vol 4 (Supplement_1) ◽  
pp. 822-822
Author(s):  
Elizabeth Rose Mayeda ◽  
Eleanor Hayes-Larson ◽  
Hailey Banack

Abstract Selection bias presents a major threat to both internal and external validity in aging research. “Selection bias” refers to sample selection processes that lead to statistical associations in the study sample that are biased estimates of causal effects in the population of interest. These processes can lead to: (1) results that do not generalize to the population of interest (threat to external validity) or (2) biased effect estimates (associations that do not represent causal effects for any population, including the people in the sample; a threat to internal validity). In this presentation, we give an overview of selection bias in aging research. We will describe processes that can give rise to selection bias, highlight why they are particularly pervasive in this field, and present several examples of selection bias in aging research. We end with a brief summary of strategies to prevent and correct for selection bias in aging research.
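A tiny simulation can make the internal-validity threat concrete: if an exposure and an outcome are generated independently but both increase the chance of being sampled (for example, surviving long enough to enrol in an aging cohort), the association observed in the selected sample is spurious. The numbers below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
exposure = rng.normal(size=n)
outcome = rng.normal(size=n)            # no true effect of exposure on outcome

# Both exposure and outcome raise the probability of entering the sample.
p_selected = 1 / (1 + np.exp(-(exposure + outcome - 2)))
selected = rng.random(n) < p_selected

full_corr = np.corrcoef(exposure, outcome)[0, 1]
sample_corr = np.corrcoef(exposure[selected], outcome[selected])[0, 1]
print(f"Correlation in full population: {full_corr:.3f}")    # ~0
print(f"Correlation in selected sample: {sample_corr:.3f}")  # spuriously negative
```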


2022 ◽  
Vol 7 (01) ◽  
pp. 31-51
Author(s):  
Tanya Peart ◽  
Nicolas Aubin ◽  
Stefano Nava ◽  
John Cater ◽  
Stuart Norris

Velocity Prediction Programs (VPPs) are commonly used to help predict and compare the performance of different sail designs. A VPP requires an aerodynamic input force matrix, which can be computationally expensive to calculate, limiting its application in industrial sail design projects. The use of multi-fidelity kriging surrogate models has previously been presented by the authors to reduce this cost, with high-fidelity data for a new sail being modelled and the low-fidelity data provided by data from existing, but different, sail designs. The difference in fidelity is not due to the simulation method used to obtain the data, but instead how similar the sail’s geometry is to the new sail design. An important consideration for the construction of these models is the choice of low-fidelity data points, which provide information about the trend of the model curve between the high-fidelity data. A method is required to select the best existing sail design to use for the low-fidelity data when constructing a multi-fidelity model. The suitability of an existing sail design as a low-fidelity model could be evaluated based on the similarity of its geometric parameters to those of the new sail. It is shown here that for upwind jib sails, the similarity of the broadseam between the two sails best indicates the ability of a design to be used as low-fidelity data for a lift coefficient surrogate model. The lift coefficient surrogate model error predicted by the regression is shown to be close to 1% of the lift coefficient surrogate error for most points. Larger discrepancies are observed for a drag coefficient surrogate error regression.
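A much-simplified version of the multi-fidelity surrogate idea can be sketched as an additive correction: a Gaussian-process model of low-fidelity lift data from an existing sail, plus a second Gaussian process fitted to the residuals at the few high-fidelity points for the new design. The lift-coefficient functions, kernel choice, and wind-angle range below are assumptions for illustration; the paper's actual multi-fidelity kriging formulation differs.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def cl_low(awa):   # invented lift curve for an existing, similar sail
    return 1.2 * np.sin(np.radians(awa)) + 0.05

def cl_high(awa):  # invented lift curve for the new sail design
    return 1.3 * np.sin(np.radians(awa)) - 0.02 * (awa / 30.0)

awa_low = np.linspace(15, 60, 20).reshape(-1, 1)   # plentiful low-fidelity points
awa_high = np.array([[20.0], [35.0], [50.0]])      # few expensive high-fidelity points

# Fit the low-fidelity GP, then a correction GP on the high-fidelity residuals.
gp_low = GaussianProcessRegressor(kernel=RBF(length_scale=10.0)).fit(
    awa_low, cl_low(awa_low).ravel())
residual = cl_high(awa_high).ravel() - gp_low.predict(awa_high)
gp_corr = GaussianProcessRegressor(kernel=RBF(length_scale=10.0)).fit(
    awa_high, residual)

# Multi-fidelity prediction = low-fidelity trend + high-fidelity correction.
awa_test = np.array([[25.0], [45.0]])
cl_pred = gp_low.predict(awa_test) + gp_corr.predict(awa_test)
print(dict(zip(awa_test.ravel(), cl_pred.round(3))))
```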


1983 ◽  
Vol 27 (13) ◽  
pp. 1058-1062
Author(s):  
Brian W. Surgenor ◽  
John D. McGeachy

The application of a part-task nuclear simulator to the measurement of performance in the task of fault management was studied. Specifically, the design of the simulator was evaluated for internal and external validity. External validation required confirmation that the task presented by the simulator had the essential elements of the real task. In the context of this particular study, internal validation required confirmation that performance on the simulator represented an accurate and fair measure of a subject's understanding of fundamental principles. The requirements for external and internal validity were found to be in conflict. Performance on the simulator was not an accurate measure of fundamental understanding because the task was realistic. However, it was concluded that a part-task simulator does provide an effective method of gathering information on human performance.


2015 ◽  
Vol 57 (3) ◽  
pp. 237-251 ◽  
Author(s):  
Engin Ozertugrul

In research, the standard view of credibility seeks to illuminate what the researcher did with the data vis-à-vis collection, analysis, and interpretation. This works well in standard research, where data can be checked through conventional validity measures (internal validity, external validity, reliability, replicability, and objectivity). It does not work well in the heuristic self-search inquiry (HSSI) method, where the data reside within the researcher. In previous HSSI works, there is a level of uncertainty regarding the use of the method in knowledge exploration. It seems that there is still a need for the development of methodological understanding, particularly in terms of those who favor the use of multiple participants in HSSI, as opposed to those who do not. In this article, I compared four studies to clarify HSSI’s utility in knowledge production for future use.


2003 ◽  
Vol 37 (3) ◽  
pp. 265-269 ◽  
Author(s):  
Roger T. Mulder ◽  
Chris Frampton ◽  
Peter R. Joyce ◽  
Richard Porter

Objective: To discuss the extent to which the results of randomized controlled trials (RCTs) in psychiatry can be generalized to clinical practice. Method: Threats to internal and external validity in psychiatric RCTs are reviewed. Results: Threats to internal validity increase the possibility of bias. Psychiatric RCTs have problems with small samples, arbitrary definitions of caseness, disparate definitions of outcome and high spontaneous recovery rates. Particular issues arise in psychotherapy RCTs. Threats to external validity reduce the extent to which the results of an RCT provide a correct basis for generalization to other circumstances. These include high rates of comorbidity and subsyndromal pathology in normal clinical practice, manual-based treatment protocols and varying definitions of successful treatment. Conclusions: Randomized controlled trials remain the most robust design to investigate the effectiveness of treatments. They should be applied to important clinical questions and carried out, as far as possible, with typical patients in the clinical conditions in which the treatment is likely to be used.

