Improving handover competency in preclinical medical and health professions students: establishing the reliability and construct validity of an assessment instrument

2021, Vol 21 (1)
Author(s): Meghan Michael, Andrew C. Griggs, Ian H. Shields, Mozhdeh Sadighi, Jessica Hernandez, ...

Abstract
Background: As part of the worldwide call to enhance the safety of patient handovers of care, the Association of American Medical Colleges (AAMC) requires that all graduating students “give or receive a patient handover to transition care responsibly” as one of its Core Entrustable Professional Activities (EPAs) for Entering Residency. Students therefore require educational activities that build the teamwork skills necessary to perform structured handovers. To date, no reliable instrument exists that is designed to assess teamwork competencies, such as structured communication, across students' preclinical and clinical years.
Method: Our team developed an assessment instrument that evaluates both the use of structured communication and two additional teamwork competencies necessary for safe patient handovers. The instrument was used to assess 192 handovers recorded from a sample of 229 preclinical medical students and 25 health professions students who participated in a virtual course on safe patient handovers. Five raters were trained on the assessment instrument, and consensus was established. Each handover was reviewed independently by two separate raters.
Results: The raters achieved 72.22% agreement across items in the reviewed handovers. Krippendorff's alpha coefficient, used to assess inter-rater reliability, was 0.6245, indicating substantial agreement among the raters. A confirmatory factor analysis (CFA) demonstrated the orthogonal characteristics of the instrument's items, with rotated item loadings onto three distinct factors providing preliminary evidence of construct validity.
Conclusions: We present an assessment instrument with substantial reliability and preliminary evidence of construct validity, designed to evaluate both the use of a structured handover format and two team competencies necessary for safe patient handovers. Educators can use this instrument to evaluate learners' handover performance as early as the preclinical years, and it is broadly applicable across the clinical contexts in which it is used. In the journey to optimize safe patient care through improved teamwork during handovers, our instrument represents a critical step in developing a validated assessment of learners as they work toward this goal.
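
The inter-rater statistics reported above, percent agreement and Krippendorff's alpha, can be reproduced from paired ratings. Below is a minimal Python sketch for two raters giving nominal ratings with no missing data; the function names and toy values are illustrative, not taken from the study:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of units on which two raters gave the same rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def krippendorff_alpha_nominal(r1, r2):
    """Krippendorff's alpha for two raters, nominal data, no missing values.

    alpha = 1 - D_observed / D_expected, where disagreement is counted over
    ordered value pairs within units (observed) and over the pooled category
    margins (expected).
    """
    n = 2 * len(r1)                      # total number of ratings
    margins = Counter(r1) + Counter(r2)  # how often each category occurs overall
    # each disagreeing unit contributes two ordered mismatched pairs
    observed = sum(2 for a, b in zip(r1, r2) if a != b)
    expected = sum(margins[c] * margins[k]
                   for c in margins for k in margins if c != k)
    return 1.0 - (n - 1) * observed / expected
```

With perfect agreement the function returns 1.0; values near 0 indicate chance-level agreement. A full replication of the study's 0.6245 would require the raters' item-level data and the general (multi-rater, any-metric) form of the coefficient.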

2010, Vol 26 (1), pp. 19-27
Author(s): Christophe Maïano, Alexandre J.S. Morin, Johana Monthuy-Blanc, Jean-Marie Garbarino

The main objective of the present series of studies was to test the construct validity (i.e., content, factorial, and convergent validities) of the Fear of Negative Appearance Evaluation Scale (FNAES) in a community sample of French adolescents. A total sample of 683 adolescents was involved in three studies. The factorial validity and the measurement invariance of the FNAES were verified through a series of confirmatory factor analyses. The convergent validity of the FNAES was then verified through correlational analyses. The first study showed that the content and formulation of the French FNAES items were adequate for children and adolescents. The following two studies (Studies 2 and 3) provided (a) support for the factorial validity, reliability, and convergent validity of a five-item French version of the FNAES, and (b) partial support for the measurement invariance of the resulting FNAES across genders. However, the latent mean structure of the FNAES did not prove to be invariant across genders, revealing a significantly higher latent mean FNAES score in girls relative to boys. The present results thus provide preliminary evidence regarding the construct validity of the FNAES in a community sample of French adolescents. Recommendations for future practice and research regarding this instrument are outlined.
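
Convergent validity of the kind verified here rests on bivariate correlations between scale scores and scores on related measures. A self-contained sketch of the Pearson coefficient (toy data, not the study's):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

In a convergent-validity analysis, `x` would hold FNAES totals and `y` the scores on a theoretically related construct; moderate-to-strong correlations in the expected direction support convergence.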


BMJ Open, 2020, Vol 10 (10), pp. e034517
Author(s): Julie A Wright, Suzanne G Leveille, Hannah Chimowitz, Alan Fossa, Rebecca Stametz, ...

Objectives: To develop and evaluate the validity of a scale to assess patients' perceived benefits and risks of reading ambulatory visit notes online (open notes).
Design: Four studies were used to evaluate the construct validity of a benefits and risks scale. Study 1 refined the items; study 2 evaluated the underlying factor structure and identified the items; study 3 evaluated study 2's results in a separate sample; and study 4 examined factorial invariance of the developed scale across educational subsamples.
Setting: Ambulatory care in three large health systems in the USA.
Participants: Patients in three US health systems who responded to one of two online surveys asking about benefits and risks of reading visit notes: a psychometrics survey of primary care patients, and a large general survey of patients across all ambulatory specialties. Sample sizes: n=439 (study 1); n=439 (study 2); n=500 (study 3); and n=250 (study 4).
Primary and secondary outcome measures: Questionnaire items about patients' perceived benefits and risks of reading online visit notes.
Results: Study 1 resulted in the selection of a 10-point importance response format over a 4-point agreement scale. Exploratory factor analysis (EFA) in study 2 yielded a two-factor solution: a four-item benefits factor with good reliability (alpha=0.83) and a three-item risks factor with poor reliability (alpha=0.52). The factor structure was confirmed in study 3, and confirmatory factor analysis of the benefit items produced an excellent-fitting model: χ2(2)=2.949; comparative fit index=0.998; root mean square error of approximation=0.04 (0.00, 0.142); loadings 0.68–0.86; alpha=0.88. Study 4 supported configural, measurement, and structural invariance of the benefits scale across high- and low-education patient groups.
Conclusions: The findings suggest that the four-item benefits scale has excellent construct validity and preliminary evidence of generalising across different patient populations. Further scale development is needed to understand perceived risks of reading open notes.
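
The internal-consistency coefficients reported here (e.g., alpha=0.83 for the benefits factor) follow Cronbach's formula. A minimal sketch, assuming item scores are stored one list per item, all covering the same respondents in the same order (toy data only):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of totals).
    `items` holds one list of scores per item."""
    k = len(items)
    sum_item_var = sum(pvariance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))
```

Alpha near 1 indicates items that rise and fall together, as in the benefits factor; the risks factor's 0.52 would correspond to items sharing much less common variance.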


Author(s): Kevin Spencer, Hon Keung Yuen, Max Darwin, Gavin Jenkins, Kimberly Kirklin

Purpose: This study was conducted to describe the development and validation of the Hocus Focus Magic Performance Evaluation Scale (HFMPES), which is used to evaluate the competency of health professions personnel in delivering magic tricks as a therapeutic modality.
Methods: A 2-phase validation process was used. Phase I (content validation) involved 16 magician judges who independently rated the relevance of each of the 5 items in the HFMPES, establishing the validity of its content. Phase II evaluated the psychometric properties of the HFMPES. This process involved 2 magicians using the HFMPES to independently evaluate 73 occupational therapy graduate students demonstrating 3 magic tricks.
Results: The HFMPES achieved an excellent scale-content validity index of 0.99. Exploratory factor analysis of the HFMPES scores revealed 1 distinct factor, with alpha coefficients ≥0.8 across the 3 magic tricks. The construct validity of the HFMPES scores was further supported by a known-groups analysis, in which the Mann–Whitney U-test showed a significant difference in HFMPES scores between participants with different levels of experience in delivering the 3 magic tricks. Inter-rater reliability coefficients were ≥0.75 across the 3 magic tricks, indicating that the competency of health professions personnel in delivering the 3 magic tricks can be evaluated precisely.
Conclusion: Preliminary evidence supported the content and construct validity of the HFMPES, which showed good internal consistency and inter-rater reliability in evaluating health professions personnel's competency in delivering magic tricks.
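
A scale-content validity index like the 0.99 reported here is conventionally the average of item-level CVIs, each being the share of judges who rate an item relevant. The sketch below assumes the common convention of a 4-point relevance scale with ratings of 3 or 4 counted as "relevant"; the abstract does not state the rating scale used, so this is an assumption:

```python
def item_cvi(ratings, relevant=(3, 4)):
    """I-CVI: share of judges rating the item relevant (3 or 4 on a 4-point scale)."""
    return sum(r in relevant for r in ratings) / len(ratings)

def scale_cvi_ave(item_ratings):
    """S-CVI/Ave: mean of the item-level CVIs across all items."""
    cvis = [item_cvi(ratings) for ratings in item_ratings]
    return sum(cvis) / len(cvis)
```

Here `item_ratings` would be one list of 16 judge ratings per HFMPES item; an S-CVI/Ave of 0.99 means nearly every judge rated every item relevant.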


2020, Vol 8 (2), pp. 126-134
Author(s): Lian Gafar Otaya, Badrun Kartowagiran, Heri Retnawati

This study aims to establish the construct validity of a lesson plan assessment instrument for primary schools and to estimate the instrument's reliability. The research used a descriptive quantitative approach, carried out with professional teacher education students at Universitas Negeri Gorontalo, Universitas Negeri Yogyakarta, and Universitas Islam Negeri Makassar. The subjects were 516 randomly selected students. Data were collected by documenting the assessment results from each student's field supervisor. The data were analyzed using confirmatory factor analysis and composite score reliability. The results showed that the lesson plan assessment instrument is measured by 25 items spread over 4 indicators. All items across the instrument's indicators were shown to be construct-valid through confirmatory factor analysis. In addition, the instrument was reliable, with a high construct reliability coefficient of 0.92.


2011, Vol 27 (1), pp. 59-64
Author(s): Volkmar Höfling, Helfried Moosbrugger, Karin Schermelleh-Engel, Thomas Heidenreich

The 15 items of the Mindful Attention and Awareness Scale (MAAS; Brown & Ryan, 2003) are negatively worded and assumed to assess mindfulness. However, there are indications of differences between the original MAAS and a version with positively rephrased MAAS items (“mirror items”). The present study examines whether the mindfulness facet “mindful attention and awareness” (MAA) can be measured with both positively and negatively worded items when method effects due to item wording are taken into account. To this end, the 15 negatively worded items of the MAAS and 13 additional positively rephrased items were administered (N = 602). Confirmatory factor analysis (CFA) models with and without method effects were estimated and evaluated by means of model fit. The results indicate that positively and negatively worded items should be treated as different methods that influence the construct validity of mindfulness. Furthermore, a modified version of the MAAS (MAAS-Short), with five negatively worded items (taken from the MAAS) and five positively worded items (“mirror items”), was introduced as an alternative measure of MAA. The MAAS-Short appears superior to the original MAAS. The results and limitations of the present study are discussed.
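
Before negatively worded items and their positively rephrased mirrors can be compared or combined, one set must be reverse-keyed so that higher scores point in the same direction. A trivial sketch, assuming a 6-point response scale as used by the MAAS (the bounds are parameters, not prescriptions):

```python
def reverse_score(score, low=1, high=6):
    """Recode a reverse-keyed Likert response so that higher values
    consistently mean more of the measured trait."""
    return (low + high) - score
```

Applied to a whole response vector, `[reverse_score(r) for r in responses]` aligns the mirror items with the original items; any residual differences after recoding are what the method-effect CFA models in this study are designed to capture.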


2019, Vol 35 (4), pp. 564-576
Author(s): Tobias Ringeisen, Sonja Rohrmann, Anika Bürgermeister, Ana N. Tibubos

Abstract. By means of two studies, a self-report measure of self-efficacy in presentation and moderation skills, the SEPM scales, was validated. In study 1, factorial and construct validity were examined. A sample of 744 university students (41% female; more than 50% between 20 and 25 years old) completed newly constructed self-efficacy items. Confirmatory factor analyses (CFAs) substantiated two positively correlated factors: presentation self-efficacy (SEPM-P) and moderation self-efficacy (SEPM-M). Each factor consists of eight items. The correlation patterns between the two SEPM subscales and related constructs such as extraversion, preference for cooperative learning, and conflict management indicated adequate construct validity. In study 2, criterion validity was determined by means of latent change modeling. One hundred sixty students (Mage = 24.40, SD = 4.04; 61% female) took part in a university course to foster key competences and completed the SEPM scales at the beginning and end of the semester. Presentation and moderation self-efficacy both increased significantly over time; the latter was positively associated with performance in a practical moderation exam. Across both studies, reliability of the scales was high, with McDonald's ω ranging from .80 to .88.
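
McDonald's ω, the reliability estimate reported here, can be computed from the standardized loadings of a one-factor model, taking each item's uniqueness as 1 minus the squared loading. An illustrative sketch with made-up loadings, not the authors' code or data:

```python
def mcdonald_omega(loadings):
    """McDonald's omega from standardized loadings of a one-factor model.
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses),
    with each uniqueness taken as 1 - loading**2."""
    common = sum(loadings) ** 2
    unique = sum(1 - l ** 2 for l in loadings)
    return common / (common + unique)
```

Unlike Cronbach's alpha, omega does not assume equal loadings across items, which is why it is often preferred when a CFA has already been fitted, as in these studies.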


Author(s): Yannik Terhorst, Paula Philippi, Lasse Sander, Dana Schultchen, Sarah Paganini, ...

BACKGROUND Mobile health apps (MHA) have the potential to improve health care. The commercial MHA market is growing rapidly, but the content and quality of available MHA are unknown. Consequently, instruments of high psychometric quality for assessing the quality and content of MHA are needed. The Mobile Application Rating Scale (MARS) is one of the most widely used tools to evaluate the quality of MHA across health domains. Only a few validation studies of its psychometric quality exist, based on selected samples of MHA. No study has evaluated the construct validity of the MARS or its concurrent validity with other instruments. OBJECTIVE This study evaluates the construct validity, concurrent validity, reliability, and objectivity of the MARS. METHODS MARS scoring data were pooled from 15 international app quality reviews to evaluate the psychometric properties of the MARS. The MARS measures app quality across four dimensions: engagement, functionality, aesthetics, and information quality. App quality is determined for each dimension and overall. Construct validity was evaluated by comparing competing measurement models using confirmatory factor analysis (CFA). A combination of non-centrality (RMSEA), incremental (CFI, TLI), and residual (SRMR) fit indices was used to evaluate goodness of fit. As measures of concurrent validity, the correlations between the MARS and (1) another quality assessment tool, ENLIGHT, and (2) user star ratings extracted from app stores were investigated. Reliability was determined using omega. Objectivity was assessed in terms of intra-class correlation. RESULTS In total, MARS ratings from 1,299 MHA covering 15 different health domains were pooled for the analysis. Confirmatory factor analysis confirmed a bifactor model with a general quality factor and an additional factor for each subdimension (RMSEA=0.074, TLI=0.922, CFI=0.940, SRMR=0.059). Reliability was good to excellent (omega 0.79 to 0.93). Objectivity was high (ICC=0.82). The overall MARS rating was positively associated with ENLIGHT (r=0.91, P<0.01) and user ratings (r=0.14, P<0.01). CONCLUSIONS The psychometric evaluation of the MARS demonstrated its suitability for the quality assessment of MHA. As such, the MARS could be used to make the quality of MHA transparent to health care stakeholders and patients. Future studies could extend the present findings by investigating the re-test reliability and predictive validity of the MARS.
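
Objectivity here is indexed by an intra-class correlation over a subjects × raters matrix of scores. One common variant, ICC(2,1) (two-way random effects, absolute agreement, single rater), can be computed from ANOVA mean squares; the abstract does not state which ICC form was used, so the choice below is an assumption, with toy data:

```python
def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `x` is a list of rows, one per subject, each with one score per rater."""
    n, k = len(x), len(x[0])
    grand = sum(sum(row) for row in x) / (n * k)
    row_means = [sum(row) / k for row in x]
    col_means = [sum(row[j] for row in x) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    sst = sum((v - grand) ** 2 for row in x for v in row)
    sse = sst - ssr - ssc                                # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)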


2021, Vol 21 (1)
Author(s): Åsa Norman, Julie Wright, Emma Patterson
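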

Abstract
Background: Brief scales to measure parental self-efficacy (PSE) in relation to children's obesogenic behaviours have not been developed and validated using rigorous methodology such as invariance testing, limiting their generalisability to sub-groups. This study aimed to assess the construct validity and measurement invariance of brief PSE scales for children's intake of vegetables, soft drinks, and sweets, and for physical activity.
Methods: Parents (n = 242) of five-to-seven-year-old children in disadvantaged and culturally diverse settings in Sweden responded to a questionnaire in Swedish with 12 items assessing PSE in relation to healthy and unhealthy behaviours. Construct validity was assessed with confirmatory factor analysis; invariance testing compared the scales across groups defined by parental sex, education, and child weight status. Criterion validity was evaluated using objective measures of children's physical activity and semi-objective measures of diet.
Results: Two-factor models showed moderate to excellent fit to the data. Invariance was supported across all groups for the healthy behaviour scales. The unhealthy behaviour scales were invariant for all groups except parental education, where partial metric invariance was supported. Scales were significantly correlated with physical activity and diet.
Conclusion: This study provides preliminary evidence for the validity of brief PSE scales and their invariance across groups, suggesting their utility for research and clinical management of weight-related behaviours.


SLEEP, 2021, Vol 44 (Supplement_2), pp. A201-A202
Author(s): Kristina Puzino, Susan Calhoun, Allison Harvey, Julio Fernandez-Mendoza

Abstract
Introduction: The Sleep Inertia Questionnaire (SIQ) was developed and validated in patients with mood disorders to evaluate, in a multidimensional manner, difficulties with becoming fully awake after nighttime sleep or daytime naps. However, few data are available regarding its psychometric properties in clinical samples with sleep disorders.
Methods: 211 patients (43.0±16.4 years old, 68% female, 17% minority) evaluated at the Behavioral Sleep Medicine (BSM) program of Penn State Health Sleep Research & Treatment Center completed the SIQ. All patients were diagnosed using ICSD-3 criteria: 111 received a diagnosis of chronic insomnia disorder (CID), 48 of a central disorder of hypersomnolence (CDH), and 52 of other sleep disorders (OSD). Structural equation modelling was used to conduct confirmatory factor analysis (CFA) of the SIQ.
Results: CFA supported four SIQ dimensions of “physiological”, “cognitive”, “emotional”, and “response to” (RSI) sleep inertia with adequate goodness-of-fit (TLI=0.90, CFI=0.91, GFI=0.85, RMSEA=0.08). Internal consistency was high (α=0.94), including that of its dimensions (physiological α=0.89, cognitive α=0.94, emotional α=0.67, RSI α=0.78). Dimension inter-correlations were moderate to high (r=0.42–0.93, p<0.01), indicating good construct validity. Convergent validity was shown by moderate correlations with Epworth Sleepiness Scale (ESS) scores (r=0.38) and large correlations with Flinders Fatigue Scale (FFS) scores (r=0.65). Criterion validity was shown by significantly (p<0.01) higher scores in subjects with CDH (69.0±16.6) compared with those with CID (54.4±18.3) or OSD (58.5±20.0). A SIQ cut-off score ≥57.5 provided a sensitivity/specificity of 0.77/0.65, while a cut-off score ≥61.5 provided a sensitivity/specificity of 0.71/0.70 for identifying CDH vs. ESS<10 (AUC=0.76).
Conclusion: The SIQ shows satisfactory indices of reliability and construct validity in a clinically diverse sleep disorders sample. Its criterion validity is supported by its divergent association with hypersomnia vs. insomnia disorders, as well as its adequate sensitivity/specificity for identifying patients with CDH. The SIQ can help clinicians easily assess the complex dimensionality of sleep inertia and target behavioral sleep treatments. Future studies should confirm the best SIQ cut-off score by including good-sleeping controls, while clinical studies should determine its minimal clinically important difference after pharmacological or behavioral treatments.
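
The cut-off performance figures above come from dichotomizing SIQ scores against a reference classification and counting the four outcomes of the decision rule. A toy sketch (data illustrative only, not the study's):

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of the rule `score >= cutoff` against a
    binary reference standard (label 1 = condition present, 0 = absent)."""
    pairs = list(zip(scores, labels))
    tp = sum(s >= cutoff and y == 1 for s, y in pairs)  # true positives
    fn = sum(s < cutoff and y == 1 for s, y in pairs)   # false negatives
    tn = sum(s < cutoff and y == 0 for s, y in pairs)   # true negatives
    fp = sum(s >= cutoff and y == 0 for s, y in pairs)  # false positives
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping `cutoff` over the observed score range and plotting sensitivity against 1 - specificity yields the ROC curve whose area is the AUC of 0.76 reported here.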

