The Scarbase Duo®: Intra-rater and inter-rater reliability and validity of a compact dual scar assessment tool

Burns ◽  
2016 ◽  
Vol 42 (2) ◽  
pp. 336-344 ◽  
Author(s):  
Matthew Fell ◽  
Jill Meirte ◽  
Mieke Anthonissen ◽  
Koen Maertens ◽  
Jonathon Pleat ◽  
...  

1999 ◽  
Vol 91 (1) ◽  
pp. 288-298 ◽  
Author(s):  
Armin Schubert ◽  
John E. Tetzlaff ◽  
Ming Tan ◽  
Victor J. Ryckman ◽  
Edward Mascha

Background: Oral practice examinations (OPEs) are used extensively in many anesthesiology programs for various reasons, including assessment of clinical judgment. Yet oral examinations have been criticized for their subjectivity. The authors studied the reliability, consistency, and validity of their OPE program to determine whether it was a useful assessment tool.
Methods: From 1989 through 1993, we prospectively studied 441 OPEs given to 190 residents. The examination format closely approximated that used by the American Board of Anesthesiology. The pass-fail grade and an overall numerical score were the OPE results of interest. Internal consistency and inter-rater reliability were determined using agreement measures. To assess their validity in describing competence, OPE results were correlated with in-training examination results and faculty evaluations. Furthermore, we analyzed the relationship of OPE results with implicit indicators of resident preparation such as length of training.
Results: The internal consistency coefficient for the overall numerical score was 0.82, indicating good correlation among component scores. The interexaminer agreement was 0.68, indicating moderate to good agreement beyond that expected by chance. The actual agreement among examiners on pass-fail was 84%. Correlation of the overall numerical score with in-training examination scores and faculty evaluations was moderate (r = 0.47 and 0.41, respectively; P < 0.01). OPE results were significantly (P < 0.01) associated with training duration, previous OPE experience, trainee preparedness, and trainee anxiety.
Conclusion: Our results show the substantial internal consistency and reliability of OPE results at a single institution. The positive correlation of OPE scores with in-training examination scores, faculty evaluations, and other indicators of preparation suggests that OPEs are a reasonably valid tool for assessment of resident performance.


2012 ◽  
Vol 02 (01) ◽  
pp. 26-30
Author(s):  
Malarvizhi G. ◽  
Manju Vatsa ◽  
Roseline M. ◽  
Nithin S. ◽  
Sarah Paul

Abstract: Evaluation of pain in the human neonate is complex and difficult because pain is a subjective phenomenon. Neonates cannot verbalize pain; rather, they express it through crying and body movements. Measurement, however, assigns a value to pain. There is no gold-standard pain assessment tool; JCAHO recommends selection of a valid, reliable, and age-appropriate tool. To determine the inter-observer reliability and clinical utility of the NIPS scale, and to analyze its inter-rater reliability and reproducibility between three observers, a prospective observational study was designed to establish the reliability and validity of NIPS among 27 neonates who underwent venipuncture, hepatitis vaccination (intramuscular), and heel prick at a tertiary care level III NICU. The baseline data and behavioral responses to procedural pain were rated by three observers trained in the NIPS scale. At the end of 100 observations across the various time intervals and three phases, inter-rater reliability of the NIPS scale among the three observers was .82, .81, and .75. Acceptable psychometric properties are reported for the tool, including Cronbach's alpha levels of .90, .85, and .90 between the observers. NIPS was found to be a highly reliable and valid multidimensional scale that is practical and has good clinical utility.
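Cronbach's alpha figures like those quoted for the three observers follow from the standard item-variance formula, treating each rater as an "item". A minimal sketch with hypothetical NIPS scores, not the study's data:

```python
def cronbach_alpha(ratings):
    """ratings: one list of scores per rater, same subjects in the same order."""
    k = len(ratings)
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # total score per subject, summed over raters
    totals = [sum(scores) for scores in zip(*ratings)]
    return k / (k - 1) * (1 - sum(var(r) for r in ratings) / var(totals))

# Hypothetical NIPS scores (0-7) from three trained observers for five neonates
obs1 = [5, 3, 6, 2, 4]
obs2 = [4, 3, 6, 2, 5]
obs3 = [5, 2, 6, 3, 4]
print(cronbach_alpha([obs1, obs2, obs3]))
```

Alpha rises when the raters' scores move together across subjects (small per-rater variance relative to the variance of the summed scores), which is why near-identical observer ratings produce values close to 1.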


2021 ◽  
Author(s):  
Emi Furukawa ◽  
Tsuyoshi Okuhara ◽  
Hiroko Okada ◽  
Ritsuko Shirabe ◽  
Rie Yokota ◽  
...  

Abstract Background: The Patient Education Materials Assessment Tool (PEMAT) systematically evaluates the understandability and actionability of patient education materials. This study aimed to develop a Japanese version of PEMAT and verify its reliability and validity.
Methods: After assessing content validation, experts scored healthcare-related leaflets and videos according to PEMAT to verify inter-rater reliability. In validation testing with laypeople, the high-scoring material group (n=800) was presented with materials that received high ratings on PEMAT, and the low-scoring material group (n=799) with materials that received low ratings. Both groups rated the understandability and actionability of the materials and their perceived self-efficacy for the recommended actions.
Results: The Japanese version of PEMAT showed strong inter-rater reliability (PEMAT-P: % agreement = 87.3%, Gwet's AC1 = 0.83; PEMAT-A/V: % agreement = 85.7%, Gwet's AC1 = 0.80). The high-scoring material group had significantly higher scores for understandability and actionability than the low-scoring material group (PEMAT-P: understandability 6.53 vs. 5.96, p<.001; actionability 6.04 vs. 5.49, p<.001; PEMAT-A/V: understandability 7.65 vs. 6.76, p<.001; actionability 7.40 vs. 6.36, p<.001). Perceived self-efficacy increased more in the high-scoring material group than in the low-scoring material group.
Conclusions: Our study showed that materials rated highly on PEMAT were also easy for laypeople to understand and act on. The Japanese version of PEMAT can be used to assess and improve the usability of patient education materials.
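Gwet's AC1, used above, corrects percent agreement for chance in a way that is more stable than kappa when one category dominates. A sketch for two raters on binary PEMAT-style item judgements (hypothetical data, not the study's):

```python
def gwet_ac1(rater_a, rater_b):
    """Gwet's first-order agreement coefficient for two raters, categorical data."""
    n = len(rater_a)
    pa = sum(x == y for x, y in zip(rater_a, rater_b)) / n  # raw agreement
    cats = set(rater_a) | set(rater_b)
    # average marginal proportion for each category across both raters
    pi = {c: (rater_a.count(c) + rater_b.count(c)) / (2 * n) for c in cats}
    pe = sum(p * (1 - p) for p in pi.values()) / (len(cats) - 1)
    return (pa - pe) / (1 - pe)

# Hypothetical agree(1)/disagree(0) judgements on ten PEMAT items by two coders
a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
b = [1, 1, 1, 0, 1, 1, 1, 0, 0, 1]
print(gwet_ac1(a, b))
```

Because the chance term uses π(1−π), which shrinks as a category becomes very common or very rare, AC1 avoids the paradox where kappa collapses under skewed prevalence despite high raw agreement.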


Author(s):  
Andy Bell ◽  
Jennifer Kelly ◽  
Peter Lewis

Abstract: Purpose: Over the past two decades, the discipline of paramedicine has seen exponential growth as it moved from a work-based training model to that of an autonomous profession grounded in academia. With limited evidence-based literature examining assessment in paramedicine, this paper aims to describe student and academic views on the preference for the OSCE as an assessment modality, the sufficiency of pre-OSCE instruction, and whether or not OSCE performance is a perceived indicator of clinical performance.
Design/Methods: A voluntary, anonymous survey was conducted to examine the perception of the reliability and validity of the Objective Structured Clinical Examination (OSCE) as an assessment tool by students sitting the examination and the academics who facilitate the assessment.
Findings: The results of this study revealed that the more confident students are in the reliability and validity of the assessment, the more likely they are to perceive the assessment as an effective measure of their clinical performance. The perception of reliability and validity differs when acted upon by additional variables, with the level of anxiety associated with the assessment and the adequacy of feedback on performance cited as major influencers.
Research Implications: The findings from this study indicate the need for further paramedicine discipline-specific research into assessment methodologies to determine best-practice models for high-quality assessment.
Practical Implications: The development of evidence-based best-practice guidelines for the assessment of student paramedics should be of the utmost importance to a young, developing profession such as paramedicine.
Originality/Value: There is very little research in the discipline-specific area of assessment for paramedicine, and discipline-specific education research is essential for professional growth.
Limitations: The principal researcher was a faculty member of one of the institutions surveyed. However, all data were non-identifiable at the time of collection.
Keywords: Paramedic; paramedicine; objective structured clinical examinations; OSCE; education; assessment.


2020 ◽  
Vol 15 ◽  
Author(s):  
Dixon Thomas ◽  
Sherief Khalifa ◽  
Jayadevan Sreedharan ◽  
Rucha Bond

Background: The clinical competence of pharmacy students is better evaluated at their practice sites than in the classroom. A clinical pharmacy competency evaluation rubric such as that of the American College of Clinical Pharmacy (ACCP) is an effective assessment tool for clinical skills and can be used to show item reliability. Preceptors should be trained in how to use the rubrics, as many inherent factors can influence inter-rater reliability.
Objective: To evaluate inter-rater reliability among preceptors in evaluating the clinical competence of pharmacy students, before and after a group discussion intervention.
Methods: In this quasi-experimental study in a United Arab Emirates teaching hospital, seven clinical pharmacy preceptors rated the clinical pharmacy competencies of ten recent PharmD graduates with reference to their portfolios and preceptorship. Clinical pharmacy competencies were adopted from the ACCP and mildly modified for relevance to the local setting.
Results: Inter-rater reliability (Cronbach's alpha) among preceptors was reasonable, the raters having practiced at a single site for 2-4 years. At the domain level, inter-rater reliability ranged from 0.79-0.93 before the intervention and 0.94-0.99 after. For certain competency elements, inter-rater reliability was poor before the intervention (0.31-0.61) but improved to 0.79-0.97 afterwards. Intra-class correlation coefficients improved among all individual preceptors, all being reliable with each other after the group discussion, although some showed no reliability with each other before it.
Conclusion: Group discussion among preceptors at the training site was effective in improving inter-rater reliability on all elements of the clinical pharmacy competency evaluation. Removing a preceptor from the analysis did not affect inter-rater reliability after the group discussion.


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
G Lang

Abstract Background: High-quality health promotion (HP) depends on a competent workforce, for which professional development programmes for practitioners are essential. The "CompHP Core Competencies Framework in HP" defines crucial competency domains, but a recent review concluded that implementation and use of the framework is lacking. The aim was to develop and validate a self-assessment tool for HP competencies to help evaluate training courses.
Methods: A brief self-assessment tool was employed in 2018 in Austria. 584 participants of 77 training courses submitted their post-course assessment (paper-pencil, RR = 78.1%). In addition, longitudinal data are available for 148 participants who filled in a pre-course online questionnaire. Measurement reliability and validity were tested by single-factor, bifactor, multigroup, and multilevel CFA. An SEM tested predictive and concurrent validity, controlling for gender and age.
Results: A bifactor model (χ²/df = 3.69, RMSEA = .07, CFI = .95, sRMR = .07) showed superior results with a strong general CompHP factor (FL > .65, ωH = .90, ECV = .85), configurally invariant across two training programmes. At course level, there was only minimal variance between trainings (ICC < .08). Structurally, there was a significant increase in HP competencies when comparing pre- and post-course measurements (b = .33, p < .01). Participants showed different levels of competencies due to prior knowledge (b = .38, p < .001) and course format (b = .16, p < .06). The total scale had good properties (m = 49.8, sd = 10.3, 95% CI: 49.0-50.7) and discriminated between groups (e.g., by training length).
Conclusions: The results justify the creation of an overall scale to assess core HP competencies. It is recommended to use the scale for evaluating training courses. The work compensates for the lack of empirical studies on the CompHP concept and facilitates a broader empirical application of a uniform competency framework for HP in accordance with international standards in HP and public health.
Key messages: The self-assessment tool provides a good and compact foundation for assessing HP competencies. It provides a basis for holistic, high-quality and sustainable capacity building and development in HP.


2020 ◽  
Vol 41 (5) ◽  
pp. e597-e602
Author(s):  
Yazeed Al-shawi ◽  
Tamer A. Mesallam ◽  
Rayan Alfallaj ◽  
Turki Aldrees ◽  
Nouf Albakheet ◽  
...  

2016 ◽  
Vol 77 (1) ◽  
pp. 17-24 ◽  
Author(s):  
Brian K.C. Lo ◽  
Leia Minaker ◽  
Alicia N.T. Chan ◽  
Jessica Hrgetic ◽  
Catherine L. Mah

Purpose: To adapt and validate a survey instrument to assess the nutrition environment of grab-and-go establishments at a university campus. Methods: A version of the Nutrition Environment Measures Survey for grab-and-go establishments (NEMS-GG) was adapted from existing NEMS instruments and tested for reliability and validity through a cross-sectional assessment of grab-and-go establishments at the University of Toronto. Product availability, price, and the presence of nutrition information were evaluated. Cohen's kappa and intra-class correlation coefficients (ICC) were used to assess inter-rater reliability, and construct validity was assessed using the known-groups comparison method (via store scores). Results: Fifteen grab-and-go establishments were assessed. Inter-rater reliability was high, with almost perfect agreement for availability (mean κ = 0.995) and store scores (ICC = 0.999). The tool demonstrated good face and construct validity. About half of the venues carried fruit and vegetables (46.7% and 53.3%, respectively). Regular and healthier entrée items were generally the same price. Healthier grains were cheaper than regular options. Six establishments displayed nutrition information. Establishments operated by the university's Food Services consistently scored the highest across all food premise types for nutrition signage, availability, and cost of healthier options. Conclusions: Health promotion strategies are needed to address the availability and variety of healthier grab-and-go options in university settings.
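A store-score ICC like the one reported above can be computed from two-way ANOVA mean squares; ICC(2,1) (two-way random effects, absolute agreement, single rater) is a common choice. A minimal sketch with hypothetical store scores from two raters, not the study's data:

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    scores[i][j] is the score given to subject i by rater j."""
    n, k = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(col) / n for col in zip(*scores)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    msr, msc = ss_rows / (n - 1), ss_cols / (k - 1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical NEMS-GG store scores from two raters for five establishments
stores = [[12, 13], [8, 8], [15, 14], [10, 11], [6, 6]]
print(icc_2_1(stores))
```

The statistic approaches 1 when between-store variance dwarfs both the rater effect and residual disagreement, which is how near-duplicate rater scores yield values like the 0.999 in the abstract.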

