The reliability and validity of an authentic motor skill assessment tool for early adolescent girls in an Australian school setting

2017 ◽  
Vol 20 (6) ◽  
pp. 590-594 ◽  
Author(s):  
Natalie Lander ◽  
Philip J. Morgan ◽  
Jo Salmon ◽  
Samuel W. Logan ◽  
Lisa M. Barnett

2019 ◽  
Vol 52 (02) ◽  
pp. 216-221
Author(s):  
Sheeja Rajan ◽  
Ranjith Sathyan ◽  
L. S. Sreelesh ◽  
Anu Anto Kallerey ◽  
Aarathy Antharjanam ◽  
...  

Abstract: Microsurgical skill acquisition is an integral component of training in plastic surgery. Current microsurgical training is based on the subjective Halstedian model. An ideal microsurgery assessment tool should be able to deconstruct all the subskills of microsurgery and assess them objectively and reliably. For our study, to analyze the feasibility, reliability, and validity of microsurgery skill assessment, a video-based objective structured assessment of technical skill tool was chosen. Two blinded experts evaluated 40 videos of six residents performing microsurgical anastomosis for arteriovenous fistula surgery. The generic Reznick's global rating score (GRS) and the University of Western Ontario microsurgical skills acquisition/assessment (UWOMSA) instrument were used as checklists. Correlation coefficients of 0.75 to 0.80 (UWOMSA) and 0.71 to 0.77 (GRS) for interrater and intrarater reliability showed that the assessment tools were reliable. Convergent validity of the UWOMSA tool with the prevalidated GRS tool showed good agreement. The mean improvement of scores with years of residency was measured with analysis of variance. Both UWOMSA (p-value: 0.034) and GRS (p-value: 0.037) demonstrated significant improvement in scores from postgraduate year 1 (PGY1) to PGY2 and a less marked improvement from PGY2 to PGY3. We conclude that objective assessment of microsurgical skills in an actual clinical setting is feasible. Tools like UWOMSA are valid and reliable for microsurgery assessment and provide feedback to chart progression of learning. Acceptance and validation of such objective assessments will help to improve training and bring uniformity to microsurgery education.
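The interrater and intrarater reliability figures quoted above are correlation coefficients between pairs of ratings. As a rough, hypothetical sketch (the rater scores below are invented for illustration and are not data from the study), a Pearson correlation between two blinded raters' checklist totals can be computed like this:

```python
# Hypothetical sketch: Pearson correlation as an interrater-reliability
# estimate between two blinded raters scoring the same videos.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented checklist totals for six videos, one list per rater.
rater_a = [18, 22, 25, 30, 27, 21]
rater_b = [17, 24, 26, 29, 25, 22]
r = pearson_r(rater_a, rater_b)  # values near 1 indicate strong agreement
```

A coefficient in the 0.71-0.80 range, as reported for GRS and UWOMSA, is conventionally read as good agreement between raters.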


2017 ◽  
Vol 49 (12) ◽  
pp. 2498-2505 ◽  
Author(s):  
Natalie Lander ◽  
Philip J. Morgan ◽  
Jo Salmon ◽  
Lisa M. Barnett

Author(s):  
Asha Dektor ◽  
Jane Littau ◽  
Kate Knudsen

With the introduction of conventional laparoscopic surgery and robotic surgery, surgical treatment options beyond traditional open surgery are increasing. If and when different minimally invasive surgical (MIS) techniques are learned varies from surgeon to surgeon. It is important for human factors researchers to have tools for measuring how visual-motor skills develop using MIS tools across different learning experiences. The current research examines a child movement skill assessment tool, the Peabody Developmental Motor Skills Scales – Second Edition (PDMS-2), as a metric for measuring visual-motor skill development in adults learning to use laparoscopic or robotic tools. At a high level, the PDMS-2 provided insight into which motor skills were supported by each set of MIS tools and which skills were affected by differences in learning history. Supplemental measures must be paired with the PDMS-2 to better understand the mechanisms behind the observed patterns.


2021 ◽  
Author(s):  
Huiqi Song ◽  
Jing Jing Wang ◽  
Patrick WC Lau

Abstract Background: The assessment of preschoolers' motor skills is essential to understanding young children's motor development and to evaluating the effects of interventions promoting children's sport activities. The purpose of this study was to review motor skills assessment tools used with Chinese preschool-aged children, compare them in the international context, and provide guidance for finding an appropriate motor skill assessment tool in China. Methods: A comprehensive literature search was carried out in the WANFANG, CNKI, VIP, ERIC, EMBASE, MEDLINE, Ovid PsycINFO, SPORTDiscus, and BIOSIS Previews databases. Relevant articles published between January 2000 and May 2020 were retrieved. Studies that described discriminative and evaluative measures of motor skills among the population aged 3-6 years in China were included. Results: A total of 17 studies were included in this review, describing 7 tools: 4 self-developed tools and 3 international tools used in China. The TGMD-2 appeared in a large proportion of studies, but the international tools used in China were incomplete in terms of translation, verification of reliability and validity, item selection, and implementation. Among the self-constructed tools, the CDCC was the most utilized, but it was mainly applied in intellectual development assessment. A comparison between the Chinese self-constructed and international tools showed that the construction of the CDCC and the Gross Motor Development Assessment Scale involved relatively complete development steps. However, the test content, validity and reliability, implementation instructions, and generalizability of self-constructed tools are still lacking. Conclusions: Both international and self-developed motor skills assessment tools have rarely been applied in China, and the available tools lack sufficient validation and appropriate adjustments. Cultural differences in motor development between Chinese and Western populations should be considered when constructing a Chinese localized MSAT.


Author(s):  
Andy Bell ◽  
Jennifer Kelly ◽  
Peter Lewis

Abstract: Purpose: Over the past two decades, the discipline of paramedicine has seen exponential growth as it moved from a work-based training model to that of an autonomous profession grounded in academia. With limited evidence-based literature examining assessment in paramedicine, this paper aims to describe student and academic views on the preference for the OSCE as an assessment modality, the sufficiency of pre-OSCE instruction, and whether or not OSCE performance is a perceived indicator of clinical performance. Design/Methods: A voluntary, anonymous survey was conducted to examine the perception of the reliability and validity of the Objective Structured Clinical Examination (OSCE) as an assessment tool by students sitting the examination and the academics who facilitate the assessment. Findings: The results of this study revealed that the more confident students are in the reliability and validity of the assessment, the more likely they are to perceive the assessment as an effective measure of their clinical performance. The perception of reliability and validity differs when acted upon by additional variables, with the level of anxiety associated with the assessment and the adequacy of feedback on performance cited as major influences. Research Implications: The findings from this study indicate the need for further paramedicine discipline-specific research into assessment methodologies to determine best-practice models for high-quality assessment. Practical Implications: The development of evidence-based best-practice guidelines for the assessment of student paramedics should be of the utmost importance to a young, developing profession such as paramedicine. Originality/Value: There is very little research in the discipline-specific area of assessment for paramedicine, and discipline-specific education research is essential for professional growth. Limitations: The principal researcher was a faculty member of one of the institutions surveyed. However, all data were non-identifiable at the time of collection. Keywords: Paramedic; paramedicine; objective structured clinical examinations; OSCE; education; assessment.


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
G Lang

Abstract Background High-quality health promotion (HP) depends on a competent workforce, for which professional development programmes for practitioners are essential. The "CompHP Core Competencies Framework in HP" defines crucial competency domains, but a recent review concluded that implementation and use of the framework is lacking. The aim was to develop and validate a self-assessment tool for HP competencies to help evaluate training courses. Methods A brief self-assessment tool was employed in 2018 in Austria. 584 participants of 77 training courses submitted their post-course assessment (paper-pencil, RR = 78.1%). In addition, longitudinal data are available for 148 participants who filled in a pre-course online questionnaire. Measurement reliability and validity were tested by single-factor, bifactor, multigroup, and multilevel CFA. An SEM tested predictive and concurrent validity, controlling for gender and age. Results A bifactor model (χ²/df = 3.69, RMSEA = .07, CFI = .95, sRMR = .07) showed superior results with a strong general CompHP factor (FL > .65, wH = .90, ECV = .85), configurally invariant for two training programmes. On the course level, there was only minimal variance between trainings (ICC < .08). Structurally, there was a significant increase in HP competencies when comparing pre- and post-course measurements (b = .33, p < .01). Participants showed different levels of competencies due to prior knowledge (b = .38, p < .001) and course format (b = .16, p < .06). The total scale had good properties (m = 49.8, sd = 10.3, 95%-CI: 49.0-50.7) and discriminated between groups (e.g., by training length). Conclusions The results justify the creation of an overall scale to assess core HP competencies. It is recommended to use the scale for evaluating training courses. 
The work compensates for the lack of empirical studies on the CompHP concept and facilitates a broader empirical application of a uniform competency framework for HP in accordance with international standards in HP and public health. Key messages The self-assessment tool provides a good and compact foundation for assessing HP competencies. It provides a basis for holistic, high-quality, and sustainable capacity building and development in HP.
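The fit indices reported for the bifactor model above follow standard formulas. As a hypothetical sketch (the chi-square, degrees of freedom, and sample size below are invented to match the reported χ²/df ratio; they are not the study's actual values), the normed chi-square and RMSEA can be computed as:

```python
# Hypothetical sketch of common CFA fit indices. chisq, df, and n are
# invented figures chosen to reproduce the reported chi-square/df ratio
# of 3.69; they are not taken from the study.
from math import sqrt

def rmsea(chisq, df, n):
    """Root mean square error of approximation for a fitted model."""
    return sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

chisq, df, n = 369.0, 100.0, 584   # invented; n matches the sample size
ratio = chisq / df                 # normed chi-square (reported: 3.69)
fit = rmsea(chisq, df, n)          # RMSEA; values <= .08 read as acceptable
```

With these invented inputs, RMSEA comes out near the reported .07, illustrating how the normed chi-square and sample size jointly drive the index.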


2004 ◽  
Vol 28 (1) ◽  
pp. 85-103 ◽  
Author(s):  
Golan Shahar ◽  
Sidney J. Blatt ◽  
David C. Zuroff ◽  
Gabriel P. Kuperminc ◽  
Bonnie J. Leadbeater

2005 ◽  
Vol 32 (3) ◽  
pp. 329-344 ◽  
Author(s):  
Fred Schmidt ◽  
Robert D. Hoge ◽  
Lezlie Gomes

The Youth Level of Service/Case Management Inventory (YLS/CMI) is a structured assessment tool designed to facilitate the effective intervention and rehabilitation of juvenile offenders by assessing each youth’s risk level and criminogenic needs. The present study examined the YLS/CMI’s reliability and validity in a sample of 107 juvenile offenders who were court-referred for mental health assessments. Results demonstrated the YLS/CMI’s internal consistency and interrater reliability. Moreover, the instrument’s predictive validity was substantiated on a number of recidivism measures for both males and females. Limitations of the current findings are discussed.
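The internal consistency mentioned above is typically reported as Cronbach's alpha across an instrument's items. As a rough, hypothetical sketch (the item scores below are invented and are not YLS/CMI data), alpha can be computed from per-item and total-score variances:

```python
# Hypothetical sketch: Cronbach's alpha as an internal-consistency
# estimate for a multi-item structured assessment instrument.
# The item scores used below are invented for illustration.

def cronbach_alpha(items):
    """items: list of per-item score lists, one inner list per item."""
    k = len(items)                 # number of items
    n = len(items[0])              # number of respondents

    def pvar(xs):                  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(pvar(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var / pvar(totals))

# Three perfectly consistent items yield the maximum alpha of 1.0.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

Alpha rises as items covary: perfectly aligned items give 1.0, while weakly related items pull the coefficient down.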

