Expert recommendations for the design of a teacher-oriented movement assessment tool for children aged 4-7 years: a Delphi study

Author(s):  
Tom Van Rossum ◽  
Lawrence Foweather ◽  
Spencer Hayes ◽  
David Richardson ◽  
David Morley
2018 ◽  
Vol 23 (2) ◽  
pp. 124-134

2018 ◽  
Vol 6 (1) ◽  
pp. 15-21 ◽  
Author(s):  
Patrick G Hughes ◽  
Steven Scott Atkinson ◽  
Mira F Brown ◽  
Marjorie R Jenkins ◽  
Rami A Ahmed

Background: Graduates of simulation fellowship programmes are expected to be able to perform a variety of simulation-specific skills at the time of graduation. Currently, simulation fellowship directors have access to tools to assess a fellow's ability to debrief learners; however, there is no tool to assess a simulation fellow's competency in technical skills. The purpose of this study was to develop and obtain content validation of a novel instrument designed to assess a simulation fellow's ability to perform the five core simulation technical skills.
Methods: The study protocol was based on a methodology for content validation of curriculum consensus guidelines. This approach involves a three-step process: initial delineation of the curricular content, validation of the curricular content using survey methodology, and obtaining consensus on modifications using Delphi methodology.
Results: Two rounds of modified Delphi methodology were performed. Seventy-four respondents provided feedback on the round 1 survey and 45 respondents provided feedback on round 2. The final assessment tool has five elements and 16 subitems, with four optional subitems.
Conclusion: The Evaluation of Technical Competency in Healthcare Simulation tool provides an instrument developed from a national consensus of content experts. It gives simulation fellowship directors a method to evaluate fellows' competency in technical skills.


Author(s):  
Abdallah Namoun ◽  
Ahmad Taleb ◽  
Mohammed Al-Shargabi ◽  
Mohamed Benaida

Measuring the effectiveness of a continuous quality improvement cycle in education is a cumbersome and complex process. This article contributes a comprehensive self-assessment instrument for identifying the strengths and weaknesses of all phases of a continuous quality improvement cycle, including planning, data collection, analysis and reporting, and implementation of improvements. To this end, a four-round Delphi study involving a total of 23 program quality experts from four universities was conducted. The resulting survey instrument contains a total of 50 questions. The instrument may be used by quality experts in education to judge the quality of a continuous quality improvement cycle that endeavours to assess the attainment of learning outcomes in various undergraduate educational programs. Moreover, the instrument could be exploited to infer relevant user and system requirements and guide the development of an automated self-assessment tool aimed at identifying shortcomings in educational continuous quality improvement cycles.


2012 ◽  
Vol 92 (6) ◽  
pp. 841-852 ◽  
Author(s):  
Alexandra De Kegel ◽  
Tina Baetens ◽  
Wim Peersman ◽  
Leen Maes ◽  
Ingeborg Dhooge ◽  
...  

Background: Balance is a fundamental component of movement. Early identification of balance problems is important for planning early intervention. The Ghent Developmental Balance Test (GDBT) is a new assessment tool designed to monitor balance from the initiation of independent walking to 5 years of age.
Objective: The purpose of this study was to establish the psychometric characteristics of the GDBT.
Methods: To evaluate test-retest reliability, 144 children were tested twice on the GDBT by the same examiner, and to evaluate interrater reliability, videotaped GDBT sessions of 22 children were rated by 3 different raters. To evaluate the known-group validity of GDBT scores, z scores on the GDBT were compared between a clinical group (n = 20) and a matched control group (n = 20). Concurrent validity of GDBT scores with the subscale standardized scores of the Movement Assessment Battery for Children–Second Edition (M-ABC-2), the Peabody Developmental Motor Scales–Second Edition (PDMS-2), and the balance subscale of the Bruininks-Oseretsky Test–Second Edition (BOT-2) was evaluated in a combined group of the 20 children from the clinical group and 74 children who were developing typically.
Results: Test-retest and interrater reliability were excellent for the GDBT total scores, with intraclass correlation coefficients of .99 and .98, standard error of measurement values of 0.21 and 0.78, and small minimal detectable differences of 0.58 and 2.08, respectively. The GDBT was able to distinguish between the clinical group and the control group (t38 = 5.456, P < .001). Pearson correlations between the z scores on the GDBT and the standardized scores of specific balance subscales of the M-ABC-2, PDMS-2, and BOT-2 were moderate to high, whereas correlations with subscales measuring constructs other than balance were low.
Conclusions: The GDBT is a reliable and valid clinical assessment tool for the evaluation of balance in toddlers and preschool-aged children.
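The minimal detectable differences reported above follow directly from the standard error of measurement. A minimal sketch, assuming the common formula MDD95 = 1.96 × √2 × SEM (the abstract does not state which variant was used); it reproduces the test-retest value of 0.58, while the reported interrater value of 2.08 may reflect different rounding or a different confidence multiplier:

```python
import math

def mdd95(sem: float) -> float:
    """95% minimal detectable difference from a standard error of measurement."""
    return 1.96 * math.sqrt(2) * sem

# Test-retest SEM of 0.21 yields an MDD of about 0.58, matching the report.
print(round(mdd95(0.21), 2))  # → 0.58
```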


2011 ◽  
Vol 67 (2) ◽  
Author(s):  
C. Joseph ◽  
C. Hendricks ◽  
J. Frantz

Background: Evaluating students' clinical performance is an integral part of quality assurance in a physiotherapy curriculum; however, the objectivity of clinical examinations has been questioned on numerous occasions. The aim of this study was to explore the essential key clinical performance areas and the associated assessment criteria in order to develop a reliable clinical assessment form.
Methods: A Delphi study was used to obtain consensus on the development of a reliable clinical performance assessment tool. The study population consisted of purposively selected academic physiotherapy staff from the University of the Western Cape as well as supervisors and clinicians involved in the examination of physiotherapy students from the three universities in the Western Cape. Findings from the Delphi rounds were analysed descriptively. Fifty percent or higher agreement on an element was interpreted as an acceptable level of consensus.
Results: Eight key performance areas were identified, with five assessment criteria per key performance area as well as a weighting per area. It was evident that evaluators differed on the expectations of physiotherapy students as well as the criteria used to assess them.
Conclusions: The Delphi panel contributed to the formulation of a clinical assessment form through the identification of relevant key performance areas and assessment criteria as they relate to undergraduate physiotherapy training. Consensus on both aspects was reached following discussion and calculation of mean ranking scores.
Implications: This process of reaching consensus in determining clear criteria for measuring key performance areas contributes to the objectivity of the examination process.
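The 50%-or-higher consensus rule described above is straightforward to operationalise. A minimal sketch, with a hypothetical helper and illustrative vote counts (not the study's data):

```python
def consensus_reached(agree: int, panel_size: int, threshold: float = 0.5) -> bool:
    """An element is retained when the share of panellists agreeing
    meets or exceeds the threshold (50% in this study)."""
    return agree / panel_size >= threshold

print(consensus_reached(6, 10))  # 60% agreement → True
print(consensus_reached(4, 10))  # 40% agreement → False
```

Note the inclusive comparison: "fifty percent or higher" means exactly half the panel is enough to retain an element.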


2018 ◽  
Vol 25 (2) ◽  
pp. 524-543
Author(s):  
David Morley ◽  
Thomas Van Rossum ◽  
David Richardson ◽  
Lawrence Foweather

A child’s early school years provide a crucial platform for them to develop fundamental movement skills (FMS), yet it has been acknowledged that there is a shortage of suitable FMS assessment tools for teachers to use within schools. To begin to address this shortfall, the purpose of this study was to elicit expert recommendations for the design of a FMS assessment tool for use by primary school teachers. A multi-phase research design was used, involving two scenario-guided focus groups with movement experts (n = 8; five academics and three practitioners). Data captured in both focus groups were transcribed verbatim and thematically analysed. Three dichotomous dilemmas emerged from the data in relation to assessing children’s movement competence: (a) Why? For research purposes or to enhance teaching and learning?; (b) How? Should the assessment setting be engineered or natural?; and (c) What? Should the detail of the assessment be complex or simple and should the nature of the tasks be static or dynamic? These findings suggest that any future development of movement competence assessment protocols for use by primary school teachers needs to consider the specific purpose and context of the assessment.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Ayesha Younas ◽  
Rehan Ahmed Khan ◽  
Raheela Yasmin

Background: Competency-based curricula across the globe stress the importance of effective physician-patient communication. A variety of courses have been used to train physicians for this purpose. However, few of them link competencies with practice, resulting in confusion in implementation and assessment. This issue can be resolved by treating certain specific patient-communication tasks as acts of entrustment, or entrustable professional activities (EPAs). In this study, we aimed to define a competency-based framework for assessing physician-patient communication using the language of EPAs.
Methods: A modified Delphi study was conducted in three stages. The first stage was an extensive literature review to identify and elaborate communication-related tasks that could be treated as EPAs. The second stage was content validation by medical education experts for clarity and representativeness. The third stage was three iterative rounds of modified Delphi with predefined consensus levels. The McNemar test was used to check response stability across the Delphi rounds.
Results: Expert consensus resulted in the development of four specific EPAs focused on physician-patient communication, with their competencies and respective assessment strategies, all aiming for level 5 of unsupervised practice: providing information to the patient or their family about diagnosis or prognosis; breaking bad news to the patient or their family; counselling a patient regarding their disease or illness; and resolving conflicts with patients or their families.
Conclusions: The EPAs for physician-patient communication are a step toward an integrative, all-inclusive competency-based assessment framework for patient-centred care. They are meant to improve the quality of physician-patient interaction by standardizing communication as a decision of entrustment. The EPAs can be linked to competency frameworks around the world and provide a useful assessment framework for effective training in patient communication. They can be integrated into any postgraduate curriculum and can also serve as a self-assessment tool for postgraduate training programmes across the globe seeking to improve their patient-communication curricula.
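The McNemar test mentioned above compares paired yes/no responses between consecutive Delphi rounds; a large p-value indicates the panel's responses were stable. A minimal sketch of the uncorrected statistic computed from the discordant counts b and c (illustrative counts, not the study's data):

```python
import math

def mcnemar(b: int, c: int) -> tuple[float, float]:
    """McNemar chi-square (1 df, no continuity correction) on discordant pairs:
    b panellists switched agree→disagree, c switched disagree→agree.
    Returns the test statistic and its p-value."""
    stat = (b - c) ** 2 / (b + c)
    p_value = math.erfc(math.sqrt(stat / 2))  # chi-square(1 df) survival function
    return stat, p_value

stat, p = mcnemar(5, 4)  # nearly symmetric switching: responses look stable
print(p > 0.05)  # → True
```

The p-value uses the identity P(χ²₁ > x) = erfc(√(x/2)), which avoids a SciPy dependency.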



2019 ◽  
Vol 2 (1) ◽  
pp. e187235 ◽  
Author(s):  
Christa Einspieler ◽  
Fabiana Utsch ◽  
Patricia Brasil ◽  
Carolina Y. Panvequio Aizawa ◽  
Colleen Peyton ◽  
...  
