Competence Assessment with Representations of Practice in Text, Comic and Video Format

Author(s):  
Marita Friesen ◽  
Sebastian Kuntze

Author(s):  
Randy K. Otto ◽  
Norman G. Poythress ◽  
Robert A. Nicholson ◽  
John F. Edens ◽  
John Monahan ◽  
...  

2010 ◽  
Vol 96 (3) ◽  
pp. 8-15 ◽  
Author(s):  
Elizabeth S. Grace ◽  
Elizabeth J. Korinek ◽  
Zung V. Tran

ABSTRACT This study compares key characteristics and performance of physicians referred to a clinical competence assessment and education program by state medical boards (boards) and hospitals. Physicians referred by boards (400) and by hospitals (102) completed a CPEP clinical competence assessment between July 2002 and June 2010. Key characteristics, self-reported specialty, and average performance rating for each group are reported and compared. Results show that, compared with hospital-referred physicians, board-referred physicians were more likely to be male (88.3% versus 75.5%), older (average age 54.1 versus 50.3 years), and less likely to be currently specialty board certified (61.8% versus 80.4%). On a scale of 1 (best) to 4 (worst), average performance was 2.62 for board referrals and 2.36 for hospital referrals. There were no significant differences between board and hospital referrals in the percentage of physicians who graduated from U.S. and Canadian medical schools. The most common specialties referred differed between boards and hospitals. Conclusion: Characteristics of physicians referred to a clinical competence program by boards and hospitals differ in important respects. The authors consider potential reasons for these differences and whether boards and hospitals are dealing with different subsets of physicians with different types of performance problems. Further study is warranted.
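As a rough check on group differences of this kind, a two-proportion z-test can be sketched. The abstract does not name its statistical methods, so the choice of test, and the assignment of 88.3% male to the 400 board referrals and 75.5% to the 102 hospital referrals, are assumptions for illustration only:

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    x1, x2 = p1 * n1, p2 * n2          # implied success counts
    p_pool = (x1 + x2) / (n1 + n2)     # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Percentage male: 88.3% of 400 board referrals vs 75.5% of 102 hospital referrals
z, p = two_proportion_z(0.883, 400, 0.755, 102)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these inputs the difference in the male proportion would be statistically significant; the age and certification contrasts could be checked analogously (a t-test for the age means would additionally require the unreported standard deviations).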


RMD Open ◽  
2020 ◽  
Vol 6 (2) ◽  
pp. e001183 ◽  
Author(s):  
Aurélie Najm ◽  
Alessia Alunno ◽  
Francisca Sivera ◽  
Sofia Ramiro ◽  
Catherine Haines

Objectives: To gain insight into current methods and practices for the assessment of competences during rheumatology training, and to explore the underlying priorities and rationales for competence assessment. Methods: We used a qualitative approach through online focus groups (FGs) of rheumatology trainers and trainees, held separately. The study included five countries: Denmark, the Netherlands, Slovenia, Spain and the United Kingdom. A summary of current practices of assessment of competences was developed, modified and validated by the FGs based on an independent response to a questionnaire. A prioritising method (the 9 Diamond technique) was then used to identify and justify key assessment priorities. Results: Overall, 26 participants (12 trainers, 14 trainees) took part in nine online FGs (two per country; Slovenia held one joint group), totalling 12 hours of online discussion. Strongly standardised approaches, either national (the Netherlands, UK) or institutional (Spain, Slovenia, Denmark), were described. Most groups identified providing frequent formative feedback to trainees for developmental purposes as the highest priority. Most discussions identified a need for improvement, particularly in developing streamlined approaches to portfolios that remain close to clinical practice, protecting time for quality observation and feedback, and adopting systematic approaches to incorporating teamwork and professionalism into assessment systems. Conclusion: This paper presents a clearer picture of current practice in the assessment of competences in rheumatology in five European countries and the underlying rationale of trainers’ and trainees’ priorities. This work will inform EULAR Points-to-Consider for the assessment of competences in rheumatology training across Europe.


Author(s):  
Ragan Wilson ◽  
Christopher B. Mayhorn

With virtual reality’s growing popularity and the accompanying push for more sports media experiences, there is a need to evaluate virtual reality’s use in video-watching experiences. This research explores differences between Monitor (2D) video and HMD (360-degree) video footage by measuring user perceptions of presence, suspense, and enjoyment. Furthermore, this study examines the relationship between presence, game attractiveness, suspense, and enjoyment as explored by Kim, Cheong, and Kim (2016). Differences were assessed via a MANOVA on presence, suspense, and enjoyment, while the relationships were explored via a confirmatory factor analysis. Results suggest that there was a difference between Monitor (2D) and HMD (360-degree) video with regard to spatial presence, engagement, suspense, and enjoyment, but the previous model from Kim et al. (2016) was not a good fit to this study’s data.


2021 ◽  
Vol 11 (2) ◽  
pp. 128
Author(s):  
Sergej Lackmann ◽  
Pierre-Majorique Léger ◽  
Patrick Charland ◽  
Caroline Aubé ◽  
Jean Talbot

Millions of students follow online classes delivered in video format. Several studies examine the impact of these video formats on engagement and learning using explicit measures and outline the need to also investigate the implicit cognitive and emotional states of online learners. Our study compared two video formats in terms of engagement (over time) and learning in a between-subjects experiment. Engagement was operationalized using explicit and implicit neurophysiological measures. Twenty-six (26) subjects participated in the study and were randomly assigned to one of two conditions based on the video shown: an infographic video or a lecture capture. The infographic video showed animated graphics, images, and text. The lecture capture showed a professor delivering a lecture, filmed in a classroom setting. Results suggest that lecture capture triggers greater emotional engagement over a shorter period, whereas the infographic video maintains higher emotional and cognitive engagement over longer periods of time. Regarding student learning, the infographic video contributed to significantly improved performance on difficult questions. Additionally, our results suggest a significant relationship between engagement and student performance. In general, the higher the engagement, the better the student performance, although in the case of cognitive engagement the link is quadratic (inverted U-shaped).
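An inverted-U (quadratic) engagement–performance link of the kind reported here can be illustrated with a least-squares quadratic fit; the peak of the fitted curve marks the engagement level at which predicted performance is highest. The data below are synthetic, not the study’s neurophysiological measurements:

```python
def fit_quadratic(x, y):
    """Least-squares fit of y = a*x^2 + b*x + c via the normal equations."""
    n = len(x)
    S = [sum(xi**k for xi in x) for k in range(5)]                 # S[k] = sum of x^k
    T = [sum(yi * xi**k for xi, yi in zip(x, y)) for k in range(3)]
    # Augmented normal-equation matrix for coefficients [a, b, c]
    A = [[S[4], S[3], S[2], T[2]],
         [S[3], S[2], S[1], T[1]],
         [S[2], S[1], n,    T[0]]]
    # Gaussian elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 4):
                A[r][c] -= f * A[i][c]
    coef = [0.0, 0.0, 0.0]
    for i in reversed(range(3)):
        coef[i] = (A[i][3] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef  # a, b, c

# Synthetic engagement scores with an exactly inverted-U performance curve,
# y = -(x - 5)^2 + 25, so the fit should recover a = -1 with a peak at x = 5
x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [-(xi - 5)**2 + 25 for xi in x]
a, b, c = fit_quadratic(x, y)
peak = -b / (2 * a)   # vertex of the fitted parabola
print(f"a = {a:.2f}, peak engagement at x = {peak:.2f}")
```

A negative fitted `a` is what distinguishes the inverted-U pattern from a simple monotonic engagement–performance relationship.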


Author(s):  
Beatriz Sánchez-Sánchez ◽  
Beatriz Arranz-Martín ◽  
Beatriz Navarro-Brazález ◽  
Fernando Vergara-Pérez ◽  
Javier Bailón-Cerezo ◽  
...  

Therapeutic patient education programs must assess the competences that patients achieve. Evaluation in the pedagogical domain ensures that learning has taken place among patients. The Prolapse and Incontinence Knowledge Questionnaire (PIKQ) is a tool for assessing patient knowledge about urinary incontinence (UI) and pelvic organ prolapse (POP) conditions. The aim of this study was to translate the PIKQ into Spanish and test its measurement properties, as well as to propose real practical cases as a competence assessment tool. The cross-cultural adaptation was conducted by a standardized translation/back-translation method. Measurement properties were analyzed by assessing validity, reliability, responsiveness, and interpretability. A total of 275 women were recruited. Discriminant validity showed statistically significant differences in PIKQ scores between the patient and expert groups. Cronbach’s alpha revealed good internal consistency. Test–retest reliability showed excellent correlation for the UI and POP scales. Regarding responsiveness, the effect size and standardized response mean demonstrated excellent values. No floor or ceiling effects were shown. In addition, three “real practical cases” evaluating skills in identifying and analyzing, decision-making, and problem-solving were developed and tested. The Spanish PIKQ is a comprehensible, valid, reliable, and responsive tool for the Spanish population. Real practical cases are useful competence assessment tools that are well accepted by women with pelvic floor disorders (PFD), improving their understanding and their decision-making regarding PFD.
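The internal-consistency statistic reported here, Cronbach’s alpha, can be computed directly from item-score columns as alpha = k/(k-1) × (1 − Σ item variances / variance of total scores). The responses below are hypothetical, not PIKQ data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score rows (one row per item,
    one column per respondent), using sample variances."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents
    totals = [sum(items[j][i] for j in range(k)) for i in range(n)]
    item_var = sum(variance(row) for row in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical right/wrong (1/0) responses to three knowledge items
items = [
    [1, 0, 1, 1, 0, 1],
    [1, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 0, 1],
]
print(f"alpha = {cronbach_alpha(items):.3f}")  # 0.812 for this toy data
```

Values above roughly 0.7–0.8 are conventionally read as acceptable-to-good internal consistency, which is the interpretation the abstract reports for the Spanish PIKQ.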


2021 ◽  
Vol 11 (8) ◽  
pp. 402
Author(s):  
Linda Helene Sillat ◽  
Kairit Tammets ◽  
Mart Laanpere

The rapid increase in recent years in the number of different digital competency frameworks, models, and strategies has strengthened the argument for evaluating and assessing digital competence. To support the process of digital competence assessment, it is consequently necessary to understand the different approaches and methods. This paper carries out a systematic literature review and analyzes existing proposals and conceptions of digital competence assessment processes and methods in higher education, with the aim of better understanding the field of research. The review follows three objectives: (i) describe the characteristics of digital competence assessment processes and methods in higher education; (ii) provide an overview of current trends; and, finally, (iii) identify challenges and issues in digital competence assessment in higher education, with a focus on the reliability and validity of the proposed methods. On the basis of the findings, and in light of the COVID-19 pandemic, digital competence assessment in higher education requires more attention, with a specific focus on instrument validity and reliability. Furthermore, it will be of great importance to further investigate the use of assessment tools to support systematic digital competence assessment processes. The analysis includes possible opportunities and ideas for future lines of work in digital competence evaluation in higher education.

