Quality of Instruments Used to Assess Competencies in Athletic Training

2012 · Vol 7 (4) · pp. 187-197
Author(s): Jim F. Schilling

Context: An emphasis on knowledge and skill competency acquisition continues to gain importance in allied health professions. Accuracy and fairness in the summative assessment of competencies are essential to ensure student competence. A positive demonstration of validity, reliability, and authenticity quality criteria is needed to achieve evidence-based practice in the assessment of competencies. Objective: To present a variety of instruments used to assess the competencies established in the fifth edition of the athletic training competencies document, and to judge them against validity, reliability, and authenticity criteria. Data Sources: The literature reviewed for this article comprised published articles pertaining to the assessment of competencies in health care professional programs. Data Synthesis: Self-, written, and observation assessment methods, each with specific types of instruments, are used in the summative assessment of competencies. The quality of an assessment instrument is weighed to ensure the authenticity, validity, and reliability of its scores. The type of assessment instrument and its content are recommended according to the level of competence, categorized by the depth of understanding and complexity of skill in the competencies. Conclusions: No one-size-fits-all assessment method was identified. Certain instruments demonstrated greater quality than others and should be chosen depending on assessment goals and resources.


2020 · Vol 30 (Supplement_5)
Author(s): T Krieger

Abstract From the literature and from experience, we know that the quality of patient information material (PIM) has a direct impact on its utilization, and therefore on the acceptance and success of an intervention. In this brief introduction session (10 minutes), the innovative “integrated, cross-sectional psycho-oncology” (isPO) programme and the context of its implementation will be sketched. In the programme's development phase, isPO-specific PIM was developed and then utilized in the early implementation phase; this will be presented to the audience. Next, an overview of the general PIM quality criteria will be given: correctness of content, legibility, comprehensibility, and usability. Finally, common guidelines, checklists, and quality-assessment instruments will be presented, and the role of the target group (degree of participation) in the development or examination process will be critically examined.



Author(s): Rudy de Barros Ahrens, Luciana da Silva Lirani, Antonio Carlos de Francisco

The purpose of this study was to validate the construct and reliability of an instrument to assess the work environment as a single tool based on quality of life (QL), quality of work life (QWL), and organizational climate (OC). The methodology tested construct validity through Exploratory Factor Analysis (EFA) and reliability through Cronbach’s alpha. The EFA returned a Kaiser–Meyer–Olkin (KMO) value of 0.917, which demonstrated that the data were adequate for factor analysis, and a significant Bartlett’s test of sphericity (χ² = 7465.349; df = 1225; p ≤ 0.000). After the EFA, the varimax rotation method was applied, and communality analysis reduced the 14 initial factors to 10. Only question 30 presented a communality lower than 0.5; all other questions returned values higher than 0.5. Regarding the reliability of the instrument, all of the questions proved reliable, with values varying between 0.953 and 0.956. Thus, the instrument demonstrated construct validity and reliability.
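As a hedged illustration of the reliability analysis this abstract describes, the sketch below computes Cronbach's alpha from a respondent-by-item score matrix. The response data are hypothetical, invented for the example, not taken from the study:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(responses), 3))  # → 0.93
```

Values above roughly 0.9, as reported in the abstract (0.953-0.956), indicate very high internal consistency.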



Author(s): Ināra Laizāne

The issue of educational quality is becoming more and more relevant across the world, and one task of the European Union in the field of education is to ensure it. This article examines educational quality in science education. The aim and method of the research are set out in the article: the research was carried out to determine students’ academic achievements in science and the tendencies of the factors that influence them. A quantitative method (a questionnaire) was used to explain the assessment of academic achievements, and students’ achievements were measured using the results of summative assessment. The tendencies of factors influencing academic achievements in science were explored by looking for correlations that promote or delay students’ academic achievements in science. The data obtained in the research were analysed quantitatively and qualitatively.



2019 · Vol 4 (2) · pp. 300
Author(s): Aloysius Mering, Indri Astuti

This study aims to (1) describe clearly and comprehensively the quality of non-cognitive assessment instruments made by elementary school teachers, (2) develop procedures for constructing teacher-made non-cognitive assessment instruments, and (3) develop the instruments themselves. To realize these goals, the researchers used three structured research designs. The first design is survey research describing the quality of teacher-made non-cognitive assessment instruments; the data studied are the non-cognitive instruments the teachers constructed in their lesson plans (RPP). From the results of this review, a guidebook on the procedure for developing teacher-made non-cognitive assessment instruments is then developed, following research-and-development (R&D) procedures. In the third design, the researchers and teachers develop a non-cognitive assessment instrument in a workshop that applies the prepared guidebook. The procedure for preparing instruments follows these steps: (a) development of instrument specifications, (b) instrument writing, (c) instrument review, (d) instrument assembly (for testing purposes), (e) instrument testing, (f) analysis of test results, (g) instrument selection and assembly, (h) printing of instruments, (i) administration of instruments, and (j) preparation of scales and norms. The whole series of studies will produce as outputs (a) research reports, financial reports, and logbooks; (b) articles that have been presented; (c) guidelines for preparing teacher-made non-cognitive assessment instruments, usable as teaching materials and as alternative materials for instrument-development training; (d) scientific publications in accredited journals; and (e) a collection of validated teacher-made non-cognitive assessment instruments.



Author(s): José María Arribas Estebaranz

Abstract: The classic questions about assessment — why to assess, what to assess, how to assess, and who should assess the learning students have acquired — remain a recurrent and controversial topic in the current pedagogical literature, one that is revived periodically with the publication of the various national and international assessments and with the controversial “reválidas” of the LOMCE. In the academic sphere, formative assessment and certifying assessment coexist, with deep interconnections between them from which we cannot escape. This inevitably conditions, and even polarizes, the entire teaching-learning process. The two functions are by no means mutually exclusive but complementary: we assess, fundamentally, in order to improve; but how can we improve without knowing where we started from or where we have arrived? Certainly, mere measurement, isolated, decontextualized, and without consequences, is a sterile exercise that at best only wastes resources and time; but it is equally true that valuation, and the decision-making that follows from it, is impossible without a thorough understanding of what one wants to assess. The quality of the assessment, and consequently the effectiveness of the actions derived from it, will depend on the quality of the measurement: the validity of the selected indicators, the validity and reliability of the chosen assessment instruments, and the suitability of the conditions of application. Thus, in this article we try to provide, with justification, both theoretical reflections and useful practical considerations that should be borne in mind when collecting and evaluating the learning acquired by students.





2021 · Vol 54 (1) · pp. e173770
Author(s): Dario Cecilio-Fernandes, Angélica Maria Bicudo, Pedro Tadao Hamamoto Filho

The progress test was created out of the need for an assessment method aligned with problem-based learning. Although it was specifically designed to overcome the limitations of traditional assessment in problem-based learning, it is nowadays used across different types of curricula. In this paper, we first present the basic assumptions, history, benefits, and challenges of the progress test. The progress test overcomes many limitations of traditional assessment regarding validity and reliability; however, its implementation is a logistical challenge. In addition, we discuss the limitations of the progress test when used as a summative assessment, which may not always be aligned with constructivist theory. When feedback and methods of analysis that account for multiple testing are added, the progress test becomes aligned with constructivist theory. Finally, the progress test’s subscores may lack validity because of the low number of items; thus, pass/fail decisions should not be based on the subscores, but only on the overall scores.



2019 · Vol 9 (1) · pp. 91-100
Author(s): Nurhayati Nurhayati, Wahyudi Wahyudi, Syarif Lukman Hakim, ...

This study aims to 1) produce a HOTS assessment instrument; 2) determine the quality of the test instrument in terms of construction, material, and language feasibility according to experts; and 3) determine the quality of the test items in terms of validity, reliability, difficulty level, and discriminating power based on the test results. Research and development (R&D) was used as the research method, with a 4D procedural development model consisting of four stages: define, design, develop, and disseminate. A questionnaire was used for expert-judgment validation. The item characteristics measured for the HOTS instrument included the validity, reliability, difficulty level, and discriminating power of the questions. The HOTS assessment instrument developed took the form of multiple-choice items with reasons, based on the HOTS aspects of analyzing, evaluating, and creating. The results of expert validation show that, on average, the items are rated very good in terms of content, construct, and language. Instruments that had been validated and revised were tested on students who had studied vibration and wave material; the test results showed that 77% of the questions developed were of good quality, with valid criteria, good discriminating power, difficulty at moderate and easy levels, and very strong reliability, making them feasible and ready to be used to measure students’ higher-order thinking skills in vibrations and waves material. Keywords: HOTS, test instruments, vibrations and waves
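The item-quality statistics this abstract reports — difficulty level and discriminating power — are standard classical test theory quantities. A minimal sketch, using a hypothetical 0/1 score matrix rather than the study's data, and assuming the common top-half/bottom-half split for the discrimination index:

```python
import numpy as np

def item_analysis(answers):
    """Classical item analysis: difficulty (p) and discrimination (D).

    answers: (n_students x n_items) matrix of dichotomous 0/1 scores.
    D compares the top and bottom halves of students ranked by total score.
    """
    answers = np.asarray(answers, dtype=float)
    n = answers.shape[0]
    p = answers.mean(axis=0)                    # difficulty: proportion correct
    order = np.argsort(-answers.sum(axis=1))    # strongest students first
    half = n // 2
    upper, lower = answers[order[:half]], answers[order[-half:]]
    d = upper.mean(axis=0) - lower.mean(axis=0)  # discrimination index
    return p, d

# Hypothetical answer sheet: 6 students x 3 items (1 = correct answer).
scores = np.array([
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
])
p, d = item_analysis(scores)
print(np.round(p, 2), np.round(d, 2))
```

A p near 0.5 marks a moderate-difficulty item, and a D of roughly 0.3 or above is conventionally taken as good discrimination; the cut-offs the study actually used are not given in the abstract.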



2021 · Vol 5 (1) · pp. 055
Author(s): Amaliyah Amaliyah, Ahmad Hakam, Suci Nur Pratiwi, Sari Wulandari

The PAI microteaching course covers competencies that enable students to develop and implement teacher competencies, including personal, social, pedagogical, and professional competencies. This research focuses on developing a social skills assessment instrument because, first, social skills are one of the main assets Islamic Education (PAI) teacher candidates need in order to interact with their social environment, and second, a teacher is someone who shows empathy and care for students. The research aims to describe the steps for developing a social skills assessment instrument and to test its content validity. The research method used the Borg and Gall development model. Content validity was examined qualitatively through peer assessments by PAI lecturers, and quantitatively using Aiken’s index; inter-rater reliability was estimated with the intraclass correlation coefficient. The results describe, first, the process and results of the qualitative content validation, including suggestions from the validators, and second, the quantitative content validity, including the validity and reliability scores. It is hoped that the social skills instrument will serve as a tool to empower and develop the social skills of PAI teacher candidates in learning activities and daily life.
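Aiken's index mentioned in this abstract has a simple closed form, V = Σ(rᵢ − lo) / (n · (c − 1)), where rᵢ are the expert ratings, lo is the lowest scale point, n is the number of raters, and c is the number of scale categories. A minimal sketch with hypothetical expert ratings (not the study's data, whose scale and rater count the abstract does not state):

```python
def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V content-validity index for one item.

    ratings: the score each expert rater gave the item.
    lo/hi:   endpoints of the rating scale (c = hi - lo + 1 categories).
    V ranges from 0 (all raters at lo) to 1 (all raters at hi).
    """
    n, c = len(ratings), hi - lo + 1
    s = sum(r - lo for r in ratings)  # summed distances above the scale floor
    return s / (n * (c - 1))

# Hypothetical: five experts rate one social-skills item on a 1-5 scale.
print(aikens_v([4, 5, 4, 5, 4]))  # → 0.85
```

Items whose V falls below a chosen threshold (often around 0.8 for small rater panels) are typically revised or dropped before the instrument is finalized.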


