Training Evaluation
Recently Published Documents

Total documents: 458 (five years: 104)
H-index: 27 (five years: 1)

2021 ◽ Vol 3 (3) ◽ pp. 150-158
Author(s): Norbertus Tri Suswanto Saptadi

The task of the teacher is to improve the quality of learning in a systematic and controlled manner through the use of educational research. Interviews with the principal of SMPK Santa Clara Surabaya indicated that most teachers have not been able to improve the quality of learning. Classroom Action Research (CAR) training is expected to enable teachers to understand and apply educational research methods to help solve learning problems. The training methods included material explanation, assignments, proposal writing, individual presentations, task discussion, presentation assessment and training evaluation. The training evaluation, conducted through the Google Forms application, showed that 7 participants (58.3%) rated their understanding and implementation of classroom action research proposals under the training method used as sufficient, 3 participants (25%) rated it as good, and 2 participants (16.7%) rated it as very good. The assessment of the CAR (PTK) proposal presentations showed very good ratings across the criteria: performance (5 participants, 41.6%), mastery (3 participants, 25%), quality of material (4 participants, 33.3%), use of time (5 participants, 41.6%) and ability to answer questions (4 participants, 33.3%).
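As a quick check on the reported evaluation figures, the sketch below recomputes the percentages from the raw counts; the 12-participant total is implied by the counts given in the abstract, and the variable names are illustrative only:

```python
# Recompute the reported training-evaluation percentages from the raw counts.
ratings = {"sufficient": 7, "good": 3, "very good": 2}  # counts reported in the abstract
total = sum(ratings.values())  # 12 participants, implied by the counts

for label, count in ratings.items():
    share = 100 * count / total
    print(f"{label}: {count}/{total} = {share:.1f}%")  # 58.3%, 25.0%, 16.7%
```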


Author(s): Valentina Lucia La Rosa ◽ Michał Ciebiera ◽ Kornelia Zaręba ◽ Enrique Reyes-Muñoz ◽ Tais Marques Cerentini ◽ ...

2021 ◽ Vol 9
Author(s): Tineke E. Dineen ◽ Corliss Bean ◽ Kaela D. Cranston ◽ Megan M. MacPherson ◽ Mary E. Jung

Background: Training programs must be evaluated to understand whether the training was successful at enabling staff to implement a program with fidelity. This is especially important when the training has been translated to a new context. The aim of this community case study was to evaluate the effectiveness of the in-person Small Steps for Big Changes training for fitness facility staff using the 4-level Kirkpatrick training evaluation model.
Methods: Eight staff were trained to deliver the motivational interviewing-informed Small Steps for Big Changes program for individuals at risk of developing type 2 diabetes. Between August 2019 and March 2020, 32 clients enrolled in the program and were allocated to one of the eight staff. The Kirkpatrick 4-level training evaluation model was used to guide this research. Level one assessed staff satisfaction with the training on a 5-point scale. Level two assessed staff program knowledge and motivational interviewing knowledge/skills. Level three assessed staff behaviors by examining their use of motivational interviewing with each client. Level four assessed training outcomes using clients' perceived satisfaction with their staff and basic psychological needs support, both on 7-point scales.
Results: Staff were satisfied with the training (M = 4.43; SD = 0.45; range = 3.86–4.71). All learning measures demonstrated high post-training scores that were retained at implementation follow-up. Staff used motivational interviewing skills in practice and delivered the program at a client-centered level (≥6; M = 6.34; SD = 0.83; range = 3.75–7.80). Overall, clients perceived that staff supported their basic psychological needs (M = 6.55; SD = 0.64; range = 6.17–6.72) and reported high staff satisfaction scores (M = 6.88; SD = 0.33; range = 6–7).
Conclusion: The Small Steps for Big Changes training was successful, and fitness facility staff delivered a motivational interviewing-informed program. While not all staff operated at a client-centered level, clients perceived their basic psychological needs to be supported. Findings support the training for future scale-up sites. Community fitness staff represent a feasible resource through which to run evidence-based counseling programs.
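To make the evaluation structure concrete, the following minimal Python sketch maps the study's measures onto Kirkpatrick's four levels (Reaction, Learning, Behavior, Results) and shows how summary statistics of the kind reported (mean, SD, range) could be computed; the scores are placeholder values, not the study's data:

```python
from statistics import mean, stdev

# Kirkpatrick's four levels mapped to the measures described in this case study.
kirkpatrick_levels = {
    1: "Reaction – staff satisfaction with the training (5-point scale)",
    2: "Learning – program knowledge and motivational interviewing knowledge/skills",
    3: "Behavior – staff use of motivational interviewing with each client",
    4: "Results – client-perceived staff satisfaction and needs support (7-point scales)",
}

def summarize(scores):
    """Return the mean, SD and range summary used for each reported measure."""
    return {"M": round(mean(scores), 2),
            "SD": round(stdev(scores), 2),
            "range": (min(scores), max(scores))}

# Placeholder level-1 satisfaction ratings for the eight trained staff (not real data).
satisfaction = [4.1, 4.7, 4.4, 4.6, 4.0, 4.5, 4.3, 4.6]
print(kirkpatrick_levels[1])
print(summarize(satisfaction))
```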


2021 ◽ Vol 11 (2) ◽ pp. 61-65
Author(s): Muneswary a/p Saminathan ◽ Norhaida Mohd Suaib

Training evaluation can be defined as a way of measuring how well users learn and adapt to a system or software. Various methods for evaluating training on systems and software have been developed over the past few decades. A systematic literature review of training evaluation models was conducted to present different views on the usability aspects of the proposed approaches. This study provides a current systematic review of training evaluation for skill-based systems and software. The purpose of the review is to explore the existing research as a preliminary step towards choosing the right type of training evaluation model for skill-based E-learning systems and software. The specific gaps identified in the literature and findings show a lack of appropriate models, especially for the evaluation of skill-based E-learning systems.


2021 ◽ Vol 13 (20) ◽ pp. 11434
Author(s): Sara Molgora ◽ Chiara Fusar Poli ◽ Giancarlo Tamanza

This contribution illustrates the training evaluation system developed within the Master's Program in Family and Community Mediation at Università Cattolica del Sacro Cuore in Milan. It is an interim evaluation that focuses on the training process and treats collaboration with the trainees as fundamental. The distinctive feature of this work is the possibility of embedding research within the training process, following a logic of mutual enrichment in terms of both content and learning. The contribution describes the outcome and process evaluation system in detail, defining its perspective, objectives and implementation methodology. In particular, the outcome evaluation focuses on the distal and proximal outcomes of the training, while the process analysis focuses on the dynamics within the group of participants. Preliminary findings from 33 participants showed that the majority (19) have a "regular" profile, that is, they appreciate both the theoretical content and the practical activities proposed during the training program. As for the process, the findings showed the importance of reflexivity as a major factor of change. Although these findings refer to the specific experience of a particular group, so that further evaluations involving different training groups and other training processes are needed, this training evaluation system sheds light on both the topic and the context in which training is delivered. The integration of different points of view and several levels of analysis allows the researchers to examine the individual path of each participant in depth and to obtain feedback on the progress of the training group as a whole, and allows participants to better understand their work contexts through the use of reflexivity. This can guarantee sustainable growth at both the individual and interpersonal levels.


2021 ◽ Vol ahead-of-print (ahead-of-print)
Author(s): Aljawharah Alsalamah ◽ Carol Callinan

Purpose: Since its inception in 1959, a number of studies on Kirkpatrick's four-level training evaluation model have been published, either investigating it or applying it to evaluate the training process. The purpose of this bibliometric analysis is to reconsider the model, its utility and its effectiveness in meeting the need to evaluate training activities, and to explain why the model is still worth using even though later models are available.
Design/methodology/approach: This study adopts a "5Ws+1H" model (why, when, who, where, what and how); however, "when" and "how" are merged in the methodology. A total of 416 articles related to Kirkpatrick's model, published between 1959 and July 2020, were retrieved using Scopus.
Findings: The Kirkpatrick model continues to be useful, appropriate and applicable in a variety of contexts. It is adaptable to many training environments and achieves high performance in evaluating training. The overview of publications on the Kirkpatrick model shows that research using the model is an active and growing area. The model is used primarily in the evaluation of medical training, followed by computer science, business and the social sciences.
Originality/value: This paper presents a comprehensive bibliometric analysis to reconsider the model, its utility, its effectiveness in meeting the need to evaluate training activities, its importance in the field as measured by the growth in studies on the model, and its applications in various settings and contexts.
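As an illustration of the kind of publication-trend count underlying such a bibliometric overview, here is a minimal Python sketch that tallies articles per year from a hypothetical Scopus CSV export (the file name and the "Year" column are assumptions, not the authors' actual pipeline):

```python
import csv
from collections import Counter

def publications_per_year(path):
    """Count publications per year from a CSV export with a 'Year' column."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            year = row.get("Year", "").strip()
            if year.isdigit():
                counts[int(year)] += 1
    return counts

if __name__ == "__main__":
    counts = publications_per_year("scopus_export.csv")  # hypothetical export file
    for year in sorted(counts):
        print(year, counts[year])
```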

