Formative testing
Recently Published Documents

TOTAL DOCUMENTS: 21 (Five years: 5)
H-INDEX: 4 (Five years: 1)

Author(s): Elis Safitri, Usep Kustiawan, Suryadi Suryadi

Abstract: This research and development study was motivated by the underutilization of learning facilities and infrastructure such as game media. Its aim was to produce a busy bag educational toy for the fine motor skills of children aged 3-4 years. The Borg and Gall research procedure was selected, adjusted to field conditions, and carried out in six steps. Formative testing yielded a rating of 92.5 percent from early childhood material experts and 85 percent from early childhood game experts. Small-group trials produced an overall user rating of 95.3 percent, with 96 percent for ease of use, 92 percent for attractiveness, and 98 percent for safety. The results showed that this busy bag educational game is very easy to use, interesting, and safe, so it is suitable for stimulating the fine motor skills of children aged 3-4 years.


2021, Vol 8 (4)
Author(s): Nadia Nouri

While most previous studies attribute academic achievement to external and internal motivation, the present study associates academic achievement with the language proficiency of EFL learners. Students' underachievement on summative tests might be explained by their lack of English language proficiency. Students may have gaps in the language that have accumulated since elementary or high school; these gaps become fossilized in students' minds if not addressed properly. In this regard, this study presents a testing method that leads to a remedial teaching program at the undergraduate level of English studies at a Moroccan EFL university.


2020, pp. 147572572097120
Author(s): Natalie Enders, Robert Gaschler, Veit Kubik

Online quizzes are an economical and objective method for formative assessment in universities. However, closed questions have been criticized for promoting shallow learning and often resulting in poor learning outcomes. These disadvantages can be overcome by embedding closed questions in effective instructional designs involving feedback. In the present field study, a final sample of N = 496 students completed the same online quiz, consisting of 60 true–false statements on the biological bases of psychology, in two sessions. In order to enhance the benefit of formative testing on students' test achievement in Session 2, students received elaborate feedback (i.e., explanations of why an answer was correct or incorrect) for half of their answers in Session 1 and corrective feedback (i.e., an indication of correctness only) for the other half. The results showed that students scored higher in Session 2 if elaborate feedback had been provided in Session 1 than if corrective feedback had been provided. More specifically, students profited more from elaborate feedback on incorrect answers in Session 1 than from feedback on correct answers. As a practical recommendation, self-administered formative tests in a closed-question format should at least provide explanations of why students' answers are incorrect.
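To make the two feedback conditions concrete, the minimal Python sketch below (with an invented item and explanation, not material from the study) shows how a true–false item could return either corrective feedback (verdict only) or elaborate feedback (verdict plus an explanation).

```python
# Hypothetical sketch (not the authors' quiz materials): a minimal true-false item
# that returns either corrective feedback (verdict only) or elaborate feedback
# (verdict plus an explanation), mirroring the two conditions in the study.
from dataclasses import dataclass

@dataclass
class TrueFalseItem:
    statement: str    # e.g., a statement on the biological bases of psychology
    correct: bool     # whether the statement is true
    explanation: str  # why the statement is true or false

    def feedback(self, answer: bool, elaborate: bool) -> str:
        verdict = "Correct." if answer == self.correct else "Incorrect."
        return f"{verdict} {self.explanation}" if elaborate else verdict

# Invented example item.
item = TrueFalseItem(
    statement="The hippocampus is essential for forming new episodic memories.",
    correct=True,
    explanation="Hippocampal damage impairs the formation of new episodic memories.",
)
print(item.feedback(answer=False, elaborate=True))   # elaborate feedback condition
print(item.feedback(answer=False, elaborate=False))  # corrective feedback condition
```

In the study's terms, the `elaborate=True` branch corresponds to the condition that was associated with higher scores in Session 2, particularly for answers that had been incorrect.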


Author(s): Mary Beth Privitera

The aim of this paper is to share experiences in answering the most basic of questions, fundamental to human factors studies: who are our users? In the social science and human factors literature, the bounding of user groups and the delineation of user characteristics are key to successful research. Utilizing this information, coupled with industry experience, a scientific approach to fully describing and justifying user groups is communicated. The process begins with determining individual capabilities, such as demographics and the perceptual, cognitive, and physical capabilities of an individual user. It then considers the conditions that may affect those capabilities. Finally, it considers dynamic influencers such as beliefs, attitudes, opinions, emotions, and unique situations and events, which can further affect use behavior. Taking this methodical approach throughout the design process, starting with formative testing, provides further justification for user group determination. This diligent approach to determining user groups is paramount to a successful human factors validation study and fundamental to a robust HFE process.


Author(s): Mark Gierl, Okan Bulut, Xinxin Zhang

Computerized testing provides many benefits to support formative assessment in higher education. However, the advent of computerized formative testing has raised daunting new challenges, particularly in the areas of item development and test construction. Large numbers of items are required because they are continuously administered to students. Automatic item generation is a relatively new but rapidly evolving assessment technology that may be used to address this challenge. Once the items are generated, tests must be assembled that measure the same content areas with the same difficulty level using different sets of items. Automated test assembly is an assessment technology that may be used to address this challenge. To date, the use of automated methods for item development and test construction has been limited. The purpose of this chapter is to address these limitations by describing and illustrating how recent advances in the technology of assessment can be used to permit computerized formative testing to promote personalized learning.
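As a rough illustration of the two technologies named above, the hypothetical Python sketch below generates items from a single item template and then assembles a short form by matching a target difficulty. The template, variable values, and difficulty values are all invented for illustration and are not taken from the chapter.

```python
# Illustrative sketch only (not the authors' generator): template-based automatic
# item generation followed by a naive automated test assembly step.
import itertools
import random

# Hypothetical item model: a stem template with two variable slots.
TEMPLATE = "A patient presents with {symptom} after taking {drug}. What is the most likely cause?"
SYMPTOMS = ["a dry cough", "muscle pain", "dizziness"]
DRUGS = ["an ACE inhibitor", "a statin", "a beta blocker"]

def generate_items():
    """Instantiate the item model for every combination of variable values."""
    items = []
    for symptom, drug in itertools.product(SYMPTOMS, DRUGS):
        items.append({
            "stem": TEMPLATE.format(symptom=symptom, drug=drug),
            # Placeholder difficulty; in practice this would come from a psychometric model.
            "difficulty": round(random.uniform(-1.0, 1.0), 2),
        })
    return items

def assemble_test(items, target_difficulty, length):
    """Naive automated assembly: pick the items closest to the target difficulty."""
    return sorted(items, key=lambda i: abs(i["difficulty"] - target_difficulty))[:length]

bank = generate_items()
form = assemble_test(bank, target_difficulty=0.0, length=4)
for item in form:
    print(f'{item["difficulty"]:+.2f}  {item["stem"]}')
```

In a real system the assembly step would also enforce content constraints so that parallel forms cover the same content areas, as the chapter notes; the sketch keeps only the difficulty-matching idea.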


2017, Vol 42 (1), pp. 42-57
Author(s): Mark J. Gierl, Hollis Lai

Computerized testing provides many benefits to support formative assessment. However, the advent of computerized formative testing has also raised formidable new challenges, particularly in the area of item development. Large numbers of diverse, high-quality test items are required because items are continuously administered to students. Hence, hundreds of items are needed to develop the banks necessary for computerized formative testing. One promising approach that may be used to address this test development challenge is automatic item generation. Automatic item generation is a relatively new but rapidly evolving research area where cognitive and psychometric modeling practices are used to produce items with the aid of computer technology. The purpose of this study is to describe a new method for generating both the items and the rationales required to solve the items to produce the required feedback for computerized formative testing. The method for rationale generation is demonstrated and evaluated in the medical education domain.
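In the same spirit, the hypothetical sketch below pairs each generated item with a rationale drawn from the same content model, which is the kind of feedback the method described above aims to produce automatically. The mapping and wording are invented for illustration and are not from the study.

```python
# Hypothetical sketch of generating an item together with the rationale used as
# formative feedback; the drug-to-rationale mapping below is invented and is not
# taken from the study.
RATIONALES = {
    "an ACE inhibitor": "ACE inhibitors commonly cause a dry cough through bradykinin accumulation.",
    "a statin": "Statins are a well-known cause of drug-induced muscle pain (myopathy).",
}

def generate_item_with_rationale(drug: str) -> dict:
    """Produce a stem and the rationale that would be shown as feedback."""
    stem = f"Which adverse effect is most commonly associated with {drug}?"
    return {"stem": stem, "rationale": RATIONALES[drug]}

for drug in RATIONALES:
    item = generate_item_with_rationale(drug)
    print(item["stem"])
    print("  Feedback:", item["rationale"])
```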


2017, Vol 41 (1), pp. 110-119
Author(s): Jonathan D. Kibble

The goal of this review is to highlight key elements underpinning excellent high-stakes summative assessment. This guide is primarily aimed at faculty members with the responsibility of assigning student grades and is intended to be a practical tool to help throughout the process of planning, developing, and deploying tests as well as monitoring their effectiveness. After a brief overview of the criteria for high-quality assessment, the guide runs through best practices for aligning assessment with learning outcomes and compares common testing modalities. Next, the guide discusses the kind of validity evidence needed to support defensible grading of student performance. This review concentrates on how to measure the outcome of student learning; other reviews in this series will expand on the related concepts of formative testing and how to leverage testing for learning.

