Automated Assessment of Short One-Line Free-Text Responses in Computer Science

Author(s):  
Navjeet Kaur ◽  
Kiran Jyoti

Assessment is used to evaluate the learner’s knowledge of the concepts learnt. Evaluation through objective testing is common in all evaluation systems, where Multiple Choice Questions, Fill in the Blanks, Matching, etc. are used. Objective testing alone is not sufficient to verify all the concepts the learner has grasped; thus, computer-assisted assessment of short text answers has been developed. Here we present an approach to automatically assess short answers in computer science. In this paper we define a set of evaluation criteria that covers all the relevant aspects of a short-text evaluation system.

Author(s):  
Navjeet Kaur ◽  
Kiran Jyoti

Assessment is an important activity in any educational process, used to evaluate the learner’s knowledge of the concepts learnt. Evaluation through objective testing is common in all evaluation systems, where Multiple Choice Questions, Fill in the Blanks, Matching, etc. are used. Objective testing alone is not sufficient to verify all the concepts the learner has grasped; thus, computer-assisted assessment of short text answers has been developed. Here we present a technique that also considers grammatical errors during the automated evaluation of one-line sentences. In this paper we define a set of evaluation criteria that covers all the relevant aspects of an essay assessment system and discuss how this technique finds syntactical errors during the evaluation of student responses.
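Neither abstract specifies the matching algorithm used. Purely as a minimal illustration of criterion-based short-answer scoring, a bag-of-words overlap against a model answer might look like the sketch below; the tokenizer, the threshold, and the function names are assumptions for illustration, not the authors' method.

```python
import re

def tokenize(text):
    # Lowercase and split on any non-alphanumeric run.
    return [t for t in re.split(r"\W+", text.lower()) if t]

def score_short_answer(student, model, threshold=0.6):
    """Return a 0-1 overlap score and a pass/fail flag.

    A crude bag-of-words measure: the fraction of model-answer
    tokens that also appear in the student's answer.
    """
    student_tokens = set(tokenize(student))
    model_tokens = set(tokenize(model))
    if not model_tokens:
        return 0.0, False
    overlap = len(student_tokens & model_tokens) / len(model_tokens)
    return overlap, overlap >= threshold

score, passed = score_short_answer(
    "A stack is a LIFO data structure",
    "A stack is a LIFO structure",
)
# score == 1.0, passed is True: every model token appears in the answer.
```

A real system would add synonym handling and, as the second abstract suggests, a grammatical check of the student sentence before scoring.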


Author(s):  
Gennaro Costagliola ◽  
Filomena Ferrucci ◽  
Vittorio Fuccella

Online Testing, also known as Computer Assisted Assessment (CAA), is a sector of e-learning aimed at assessing learners’ knowledge through e-learning means. In recent years, the means of knowledge evaluation have evolved to satisfy the need to evaluate large numbers of learners within strict time limits: objective tests, which can be assessed more rapidly, have gained greater weight in the determination of learners’ results. The Multiple Choice question type is extremely popular in objective tests since, among other advantages, a large number of tests based on it can easily be corrected automatically. These items are composed of a stem and a list of options. The stem is the text that states the question. The only correct answer is called the key, whilst the incorrect answers are called distractors (Woodford & Bancroft, 2005).
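The item structure described above (stem, key, distractors) maps naturally onto a small data type. The sketch below is illustrative only; the class and method names are assumptions, not taken from the cited paper.

```python
from dataclasses import dataclass
import random

@dataclass
class MultipleChoiceItem:
    stem: str          # the text that states the question
    key: str           # the only correct answer
    distractors: list  # the incorrect answers

    def options(self, rng=random):
        # Present the key and distractors together, shuffled.
        opts = [self.key] + list(self.distractors)
        rng.shuffle(opts)
        return opts

    def grade(self, response):
        # Automatic correction: exact match against the key.
        return response == self.key

item = MultipleChoiceItem(
    stem="Which data structure is FIFO?",
    key="Queue",
    distractors=["Stack", "Tree", "Heap"],
)
```

The `grade` method shows why this item type is so cheap to mark automatically: correction reduces to an equality check against the key.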


Author(s):  
Clara-Sophie Schwarz ◽  
Nikolai Münch ◽  
Johannes Müller-Salo ◽  
Stefan Kramer ◽  
Cleo Walz ◽  
...  

Abstract: Working with the dead is a very specific kind of work. Although a dignified handling of corpses is demanded both by legislators and by the general public, the legal status of the corpse is disputed, and it is not obvious what a dignified handling of the deceased should consist of. In our hypothesis-generating pilot study, we asked which concrete considerations are involved in the daily practice of forensic specialists. We used an online questionnaire (invitations via e-mail) consisting of single-choice, multiple-choice, and free-text questions. The answers to single- or multiple-choice questions were displayed in pivot tables. The data were thus summarized, viewed, descriptively analyzed, and displayed together with the free-text answers. 84.54% of the physicians and 100% of the autopsy assistants stated that considerations concerning the dignity of the deceased should play a role in daily autopsy practice. 45.87% stated that the conditions surrounding the autopsy need improvement to be ethically suitable. The analysis of the survey’s results was based on Robert Audi’s ethics, according to which three aspects need to be illuminated in order to evaluate a person’s conduct morally: the actions, the motivation, and the way in which the actions are carried out. This systematization helps to identify the need for improvement and to make the vague demands for a dignified handling of corpses more concrete.


10.28945/4491 ◽  
2020 ◽  
Vol 19 ◽  
pp. 001-029
Author(s):  
Rosalina Babo ◽  
Lurdes V. Babo ◽  
Jarkko T Suhonen ◽  
Markku Tukiainen

Aim/Purpose: The aim of this study is to understand students’ opinions and perceptions of e-assessment when the assessment process was changed from the traditional computer-assisted method to a multiple-choice Moodle-based method. Background: In order to implement continuous assessment for a large number of students, several shifts are necessary, which implies as many different tests as the number of shifts required. Consequently, it is difficult to ensure homogeneity across the different tests, and a huge amount of grading time is needed. These problems with traditional assessment based on computer-assisted tests led to a redesign of the assessment, resulting in the use of multiple-choice Moodle tests. Methodology: A longitudinal, concurrent, mixed-method study was implemented over a five-year period. A survey was developed and completed by 815 undergraduate students who experienced the electronic multiple-choice question (eMCQ) assessment in the courses of the IS department. Qualitative analyses included open-ended survey responses and interviews with repeating first-year students. Contribution: This study provides a reflection tool on how to incorporate frequent moments of assessment in courses with a high number of students without overloading teachers. The research analysed the efficiency of assessing non-theoretical topics using eMCQ, while ensuring the homogeneity of assessment tests; this approach needs to be complemented with other assessment methods in order to ensure that students develop and acquire the expected skills and competencies. Findings: The students involved in the study appreciate the online multiple-choice quiz assessment method and perceive it as fair, although their preference for the assessment method varied over the years.
These changes in perception may be related to the improvement of the question bank and the categorisation of questions according to difficulty level, which led to the nullification of the ‘luck factor’. Another major finding is that although the online multiple-choice quizzes are used with success in the assessment of theoretical topics, the same does not hold for practical topics. Therefore, this assessment needs to be complemented with other methods in order to achieve the expected learning outcomes. Recommendations for Practitioners: In order to be able to evaluate the same expected learning outcomes in practical topics, particularly in technology and information systems subjects, the evaluator should complement the online multiple-choice quiz assessment with other approaches, such as a PBL method, homework assignments, and/or other tasks performed during the semester. Recommendation for Researchers: This study explores e-assessment with online multiple-choice quizzes in higher education. It provides a survey that can be applied in other institutions that are also using online multiple-choice quizzes to assess non-theoretical topics. In order to better understand students’ opinions on the development of skills and competencies with online multiple-choice quizzes on the one hand and with classical computer-assisted assessment on the other, it would be necessary to add questions concerning these aspects. It would then be interesting to compare the findings of this study with the results from other institutions. Impact on Society: The increasing number of students in higher education has led to increased use of e-assessment activities, since these can provide a fast and efficient way to assess a high number of students. Therefore, this research provides meaningful insight into stakeholders’ perceptions of online multiple-choice quizzes on practical topics.
Future Research: A future study could collect the opinions of a single group of students on two tests: one using online multiple-choice quizzes and the other a classical computer-assisted assessment method. A natural extension of the present study is a comparative analysis of the grades obtained by students who performed one or the other type of assessment (online multiple-choice quizzes vs. classical computer-assisted assessment).


1978 ◽  
Vol 6 (3) ◽  
pp. 219-228 ◽  
Author(s):  
Glenn F. Cartwright ◽  
Jeffrey L. Derevensky

The study investigated the feasibility of interactive computer-assisted testing (ICAT) as an effective instructional method and its effects on attitudes towards computer-assisted instruction (CAI). In addition, the final performance scores were examined. Five computer quizzes consisting of twenty randomly drawn multiple-choice questions were individually administered on ten Teletype terminals. A feedback mechanism was incorporated in the ICAT program and provided detailed explanations of each item. The resulting low correlation between final performance and ICAT scores is discussed with reference to the feasibility, improvement and implementation of interactive computer-assisted testing.


10.28945/2304 ◽  
2015 ◽  
Vol 14 ◽  
pp. 237-254 ◽  
Author(s):  
John English ◽  
Tammy English

In this paper we discuss the use of automated assessment in a variety of computer science courses that have been taught at Israel Academic College by the authors. The course assignments were assessed entirely automatically using Checkpoint, a web-based automated assessment framework. The assignments all used free-text questions (where the students type in their own answers). Students were allowed to correct errors based on feedback provided by the system and resubmit their answers. A total of 141 students were surveyed to assess their opinions of this approach, and we analysed their responses. Analysis of the questionnaire showed a low correlation between questions, indicating the statistical independence of the individual questions. As a whole, student feedback on using Checkpoint was very positive, emphasizing the benefits of multiple attempts, impartial marking, and a quick turnaround time for submissions. Many students said that Checkpoint gave them confidence in learning and motivation to practise. Students also said that the detailed feedback that Checkpoint generated when their programs failed helped them understand their mistakes and how to correct them.
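Checkpoint’s internals are not described in this abstract. Purely as an illustration of the feedback-and-resubmit pattern the students valued, an auto-grader might run instructor-defined test cases against a submission and return each failure as feedback; the function names and test-case format below are assumptions, not Checkpoint’s actual design.

```python
def grade_submission(func, test_cases):
    """Run instructor-defined test cases against a student's function.

    Returns (passed_count, feedback), where feedback lists each failing
    case so the student can correct the error and resubmit.
    """
    feedback = []
    passed = 0
    for args, expected in test_cases:
        try:
            result = func(*args)
        except Exception as exc:
            feedback.append(f"{args!r}: raised {exc!r}, expected {expected!r}")
            continue
        if result == expected:
            passed += 1
        else:
            feedback.append(f"{args!r}: got {result!r}, expected {expected!r}")
    return passed, feedback

# A buggy student answer: should return the maximum of two numbers.
def student_max(a, b):
    return a if a > b else a   # bug: always returns a

cases = [((1, 2), 2), ((3, 1), 3)]
n_passed, notes = grade_submission(student_max, cases)
# n_passed == 1; notes explains the failing case so the student can resubmit.
```

The detailed per-case feedback is what lets students understand their mistakes and correct them before resubmitting, which matches the benefits the surveyed students reported.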


Author(s):  
Francisco Zampirolli ◽  
Valério Batista ◽  
Carla Rodriguez ◽  
Rafaela Vilela da Rocha ◽  
Denise Goya
