Promoting Code Quality via Automated Feedback on Student Submissions

Author(s):  
Oscar Karnalim ◽  
Simon

2016 ◽
Author(s):  
Jill Burstein ◽  
Beata Beigman Klebanov ◽  
Norbert Elliot ◽  
Hillary Molloy

Author(s):  
Pierpaolo Vittorini ◽  
Stefano Menini ◽  
Sara Tonelli

Abstract: Massive open online courses (MOOCs) provide hundreds of students with teaching materials, assessment tools, and collaborative instruments. The assessment activity, in particular, is demanding in terms of both time and effort; thus, artificial intelligence can help reduce the time and effort required. This paper reports on a system and related experiments aimed at improving both the performance and quality of formative and summative assessments in specific data science courses. The system automatically grades assignments composed of R commands commented with short sentences written in natural language. In our opinion, the use of the system can (i) shorten correction times and reduce the possibility of errors and (ii) support students while they solve the exercises assigned during the course through automated feedback. To investigate these aims, an ad hoc experiment was conducted in three courses covering the statistical analysis of health data. Our evaluation demonstrated that automated grading has an acceptable correlation with human grading. Furthermore, the students who used the tool did not report usability issues, and those who used it for more than half of the exercises obtained (on average) higher grades in the exam. Finally, the use of the system reduced the correction time and assisted the professor in identifying correction errors.
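The abstract does not detail how submissions are scored or how agreement with human grading was measured. The following is only a minimal sketch of the general idea, assuming a hypothetical rubric that scores each exercise by comparing the submitted R command and its natural-language comment against a reference solution with simple string similarity, and then checks agreement with human grades via Pearson correlation. All names, weights, and the example grade values are invented for illustration, not taken from the paper.

```python
import difflib
from statistics import correlation  # Python >= 3.10

def score_exercise(submitted_cmd, submitted_comment, ref_cmd, ref_comment):
    """Hypothetical rubric: the R command dominates, the comment refines the score.

    A real grader would evaluate the R command by executing it and would use
    NLP to judge the comment; here both are approximated by string similarity.
    """
    cmd_sim = difflib.SequenceMatcher(None, submitted_cmd.strip(), ref_cmd.strip()).ratio()
    txt_sim = difflib.SequenceMatcher(None, submitted_comment.lower(), ref_comment.lower()).ratio()
    return 0.7 * cmd_sim + 0.3 * txt_sim  # weights are assumptions

# Toy agreement check between automated and human grades (arbitrary example values).
auto  = [27.1, 18.4, 30.0, 22.5, 12.0]
human = [26.0, 20.0, 29.0, 23.0, 14.0]
print(f"Pearson r = {correlation(auto, human):.2f}")
```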


ZDM ◽  
2021 ◽  
Author(s):  
Sebastian Rezat

Abstract: One of the most prevalent features of digital mathematics textbooks, compared to traditional ones, is the provision of automated feedback on students' solutions. Since feedback is regarded as an important factor that influences learning, this is often seen as an affordance of digital mathematics textbooks. While there is a large body of mainly quantitative research on the effectiveness of feedback in general, very little is known about how feedback actually affects students' individual, content-specific learning processes and conceptual development. A theoretical framework based on Rabardel's theory of the instrument and Vergnaud's theory of conceptual fields is developed to study qualitatively how feedback actually functions in the learning process. This framework was applied in a case study of two elementary school students' learning processes when working on a probability task from a German 3rd-grade digital textbook. The analysis allowed a detailed reconstruction of how the students made sense of the information provided by the feedback and adjusted their behavior accordingly. This in-depth analysis revealed that feedback does not necessarily foster conceptual development in the desired way, and that a correct solution does not always coincide with conceptual understanding. The results point to some obstacles that students face when working individually on tasks from digital mathematics textbooks with automated feedback, and indicate that feedback needs to be developed in design-based research cycles in order to yield the desired effects.


2021 ◽  
Vol 11 (14) ◽  
pp. 6613
Author(s):  
Young-Bin Jo ◽  
Jihyun Lee ◽  
Cheol-Jung Yoo

Appropriate reliance on code clones significantly reduces development costs and hastens the development process. Reckless cloning, in contrast, reduces code quality and ultimately adds cost and time. To avoid this scenario, many researchers have proposed methods for clone detection and refactoring. Existing techniques, however, only reliably detect clones that are either entirely identical or differ only in modified identifiers, and they do not provide clone-type information. This paper proposes a two-pass clone classification technique that uses a tree-based convolutional neural network (TBCNN) to detect multiple clone types, including clones that are not wholly identical or to which only small changes have been made, and to automatically classify them by type. Our method was validated with BigCloneBench, a well-known and widely used dataset of cloned code. Our experimental results show that our technique detected clones with average recall and precision of 96%, and classified clone types with average recall and precision of 78%.
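The abstract does not specify the TBCNN architecture or the two-pass scheme in detail. The sketch below is a deliberately simplified illustration of the underlying idea, assuming a tree convolution over AST node types, max pooling over the tree, and a classifier over a pair of code fragments. It uses Python's ast module and PyTorch, whereas the paper targets Java clones from BigCloneBench; layer sizes, the number of clone types, and the omission of identifier and token information are all simplifications, not the authors' design.

```python
import ast
import torch
import torch.nn as nn

class TreeConv(nn.Module):
    """One tree-convolution step: combines a node's vector with its children's."""
    def __init__(self, dim):
        super().__init__()
        self.w_self = nn.Linear(dim, dim)
        self.w_child = nn.Linear(dim, dim)

    def forward(self, node_vec, child_vecs):
        out = self.w_self(node_vec)
        if child_vecs is not None:
            out = out + self.w_child(child_vecs).mean(dim=0)
        return torch.tanh(out)

class TinyTBCNN(nn.Module):
    """Encodes two code fragments with a shared tree convolution, max-pools
    over each tree, and classifies the pair's clone type from the pooled vectors."""
    def __init__(self, n_node_types, dim=64, n_clone_types=4):
        super().__init__()
        self.embed = nn.Embedding(n_node_types, dim)
        self.conv = TreeConv(dim)
        self.head = nn.Linear(2 * dim, n_clone_types)

    def _vec(self, node, type_index):
        idx = torch.tensor([type_index[type(node).__name__]])
        return self.embed(idx).squeeze(0)

    def _convolve(self, node, type_index):
        children = list(ast.iter_child_nodes(node))
        child_vecs = (torch.stack([self._vec(c, type_index) for c in children])
                      if children else None)
        outputs = [self.conv(self._vec(node, type_index), child_vecs)]
        for c in children:
            outputs.extend(self._convolve(c, type_index))
        return outputs

    def encode(self, tree, type_index):
        return torch.stack(self._convolve(tree, type_index)).max(dim=0).values

    def forward(self, tree_a, tree_b, type_index):
        pair = torch.cat([self.encode(tree_a, type_index), self.encode(tree_b, type_index)])
        return self.head(pair)

# Usage on a toy pair; embedding only node types cannot tell Type-1 from Type-2
# clones, which is why real systems also encode tokens/identifiers.
code_a = "def add(a, b):\n    return a + b"
code_b = "def total(x, y):\n    return x + y"   # renamed identifiers, same structure
tree_a, tree_b = ast.parse(code_a), ast.parse(code_b)
types = {type(n).__name__ for t in (tree_a, tree_b) for n in ast.walk(t)}
type_index = {name: i for i, name in enumerate(sorted(types))}
model = TinyTBCNN(n_node_types=len(type_index))
logits = model(tree_a, tree_b, type_index)
print(logits.shape)  # torch.Size([4]) -> scores over 4 hypothetical clone types
```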

