code tracing
Recently Published Documents

TOTAL DOCUMENTS: 17 (FIVE YEARS: 8)
H-INDEX: 5 (FIVE YEARS: 1)
2021
Author(s): Zak Risha, Jordan Barria-Pineda, Kamil Akhuseyinoglu, Peter Brusilovsky

Author(s): Guohua Shen, Haijuan Wang, Zhiqiu Huang, YaoShen Yu, Kai Chen

Requirements-to-code tracing is an important but costly task that creates trace links from requirements to source code. These links help engineers reduce the time and complexity of software maintenance. Code comments also play an important role in maintenance tasks, yet few studies have examined their impact on requirements-to-code trace link creation in depth. Since different types of comments serve different purposes, how do different types of code comments improve trace link creation in different ways? We investigate whether code comments in general, and each type of comment in particular, can improve the quality of trace link creation. This paper presents a study evaluating the contribution of code comments, and of the different comment types, to the creation of trace links. More specifically, the paper first experimentally evaluates the impact of code comments on requirements-to-code trace link creation, and then divides the comments into six categories to evaluate the impact of each category. The results show that precision increases by an average of 15% (at the same recall) after adding code comments, even across different trace link creation techniques, and that Purpose comments contribute more to the tracing task than the other five types. This empirical study provides evidence that code comments are effective in trace link creation and that different types of comments contribute differently; Purpose comments in particular can be used to improve the accuracy of requirements-to-code trace link creation.
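The abstract does not name the trace link creation techniques it evaluates; information-retrieval methods such as vector space models are a common choice for this task. As a rough, hypothetical sketch (not the paper's tooling, and with invented requirement and code snippets), the following term-overlap cosine similarity shows the mechanism by which a Purpose comment can raise a requirement-to-code similarity score:

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

def cosine(a, b):
    # Cosine similarity between two token-count vectors
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical requirement and code artifact
requirement = "export the report as a PDF document"
code_only = "def gen(d): return render(d)"
with_comment = code_only + "  # Purpose: export the report to a PDF file"

req_vec = Counter(tokenize(requirement))
score_without = cosine(req_vec, Counter(tokenize(code_only)))
score_with = cosine(req_vec, Counter(tokenize(with_comment)))
```

Here the identifiers in the code share no vocabulary with the requirement, so the similarity is zero without the comment and positive with it; this is the intuition behind the reported precision gain, not a reproduction of the study's method.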


2020, Vol 10 (20), pp. 7044
Author(s): Robert Pinter, Sanja Maravić Čisar, Attila Kovari, Lenke Major, Petar Čisar, ...

Computer adaptive testing (CAT) enables individualized tests and more accurate determination of knowledge level. In CAT, each test participant receives a uniquely tailored set of questions: the number of questions and the difficulty of the next one depend on whether the respondent’s previous answer was correct or incorrect. For CAT to work properly, it needs questions with suitably defined difficulty levels. In this work, the authors compare question-difficulty estimates given by experts (teachers) with those given by students. Bachelor students of informatics in their first, second, and third year of studies at Subotica Tech—College of Applied Sciences answered 44 programming questions in a test and estimated the difficulty of each question. Analysis of the correct answers shows that the basic programming knowledge taught in the first year of study evolves very slowly among senior students. The comparison of difficulty estimates shows that senior students have a better understanding of basic programming tasks; their estimates of difficulty are therefore closer to those given by the experts.
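The abstract describes the CAT feedback loop but not its selection algorithm; production CAT systems typically use item response theory. A minimal staircase sketch, assuming hypothetical integer difficulty levels 1 through 5, illustrates only the core loop the abstract states: raise difficulty after a correct answer, lower it after an incorrect one.

```python
def next_difficulty(current, correct, step=1, lo=1, hi=5):
    # Move one step up on a correct answer, one step down on an
    # incorrect one, clamped to the [lo, hi] difficulty range
    if correct:
        return min(hi, current + step)
    return max(lo, current - step)

# Simulate a short adaptive test starting at medium difficulty
level = 3
for answered_correctly in [True, True, False, True]:
    level = next_difficulty(level, answered_correctly)
```

After this simulated sequence the level goes 3 → 4 → 5 → 4 → 5. Such a scheme only works if each question's difficulty label is trustworthy, which is exactly why the study compares expert and student difficulty estimates.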

