A tool for executable code complexity visualization

Author(s):  
Ana Udovicic ◽  
Ratko Grbic ◽  
Dragan Samardzija ◽  
Istvan Papp
Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2338
Author(s):  
Júlio Medeiros ◽  
Ricardo Couceiro ◽  
Gonçalo Duarte ◽  
João Durães ◽  
João Castelhano ◽  
...  

An emergent research area in software engineering and software reliability is the use of wearable biosensors to monitor the cognitive state of software developers during software development tasks. The goal is to gather physiologic manifestations that can be linked to error-prone scenarios related to programmers’ cognitive states. In this paper we investigate whether electroencephalography (EEG) can be applied to accurately identify programmers’ cognitive load associated with the comprehension of code with different complexity levels. To this end, a controlled experiment involving 26 programmers was carried out. We found that features related to Theta, Alpha, and Beta brain waves have the highest discriminative power, allowing the identification of code lines demanding higher mental effort. The EEG results reveal evidence of mental effort saturation as code complexity increases. Conversely, the classic software complexity metrics do not accurately represent the mental effort involved in code comprehension. Finally, EEG is proposed as a reference: in particular, combining EEG with eye-tracking information allows accurate identification of the code lines that correspond to peaks of cognitive load, providing a reference for future evaluation of the spatial and temporal accuracy of programmers’ cognitive states monitored with wearable devices compatible with software development activities.
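The band-power features the abstract refers to can be sketched as follows. This is a minimal illustration, assuming conventional Theta (4-8 Hz), Alpha (8-13 Hz), and Beta (13-30 Hz) band edges and a simple FFT periodogram; the authors' actual preprocessing and feature pipeline is not described here.

```python
import numpy as np

# Illustrative band definitions (Hz); assumed, not taken from the paper.
BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_powers(signal, fs):
    """Return relative power per band for one EEG channel sampled at fs Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2               # unnormalized periodogram
    total = psd[(freqs >= 1.0) & (freqs <= 40.0)].sum()  # broadband reference power
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = psd[mask].sum() / total
    return powers

# Synthetic check: a signal dominated by a 10 Hz oscillation should place
# most of its relative power in the Alpha band.
fs = 256
t = np.arange(fs * 4) / fs
eeg = np.sin(2 * np.pi * 10 * t) \
    + 0.1 * np.random.default_rng(0).standard_normal(len(t))
p = band_powers(eeg, fs)
```

Features like these, computed per code line (e.g. aligned via eye tracking), would then feed a classifier of high- versus low-load comprehension episodes.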


Author(s):  
JUAN CARLOS ESTEVA ◽  
ROBERT G. REYNOLDS

The goal of the Partial Metrics Project is the automatic acquisition of planning knowledge from target code modules in a program library. In the current prototype the system is given a target code module written in Ada as input, and the result is a sequence of generalized transformations that can be used to design a class of related modules. This is accomplished by embedding techniques from Artificial Intelligence into the traditional structure of a compiler. The compiler performs compilation in reverse, starting with detailed code and producing an abstract description of it. The principal task facing the compiler is to find a decomposition of the target code into a collection of syntactic components that are nearly decomposable. Here, nearly decomposable corresponds to the need for each code segment to be nearly independent syntactically from the others. The most independent segments are then the target of the code generalization process. This process can be described as a form of chunking and is implemented here in terms of explanation-based learning. The problem of producing nearly decomposable code components becomes difficult when the target code module is not well structured. The task facing users of the system is to identify, from a library of modules, the well-structured code modules that are suitable for input to the system. In this paper we describe the use of inductive learning techniques, namely variations on Quinlan's ID3 system, capable of producing a decision tree that can be used to conceptually distinguish between well and poorly structured code. In order to accomplish that task, a set of high-level concepts used by software engineers to characterize structurally understandable code was identified. Next, each of these concepts was operationalized in terms of code complexity metrics that can be easily calculated during the compilation process.
These metrics are related to various aspects of the program structure, including its coupling, cohesion, data structure, control structure, and documentation. Each candidate module was then described in terms of a collection of such metrics. Using a training set of positive and negative examples of well-structured modules, each described in terms of the selected metrics, a decision tree was produced and used to recognize other well-structured modules in terms of their metric properties. This approach was applied to modules from existing software libraries in a variety of domains such as database, editor, graphics, window, data processing, FFT, and computer vision software. The results achieved by the system were then benchmarked against the performance of experienced programmers at recognizing well-structured code. In a test case involving 120 modules, the system was able to discriminate between poorly and well-structured code 99% of the time, compared to an 80% average for the 52 programmers sampled. The results suggest that such an inductive system can serve as a practical mechanism for effectively identifying reusable code modules in terms of their structural properties.
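The core of the ID3-style induction described above is an information-gain split over discretized metric values. The sketch below illustrates that single step on an invented toy table; the module descriptions, metric discretization, and labels are hypothetical examples, not data from the study.

```python
import math

# Toy training set: (coupling, cohesion, well_structured?).
# Values discretized to "low"/"high"; entirely invented for illustration.
modules = [
    ("low",  "high", True),
    ("low",  "high", True),
    ("low",  "low",  True),
    ("high", "low",  False),
    ("high", "high", False),
    ("high", "low",  False),
]

def entropy(rows):
    """Shannon entropy of the boolean label in the last column."""
    n = len(rows)
    pos = sum(1 for *_, label in rows if label)
    out = 0.0
    for count in (pos, n - pos):
        if count:
            p = count / n
            out -= p * math.log2(p)
    return out

def info_gain(rows, attr_index):
    """Entropy reduction from splitting on the attribute at attr_index."""
    base = entropy(rows)
    remainder = 0.0
    for v in {row[attr_index] for row in rows}:
        subset = [row for row in rows if row[attr_index] == v]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

gain_coupling = info_gain(modules, 0)  # here coupling separates the classes perfectly
gain_cohesion = info_gain(modules, 1)
best = "coupling" if gain_coupling > gain_cohesion else "cohesion"
```

ID3 applies this greedily: it splits on the highest-gain metric, then recurses on each subset until the leaves are pure, yielding the decision tree used to screen candidate modules.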


IEEE Software ◽  
1990 ◽  
Vol 7 (2) ◽  
pp. 36-44 ◽  
Author(s):  
S. Henry ◽  
C. Selig
