Unsupervised learning of general-purpose embeddings for code changes

2021 ◽  
Author(s):  
Mikhail Pravilov ◽  
Egor Bogomolov ◽  
Yaroslav Golubev ◽  
Timofey Bryksin

Author(s): 
Stefan Höppner ◽  
Timo Kehrer ◽  
Matthias Tichy

Abstract. Model transformations are among the key concepts of model-driven engineering (MDE), and dedicated model transformation languages (MTLs) emerged with the popularity of the MDE paradigm about 15 to 20 years ago. MTLs claim to ease the development of model transformations by abstracting from recurring transformation aspects and hiding complex semantics behind a simple and intuitive syntax. Nonetheless, MTLs are rarely adopted in practice, there is still no empirical evidence for the claim of easier development, and the argument of abstraction deserves a fresh look in the light of modern general-purpose languages (GPLs), which have undergone a significant evolution in the last two decades. In this paper, we report on a study in which we compare the complexity and size of model transformations written in three different languages, namely (i) the Atlas Transformation Language (ATL), (ii) Java SE5 (2004–2009), and (iii) Java SE14 (2020); the Java transformations are derived from an ATL specification using a translation schema we developed for our study. In a nutshell, we found that some of the new features in Java SE14 compared to Java SE5 help to significantly reduce the complexity of transformations written in Java, by as much as 45%. At the same time, however, the relative amount of complexity that stems from aspects that ATL can hide from the developer, about 40% of the total complexity, stays about the same. Furthermore, we discovered that while transformation code in Java SE14 requires up to 25% fewer lines of code, the number of words written in both versions stays about the same; what changes significantly is how those words are distributed throughout the code. Based on these results, we discuss the concrete advancements in newer Java versions. We also discuss to what extent new language advancements justify writing transformations in a general-purpose language rather than a dedicated transformation language, and we indicate potential avenues for future research on the comparison of MTLs and GPLs in a model transformation context.
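To make the kind of difference the study measures concrete, below is a minimal, hypothetical sketch: the same toy transformation rule written once in a Java SE5 style and once in a modern-Java style with a stream pipeline and a switch expression. The rule, metamodel classes, and all names are invented for illustration and are not taken from the paper's translation schema (records, used here for brevity, were only a preview feature in SE14, so read the second variant as "modern Java" rather than exact SE14).

```java
// Hypothetical sketch: one "Member -> Person" transformation rule,
// in a Java SE5 style and in a modern-Java style. Names are invented.
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class TransformationSketch {

    enum Kind { FEMALE, MALE }

    // Stand-ins for real source/target metamodel classes.
    record Member(String name, Kind kind) {}
    record Person(String fullName, String title) {}

    // Java SE5 style: explicit traversal and statement-based branching.
    static List<Person> transformSe5(List<Member> members) {
        List<Person> result = new ArrayList<Person>();
        for (Member m : members) {
            String title;
            switch (m.kind()) {
                case FEMALE: title = "Ms."; break;
                default:     title = "Mr."; break;
            }
            result.add(new Person(m.name(), title));
        }
        return result;
    }

    // Modern style: a stream pipeline and a switch expression replace
    // the loop and the statement switch; this is the traversal and
    // branching boilerplate the study counts as reducible complexity.
    static List<Person> transformSe14(List<Member> members) {
        return members.stream()
                .map(m -> new Person(m.name(), switch (m.kind()) {
                    case FEMALE -> "Ms.";
                    case MALE -> "Mr.";
                }))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        var members = List.of(new Member("Ada", Kind.FEMALE),
                              new Member("Alan", Kind.MALE));
        // Both variants compute the same mapping.
        System.out.println(transformSe14(members).equals(transformSe5(members)));
    }
}
```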


2021 ◽  
Vol 26 (5) ◽  
Author(s):  
Maria Ulan ◽  
Welf Löwe ◽  
Morgan Ericsson ◽  
Anna Wingkvist

Abstract. It is a well-known practice in software engineering to aggregate software metrics to assess software artifacts for various purposes, such as their maintainability or their proneness to contain bugs. For different purposes, different metrics might be relevant. However, weighting these software metrics according to their contribution to the respective purpose is a challenging task. Manual approaches based on experts do not scale with the number of metrics, and experts get confused if the metrics are not independent, which they rarely are. Automated approaches based on supervised learning require reliable and generalizable training data, a ground truth, which is rarely available. We propose an automated approach to weighted metrics aggregation that is based on unsupervised learning. It derives metric scores and their weights from probability theory and aggregates them. To evaluate its effectiveness, we conducted two empirical studies on defect prediction, one on ca. 200,000 code changes and another on ca. 5,000 software classes. The results show that our approach can be used as an agnostic unsupervised predictor in the absence of a ground truth.
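The abstract leaves the scoring details open; the following is a minimal sketch of one probability-based reading, assuming a metric value is scored by its empirical cumulative distribution over the analyzed population. The uniform weights are a placeholder, not the weights the approach actually derives, and all names and numbers are invented.

```java
// Hypothetical sketch of unsupervised, probability-based metric aggregation:
// each raw metric value is scored by its empirical CDF (the fraction of
// artifacts with a value at most as large), then scores are combined with
// placeholder uniform weights. Not the paper's actual weighting scheme.
import java.util.Arrays;

public class MetricAggregationSketch {

    // Empirical CDF score: fraction of observed values <= v.
    static double ecdf(double[] values, double v) {
        long atMost = Arrays.stream(values).filter(x -> x <= v).count();
        return (double) atMost / values.length;
    }

    // Aggregate one artifact's metric vector against the population.
    // rows = artifacts, columns = metrics; weights should sum to 1.
    static double aggregate(double[][] population, double[] artifact, double[] weights) {
        double score = 0.0;
        for (int m = 0; m < artifact.length; m++) {
            double[] column = new double[population.length];
            for (int i = 0; i < population.length; i++) {
                column[i] = population[i][m];
            }
            score += weights[m] * ecdf(column, artifact[m]);
        }
        return score; // in [0, 1]; higher means worse if metrics measure "badness"
    }

    public static void main(String[] args) {
        double[][] metrics = {
            {120, 4, 0.8},   // invented values: LOC, cyclomatic complexity, coupling
            {300, 9, 0.5},
            {80,  2, 0.2},
        };
        double[] weights = {1.0 / 3, 1.0 / 3, 1.0 / 3}; // uniform placeholder
        for (double[] artifact : metrics) {
            System.out.printf("score = %.2f%n", aggregate(metrics, artifact, weights));
        }
    }
}
```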


Author(s):  
Luisa Lugli ◽  
Stefania D’Ascenzo ◽  
Roberto Nicoletti ◽  
Carlo Umiltà

Abstract. The Simon effect rests on the automatic generation of a spatial stimulus code which, however, is not relevant for performing the task. Results typically show faster performance when stimulus and response locations correspond than when they do not. Based on reaction time distributions, two types of Simon effect have been identified, which are thought to depend on different mechanisms: visuomotor activation versus cognitive translation of spatial codes. The present study investigated whether the presence of a distractor, which affects the allocation of attentional resources and thus the time needed to generate the spatial code, changes the nature of the Simon effect. In four experiments, we manipulated the presence and the characteristics of the distractor. The findings extend previous evidence on the distinction between visuomotor activation and cognitive translation of spatial stimulus codes in a Simon task and are discussed with reference to the attentional model of the Simon effect.


2018 ◽  
Author(s):  
Antonio E. Puente ◽  
Neil H. Pliskin
