Model-Driven Engineering
Recently Published Documents


TOTAL DOCUMENTS: 774 (five years: 149)

H-INDEX: 30 (five years: 3)

Author(s):
Davide Di Ruscio, Dimitris Kolovos, Juan de Lara, Alfonso Pierantonio, Massimo Tisi, ...

Abstract: The last few years have witnessed significant growth of so-called low-code development platforms (LCDPs), both in gaining traction on the market and in attracting interest from academia. LCDPs are advertised as visual development platforms, typically running on the cloud, that reduce the need for manual coding and also target non-professional programmers. Since LCDPs share many of the goals and features of model-driven engineering approaches, it is a common point of debate whether low-code is just a new buzzword for model-driven technologies, or whether the two terms refer to genuinely distinct approaches. To contribute to this discussion, in this expert-voice paper we compare and contrast low-code and model-driven approaches, identifying their differences and commonalities, analysing their strengths and weaknesses, and proposing directions for cross-pollination.


2022, pp. 1586-1611
Author(s):
Alexandre Bragança, Isabel Azevedo, Nuno Bettencourt

Model-driven engineering (MDE) is an approach to software engineering that adopts models as the central artefact. Although the approach is promising in addressing major issues in software development, particularly software complexity, and despite several industrial success cases and growing interest in the research community, its gains have been hard to generalize among software professionals. To address this issue, MDE must be taught at the higher-education level. This chapter presents a three-year experience of teaching MDE in a course of a master's program in informatics engineering. The chapter details how a project-based learning approach was adopted and evolved across three editions of the course. Results of a student survey are discussed and compared with those from another course. In addition, several similar teaching experiences are analyzed.


Author(s):
Stefan Höppner, Timo Kehrer, Matthias Tichy

Abstract: Model transformations are among the key concepts of model-driven engineering (MDE), and dedicated model transformation languages (MTLs) emerged with the popularity of the MDE paradigm about 15 to 20 years ago. MTLs claim to ease the development of model transformations by abstracting from recurring transformation aspects and hiding complex semantics behind a simple and intuitive syntax. Nonetheless, MTLs are rarely adopted in practice, there is still no empirical evidence for the claim of easier development, and the argument of abstraction deserves a fresh look in the light of modern general-purpose languages (GPLs), which have undergone significant evolution in the last two decades. In this paper, we report on a study in which we compare the complexity and size of model transformations written in three different languages: (i) the Atlas Transformation Language (ATL), (ii) Java SE5 (2004-2009), and (iii) Java SE14 (2020); the Java transformations are derived from an ATL specification using a translation schema we developed for our study. In a nutshell, we found that some of the new features in Java SE14, compared to Java SE5, help to reduce the complexity of transformations written in Java by as much as 45%. At the same time, however, the relative amount of complexity stemming from aspects that ATL can hide from the developer, about 40% of the total, stays roughly the same. Furthermore, we found that while transformation code in Java SE14 requires up to 25% fewer lines of code, the number of words written stays about the same in both versions, although their distribution throughout the code changes significantly. Based on these results, we discuss the concrete advancements in newer Java versions, and the extent to which they justify writing transformations in a general-purpose language rather than a dedicated transformation language. We further indicate potential avenues for future research on the comparison of MTLs and GPLs in a model transformation context.
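To make the comparison concrete, here is a minimal sketch in the spirit of the classic Families-to-Persons ATL example, written in plain Java with newer language features (switch expressions, standardized in SE14, and records, previewed in SE14). All class, field, and role names here are hypothetical illustrations, not the study's actual translation schema:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FamiliesToPersons {
    // Source metamodel element (hypothetical): a family member with a role.
    record Member(String firstName, String familyName, String role) {}
    // Target metamodel element (hypothetical).
    record Person(String fullName, boolean female) {}

    // One transformation rule: a switch expression replaces the
    // if/else chains that pre-SE14 Java would need here.
    static Person transform(Member m) {
        boolean female = switch (m.role()) {
            case "mother", "daughter" -> true;
            default -> false;
        };
        return new Person(m.firstName() + " " + m.familyName(), female);
    }

    // Apply the rule to every source element via the Stream API.
    static List<Person> transformAll(List<Member> members) {
        return members.stream()
                      .map(FamiliesToPersons::transform)
                      .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        var persons = transformAll(List.of(
                new Member("Jim", "March", "father"),
                new Member("Cindy", "March", "mother")));
        persons.forEach(p -> System.out.println(p.fullName() + (p.female() ? " (f)" : " (m)")));
    }
}
```

Records and switch expressions shrink exactly the boilerplate (data classes, type dispatch) that the study identifies as a source of reduced complexity in SE14 transformations; the traversal and tracing machinery that ATL hides must still be written by hand.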


Modelling, 2021, Vol. 2 (4), pp. 609-625
Author(s):
Eugene Syriani, Daniel Riegelhaupt, Bruno Barroca, Istvan David

Textual editors are omnipresent in software tools. They provide basic features, such as copy-pasting and searching, as well as more advanced ones, such as error checking and text completion. Current model-driven engineering technologies can automatically generate textual editors for manipulating domain-specific languages (DSLs). However, customizing these editors or adding new features to them typically requires changing their internal structure and behavior in source code. In this paper, we explore a new generation of self-descriptive textual editors for DSLs that allow full configuration of their structure and behavior in a convenient formalism, rather than in source code. We demonstrate the feasibility of the approach with a prototype implementation, applied in two domain-specific modeling scenarios, including one in architecture modeling.
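As a toy illustration of the configuration-over-code idea (this is not the paper's actual formalism; all names are hypothetical): an editor feature such as text completion can be driven by a declarative keyword list, so loading a different specification re-configures the editor without touching its source.

```java
import java.util.List;
import java.util.stream.Collectors;

public class ConfigurableCompletion {
    // The completion behavior is described by data (a declarative keyword
    // list), not hard-coded: swapping the spec re-configures the feature.
    private final List<String> keywords;

    public ConfigurableCompletion(List<String> keywords) {
        this.keywords = keywords;
    }

    // Completions for a prefix, as an editor would offer on Ctrl+Space.
    public List<String> complete(String prefix) {
        return keywords.stream()
                       .filter(k -> k.startsWith(prefix))
                       .sorted()
                       .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical spec for a toy state-machine DSL.
        var editor = new ConfigurableCompletion(
                List.of("statemachine", "state", "transition", "trigger", "event"));
        System.out.println(editor.complete("st"));
    }
}
```

The point of the sketch is the separation: the editor's generic machinery stays fixed, while its language-specific behavior lives entirely in the supplied specification.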


Author(s):
José Antonio Hernández López, Javier Luis Cánovas Izquierdo, Jesús Sánchez Cuadrado

Abstract: The application of machine learning (ML) algorithms to problems in model-driven engineering (MDE) is currently hindered by the lack of curated datasets of software models. There are several reasons for this, including the lack of large collections of good-quality models, the difficulty of labelling models due to the required domain expertise, and the relative immaturity of applying ML to MDE. In this work, we present ModelSet, a labelled dataset of software models intended to enable the application of ML to software modelling problems. To create it, we devised a method that facilitates the exploration and labelling of model datasets by interactively grouping similar models using off-the-shelf technologies such as a search engine. We built an Eclipse plug-in to support the labelling process, which we used to label 5,466 Ecore meta-models and 5,120 UML models with their category as the main label, plus additional secondary labels of interest. We evaluated the ability of our labelling method to create meaningful groups of models in order to speed up the process, improving the effectiveness of classical clustering methods. We showcase the usefulness of the dataset by applying it in a real scenario: enhancing the MAR search engine. We use ModelSet to train models able to infer useful metadata for navigating search results. The dataset and the tooling are available at https://figshare.com/s/5a6c02fa8ed20782935c, with a live version at http://modelset.github.io.
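The grouping step can be pictured with a minimal sketch (a plain token-overlap heuristic, not the paper's actual search-engine-based method; all names hypothetical): models whose identifier vocabularies overlap enough land in the same group, which a human can then label in one pass.

```java
import java.util.*;
import java.util.stream.*;

public class ModelGrouping {
    // Tokenize a model's textual identifiers into a set of lowercase words.
    static Set<String> tokens(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                     .filter(s -> !s.isEmpty())
                     .collect(Collectors.toSet());
    }

    // Jaccard similarity: |intersection| / |union| of the token sets.
    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a); inter.retainAll(b);
        Set<String> union = new HashSet<>(a); union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    // Greedy grouping: attach each model to the first group whose seed
    // is similar enough, otherwise start a new group.
    static List<List<String>> group(List<String> models, double threshold) {
        List<List<String>> groups = new ArrayList<>();
        for (String m : models) {
            Optional<List<String>> hit = groups.stream()
                .filter(g -> jaccard(tokens(g.get(0)), tokens(m)) >= threshold)
                .findFirst();
            if (hit.isPresent()) hit.get().add(m);
            else groups.add(new ArrayList<>(List.of(m)));
        }
        return groups;
    }

    public static void main(String[] args) {
        var models = List.of(
                "statemachine state transition event",
                "state machine transition trigger event",
                "library book author loan");
        // The two state-machine-like models group together; the library
        // model starts its own group.
        System.out.println(group(models, 0.4));
    }
}
```

A search engine replaces the pairwise similarity here with indexed retrieval, which is what makes the interactive labelling loop fast at the scale of thousands of models.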

