Extremo: An Eclipse plugin for modelling and meta-modelling assistance

2019 · Author(s): Ángel Mora-Segura

Modelling is a core activity in software development paradigms like Model-Driven Engineering (MDE); therefore, the quality of (meta-)models is crucial for the success of software projects. However, modelling often remains a purely manual activity that does not take advantage of information embedded in heterogeneous sources such as XML documents, ontologies, or other models and meta-models. To improve this situation, we present Extremo, an Eclipse plugin that gathers the information stored in heterogeneous sources into a common data model, facilitating the reuse of information chunks in the model being built. The tool covers the steps needed to incorporate this knowledge into an external modelling tool, supporting uniform querying of the heterogeneous sources and the evaluation of constraints. Flexibility of the main features (e.g., supported data formats, queries) is achieved through extensible mechanisms. To illustrate the usefulness of Extremo, we describe a practical case study in the financial domain and evaluate the tool's performance and scalability.
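
The abstract describes a common data model over heterogeneous sources and a uniform query facility. The sketch below illustrates that idea in miniature; all class and method names (SemanticNode, Repository, query) are hypothetical and not taken from Extremo's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

/** Hypothetical common data model: every source (XML document, ontology, meta-model)
 *  is flattened into named nodes with simple string properties. */
class SemanticNode {
    final String name;
    final Map<String, String> properties;
    SemanticNode(String name, Map<String, String> properties) {
        this.name = name;
        this.properties = properties;
    }
}

/** Repository of nodes gathered from heterogeneous sources. */
class Repository {
    private final List<SemanticNode> nodes = new ArrayList<>();
    void add(SemanticNode n) { nodes.add(n); }

    /** Uniform query: the same predicate works regardless of the original source format. */
    List<SemanticNode> query(Predicate<SemanticNode> p) {
        List<SemanticNode> result = new ArrayList<>();
        for (SemanticNode n : nodes) if (p.test(n)) result.add(n);
        return result;
    }
}

public class UniformQueryDemo {
    public static void main(String[] args) {
        Repository repo = new Repository();
        repo.add(new SemanticNode("LoanContract", Map.of("source", "xsd")));
        repo.add(new SemanticNode("Loan", Map.of("source", "owl")));
        // Find candidate chunks to reuse in the (meta-)model being built.
        repo.query(n -> n.name.toLowerCase().contains("loan"))
            .forEach(n -> System.out.println(n.name + " from " + n.properties.get("source")));
    }
}
```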

2019 · Author(s): Ángel Mora-Segura

Model-Driven Engineering (MDE) uses models as its main assets in the software development process. The structure of a model is described by a meta-model. Even though modelling and meta-modelling are recurrent activities in MDE and a vast number of MDE tools exist nowadays, these tasks are typically performed in an unassisted way. Usually, such tools cannot exploit useful knowledge available in heterogeneous information sources like XML, RDF, CSV, or other models and meta-models. We propose an approach to provide modelling and meta-modelling assistance. The approach gathers heterogeneous information sources from various technological spaces and represents them uniformly in a common data model. This enables their uniform querying by means of an extensible mechanism, which can make use of services, e.g., for synonym search and word sense analysis. The query results can then be easily incorporated into the (meta-)model being built. The approach has been realized in the Extremo tool, developed as an Eclipse plugin. Extremo has been validated in two domains – production systems and process modelling – taking into account a large and complex industrial standard for classification and product description. Further validation results indicate that Extremo can be integrated into various modelling environments with low effort, and that the tool can handle information from most existing technological spaces.
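
As a rough illustration of the extensible query mechanism and its use of services such as synonym search, the following sketch defines a hypothetical query extension point. None of the interfaces shown are Extremo's real extension points; the synonym service here is a hard-coded stub.

```java
import java.util.List;
import java.util.Set;

/** Hypothetical extension point: pluggable query predicates over the common data model. */
interface NodeQuery {
    boolean matches(String nodeName);
}

/** Hypothetical synonym service (e.g., backed by a thesaurus) consumed by a query extension. */
interface SynonymService {
    Set<String> synonymsOf(String word);
}

/** A query extension that also matches synonyms of the search term. */
class SynonymNameQuery implements NodeQuery {
    private final String term;
    private final SynonymService synonyms;
    SynonymNameQuery(String term, SynonymService synonyms) {
        this.term = term;
        this.synonyms = synonyms;
    }
    @Override
    public boolean matches(String nodeName) {
        String lower = nodeName.toLowerCase();
        if (lower.contains(term)) return true;
        // Extension behaviour: also match any registered synonym of the search term.
        return synonyms.synonymsOf(term).stream().anyMatch(lower::contains);
    }
}

public class ExtensibleQueryDemo {
    public static void main(String[] args) {
        SynonymService stub = word -> word.equals("task") ? Set.of("activity", "step") : Set.of();
        NodeQuery q = new SynonymNameQuery("task", stub);
        for (String name : List.of("ProcessActivity", "Resource", "WorkStep"))
            System.out.println(name + " matches: " + q.matches(name));
    }
}
```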


2021 · Vol 2021 · pp. 1-7 · Author(s): Ao Luo, Yongfeng Yang

In response to the large errors, low efficiency, and low accuracy of current approaches to predicting and analyzing the quality of multimedia professional talents, this paper proposes an analytical prediction method based on a multiobjective data fuzzy evolutionary algorithm. The weights of the talent quality data are analyzed in detail using the multiobjective data fuzzy evolutionary algorithm in order to establish a prediction and analysis model for the quality of multimedia professional talents. This model can be used to predict and analyze the comprehensive abilities of multimedia professional talents effectively. Finally, a practical case study verifies that the proposed method predicts and analyzes the quality of multimedia professional talents with high efficiency and high accuracy.
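
The abstract does not detail the algorithm, so the following is only a generic, hedged sketch of fuzzy scoring with weighted aggregation. The indicator names, membership parameters, and weights are invented for illustration and are not taken from the paper.

```java
/** Generic sketch: fuzzy membership scores for quality indicators, aggregated with weights.
 *  In the paper the weights are derived by a multiobjective evolutionary search; here they are fixed. */
public class FuzzyQualityScoreDemo {
    // Triangular membership: how strongly a raw indicator value belongs to "high quality".
    static double triangular(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return x <= b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    public static void main(String[] args) {
        double[] indicators = {78, 85, 62};    // e.g. technical skill, teamwork, creativity (0-100), invented
        double[] weights    = {0.5, 0.3, 0.2}; // stand-ins for the weights the algorithm would learn
        double score = 0.0;
        for (int i = 0; i < indicators.length; i++)
            score += weights[i] * triangular(indicators[i], 40, 80, 100);
        System.out.printf("aggregated quality score = %.3f%n", score);
    }
}
```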


Author(s): Guadalupe Ortiz, Behzad Bordbar

The presented approach draws on two main software techniques: Model-Driven Architecture and aspect-oriented programming. The method involves modeling Quality of Service and extra-functional properties in a platform-independent fashion. Model transformations then convert the platform-independent models into platform-specific models, and finally into code. The code for Quality of Service and extra-functional properties is integrated into the system in a decoupled manner, relying on aspect-oriented techniques. The presented approach is evaluated with the help of a case study to establish that it increases the system's modularity and thus reduces implementation and maintenance costs.
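
The following sketch shows, with a plain JDK dynamic proxy rather than the generated aspect code the approach describes, how a Quality-of-Service concern (here, response-time logging) can be attached to a service in a decoupled way. The OrderService interface and its behaviour are purely illustrative.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

/** Illustrative service interface; the QoS concern is woven around it without touching its code. */
interface OrderService {
    String placeOrder(String item);
}

public class QosProxyDemo {
    public static void main(String[] args) {
        OrderService base = item -> "order accepted: " + item;

        // Decoupled QoS concern: measure and log response time of every call.
        InvocationHandler timing = (proxy, method, methodArgs) -> {
            long start = System.nanoTime();
            Object result = method.invoke(base, methodArgs);
            System.out.println(method.getName() + " took " + (System.nanoTime() - start) / 1_000 + " µs");
            return result;
        };

        OrderService withQos = (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(), new Class<?>[]{OrderService.class}, timing);
        System.out.println(withQos.placeOrder("book"));
    }
}
```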


2014 · Vol 52 · Author(s): Daniel Acton, Derrick Kourie, Bruce Watson

For as long as software has been produced, there have been efforts to achieve quality in software products. To understand quality in software products, researchers have built models of software quality that rely on metrics in an attempt to provide a quantitative view of software quality. The aim of these models is to give software producers the capability to define and evaluate quality-related metrics and to use these metrics to improve the quality of the software they produce over time. The main disadvantage of these models is that they require effort and resources to define and evaluate metrics from software projects. This article briefly describes some prominent models of software quality from the literature and then describes a new approach to gaining insight into quality in software development projects. A case study based on this new approach is described and its results are discussed.
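
To make the metric-based view concrete, here is a small, self-contained illustration of one classic quality metric (defect density). The figures are invented and not taken from the article.

```java
/** Minimal sketch of the metric-based view such quality models rely on:
 *  a single illustrative metric (defect density) computed from project data. */
public class DefectDensityDemo {
    static double defectDensity(int defects, int linesOfCode) {
        return defects / (linesOfCode / 1000.0);   // defects per KLOC
    }

    public static void main(String[] args) {
        // Hypothetical figures for two releases of the same product.
        System.out.printf("release 1: %.2f defects/KLOC%n", defectDensity(42, 120_000));
        System.out.printf("release 2: %.2f defects/KLOC%n", defectDensity(30, 135_000));
        // A downward trend is the kind of quantitative evidence such models aim to provide.
    }
}
```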


Author(s): Maciej Łabędzki, Patryk Promiński, Adam Rybicki, Marcin Wolski

Aim: The purpose of this paper is to identify common mistakes and pitfalls, as well as best practices, in estimating labor intensity in software projects. The quality of estimates in less experienced teams is often unsatisfactory, with the result that estimation is abandoned as part of the software development process; this decision is usually justified by a misunderstanding of "agility". The article is part of the discussion on current trends in estimation, especially in the context of the new "no estimates" approach.

Design / Research methods: The publication is a case study based on the experience of a mature development team. Drawing on estimation techniques from the literature, the authors show good and bad practices, as well as common mistakes in thinking and behavior.

Conclusions / findings: The keys to correct estimation are understanding the difference between labor intensity and time, the ability to monitor performance, and knowing how to analyze the staffing requirements of the team.

Originality / value of the article: The publication helps readers master confidence-boosting techniques for any estimation (of duration and, indirectly, of the cost of software development) where requirements are known, mainly at the stage of project implementation (design and implementation).

Limitations of the research: The work does not address the initial estimation of projects, i.e., estimation made in the early stages of planning.
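
A minimal numeric sketch of the labor-intensity vs. calendar-time distinction the authors stress: effort expressed in person-days becomes a duration only once team capacity is taken into account. The effort figure, team size, and focus factor below are assumed values, not data from the case study.

```java
/** Hedged illustration: converting estimated labor intensity (person-days)
 *  into calendar duration using team size and a monitored focus factor. */
public class EstimationDemo {
    public static void main(String[] args) {
        double effortPersonDays = 60;   // estimated labor intensity of a feature set (assumed)
        int teamSize = 4;
        double focusFactor = 0.6;       // fraction of a workday actually spent on the task (monitored, not guessed)
        double calendarDays = effortPersonDays / (teamSize * focusFactor);
        System.out.printf("%.1f person-days with %d people at %.0f%% focus ≈ %.1f working days%n",
                effortPersonDays, teamSize, focusFactor * 100, calendarDays);
    }
}
```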


2017 · Vol 17 (02) · pp. e17 · Author(s): Alejandro Sanchez, Leonardo Ordinez, Sergio Firmenich, Damián Barry, Rodrigo Santos

This article presents design decisions for an ontology-based framework supporting an expert-driven approach to the collaborative acquisition, integration, and analysis of information. It describes a three-tier organization whose components facilitate establishing an information system to assist domain experts in addressing multi-causal dynamic situations, where heterogeneous information sources must be integrated and the interaction of a variety of actors must be supported across large geographical areas with possibly low or intermittent connectivity. The approach and envisioned framework are illustrated with a case study on the design of bikeways in urban zones.


Author(s): Jules White, Brian Dougherty

Product-line architectures (PLAs) are a paradigm for developing software families by customizing and composing reusable artifacts, rather than handcrafting software from scratch. Extensive testing is required to develop reliable PLAs, which may have scores of valid variants that can be constructed from the architecture's components. It is crucial that each variant be tested thoroughly to assure the quality of these applications on multiple platforms and hardware configurations. It is tedious and error-prone, however, to set up numerous distributed test environments manually and ensure they are deployed and configured correctly. To simplify and automate this process, the authors present a model-driven architecture (MDA) technique that can be used to (1) model a PLA's configuration space, (2) automatically derive configurations to test, and (3) automate the packaging, deployment, and testing of configurations. To validate this MDA process, the authors use a distributed constraint optimization system case study to quantify the cost savings of using an MDA approach for the deployment and testing of PLAs.
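
As a hedged sketch of steps (1) and (2) of this MDA process, the snippet below models a toy configuration space and derives its valid variants, each of which would need its own test environment. The variation points and the constraint are invented for illustration and are not from the case study.

```java
import java.util.ArrayList;
import java.util.List;

/** Toy configuration space for a product line: enumerate combinations and filter by a constraint. */
public class ConfigurationSpaceDemo {
    public static void main(String[] args) {
        List<String> os = List.of("Linux", "Windows");
        List<String> db = List.of("MySQL", "SQLite");
        List<String> transport = List.of("TCP", "SharedMemory");

        List<String> variants = new ArrayList<>();
        for (String o : os)
            for (String d : db)
                for (String t : transport) {
                    // Example architectural constraint: the SharedMemory transport is Linux-only.
                    if (t.equals("SharedMemory") && !o.equals("Linux")) continue;
                    variants.add(o + " / " + d + " / " + t);
                }
        variants.forEach(System.out::println);
        System.out.println(variants.size() + " valid variants to package, deploy, and test");
    }
}
```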


2009 · pp. 3455-3488 · Author(s): Tom Mens, Gabriele Taentzer, Dirk Müller

In this chapter, we explore the emerging research domain of model-driven software refactoring. Program refactoring is a proven technique that aims at improving the quality of source code. Applying refactoring in a model-driven software engineering context raises many new challenges, such as how to define, detect, and improve model quality, and how to preserve model behavior. Based on a concrete case study with a state-of-the-art model-driven software development tool, AndroMDA, we explore some of these challenges in more detail. We propose to resolve some of the encountered problems by relying on well-understood techniques of meta-modeling, model transformation, and graph transformation.
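
As one concrete, if simplified, example of the model-quality and behavior-preservation issues mentioned above, the sketch below applies a rename refactoring to a tiny class model while keeping references consistent. It is not based on AndroMDA or on the chapter's actual transformations; the model representation is a deliberately minimal assumption.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal sketch of a model-level refactoring: renaming a class must also update
 *  every reference to it, otherwise the model's meaning silently changes. */
public class RenameClassRefactoringDemo {
    public static void main(String[] args) {
        // Tiny class model: class name -> name of the class it references.
        Map<String, String> references = new LinkedHashMap<>();
        references.put("Order", "Customer");
        references.put("Invoice", "Customer");

        String oldName = "Customer", newName = "Client";
        Map<String, String> refactored = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : references.entrySet()) {
            String source = e.getKey().equals(oldName) ? newName : e.getKey();
            String target = e.getValue().equals(oldName) ? newName : e.getValue();
            refactored.put(source, target);   // references follow the rename
        }
        System.out.println(refactored);       // {Order=Client, Invoice=Client}
    }
}
```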


Author(s): Amine Azzaoui, Ouzayr Rabhi, Ayyoub Mani

Over the past decade, the concept of data warehousing has been widely accepted. The main reason for building data warehouses is to improve the quality of information in order to achieve specific business objectives, such as competitive advantage or improved decision-making. However, there is no formal method for deriving a multidimensional schema from heterogeneous databases that is recognized as a standard by the OMG and by practitioners in the field. That is why, in this paper, we present a model-driven (MDA) approach to the design of data warehouses. To apply the MDA approach to the data warehouse construction process, we describe a multidimensional meta-model and specify a set of transformations that map a UML meta-model to it. The transformation rules are programmed in the Query/View/Transformation (QVT) language. A case study illustrates our approach, demonstrating how it reinforces component traceability and reusability and how it improves the modeler's overall efficiency. Furthermore, the use of UML as a technique to build data warehouses is an important facilitator that prepares our further work to automate this approach.
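
To give a flavor of the kind of mapping a QVT rule expresses, the following sketch rewrites one such rule in plain Java: a class from the UML (source) meta-model becomes a dimension in the multidimensional (target) meta-model. The rule, the class names, and the attribute-to-level mapping are simplified assumptions, not the paper's actual transformation.

```java
import java.util.List;

/** One illustrative UML-to-multidimensional mapping, written in plain Java instead of QVT. */
public class UmlToDimensionDemo {
    record UmlClass(String name, List<String> attributes) {}
    record Dimension(String name, List<String> levels) {}

    // Transformation rule: a UML class becomes a dimension; its attributes become levels.
    static Dimension classToDimension(UmlClass c) {
        return new Dimension("Dim_" + c.name(), c.attributes());
    }

    public static void main(String[] args) {
        UmlClass customer = new UmlClass("Customer", List.of("city", "region", "country"));
        Dimension dim = classToDimension(customer);
        System.out.println(dim.name() + " with levels " + dim.levels());
    }
}
```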



