Reverse Modeling for Domain-Driven Engineering of Publishing Technology

Author(s):  
Anne Brüggemann-Klein ◽  
Tamer Demirel ◽  
Dennis Pagano ◽  
Andreas Tai

We report in this paper on a technique that we call reverse modeling. Reverse modeling starts with a conceptual model formulated in one or more generic modeling technologies such as UML or XML Schema. It abstracts from that model a custom, domain-specific meta-model and re-formulates the original model as an instance of the new meta-model. We demonstrate the value of reverse modeling with two case studies: one domain-specific meta-model facilitates the design and user interface of a so-called instance generator for broadcast production metadata; another structures the translation of XML-encoded printer data for invoices into semantic XML. In a further section of this paper, we take a more general view and survey patterns that have evolved in the conceptual modeling of documents and data and that implicitly suggest sound meta-modeling constructs. Taken together, the two case studies and the survey of patterns in conceptual models bring us one step closer to our overarching goal of developing a meta-meta-modeling facility whose instances are custom meta-models for conceptual document and data models. The research presented in this paper identifies a core set of elementary constructors that such a meta-meta-modeling facility should provide.
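The core move of reverse modeling — abstracting a custom meta-model from a concrete model and re-expressing the model as its instance — can be sketched as follows; all class and attribute names here are hypothetical illustrations, not the authors' actual meta-model:

```python
from dataclasses import dataclass, field

# Hypothetical domain-specific meta-model abstracted from a generic
# (UML- or XML-Schema-style) model of broadcast production metadata.
@dataclass
class MetaConcept:
    name: str
    attributes: list[str] = field(default_factory=list)

@dataclass
class MetaModel:
    concepts: dict[str, MetaConcept] = field(default_factory=dict)

    def instantiate(self, concept: str, **values):
        """Re-formulate a piece of the original model as an instance
        of the custom meta-model, checking it against the concept."""
        c = self.concepts[concept]
        unknown = set(values) - set(c.attributes)
        if unknown:
            raise ValueError(f"unknown attributes: {unknown}")
        return {"concept": concept, **values}

# Step 1: abstract the meta-model (here: a single concept).
mm = MetaModel()
mm.concepts["Programme"] = MetaConcept("Programme", ["title", "duration"])

# Step 2: the original model element becomes an instance of it.
item = mm.instantiate("Programme", title="Evening News", duration=30)
```

The checking in `instantiate` is what the re-formulation buys: the custom meta-model, not a generic schema language, now polices what a valid model element looks like.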

2020 ◽  
Vol 10 (7) ◽  
pp. 2306 ◽  
Author(s):  
Andrea Vázquez-Ingelmo ◽  
Francisco José García-Peñalvo ◽  
Roberto Therón ◽  
Miguel Ángel Conde

Information dashboards are everywhere. They support knowledge discovery in a huge variety of contexts and domains. Although powerful, these tools can be complex, not only for end-users but also for developers and designers. Information dashboards encode complex datasets into different visual marks to ease knowledge discovery, and choosing a wrong design can compromise an entire dashboard's effectiveness, so selecting the appropriate encoding or configuration for each potential context, user, or data domain is a crucial task. For these reasons, there is a need to automate the recommendation of visualizations and dashboard configurations in order to deliver tools adapted to their context. Recommendations can be based on different aspects, such as user characteristics, the data domain, or the goals and tasks to be achieved or carried out through the visualizations. This work presents a dashboard meta-model that abstracts all these factors, together with the integration of a visualization task taxonomy to account for the different actions that can be performed with information dashboards. The meta-model has been used to design a domain-specific language for specifying dashboard requirements in a structured way. The ultimate goal is to obtain a dashboard generation pipeline that delivers dashboards adapted to any context, such as the educational context, in which a great deal of data is generated and several actors are involved (students, teachers, managers, etc.), each wanting to reach different insights regarding their learning performance or learning methodologies.
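The idea of a structured requirement that combines user, data domain, and a task from a visualization task taxonomy can be illustrated with a toy sketch; the field names, taxonomy entries, and recommendation rules below are assumptions for illustration, not the paper's actual DSL:

```python
from dataclasses import dataclass

# Hypothetical structured dashboard requirement, abstracting the factors
# the meta-model covers: user role, data domain, and visualization task.
@dataclass
class DashboardRequirement:
    role: str          # e.g., "student", "teacher"
    data_domain: str   # e.g., "learning-analytics"
    task: str          # entry from a visualization task taxonomy
    default_encoding: str

def recommend(req: DashboardRequirement) -> str:
    # Toy recommendation rule keyed on the task taxonomy entry;
    # a real pipeline would weigh role and data domain as well.
    rules = {
        "compare": "bar-chart",
        "trend": "line-chart",
        "distribution": "histogram",
    }
    return rules.get(req.task, req.default_encoding)

req = DashboardRequirement("teacher", "learning-analytics", "trend", "table")
print(recommend(req))  # line-chart
```

The point of the structure is that the same requirement object can drive different generators, so the dashboard configuration adapts to context without hand-coding each variant.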


Author(s):  
Edgars Rencis ◽  
Janis Barzdins ◽  
Sergejs Kozlovics

Towards an Open Graphical Tool-Building Framework

Nowadays, there are many frameworks for developing domain-specific tools. However, creating a really sophisticated tool with specific functionality requirements is not always an easy task. Although tool-building platforms offer some means for extending tool functionality and accessing it from external applications, doing so usually requires a deep understanding of various technical implementation details. In this paper we try to go one step closer to a really open graphical tool-building framework that allows one both to change the behavior of a tool and to access the tool from the outside easily. We start by defining a specialization of metamodels, which is a powerful facility in itself. We then show how it can be applied in the field of graphical domain-specific tool building. The approach is demonstrated on an example of a subset of UML activity diagrams, and its benefits are clearly indicated: a natural and intuitive definition of tools, a strict logic/presentation separation, and openness both for extensions and for external applications.
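Metamodel specialization in the sense used here can be sketched in miniature: a generic graph metamodel is specialized to a fragment of UML activity diagrams, and the specialized metamodel constrains what the tool may draw. The class names and the single well-formedness rule below are illustrative assumptions, not the authors' actual metamodel:

```python
# Generic graph metamodel.
class Node: ...
class Edge: ...

# Specialization for a subset of UML activity diagrams.
class Action(Node): ...        # activity-diagram action node
class ControlFlow(Edge): ...   # control-flow edge

def allowed(edge_type: type, src: Node, dst: Node) -> bool:
    """Toy well-formedness rule from the specialized metamodel:
    a control flow may only connect two actions."""
    return (
        isinstance(src, Action)
        and isinstance(dst, Action)
        and issubclass(edge_type, ControlFlow)
    )

# A graphical editor built on the specialized metamodel would consult
# such rules to decide which connections the user may draw.
assert allowed(ControlFlow, Action(), Action())
assert not allowed(ControlFlow, Node(), Action())
```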


2021 ◽  
Vol 17 (2) ◽  
pp. 1-27
Author(s):  
Morteza Hosseini ◽  
Tinoosh Mohsenin

This article presents a low-power, programmable, domain-specific manycore accelerator, the Binarized neural Network Manycore Accelerator (BiNMAC), which adopts and efficiently executes binary-precision weight/activation neural network models. Such networks have compact models in which weights are constrained to only 1 bit, so several can be packed into one memory entry, minimizing the memory footprint. Packing weights also facilitates single-instruction, multiple-data execution with simple circuitry, which maximizes performance and efficiency. The proposed BiNMAC has lightweight cores that support domain-specific instructions and a router-based memory access architecture that helps with efficient implementation of layers in binary-precision weight/activation neural networks of suitable size. With only 3.73% area and 1.98% average power overhead, respectively, novel instructions such as Combined Population-Count-XNOR, Patch-Select, and Bit-based Accumulation are added to the instruction set architecture of the BiNMAC; each replaces the execution cycles of a frequently used function with 1 clock cycle that would otherwise have taken 54, 4, and 3 clock cycles, respectively. Additionally, customized logic is added to every core to transpose 16×16-bit blocks of memory on a bit-level basis, which expedites reshaping intermediate data to be well aligned for bitwise operations. A 64-cluster architecture of the BiNMAC is fully placed and routed in 65-nm TSMC CMOS technology, where a single cluster occupies an area of 0.53 mm² with an average power of 232 mW at a 1-GHz clock frequency and 1.1 V. The 64-cluster architecture takes 36.5 mm² of area and, if fully exploited, consumes a total power of 16.4 W and can perform 1,360 Giga Operations Per Second (GOPS) while providing full programmability.
To demonstrate its scalability, four binarized case studies, including ResNet-20 and LeNet-5 for high-performance image classification as well as a ConvNet and a multilayer perceptron for low-power physiological applications, were implemented on BiNMAC. The implementation results indicate that the population-count instruction alone can expedite performance by approximately 5×. When the other new instructions are added to a RISC machine that already has a population-count instruction, performance increases by 58% on average. To compare the performance of the BiNMAC with commercial off-the-shelf platforms, the case studies with their double-precision floating-point models were also implemented on the NVIDIA Jetson TX2 SoC (CPU+GPU). The results indicate that, within a margin of ∼2.1%–9.5% accuracy loss, BiNMAC on average outperforms the TX2 GPU by approximately 1.9× (or 7.5× with fabrication technology scaled) in energy consumption for image classification applications. In low-power settings, and within a margin of ∼3.7%–5.5% accuracy loss compared to an ARM Cortex-A57 CPU implementation, BiNMAC is roughly 9.7×–17.2× (or 38.8×–68.8× with fabrication technology scaled) more energy efficient for physiological applications while meeting the application deadline.
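The XNOR-plus-population-count trick that underlies binarized execution (and that instructions like Combined Population-Count-XNOR accelerate in hardware) is standard for networks with weights/activations in {-1, +1} packed as bits. A minimal software sketch, purely illustrative of the arithmetic rather than of BiNMAC's actual datapath:

```python
# Dot product of two {-1,+1} vectors packed as n-bit words:
# XNOR gives 1 where the signs agree, so with pop = popcount(xnor),
# dot = (#agreements) - (#disagreements) = 2*pop - n.
N = 16  # bits per packed word (BiNMAC transposes 16x16-bit blocks)

def bin_dot(a_bits: int, b_bits: int, n: int = N) -> int:
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask   # 1 where signs agree
    pop = bin(xnor).count("1")         # population count
    return 2 * pop - n

# Identical vectors give the maximum dot product +n;
# fully opposite vectors give -n.
assert bin_dot(0b1010101010101010, 0b1010101010101010) == N
assert bin_dot(0b1111111111111111, 0b0000000000000000) == -N
```

Fusing the XNOR and the popcount into one instruction is what collapses the 54-cycle software loop mentioned above into a single cycle.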


Author(s):  
MIN DENG ◽  
R. E. K. STIREWALT ◽  
BETTY H. C. CHENG

Recently, there has been growing interest in formalizing UML, thereby enabling rigorous analysis of its many graphical diagrams. Two obstacles currently limit the adoption and use of UML formalizations in practice: first, the need to verify the consistency of artifacts under formalization; second, the need to validate formalization approaches against domain-specific requirements. Techniques from the emerging field of requirements traceability hold promise for addressing these obstacles. This paper contributes a technique called retrieval by construction (RBC), which establishes traceability links between a UML model and a target model intended to denote its semantics under formalization. RBC provides an approach for structuring and representing the complex one-to-many links that are common between UML and target models under formalization. RBC also uses the notion of value identity in a novel way that enables the specification of link-retrieval criteria using generative procedures, which are a natural means of specifying UML formalizations. We have validated the RBC technique in a tool framework called UBanyan, written in C++, and applied the tool to three case studies, one of which was obtained from industry. We have also assessed our results using two well-known traceability metrics: precision and recall. Preliminary investigations suggest that RBC can be a useful traceability technique for validating and verifying UML formalizations.
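The flavor of one-to-many links produced by a generative procedure can be shown in a few lines: the procedure that constructs the target-model elements for a UML element also yields the traceability link as a by-product. The "semantics" below (each class maps to a sort plus a constructor symbol) is a made-up toy, not the paper's actual formalization:

```python
# Toy retrieval-by-construction: a generative procedure builds the
# target-model elements for a UML class and the traceability link
# (source element -> all generated targets) falls out for free.
def formalize_class(uml_class: str) -> list[str]:
    # Hypothetical semantics: a sort and a constructor per class.
    return [f"sort {uml_class}", f"op mk_{uml_class}"]

links: dict[str, list[str]] = {}
for cls in ["Order", "Customer"]:
    targets = formalize_class(cls)
    links[cls] = targets  # one-to-many traceability link

assert links["Order"] == ["sort Order", "op mk_Order"]
```

Because the links are constructed rather than recovered after the fact, precision and recall can be measured against them directly.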


TERRITORIO ◽  
2012 ◽  
pp. 107-111
Author(s):  
Vitaliano Tosoni

Research activities are seeking to address the recovery, redevelopment and enhancement of the buildings of the Tor Bella Monaca neighbourhood through a set of operations designed to achieve the social, cultural and architectural promotion of these buildings. By examining the technical limitations resulting from the heavy prefabrication methods used to build them, and by drawing on national and international case studies, a picture was constructed of possible types of action to take as an initial core set of operations, designed to support the design process through graphic simulations, the indication of operational areas and the magnitude of the proposed intervention.


2005 ◽  
Vol 19 (2) ◽  
pp. 57-77 ◽  
Author(s):  
Gregory J. Gerard

Most database textbooks on conceptual modeling do not cover domain-specific patterns. The texts emphasize notation, apparently assuming that notation alone enables individuals to correctly model domain-specific knowledge acquired from experience. However, the domain knowledge acquired may not aid in the construction of conceptual models if it is not structured to support conceptual modeling. This study uses the Resources-Events-Agents (REA) pattern as an example of a domain-specific pattern that can be encoded as a knowledge structure for conceptual modeling of accounting information systems (AIS), and tests its effects on the accuracy of conceptual modeling in a familiar business setting. Fifty-three undergraduate and forty-six graduate students completed recall tasks designed to measure REA knowledge structure. The accuracy of participants' conceptual models was positively related to REA knowledge structure. The results suggest it is insufficient to know only conceptual modeling notation, because structured knowledge of domain-specific patterns reduces design errors.
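The REA pattern itself is compact enough to sketch: an economic event links a resource to the agents who give and receive it, and duality pairs inflow and outflow events. A minimal illustration, with names chosen for this sketch rather than taken from the study:

```python
from dataclasses import dataclass

# Minimal Resources-Events-Agents (REA) sketch: an economic event
# links a resource to the agent providing it and the agent receiving it.
@dataclass
class Resource:
    name: str

@dataclass
class Agent:
    name: str

@dataclass
class EconomicEvent:
    resource: Resource
    provider: Agent    # agent giving up the resource
    recipient: Agent   # agent receiving the resource

# A sale: cash flows from customer to enterprise; duality would pair
# this with a corresponding goods-outflow event in the other direction.
cash_receipt = EconomicEvent(Resource("Cash"), Agent("Customer"), Agent("Store"))
goods_outflow = EconomicEvent(Resource("Goods"), Agent("Store"), Agent("Customer"))
```

A modeler who has internalized this structure knows, before drawing anything, which entities and relationships a sales cycle must contain, which is the knowledge-structure effect the study measures.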


Author(s):  
Bart-Jan Hommes

Meta-modeling is a well-known approach for capturing modeling methods and techniques, and a meta-model can serve as the basis for their quantitative evaluation: by means of a number of formal metrics defined over the meta-model, methods and techniques can be assessed quantitatively. Existing meta-modeling languages and measurement schemes, however, do not allow the explicit modeling of so-called multi-modeling techniques, i.e., techniques that offer a coherent set of aspect modeling techniques to model different aspects of a certain phenomenon. As a consequence, existing approaches lack metrics to quantitatively assess the aspects that are particular to multi-modeling techniques. In this chapter, a modeling language for modeling multi-modeling techniques is proposed, along with metrics for evaluating the coherent set of aspect modeling techniques that constitute a multi-modeling technique.


Author(s):  
Giancarlo Guizzardi ◽  
Gerd Wagner

Foundational ontologies provide the basic concepts upon which any domain-specific ontology is built. This chapter presents a new foundational ontology, UFO, and shows how it can be used as a guideline in business modeling and for evaluating business modeling methods. UFO is derived from a synthesis of two other foundational ontologies, GFO/GOL and OntoClean/DOLCE. While their main areas of application are natural sciences and linguistics/cognitive engineering, respectively, the main purpose of UFO is to provide a foundation for conceptual modeling, including business modeling.

