The XML Expert's Path to Web Applications

Author(s):  
Anne Brüggemann-Klein

Web applications offer a golden opportunity for domain experts who work with XML documents to leverage their domain expertise, their knowledge of document engineering principles, and their skills in XML technology. Current XML technologies provide a full stack of modeling languages, implementation languages, and tools for Web applications that is stable, platform independent, and based on open standards. Combining principles and proven practices from document and software engineering, we identify architectures, modeling techniques, and implementation strategies that let end-user developers who are conversant with XML technologies create their own Web applications.

2019
Vol 9 (21)
pp. 4553
Author(s):  
Tomaž Kos ◽  
Marjan Mernik ◽  
Tomaž Kosar

End-user programming may utilize Domain-Specific Modeling Languages (DSMLs) to develop applications in the form of models, using only abstractions found in a specific problem domain. Indeed, the productivity benefits reported from Model-Driven Development (MDD) are hard to ignore, and a number of MDD solutions are flourishing. However, not all industry stories about MDD are successful. End-users, lacking software development skills, are more likely to introduce software errors than professional programmers. In this study, we propose extending DSML development with tool support and encourage other DSML developers to do the same. We believe that programming tools (e.g., a debugger, a testing tool, a refactoring tool) are also needed by end-users to ensure the proper functioning of the products they develop. It is imperative that domain experts are provided with tools that work at the abstraction level familiar to them. In this paper, an industrial experience of building various tools for use in MDD is presented. A debugger, an automated testing infrastructure, refactoring support, and other tools were implemented for Sequencer, a DSML. Our experience with implementing tool support for MDD confirms that these tools are indispensable for end-user programming in practice, and that implementing them might not be as costly as expected.
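
To make the idea concrete, here is a minimal, purely illustrative Python sketch (not the Sequencer toolchain, whose details go well beyond a few lines): a toy DSML whose models are sequences of named steps, with a test harness that reports failures in domain terms rather than as stack traces. All names and values are invented.

    # Illustrative sketch (not the Sequencer toolchain): a toy DSML whose
    # "models" are sequences of named steps, plus a test harness that
    # reports failures in domain terms rather than stack traces.

    def run_model(model, state):
        """Interpret a model: each step is (name, function) applied in order."""
        trace = []
        for name, step in model:
            state = step(state)
            trace.append((name, state))
        return state, trace

    def test_model(model, initial, expected):
        """End-user-level test: compare the final state against an expectation."""
        final, trace = run_model(model, initial)
        if final != expected:
            steps = " -> ".join(f"{n}={s}" for n, s in trace)
            return f"FAIL: expected {expected}, got {final} (trace: {steps})"
        return "PASS"

    # A two-step "model": fill a tank, then drain half of it.
    model = [("fill", lambda v: v + 10), ("drain_half", lambda v: v // 2)]
    print(test_model(model, initial=0, expected=5))   # PASS
    print(test_model(model, initial=2, expected=5))   # FAIL, with a domain-level trace

The point of the sketch is the abstraction level: a domain expert reads a pass/fail verdict and a step-by-step trace of the model, never the interpreter's internals.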


Author(s):  
Martin O. Hofmann ◽  
Thomas L. Cost ◽  
Michael Whitley

The process of reviewing test data for anomalies after a firing of the Space Shuttle Main Engine (SSME) is a complex, time-consuming task. A project is under way to provide the team of SSME experts with a knowledge-based system to assist in the review and diagnosis task. A model-based approach was chosen because it can be adapted to changes in engine design, is easier to maintain, and can be explained more easily. A complex thermodynamic fluid system like the SSME introduces problems during modeling, analysis, and diagnosis that have so far been insufficiently studied. We developed a qualitative constraint-based diagnostic system, inspired by existing qualitative modeling and constraint-based reasoning methods, which addresses these difficulties explicitly. Our approach seamlessly combines various diagnostic paradigms, such as the model-based and heuristic association-based paradigms, in order to better approximate the reasoning process of the domain experts. The end-user interface allows expert users to actively participate in the reasoning process, both by adding their own expertise and by guiding the diagnostic search performed by the system.
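
As a rough illustration of constraint-based diagnosis in general (a sketch of the technique, not the SSME system itself), each constraint below relates sensor readings, and a violated constraint implicates the components it mentions, approximating model-based candidate generation. The component names, readings, and thresholds are all invented.

    # Minimal sketch of constraint-based diagnosis (not the SSME system):
    # violated constraints implicate the components they mention.

    readings = {"pump_out_pressure": 40, "valve_in_pressure": 25, "valve_out_flow": 9}

    # Each constraint: (description, implicated components, check function).
    constraints = [
        ("pump outlet feeds valve inlet", {"pump", "pipe_a"},
         lambda r: abs(r["pump_out_pressure"] - r["valve_in_pressure"]) < 5),
        ("valve flow tracks inlet pressure", {"valve"},
         lambda r: r["valve_out_flow"] >= 0.3 * r["valve_in_pressure"]),
    ]

    suspects = set()
    for desc, comps, check in constraints:
        if not check(readings):
            print("violated:", desc)
            suspects |= comps

    print("candidate faults:", suspects or "none")   # here: pump or pipe_a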


2021
Vol 13 (1)
pp. 1-25
Author(s):  
Michael Loster ◽  
Ioannis Koumarelas ◽  
Felix Naumann

The integration of multiple data sources is a common problem in a large variety of applications. Traditionally, handcrafted similarity measures are used to discover, merge, and integrate multiple representations of the same entity—duplicates—into a large homogeneous collection of data. Often, these similarity measures do not cope well with the heterogeneity of the underlying dataset. In addition, domain experts are needed to manually design and configure such measures, which is both time-consuming and requires extensive domain expertise. We propose a deep Siamese neural network capable of learning a similarity measure that is tailored to the characteristics of a particular dataset. Owing to the properties of deep learning methods, we are able to eliminate the manual feature engineering process and thus considerably reduce the effort required for model construction. In addition, we show that it is possible to transfer knowledge acquired during the deduplication of one dataset to another, and thus significantly reduce the amount of data required to train a similarity measure. We evaluated our method on multiple datasets and compared our approach to state-of-the-art deduplication methods. Our approach outperforms competitors by up to +26 percent F-measure, depending on task and dataset. In addition, we show that knowledge transfer is not only feasible, but in our experiments led to an improvement in F-measure of up to +4.7 percent.
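
A minimal sketch of the Siamese idea in PyTorch follows; the paper's actual architecture, input features, and training regime are not reproduced here, and all dimensions are placeholders. Two record representations pass through the same encoder, and a contrastive loss pulls duplicate pairs together and pushes non-duplicates apart.

    # Minimal Siamese-network sketch (illustrative; not the paper's model).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseEncoder(nn.Module):
        def __init__(self, in_dim=32, emb_dim=16):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, emb_dim))

        def forward(self, a, b):
            # Both record representations pass through the same weights.
            return self.net(a), self.net(b)

    def contrastive_loss(ea, eb, label, margin=1.0):
        # label = 1 for duplicate pairs, 0 for non-duplicates.
        d = F.pairwise_distance(ea, eb)
        return (label * d.pow(2) +
                (1 - label) * F.relu(margin - d).pow(2)).mean()

    model = SiameseEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    a, b = torch.randn(8, 32), torch.randn(8, 32)   # stand-in record features
    label = torch.randint(0, 2, (8,)).float()
    ea, eb = model(a, b)
    opt.zero_grad()
    contrastive_loss(ea, eb, label).backward()
    opt.step()                                      # one training step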


2021
Vol 4
pp. 1-8
Author(s):  
Jesse Friend ◽  
Mathias Jahnke ◽  
Niels Walen ◽  
Gernot Ramminger

Abstract. Web applications that are high functioning, efficient, and meet the performance demands of the client are essential in modern cartographic workflows. With more and more complex spatial data being integrated into web applications, such as time-related features, it is essential to harmonize the means of data presentation so that the end product is aligned with the needs of the end-user. In this paper we present a Web GIS application, built as a microservice, which displays various time-series visualizations to the user to streamline intuitiveness and functionality. The prototype provides a solution that could help show how current web and spatial analysis methods can be combined to create visualizations that add value to existing spatial data in cartographic workflows.
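
To illustrate the microservice shape of such an application (a hypothetical sketch, not the authors' prototype or technology stack), a map client could fetch a station's observation series from a small JSON endpoint and plot it as a chart; the route, station data, and field names are invented.

    # Hypothetical time-series endpoint for a map client (Flask).
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Invented feature: one station with a small observation series.
    STATIONS = {
        "s1": {"lon": 11.57, "lat": 48.14,
               "series": [{"t": "2021-01-01", "v": 3.2},
                          {"t": "2021-01-02", "v": 4.1}]}
    }

    @app.route("/stations/<sid>/timeseries")
    def timeseries(sid):
        # A chart widget in the web map would fetch and plot this JSON.
        station = STATIONS.get(sid)
        if station is None:
            return jsonify(error="unknown station"), 404
        return jsonify(station["series"])

    if __name__ == "__main__":
        app.run(port=5000)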


Author(s):  
Amanda Galtman

Using XML as the source format for authoring technical publications creates opportunities to develop tools that provide analysis, author guidance, and visualization. This case study describes two web applications that take advantage of the XML source format of documents. The applications provide a browser-based tool for technical writers and editors in a 100-person documentation department of a software company. Compared to desktop tools, the web applications are more convenient for users and less affected by hard-to-predict inconsistencies among users' computers. One application analyzes file dependencies and produces custom reports that facilitate reorganizing files. The other helps authors visualize their network of topics in their documentation sets. Both applications rely on the XQuery language and its RESTXQ web API. The visualization application also uses JavaScript, including the powerful jQuery and D3 libraries. After discussing what the applications do and why, this paper describes some architectural highlights, including how the different technologies fit together and exchange data.
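
The applications described here are written in XQuery with RESTXQ; purely to illustrate the dependency-analysis idea in a self-contained way, the following Python sketch scans a hypothetical docs/ folder of XML files for href-style cross-references and reports each target's fan-in. The folder layout and attribute convention are assumptions, not details from the case study.

    # Illustrative dependency scan over an XML documentation set.
    import glob
    import xml.etree.ElementTree as ET

    def dependencies(path):
        """Collect the files a topic references via href attributes."""
        root = ET.parse(path).getroot()
        return {el.get("href") for el in root.iter() if el.get("href")}

    # Build a dependency map, then report how often each target is referenced.
    dep_map = {p: dependencies(p) for p in glob.glob("docs/*.xml")}
    fan_in = {}
    for src, targets in dep_map.items():
        for t in targets:
            fan_in.setdefault(t, []).append(src)

    for target, sources in sorted(fan_in.items()):
        print(f"{target} is referenced by {len(sources)} file(s)")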


MIS Quarterly
2021
Vol 45 (3)
pp. 1557-1580
Author(s):  
Elmira van den Broek
Anastasia Sergeeva
Marleen Huysman

The introduction of machine learning (ML) in organizations comes with the claim that algorithms will produce insights superior to those of experts by discovering the “truth” from data. Such a claim gives rise to a tension between the need to produce knowledge independent of domain experts and the need to remain relevant to the domain the system serves. This two-year ethnographic study focuses on how developers managed this tension when building an ML system to support the process of hiring job candidates at a large international organization. Despite the initial goal of getting domain experts “out of the loop,” we found that developers and experts arrived at a new hybrid practice that relied on a combination of ML and domain expertise. We explain this outcome as resulting from a process of mutual learning in which deep engagement with the technology triggered actors to reflect on how they produced knowledge. These reflections prompted the developers to iterate between excluding domain expertise from the ML system and including it. Contrary to common views that imply an opposition between ML and domain expertise, our study foregrounds their interdependence and as such shows the dialectic nature of developing ML. We discuss the theoretical implications of these findings for the literature on information technologies and knowledge work, information system development and implementation, and human–ML hybrids.


Author(s):  
Kyoungho An ◽  
Adam Trewyn ◽  
Aniruddha Gokhale ◽  
Shivakumar Sastry

Much of the existing literature on domain-specific modeling languages (DSMLs) focuses either on DSML design and its use in developing complex software systems (e.g., in enterprise and web applications), or on its use in physical systems (e.g., process control). With increasing focus on research and development of cyber-physical systems (CPS), such as autonomous automotive systems and process control systems, which tightly integrate cyber and physical artifacts, it becomes important to understand the need for and the roles played by DSMLs for such systems. One use of DSMLs for CPS is in the analysis and verification of different properties of the system. Many questions arise in this context: How are the cyber and physical artifacts represented in DSMLs? How can these DSMLs be used in analysis? This book chapter addresses these questions through a case study of reconfigurable conveyor systems as a representative example.
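
As a toy illustration of model-level analysis (not the chapter's DSML), a conveyor layout can be captured as a tiny transfer graph, with a reachability check standing in for the kind of property one might verify; the segment names and layout are invented.

    # Toy "model" of a conveyor layout: segment -> next possible segments.
    conveyors = {"in": ["s1"], "s1": ["s2", "diverter"],
                 "s2": ["out"], "diverter": ["s2"], "out": []}

    def reachable(model, start):
        """All segments reachable from `start` by following transfers."""
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(model[node])
        return seen

    # Verification-style question: can a package entering at "in" reach "out"?
    print("out" in reachable(conveyors, "in"))   # True for this layout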


Author(s):  
Janina Fengel

Business process modeling has become an accepted means for designing and describing business operations. However, due to dissimilar use of modeling languages and, even more importantly, of natural language for labeling model elements, models can differ. As a result, comparing models is a non-trivial task that currently must be performed manually. One of the major challenges is aligning the business semantics the models contain, which is an indispensable prerequisite for structural comparisons. To ease this workload, the authors present a novel approach for aligning business process models semantically in an automated manner. Semantic matching is enabled through a combination of ontology matching and linguistic information processing techniques. This provides a heuristic to support domain experts in identifying similarities or discrepancies.
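
A minimal sketch of the matching idea (far simpler than the authors' ontology-based system) might normalize element labels, map synonyms to a canonical form, and score token overlap; the synonym table and labels below are invented.

    # Toy semantic label matching: normalize, map synonyms, score overlap.
    SYNONYMS = {"purchase": "buy", "acquire": "buy", "bill": "invoice"}

    def tokens(label):
        words = label.lower().split()
        return {SYNONYMS.get(w, w) for w in words}

    def similarity(label_a, label_b):
        a, b = tokens(label_a), tokens(label_b)
        return len(a & b) / len(a | b)       # Jaccard coefficient

    # Two activity labels from different process models:
    print(similarity("purchase goods", "buy goods"))   # 1.0 after synonym mapping
    print(similarity("send invoice", "create bill"))   # 0.33: "bill" maps to "invoice"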


Author(s):  
Bart-Jan Hommes

Meta-modeling is a well-known approach for capturing modeling methods and techniques. A meta-model can serve as a basis for quantitative evaluation: by means of a number of formal metrics defined over the meta-model, methods and techniques can be assessed quantitatively. Existing meta-modeling languages and measurement schemes, however, do not allow the explicit modeling of so-called multi-modeling techniques, i.e., techniques that offer a coherent set of aspect modeling techniques to model different aspects of a certain phenomenon. As a consequence, existing approaches lack metrics to quantitatively assess aspects that are particular to multi-modeling techniques. In this chapter, a modeling language for modeling multi-modeling techniques is proposed, as well as metrics for evaluating the coherent set of aspect modeling techniques that constitute a multi-modeling technique.
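
As a small illustration (with an invented metric, not one from the chapter), a multi-modeling technique can be represented as a set of aspect techniques, each contributing meta-model concepts, over which simple coherence measures can then be computed:

    # Invented example: aspect techniques and their meta-model concepts.
    aspects = {
        "process": {"Activity", "Event", "Actor"},
        "data":    {"Entity", "Attribute", "Actor"},
        "org":     {"Actor", "Role", "Unit"},
    }

    def coherence(technique):
        """Fraction of concepts shared by at least two aspect techniques."""
        all_concepts = set().union(*technique.values())
        shared = {c for c in all_concepts
                  if sum(c in t for t in technique.values()) >= 2}
        return len(shared) / len(all_concepts)

    print("concepts per aspect:", {k: len(v) for k, v in aspects.items()})
    print(f"coherence: {coherence(aspects):.2f}")   # only "Actor" is shared: 1/7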


Author(s):  
Joe Tekli

W3C’s XML (eXtensible Markup Language) has recently gained unparalleled importance as a fundamental standard for efficient data management and exchange. The use of XML covers data representation and storage, database information interchange, data filtering, as well as Web application interaction and interoperability. XML has been intensively exploited in the multimedia field as an effective and standard means for indexing, storing, and retrieving complex multimedia objects. SVG, SMIL, X3D, and MPEG-7 are only some examples of XML-based multimedia data representations.

With the ever-increasing Web exploitation of XML, there is an emergent need to automatically process XML documents and grammars for similarity classification and clustering, information extraction, and search functions. All these applications require some notion of structural similarity, since XML represents semi-structured data. In this area, most work has focused on estimating similarity between XML documents (i.e., the data layer); few efforts have been dedicated to comparing XML grammars (i.e., the type layer). Computing the structural similarity between XML documents is relevant in several scenarios, such as change management (Chawathe, Rajaraman, Garcia-Molina, & Widom, 1996; Cobéna, Abiteboul, & Marian, 2002), XML structural query systems that find and rank results according to their similarity (Schlieder, 2001; Zhang, Li, Cao, & Zhu, 2003), as well as the structural clustering of XML documents gathered from the Web (Dalamagas, Cheng, Winkel, & Sellis, 2006; Nierman & Jagadish, 2002). On the other hand, estimating similarity between XML grammars is useful for data integration purposes, in particular the integration of DTDs/schemas that contain nearly or exactly the same information but are constructed using different structures (Doan, Domingos, & Halevy, 2001; Melnik, Garcia-Molina, & Rahm, 2002). It is also exploited in data warehousing (mapping data sources to warehouse schemas) as well as in XML data maintenance and schema evolution, where we need to detect differences/updates between different versions of a given grammar/schema in order to revalidate the corresponding XML documents (Rahm & Bernstein, 2001).

The goal of this article is to briefly review XML grammar structural similarity approaches. We provide a unified view of the problem, assessing the different aspects and techniques related to XML grammar comparison. The remainder of this article is organized as follows. The second section presents an overview of XML grammar similarity, otherwise known as XML schema matching. The third section reviews the state of the art in XML grammar comparison methods. The fourth section discusses the main criteria characterizing the effectiveness of XML grammar similarity approaches. Conclusions and current research directions are covered in the last section.
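
As a small taste of the document-layer techniques such surveys cover (an illustrative sketch, not any particular surveyed method), one simple structural similarity compares the sets of root-to-node tag paths of two XML documents:

    # Toy structural similarity: Jaccard overlap of root-to-node tag paths.
    import xml.etree.ElementTree as ET

    def paths(element, prefix=""):
        """Collect every root-to-node tag path in the document tree."""
        here = f"{prefix}/{element.tag}"
        result = {here}
        for child in element:
            result |= paths(child, here)
        return result

    def structural_similarity(xml_a, xml_b):
        a = paths(ET.fromstring(xml_a))
        b = paths(ET.fromstring(xml_b))
        return len(a & b) / len(a | b)

    doc1 = "<book><title/><author/></book>"
    doc2 = "<book><title/><year/></book>"
    print(structural_similarity(doc1, doc2))   # 0.5: shares /book and /book/title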

