A semi-automated BPMN-based framework for detecting conflicts between security, data-minimization, and fairness requirements

2020 ◽  
Vol 19 (5) ◽  
pp. 1191-1227 ◽  
Author(s):  
Qusai Ramadan ◽  
Daniel Strüber ◽  
Mattia Salnitri ◽  
Jan Jürjens ◽  
Volker Riediger ◽  
...  

Abstract Requirements are inherently prone to conflicts. Security, data-minimization, and fairness requirements are no exception. Importantly, undetected conflicts between such requirements can lead to severe effects, including privacy infringement and legal sanctions. Detecting conflicts between security, data-minimization, and fairness requirements is a challenging task, as such conflicts are context-specific and their detection requires a thorough understanding of the underlying business processes. For example, a process may require anonymous execution of a task that writes data into a secure data storage, where the identity of the writer is needed for the purpose of accountability. Moreover, conflicts do not only arise from trade-offs between requirements elicited from the stakeholders, but also from misinterpretation of elicited requirements while implementing them in business processes, leading to a misalignment between the data subjects’ requirements and their specifications. Both types of conflicts are substantial challenges for conflict detection. To address these challenges, we propose a BPMN-based framework that supports: (i) the design of business processes considering security, data-minimization, and fairness requirements; (ii) the encoding of such requirements as reusable, domain-specific patterns; (iii) the checking of alignment between the encoded requirements and annotated BPMN models based on these patterns; and (iv) the detection of conflicts between the specified requirements in the BPMN models based on a catalog of domain-independent anti-patterns. The security requirements were reused from SecBPMN2, a security-oriented BPMN 2.0 extension, while the fairness and data-minimization parts are new. To formulate our patterns and anti-patterns, we extended a graphical query language called SecBPMN2-Q.
We report on the feasibility and the usability of our approach based on a case study featuring a healthcare management system, and an experimental user study.
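The anti-pattern check described in the abstract can be approximated in a small sketch. This is an illustration only: the actual framework uses graphical SecBPMN2-Q queries over annotated BPMN models, and all task, store, and annotation names below are hypothetical.

```python
# Illustrative sketch of a domain-independent anti-pattern check:
# an anonymously executed task writes into a data store whose writes
# must be accountable (writer identity required). Names are made up.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    annotations: set = field(default_factory=set)   # e.g. {"anonymity"}
    writes_to: list = field(default_factory=list)   # names of data stores

@dataclass
class DataStore:
    name: str
    annotations: set = field(default_factory=set)   # e.g. {"accountability"}

def find_conflicts(tasks, stores):
    """Return (task, store) pairs matching the anonymity/accountability
    anti-pattern."""
    store_index = {s.name: s for s in stores}
    conflicts = []
    for t in tasks:
        if "anonymity" not in t.annotations:
            continue
        for store_name in t.writes_to:
            if "accountability" in store_index[store_name].annotations:
                conflicts.append((t.name, store_name))
    return conflicts

tasks = [Task("submit_feedback", {"anonymity"}, ["patient_records"])]
stores = [DataStore("patient_records", {"accountability"})]
print(find_conflicts(tasks, stores))  # [('submit_feedback', 'patient_records')]
```

In the paper's setting, the same check is expressed declaratively as a reusable anti-pattern rather than hard-coded logic.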

2019 ◽  
Vol 13 (02) ◽  
pp. 207-227 ◽  
Author(s):  
Norman Köster ◽  
Sebastian Wrede ◽  
Philipp Cimiano

Efficient storage and querying of long-term human–robot interaction data requires application developers to have an in-depth understanding of the involved domains. Creating syntactically and semantically correct queries during development is an error-prone task which can immensely impact the interaction experience of humans with robots and artificial agents. To address this issue, we present and evaluate a model-driven software development approach to create a long-term storage system to be used in highly interactive HRI scenarios. We created multiple domain-specific languages that allow us to model the domain and seamlessly embed its concepts into a query language. Along with corresponding model-to-model and model-to-text transformations, we generate a fully integrated workbench facilitating data storage and retrieval. It supports developers in the query design process and allows in-tool query execution without the need for prior in-depth knowledge of the domain. We evaluated our work in an extensive user study and show that the generated tool yields multiple advantages compared to the usual query design approach.
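The benefit of embedding domain concepts into the query language can be sketched as follows. This toy uses a hypothetical HRI domain model; the paper's toolchain generates a full workbench from several DSLs, not this fragment.

```python
# Toy model-to-text idea: the domain model both validates a query and
# generates its textual form, so invalid queries are rejected at
# generation time. Concepts and fields are hypothetical.
DOMAIN_MODEL = {
    "Utterance": {"speaker", "text", "timestamp"},
    "Gesture": {"actor", "kind", "timestamp"},
}

def generate_query(concept, fields, condition=None):
    """Emit a query string only if the concept and fields exist in the
    domain model."""
    known = DOMAIN_MODEL.get(concept)
    if known is None:
        raise ValueError(f"unknown concept: {concept}")
    unknown = set(fields) - known
    if unknown:
        raise ValueError(f"unknown fields for {concept}: {sorted(unknown)}")
    query = f"SELECT {', '.join(sorted(fields))} FROM {concept}"
    if condition:
        query += f" WHERE {condition}"
    return query

print(generate_query("Utterance", ["speaker", "text"], "timestamp > 100"))
```

Catching such errors before execution is what spares developers the in-depth domain knowledge the abstract mentions.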


Author(s):  
Christin Katharina Kreutz ◽  
Michael Wolz ◽  
Jascha Knack ◽  
Benjamin Weyers ◽  
Ralf Schenkel

Abstract Information access to bibliographic metadata needs to be uncomplicated, as users may not benefit from complex and potentially richer data that may be difficult to obtain. Sophisticated research questions including complex aggregations could be answered with complex SQL queries. However, this comes at the cost of high complexity, requiring a high level of expertise even from trained programmers. A domain-specific query language can provide a straightforward solution to this problem. Although less generic, it can support users not familiar with query construction in the formulation of complex information needs. In this paper, we present and evaluate SchenQL, a simple and applicable query language that is accompanied by a prototypical GUI. SchenQL focuses on querying bibliographic metadata using the vocabulary of domain experts. The easy-to-learn domain-specific query language is suitable for domain experts as well as casual users while still providing the possibility to answer complex information demands. Query construction and information exploration are supported by a prototypical GUI. We present an evaluation of the complete system: different variants for executing SchenQL queries are benchmarked; interviews with domain experts and a bipartite quantitative user study demonstrate SchenQL’s suitability and high level of user acceptance.


2019 ◽  
Vol 41 (3) ◽  
pp. 404-419
Author(s):  
Caitlin Blaser Mapitsa ◽  
Tara Polzer Ngwato

As global discussions of evaluation standards become more contextually nuanced, culturally responsive conceptions of ethics have not been sufficiently discussed. In academic social research, ethical clearance processes have been designed to protect vulnerable people from harm related to participation in a research project. This article expands the ambit of ethical protection thinking and proposes a relational ethics approach for evaluation practitioners. This approach centers an analysis of power relations among and within all the different stakeholder groups in order to establish, in a context-specific manner, which stakeholders are vulnerable and in need of protection. The approach also contextualizes the nature of “the public good” as part of an ethical consideration of interest trade-offs during evaluations. The discussion is informed by our experiences in African contexts and speaks to the “Made in Africa” research agenda but is also relevant to other global contexts where alternatives to “developed country” ontological assumptions about the roles of researchers and participants and the nature of vulnerability are being reconsidered.


2013 ◽  
Vol 10 (4) ◽  
pp. 1585-1620 ◽  
Author(s):  
Verislav Djukic ◽  
Ivan Lukovic ◽  
Aleksandar Popovic ◽  
Vladimir Ivancevic

In this paper, we present an approach to the development and application of domain-specific modeling (DSM) tools in the model-based management of business processes. The level of Model-to-Text (M2T) transformations in the standard architecture for domain-specific modeling solutions is extended with action reports, which allow synchronization between models, generated code, and target interpreters. The basic idea behind the approach is to use M2T transformation languages to construct submodels, client application components, and operations on target interpreters. In this manner, M2T transformations may be employed to support not only the generation of target platform code from domain-specific graphical language (DSGL) models but also the straightforward use of models and appropriate DSM tools as client applications. The applicability of action reports is demonstrated by examples from document engineering, and measurement and control systems.


2020 ◽  
Vol 245 ◽  
pp. 04044
Author(s):  
Jérôme Fulachier ◽  
Jérôme Odier ◽  
Fabian Lambert

This document describes the design principles of the Metadata Querying Language (MQL), a metadata-oriented domain-specific language implemented in the ATLAS Metadata Interface (AMI) that allows querying databases without knowing the relations between tables. With this simplified yet generic grammar, MQL permits writing complex queries more simply than with the Structured Query Language (SQL).
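The core convenience of "querying without knowing the relations between tables" can be sketched as automatic join-path resolution from schema metadata. The schema, field names, and single-join logic below are hypothetical simplifications, not AMI's actual grammar or implementation.

```python
# Sketch: the user names only fields; the engine locates their tables and
# supplies the SQL join from foreign-key metadata. Schema is made up.
FOREIGN_KEYS = {  # child table -> (fk column, parent table, parent key)
    "dataset": ("project_id", "project", "id"),
}
COLUMNS = {
    "project": {"id", "name"},
    "dataset": {"id", "project_id", "events"},
}

def table_of(field):
    for table, cols in COLUMNS.items():
        if field in cols:
            return table
    raise ValueError(f"unknown field: {field}")

def to_sql(select_field, where_field, where_value):
    """Resolve tables from field names; add the join if two tables differ."""
    t1, t2 = table_of(select_field), table_of(where_field)
    sql = f"SELECT {t1}.{select_field} FROM {t1}"
    if t1 != t2:
        # find the foreign key linking the two tables (either direction)
        child = t2 if t2 in FOREIGN_KEYS else t1
        fk_col, parent, parent_key = FOREIGN_KEYS[child]
        sql += f" JOIN {t2} ON {child}.{fk_col} = {parent}.{parent_key}"
    sql += f" WHERE {t2}.{where_field} = '{where_value}'"
    return sql

# An MQL-style request "SELECT name WHERE events = '1000'" becomes:
print(to_sql("name", "events", "1000"))
```

A real engine must also handle multi-hop join paths and ambiguity; resolving those is exactly what a hand-written SQL query forces onto the user.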


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Jakub Galgonek ◽  
Jiří Vondrášek

Abstract The Resource Description Framework (RDF), together with well-defined ontologies, significantly increases data interoperability and usability. The SPARQL query language was introduced to retrieve requested RDF data and to explore links between them. Among other useful features, SPARQL supports federated queries that combine multiple independent data source endpoints. This allows users to obtain insights that are not possible using only a single data source. Owing to all of these useful features, many biological and chemical databases present their data in RDF and support SPARQL querying. In our project, we primarily focused on the PubChem, ChEMBL and ChEBI small-molecule datasets. These datasets are already being exported to RDF by their creators. However, none of them has an official and currently supported SPARQL endpoint. This omission makes it difficult to construct complex or federated queries that could access all of the datasets, thus underutilising the main advantage of the availability of RDF data. Our goal is to address this gap by integrating the datasets into one database called the Integrated Database of Small Molecules (IDSM) that will be accessible through a SPARQL endpoint. Beyond that, we will also focus on increasing the mutual interoperability of the datasets. To realise the endpoint, we implemented an in-house SPARQL engine based on the PostgreSQL relational database for data storage. In our approach, data are stored in the traditional relational form, and the SPARQL engine translates incoming SPARQL queries into equivalent SQL queries. An important feature of the engine is that it optimises the resulting SQL queries. Together with optimisations performed by PostgreSQL, this allows efficient evaluation of SPARQL queries. The endpoint provides not only querying of the datasets, but also compound substructure and similarity searches supported by our Sachem project.
Although the endpoint is accessible from an internet browser, it is mainly intended to be used for programmatic access by other services, for example as a part of federated queries. For regular users, we offer a rich web application called ChemWebRDF using the endpoint. The application is publicly available at https://idsm.elixir-czech.cz/chemweb/.
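The SPARQL-to-SQL translation idea can be sketched over a single triple table. This is a drastic simplification of the actual IDSM engine (which stores data in conventional relational form and optimises the generated SQL); the data and predicate names are made up for illustration.

```python
import sqlite3

def bgp_to_sql(patterns):
    """Translate a basic graph pattern - triples (s, p, o) where strings
    starting with '?' are variables - into one SQL query over a triple
    table triples(s, p, o). Each triple pattern becomes a table alias;
    shared variables become join conditions, constants become filters."""
    selects, froms, wheres, bound = [], [], [], {}
    for i, (s, p, o) in enumerate(patterns):
        alias = f"t{i}"
        froms.append(f"triples {alias}")
        for col, term in (("s", s), ("p", p), ("o", o)):
            ref = f"{alias}.{col}"
            if term.startswith("?"):
                if term in bound:
                    wheres.append(f"{ref} = {bound[term]}")  # join condition
                else:
                    bound[term] = ref
                    selects.append(f'{ref} AS "{term[1:]}"')
            else:
                wheres.append(f"{ref} = '{term}'")           # constant filter
    sql = f"SELECT {', '.join(selects)} FROM {', '.join(froms)}"
    if wheres:
        sql += " WHERE " + " AND ".join(wheres)
    return sql

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
conn.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("mol1", "hasFormula", "C6H6"),
    ("mol1", "hasName", "benzene"),
])
sql = bgp_to_sql([("?m", "hasFormula", "C6H6"), ("?m", "hasName", "?name")])
print(conn.execute(sql).fetchall())  # [('mol1', 'benzene')]
```

IDSM goes well beyond this sketch by mapping triples onto domain-specific relational tables and rewriting the generated SQL for efficiency, but the variable-binding and self-join mechanics are the same in spirit.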


Author(s):  
Rusul Yousif Alsalhee ◽  
Abdulhussein Mohsin Abdullah

The Holy Quran is full of inspiring stories and lessons, and understanding it requires additional attention when it comes to search and information retrieval. Many works have been carried out in the Holy Quran field, but some of them dealt with only a part of the Quran or covered it only in general terms, and some did not support semantic search techniques or make Quranic knowledge understandable to both people and computers. Others adopted data analysis, processing, and ontology techniques that were directed more toward linguistic aspects than semantic ones. Another weakness of previous works is that they relied on manual ontology entry, which is costly and time-consuming. In this paper, we construct an ontology of Quranic stories. Its construction relies on the MappingMaster domain-specific language (MappingMaster DSL), through which concepts and individuals can be created and linked to the ontology automatically from Excel sheets. The conceptual structure was built using the Object Role Modeling (ORM) language. The SPARQL query language was used to test and evaluate the proposed ontology by asking many competency questions; the ontology answered all of these questions well.


2021 ◽  
Vol 18 ◽  
pp. 569-580
Author(s):  
Kateryna Kraus ◽  
Nataliia Kraus ◽  
Oleksandr Manzhura

The purpose of the research is to present the features of digitization of business processes in enterprises as a foundation on which the gradual formation of Industry 4.0 and the search for economic growth in the new virtual reality can proceed, which has every chance of being a decisive step in implementing a digital strategy for Ukraine and developing its innovation ecosystem. Key problems that arise during the digitalization of business processes in enterprises are presented, among which are: the historical orientation of production to mass output, “running” sizes, and large batches; large-scale production load; and the complexity of cooperation and logistics between production sites. It is determined that high-quality and effective tools of innovation-digital transformation in the conditions of virtual reality should include: a single system of on-line order management for all enterprises (application registration – technical expertise – planning – performance control – shipment); Smart Factory, Predictive Maintenance, IIoT, CRM, and SCM. Features of digital transformation in the formation of enterprises of the Industry 4.0 ecosystem are revealed. The capabilities and benefits of using the Azure cloud platform in enterprises, which includes more than 200 products and cloud services, are analyzed. Azure supports open-source technologies, so businesses can use the tools and technologies they prefer and find most useful. After a thorough analysis of the acceleration of deep digitalization of business processes by enterprises, the authors propose putting into practice the Aruba solution for contact tracing in the fight against COVID-19. Aruba technology helps locate contacts, allowing flexible solutions to be implemented based on the Aruba Partner Ecosystem using a USB interface.
It is also proposed to use SYNTEGRA, a data integration service that provides interactive analytics, data models, and dashboards, in order to accelerate the modernization of data storage and management, optimize reporting in the company, and obtain real-time analytics. The possibilities of using the Azure cloud platform during the digitization of business processes of enterprises of the Industry 4.0 ecosystem in the conditions of virtual reality are determined.


Author(s):  
Kayalvili S ◽  
Sowmitha V

Cloud computing enables users to entrust their sensitive data to cloud service providers to achieve scalable services on demand. Outstanding security requirements arising from this means of data storage and management include data security and privacy. Attribute-Based Encryption (ABE) is an efficient encryption system with fine-grained access control for encrypting outsourced data in cloud computing. Since data outsourcing systems require a flexible access control approach, problems arise when sharing confidential corporate data in cloud computing. User identity needs to be managed globally, and access policies can be defined by several authorities. Data is dual-encrypted for additional security and to maintain decentralization in a multi-authority environment.
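The dual-encryption idea can be illustrated with a toy two-layer scheme in which recovering the plaintext requires both keys, so no single authority can decrypt alone. This is not the ABE scheme the abstract refers to, and the XOR keystream below is for illustration only, not a secure cipher.

```python
import hashlib

# Toy illustration of dual (two-layer) encryption. NOT secure and NOT
# attribute-based encryption - it only shows that decryption needs both
# keys, mirroring the multi-authority idea in the abstract.

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic byte stream from a key (illustrative only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Apply one encryption layer; XOR is its own inverse."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"confidential corporate record"
inner = xor_layer(plaintext, b"data-owner-key")       # first layer
outer = xor_layer(inner, b"cloud-authority-key")      # second layer

# Both layers must be removed; either key alone yields only garbage.
recovered = xor_layer(xor_layer(outer, b"cloud-authority-key"),
                      b"data-owner-key")
print(recovered == plaintext)  # True
```

In a real multi-authority ABE deployment, the layers would be independent ciphertexts under attribute-based keys issued by different authorities rather than symmetric XOR layers.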

