formal specifications
Recently Published Documents


TOTAL DOCUMENTS

504
(FIVE YEARS 43)

H-INDEX

21
(FIVE YEARS 2)

2021 ◽  
Vol 72 ◽  
pp. 1029-1082
Author(s):  
George K. Atia ◽  
Andre Beckus ◽  
Ismail Alkhouri ◽  
Alvaro Velasquez

The planning domain has seen increased interest in the formal synthesis of decision-making policies. This formal synthesis typically entails finding a policy which satisfies formal specifications expressed in some well-defined logic. While many such logics have been proposed, with varying degrees of expressiveness and complexity in their capacity to capture desirable agent behavior, their value is limited when deriving decision-making policies which satisfy certain types of asymptotic behavior in general system models. In particular, we are interested in specifying constraints on the steady-state behavior of an agent, which captures the proportion of time the agent spends in each state as it interacts with its environment for an indefinite period of time. This is sometimes called the average or expected behavior of the agent, and the associated planning problem faces significant challenges unless strong restrictions are imposed on the underlying model in terms of the connectivity of its graph structure. In this paper, we explore this steady-state planning problem: deriving a decision-making policy for an agent such that constraints on its steady-state behavior are satisfied. A linear programming solution for the general case of multichain Markov Decision Processes (MDPs) is proposed, and we prove that optimal solutions to the proposed programs yield stationary policies with rigorous guarantees of behavior.
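The kind of linear program the abstract refers to can be illustrated with the standard occupancy-measure LP for average-reward MDPs. The sketch below is a minimal illustration, not the paper's multichain construction: it assumes an invented two-state, two-action MDP and uses `scipy.optimize.linprog` to maximize long-run reward subject to a steady-state constraint (at least 30% of time in state 0).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-state, 2-action MDP (illustrative only).
# P[s, a, s'] = transition probability, r[s, a] = reward.
nS, nA = 2, 2
P = np.zeros((nS, nA, nS))
P[0, 0] = [1.0, 0.0]   # state 0, action "stay"
P[0, 1] = [0.0, 1.0]   # state 0, action "go"
P[1, 0] = [1.0, 0.0]   # state 1, action "back"
P[1, 1] = [0.0, 1.0]   # state 1, action "stay"
r = np.array([[0.0, 0.0], [1.0, 1.0]])  # reward for occupying state 1

# Variables: occupancy measures x[s, a], flattened to length nS*nA.
# Balance: sum_a x[s,a] - sum_{s',a'} P[s',a',s] * x[s',a'] = 0 for each s.
A_eq = np.zeros((nS + 1, nS * nA))
for s in range(nS):
    for s2 in range(nS):
        for a in range(nA):
            A_eq[s, s2 * nA + a] = (s2 == s) - P[s2, a, s]
A_eq[nS, :] = 1.0                      # normalization: sum of x = 1
b_eq = np.append(np.zeros(nS), 1.0)

# Steady-state constraint: at least 30% of time in state 0,
# written as -sum_a x[0,a] <= -0.3 for linprog's A_ub form.
A_ub = np.zeros((1, nS * nA))
A_ub[0, 0:nA] = -1.0
b_ub = np.array([-0.3])

res = linprog(-r.flatten(), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(nS, nA)
pi = x / x.sum(axis=1, keepdims=True)  # stationary policy pi(a|s)
print(x.sum(axis=1))                   # steady-state distribution over states
```

Here the optimum spends exactly 30% of the time in state 0 and the remaining 70% in the reward state; note that in the multichain case, recovering a single stationary policy from such an occupancy measure is exactly the difficulty the paper addresses.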


2021 ◽  
Vol 11 (22) ◽  
pp. 11061
Author(s):  
Juan Francisco Mendoza-Moreno ◽  
Luz Santamaria-Granados ◽  
Anabel Fraga Vázquez ◽  
Gustavo Ramirez-Gonzalez

Tourist traceability is the analysis of the set of actions, procedures, and technical measures that allows us to identify and record the space–time causality of a tourist's journey, from the beginning to the end of the tourist product chain. Moreover, the traceability of tourists has implications for infrastructure, transport, products, marketing, the commercial viability of the industry, and the management of the destination's social, environmental, and cultural impact. To this end, a tourist traceability system requires a knowledge base for processing elements such as functions, objects, events, and the logical connectors among them. A knowledge base provides us with information on the preparation, planning, and implementation or operation stages. In this regard, unifying tourism terminology in a traceability system is a challenge, because we need a central repository that promotes standards for tourists and suppliers in forming a formal body of knowledge representation. Some studies are related to the construction of ontologies in tourism, but none focus on tourist traceability systems. To address this gap, we propose OntoTouTra, an ontology that uses formal specifications to represent knowledge of tourist traceability systems. This paper outlines the development of the OntoTouTra ontology and how we gathered and processed data from ubiquitous computing using Big Data analysis techniques.


2021 ◽  
Author(s):  
Roman Jaramillo Cajica ◽  
Raul Ernesto Gonzalez Torres ◽  
Pedro Mejia Alvarez

2021 ◽  
Author(s):  
◽  
Chia-wen Fang

Ontologies are formal specifications of shared conceptualizations of a domain. Important applications of ontologies include distributed knowledge-based systems, such as the Semantic Web, and the evaluation of modelling languages, e.g. for business process or conceptual modelling. These applications require formal ontologies of good quality. In this thesis, we present a multi-method ontology evaluation methodology, consisting of two techniques (sentence verification task and recall) based on principles of cognitive psychology, to test how well a specification of a formal ontology corresponds to the ontology users' conceptualization of a domain. Two experiments were conducted, each evaluating the SUMO ontology and WordNet with one experimental technique, as demonstrations of the multi-method evaluation methodology. We also tested the applicability of the two evaluation techniques by conducting a replication study for each. The replication studies obtained findings that point in the same direction as the original studies, although no significance was achieved. Overall, the evaluation using the multi-method methodology suggests that neither of the two ontologies we examined is a good specification of the conceptualization of the domain. Both the terminology and the structure of the ontologies may benefit from improvement.
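The scoring side of the recall technique mentioned above can be sketched as a simple coverage computation. The term lists below are invented for illustration; the thesis's actual materials and scoring procedure may differ.

```python
# Sketch of scoring a "recall" evaluation: participants freely list domain
# terms, and we measure how well the ontology's vocabulary covers them.
# All term lists below are invented for illustration.

ontology_terms = {"process", "event", "agent", "object", "quantity"}

# Terms recalled by (hypothetical) participants when prompted with the domain.
participant_recalls = [
    {"process", "task", "agent", "resource"},
    {"event", "object", "deadline", "agent"},
]

def coverage(ontology: set, recalled: set) -> float:
    """Fraction of participant-recalled terms present in the ontology."""
    return len(ontology & recalled) / len(recalled)

scores = [coverage(ontology_terms, r) for r in participant_recalls]
print(sum(scores) / len(scores))  # mean coverage across participants
```

A low mean coverage would suggest, as the thesis concludes for SUMO and WordNet, that the ontology's terminology diverges from users' conceptualization of the domain.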


2021 ◽  
Vol 24 (5) ◽  
pp. 902-922
Author(s):  
Алексей Вячеславович Никешин ◽  
Виктор Зиновьевич Шнитман

This paper presents our experience verifying server implementations of version 1.3 of the TLS cryptographic protocol. TLS is a widely used cryptographic protocol designed to create secure data transmission channels, providing the necessary functionality for this: confidentiality of the transmitted data, data integrity, and authentication of the parties. Version 1.3 of the TLS protocol was introduced in August 2018 and has a number of significant differences compared to the previous version, 1.2. A number of TLS developers have already included support for the latest version in their implementations. These circumstances motivate research into the verification and security of implementations of the new TLS protocol. We used a new test suite for verifying implementations of TLS 1.3 for compliance with Internet specifications, developed on the basis of RFC 8446 using UniTESK technology and mutation testing methods. The current work is part of the TLS 1.3 protocol verification project and covers some of the additional functionality and optional protocol extensions. To test implementations for compliance with the formal specifications, UniTESK technology is used, which provides test automation tools based on finite state machines. The states of the system under test define the states of the state machine, and the test stimuli are the transitions of this machine. When a transition is performed, the specified stimulus is passed to the implementation under test, after which the implementation's reactions are recorded and a verdict on the compliance of the observed behavior with the specification is made automatically. Mutation testing methods are used to detect non-standard behavior of the system under test by transmitting incorrect data.
Changes are made to the protocol exchange flow created in accordance with the specification: either the values of message fields formed on the basis of the developed protocol model are changed, or the order of messages in the exchange flow is changed. The protocol model allows changes to the data flow at any stage of the network exchange, which lets the test scenario pass through all the significant states of the protocol and, in each such state, test the implementation according to the specified program. So far, several implementations have been found to deviate from the specification. The presented approach has proven effective in several of our projects on testing network protocols, detecting various deviations from the specification and other errors.
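The testing scheme described above can be sketched as a tiny state-machine harness with one mutation step. The three-message "protocol" and all names below are invented for illustration; the actual test suite drives TLS 1.3 per RFC 8446 via UniTESK.

```python
# (state, stimulus) -> (next_state, expected_reaction); a toy handshake spec.
SPEC = {
    ("start",      "client_hello"): ("negotiated", "server_hello"),
    ("negotiated", "finished"):     ("connected",  "finished"),
    ("connected",  "app_data"):     ("connected",  "app_data"),
}

class ToyServer:
    """Implementation under test: a server that follows the toy handshake."""
    def __init__(self):
        self.state = "start"

    def send(self, msg):
        key = (self.state, msg)
        if key in SPEC:
            self.state, reaction = SPEC[key]
            return reaction
        return "alert"  # unexpected or corrupted input is rejected

def run_nominal(server, stimuli):
    """Drive the spec FSM and check each observed reaction against it."""
    state = "start"
    for msg in stimuli:
        reaction = server.send(msg)
        state, expected = SPEC[(state, msg)]
        if reaction != expected:
            return "fail"
    return "pass"

def run_mutated(server, stimuli, mutate_at):
    """Mutation testing: corrupt one message in the flow; a conforming
    implementation must reject it (here, by answering 'alert')."""
    for i, msg in enumerate(stimuli):
        sent = "garbled_" + msg if i == mutate_at else msg
        reaction = server.send(sent)
        if i == mutate_at:
            return "pass" if reaction == "alert" else "fail"

scenario = ["client_hello", "finished", "app_data"]
print(run_nominal(ToyServer(), scenario))               # happy-path verdict
print(run_mutated(ToyServer(), scenario, mutate_at=1))  # mutation verdict
```

The real framework generalizes this pattern: the model can mutate field values or message order at any protocol state, and verdicts are issued automatically per transition.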


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-26
Author(s):  
Nikhil Kumar Singh ◽  
Indranil Saha

The growing use of complex Cyber-Physical Systems (CPSs) in safety-critical applications has led to demand for the automatic synthesis of robust feedback controllers that satisfy a given set of formal specifications. Controller synthesis from a high-level specification is an NP-hard problem. We propose a heuristic-based automated technique that synthesizes feedback controllers guided by Signal Temporal Logic (STL) specifications. Our technique involves rigorous analysis of the traces generated by the closed-loop system, matrix decomposition, and an incremental multi-parameter tuning procedure. In case a controller cannot be found that satisfies all the specifications, we propose a technique for modifying the unsatisfiable specifications so that the controller synthesized for the satisfiable subset of specifications also satisfies the modified specifications. We demonstrate our technique on eleven controllers used as standard closed-loop control system benchmarks, including complex controllers with multiple independent or nested control loops. Our experimental results establish that the proposed algorithm can automatically solve complex feedback controller synthesis problems within a few minutes.
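The simulate-measure-tune loop underlying such techniques can be sketched in miniature. The scalar plant, the settling specification, and the single-gain tuning rule below are simplified stand-ins, not the paper's multi-parameter algorithm.

```python
# Toy synthesis loop: simulate the closed-loop system, compute the STL
# robustness on the trace, and raise a single feedback gain until the
# specification holds. Plant and spec are invented illustrations.

def simulate(k, x0=1.0, dt=0.01, steps=500):
    """Closed-loop scalar plant x' = -k*x under proportional feedback."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1.0 - dt * k))
    return xs

def robustness(trace, bound=0.1, after=300):
    """Quantitative semantics of G_[after,end](|x| < bound): the minimum
    margin over the trace suffix; positive iff the spec is satisfied."""
    return min(bound - abs(x) for x in trace[after:])

# Incremental single-parameter tuning: increase the gain until the
# specification's robustness becomes positive.
k = 0.1
while robustness(simulate(k)) <= 0:
    k += 0.1
print(round(k, 1), robustness(simulate(k)) > 0)
```

The robustness value does double duty: its sign decides satisfaction, while its magnitude tells the tuner how far the current parameters are from meeting the specification.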


2021 ◽  
Author(s):  
Jochen Jung ◽  
David Alexander Back ◽  
Julian Scherer ◽  
Felix Fellmer ◽  
Georg Osterhoff ◽  
...  

BACKGROUND The establishment of smartphones as the most important communication medium of the 21st century has led to the use of mobile messenger services in medical contexts as well. However, the use of WhatsApp, the most widely used smartphone messenger app, in a medical treatment context represents an incalculable risk from a legal point of view (data protection) and can ultimately lead to a violation of medical confidentiality, with potential legal consequences. OBJECTIVE This study therefore aimed to assess which messenger applications offer a secure alternative for communicating patient-related data. METHODS A systematic literature and online app store search was conducted to identify secure messenger services. These had to comply with currently valid technical and legal formal specifications in terms of data security as well as provide usability and functions similar to WhatsApp. RESULTS A total of 13 messenger apps were identified. However, only 5 apps (Famedly, JOIN, Siilo, Threema, and Trustner) met the formal requirements for a secure communication medium. JOIN is the only service that has been approved by the FDA and is classified as a medical device. CONCLUSIONS The current practice of rather random and unstructured use of messenger apps in everyday hospital life should be a thing of the past. From today's perspective, the apps Famedly, JOIN, Siilo, Threema, and Trustner are recommended. They have considerable advantages over the apps used in everyday clinical practice today (e.g., WhatsApp). Rapid developments in the software market will certainly drive further changes, so the recommendation formulated here is only a snapshot.


Author(s):  
Matt Luckcuck

Formal Methods are mathematically-based techniques for software design and engineering, which enable the unambiguous description of and reasoning about a system’s behaviour. Autonomous systems use software to make decisions without human control, are often embedded in a robotic system, are often safety-critical, and are increasingly being introduced into everyday settings. Autonomous systems need robust development and verification methods, but formal methods practitioners are often asked: Why use Formal Methods for Autonomous Systems? To answer this question, this position paper describes five recipes for formally verifying aspects of an autonomous system, collected from the literature. The recipes are examples of how Formal Methods can be an effective tool for the development and verification of autonomous systems. During design, they enable unambiguous description of requirements; in development, formal specifications can be verified against requirements; software components may be synthesised from verified specifications; and behaviour can be monitored at runtime and compared to its original specification. Modern Formal Methods often include highly automated tool support, which enables exhaustive checking of a system’s state space. This paper argues that Formal Methods are a powerful tool for the repertoire of development techniques for safe autonomous systems, alongside other robust software engineering techniques.
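One of the recipes above, runtime verification, can be sketched as a monitor that checks each observed event of an autonomous system against a safety property. The robot property and event names below are invented examples, not drawn from the paper.

```python
# Minimal runtime monitor: compare a system's behaviour, event by event,
# against the safety property "never move while the gripper is open".
# The property and the trace are invented for illustration.

class SafetyMonitor:
    def __init__(self):
        self.gripper_open = False
        self.violations = []  # trace positions where the property failed

    def observe(self, step, event):
        if event == "gripper_open":
            self.gripper_open = True
        elif event == "gripper_close":
            self.gripper_open = False
        elif event == "move" and self.gripper_open:
            self.violations.append(step)  # behaviour deviates from the spec

trace = ["gripper_close", "move", "gripper_open", "move", "gripper_close"]
monitor = SafetyMonitor()
for i, event in enumerate(trace):
    monitor.observe(i, event)
print(monitor.violations)  # steps where runtime behaviour broke the property
```

In practice such monitors are generated from a formal specification, so the property checked at runtime is exactly the one verified during design.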


Author(s):  
Liliana Maria Favre

Systems and applications aligned with new paradigms such as cloud computing and the Internet of Things are becoming more complex and interconnected, expanding the areas in which they are susceptible to attacks. Their security can be addressed by using model-driven engineering (MDE). In this context, specific IoT or cloud computing metamodels have emerged to support the systematic development of software. In general, they are specified through semiformal metamodels in the MOF style. This article shows the theoretical foundations of a method for automatically constructing secure metamodels in the context of realizations of MDE such as MDA. The formal metamodeling language Nereus and systems of transformation rules to bridge the gap between formal specifications and MOF are described. The main contribution of this article is the definition of a system of transformation rules, called NEREUStoMOF, for automatically transforming formal metamodeling specifications in Nereus into semiformal MOF metamodels annotated with OCL.
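The shape of such a rule-based transformation can be sketched as follows. The toy source and target structures below are stand-ins: the actual Nereus language and the NEREUStoMOF rules are far richer than this illustration.

```python
from dataclasses import dataclass, field

# Illustrative sketch of one formal-to-semiformal transformation rule:
# an algebraic-style class specification is mapped to a MOF-like class,
# carrying each axiom over as an OCL invariant annotation.

@dataclass
class NereusClass:          # toy stand-in for a Nereus specification
    name: str
    attributes: dict        # attribute name -> sort/type
    axioms: list            # formal axioms, as strings

@dataclass
class MofClass:             # toy stand-in for a MOF metaclass
    name: str
    properties: dict = field(default_factory=dict)
    ocl_invariants: list = field(default_factory=list)

def to_mof(spec: NereusClass) -> MofClass:
    """One transformation rule: class spec -> MOF class + OCL annotations."""
    mof = MofClass(name=spec.name)
    mof.properties = dict(spec.attributes)
    mof.ocl_invariants = [f"inv: {ax}" for ax in spec.axioms]
    return mof

account = NereusClass("Account", {"balance": "Real"}, ["balance >= 0"])
print(to_mof(account))
```

A full rule system would also translate associations, inheritance, and operation signatures, and prove that each rule preserves the semantics of the formal specification.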

