formal framework
Recently Published Documents


TOTAL DOCUMENTS

815
(FIVE YEARS 185)

H-INDEX

34
(FIVE YEARS 6)

2021 ◽  
Vol 46 (4) ◽  
pp. 1-49
Author(s):  
Alejandro Grez ◽  
Cristian Riveros ◽  
Martín Ugarte ◽  
Stijn Vansummeren

Complex event recognition (CER) has emerged as the unifying field for technologies that require processing and correlating distributed data sources in real time. CER finds applications in diverse domains, which has resulted in a large number of proposals for expressing and processing complex events. Existing CER languages lack a clear semantics, however, which makes them hard to understand and generalize. Moreover, there are no general techniques for evaluating CER query languages with clear performance guarantees. In this article, we embark on the task of giving a rigorous and efficient framework to CER. We propose a formal language for specifying complex events, called complex event logic (CEL), that contains the main features used in the literature and has a denotational and compositional semantics. We also formalize the so-called selection strategies, which had only been presented as by-design extensions to existing frameworks. We give insight into the language design trade-offs regarding the strict sequencing operators of CEL and selection strategies. With a well-defined semantics at hand, we discuss how to efficiently process complex events by evaluating CEL formulas with unary filters. We start by introducing a formal computational model for CER, called complex event automata (CEA), and study how to compile CEL formulas with unary filters into CEA. Furthermore, we provide efficient algorithms for evaluating CEA over event streams using constant time per event followed by output-linear delay enumeration of the results.
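The automaton-based evaluation model the article describes (partial-match bookkeeping with constant work per event, then enumeration of results) can be conveyed with a deliberately tiny sketch. This is not the paper's CEA construction; the event kinds, filters, and thresholds below are invented for the example.

```python
# Illustrative sketch of automaton-style complex event matching:
# recognise the pattern "a T-event passing its filter, followed later
# by an H-event passing its filter", enumerating matched positions.

def evaluate(stream):
    """Return all (i, j) pairs where event i satisfies the first
    unary filter and a later event j satisfies the second."""
    matches = []
    open_runs = []                      # positions of pending first events
    for j, (kind, value) in enumerate(stream):
        if kind == "H" and value > 60:  # second filter: close every open run
            matches.extend((i, j) for i in open_runs)
        if kind == "T" and value > 30:  # first filter: open a new run
            open_runs.append(j)
    return matches

stream = [("T", 35), ("H", 20), ("T", 40), ("H", 70)]
print(evaluate(stream))                 # [(0, 3), (2, 3)]
```

The real framework compiles CEL formulas into CEA and achieves output-linear delay enumeration; this loop only shows the shape of the matching problem.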


2021 ◽  
Author(s):  
Jorge Morales Delgado

<p>Our research examines the problem of multiple lines of reasoning reaching the same conclusion through different and unrelated arguments. In the context of non-monotonic logic, such conclusions are referred to as floating conclusions. The field of defeasible reasoning is divided between those who claim that floating conclusions ought not to be accepted from a prudent or skeptical point of view and those who argue that they are good enough to be admitted even by a conservative or skeptical standard. We approach the problem of floating conclusions through the formal framework of Inheritance Networks. These networks provide the simplest and most straightforward gateway into the technical aspects surrounding floating conclusions in the context of non-monotonic logic and defeasible reasoning. To address the problem of floating conclusions, we construct a unifying framework of analysis, the Source Conflict Cost Criterion (SCCC), which contains two basic elements: source conflict and cost. Both elements are simplified through a binary model, through which we provide a comprehensive understanding of floating conclusions as well as of the problematic nature of the debate surrounding this type of inference. The SCCC addresses three key objectives: (a) the assessment of floating conclusions and the debate surrounding their epistemological dimension, (b) the construction of a general and unified framework of analysis for floating conclusions, and (c) the specification of the normative conditions for the admission of floating conclusions as skeptically acceptable information.</p>
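The inheritance-network setting can be made concrete with the stock "Nixon diamond" extension that standardly illustrates floating conclusions; the network below is that textbook example, not one drawn from the thesis.

```python
# Two defeasible lines of reasoning conflict at an intermediate step
# (pacifist vs. hawk) yet reach the same endpoint, which "floats".

defaults = {                      # defeasible is-a links of the network
    "nixon": ["quaker", "republican"],
    "quaker": ["pacifist"],
    "republican": ["hawk"],
    "pacifist": ["politically_active"],
    "hawk": ["politically_active"],
}

def conclusions(node):
    """All nodes reachable from `node` along defeasible links."""
    seen, stack = set(), [node]
    while stack:
        for succ in defaults.get(stack.pop(), []):
            if succ not in seen:
                seen.add(succ)
                stack.append(succ)
    return seen

concl = conclusions("nixon")
print("pacifist" in concl and "hawk" in concl)  # True: the paths conflict
print("politically_active" in concl)            # True: the conclusion "floats"
```

The skeptic's question is whether the last conclusion should survive, given that every path supporting it runs through a defeated or conflicted step.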


2021 ◽  
Author(s):  
Moritz Marbach

Social scientists have long been interested in the persistent effects of history on contemporary behavior and attitudes. To estimate legacy effects, studies typically compare people living in places that were historically exposed to some event with people living in places that were not. Using principal stratification, we provide a formal framework to analyze how migration limits our ability to learn about the persistent effects of history from observed differences between historically exposed and unexposed places. We state the assumptions about movement behavior that are necessary to causally identify legacy effects. Because these assumptions are strong, we recommend that legacy studies circumvent bias by collecting data on people's place of residence at the time of exposure. Reexamining a study on the persistent effects of US civil-rights protests, we show that observed attitudinal differences between residents and non-residents of historic protest sites are more likely due to migration than to attitudinal change.
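The selection problem the paper formalises can be illustrated with a stylised simulation (all numbers invented): exposure has no effect on attitudes, yet selective migration alone produces an apparent "legacy effect" among current residents.

```python
import random
random.seed(0)

# Attitudes are fixed at birth and never changed by exposure, but
# people with high attitudes are more likely to settle in exposed
# places, so a naive comparison of current residents shows a gap.

def naive_gap(n=100_000):
    exposed, unexposed = [], []
    for _ in range(n):
        attitude = random.gauss(0, 1)             # never changed by exposure
        p_exposed = 0.7 if attitude > 0 else 0.5  # selective migration
        (exposed if random.random() < p_exposed else unexposed).append(attitude)
    return sum(exposed) / len(exposed) - sum(unexposed) / len(unexposed)

print(f"spurious 'legacy effect': {naive_gap():.2f}")  # clearly positive, true effect is zero
```

Conditioning on place of residence at the time of exposure, as the paper recommends, would eliminate this gap by construction.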


2021 ◽  
Author(s):  
David Frieder Georg Lempp

<p>The aim of this thesis is to explore the extent to which formal logic can be applied to conflict analysis and conflict resolution. It is motivated by the idea that conflicts can be understood as inconsistent sets of goals, beliefs, norms, emotions, or the like. To achieve this aim, two formal frameworks are presented. Conflict Modelling Logic (CML) is a logical system, based on branching-time temporal logic, which can be used to describe and interpret conflicts. Conflict Resolution Logic (CRL) is a set of five algorithms, inspired by the AGM model of belief revision, which can be used to generate possible solutions to conflicts. Furthermore, two numerical measures are introduced: the 'potential conflict power' of propositional formulae and the 'degree of inconsistency' of sets of propositional formulae. These measures allow one to assess the role of particular elements within a conflict and the depth of a conflict. The formal framework is illustrated with the example of the Second Congo War.</p>
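The thesis defines its two measures formally; as a rough illustration of the flavour of a "degree of inconsistency", the following invented proxy counts the minimum number of formulae that must be dropped to restore satisfiability (0 = no conflict). It is not the thesis's definition.

```python
from itertools import product, combinations

def satisfiable(formulas, atoms):
    """Brute-force check: some truth assignment satisfies all formulas."""
    return any(all(f(dict(zip(atoms, vals))) for f in formulas)
               for vals in product([True, False], repeat=len(atoms)))

def inconsistency_degree(formulas, atoms):
    """Smallest number of formulas to drop to restore consistency."""
    for k in range(len(formulas) + 1):
        if any(satisfiable([f for i, f in enumerate(formulas) if i not in drop], atoms)
               for drop in combinations(range(len(formulas)), k)):
            return k
    return len(formulas)

atoms = ["p", "q"]
goals = [lambda v: v["p"],         # party A wants p
         lambda v: not v["p"],     # party B wants not-p
         lambda v: v["q"]]         # both parties accept q
print(inconsistency_degree(goals, atoms))  # 1: dropping either p-goal restores consistency
```

Reading conflicts as inconsistent goal sets, a deeper conflict needs more concessions (dropped formulae) before a jointly satisfiable position exists.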


Author(s):  
Theofanis Aravanis

Belief Revision is a well-established field of research that deals with how agents rationally change their minds in the face of new information. The milestone of Belief Revision is a general and versatile formal framework introduced by Alchourrón, Gärdenfors and Makinson, known as the AGM paradigm, which has been, to this date, the dominant model within the field. A main shortcoming of the AGM paradigm, as originally proposed, is its lack of any guidelines for relevant change. To remedy this weakness, Parikh proposed a relevance-sensitive axiom that applies to splittable theories, i.e., theories that can be divided into syntax-disjoint compartments. The aim of this article is to provide an epistemological interpretation of the dynamics (revision) of splittable theories, from the perspective of Kuhn's influential work on the evolution of scientific knowledge, through the consideration of principal belief-change scenarios. The study as a whole establishes a conceptual bridge between rational belief revision and traditional philosophy of science, which sheds light on the application of formal epistemological tools to the dynamics of knowledge.
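Parikh's axiom presupposes a splitting of the theory's language into syntax-disjoint compartments. For a finite axiomatisation, one such splitting can be computed by connecting atoms that co-occur in a formula and taking connected components. This is a simplification for illustration: Parikh's finest splitting is defined on the theory's logical content, not on a particular axiomatisation.

```python
# Formulas are represented only by their sets of atoms; union-find
# groups atoms that co-occur, yielding syntax-disjoint compartments.

def split(formula_atoms):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for atoms in formula_atoms:
        atoms = list(atoms)
        for a in atoms[1:]:
            union(atoms[0], a)
    comps = {}
    for atoms in formula_atoms:
        for a in atoms:
            comps.setdefault(find(a), set()).add(a)
    return sorted(map(sorted, comps.values()))

# A theory axiomatised by {p∨q, q→p, r∧s} splits into {p,q} and {r,s}:
print(split([{"p", "q"}, {"q", "p"}, {"r", "s"}]))  # [['p', 'q'], ['r', 's']]
```

Under Parikh's axiom, revising by a formula whose atoms lie entirely in one compartment should leave the other compartments untouched.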


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Jeffrey C. Cegan ◽  
Maureen S. Golan

Abstract The process used to determine site suitability for military base camps lacks a formal framework for reducing relative risks to soldier safety and maximising mission effectiveness. Presently, military personnel responsible for determining the site suitability of a base camp must assess large amounts of geographic, socioeconomic and logistical data without a decision-analysis framework to aid the process. By adopting a multicriteria decision analysis (MCDA) framework to determine the site suitability of base camps, battlespace commanders can make better, more defensible decisions. This paper surveys US Army officers with recent base camp experience to develop a set of initial criteria and weights relevant to base camp site selection. The developed decision framework is demonstrated using an MCDA methodology in an illustrative example that compares alternative base camp locations within a designated Area of Interest (AoI). Leveraging the site-ranking output and/or the criteria weights produced by the methodology provides decision-making support that can be used in the field when time, resources and data may not be readily available.
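The weighted-sum core of an MCDA ranking can be sketched in a few lines. The criteria, weights, and scores below are invented stand-ins for the survey-derived ones in the paper, and real applications would first normalise criteria to a common scale.

```python
# Toy MCDA: weight each criterion, score each candidate site, rank.

weights = {"proximity_to_supply": 0.4, "terrain": 0.25, "local_security": 0.35}

sites = {  # criterion scores already normalised to [0, 1]
    "site_A": {"proximity_to_supply": 0.9, "terrain": 0.4, "local_security": 0.6},
    "site_B": {"proximity_to_supply": 0.5, "terrain": 0.8, "local_security": 0.7},
}

def score(site):
    return sum(weights[c] * sites[site][c] for c in weights)

ranking = sorted(sites, key=score, reverse=True)
print([(s, round(score(s), 3)) for s in ranking])  # [('site_A', 0.67), ('site_B', 0.645)]
```

The paper's contribution lies in eliciting defensible criteria and weights from experienced officers; the arithmetic above is only the final aggregation step.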


2021 ◽  
Vol 72 ◽  
pp. 613-665
Author(s):  
Vu-Linh Nguyen ◽  
Eyke Hüllermeier

In contrast to conventional (single-label) classification, the setting of multilabel classification (MLC) allows an instance to belong to several classes simultaneously. Thus, instead of selecting a single class label, predictions take the form of a subset of all labels. In this paper, we study an extension of the setting of MLC, in which the learner is allowed to partially abstain from a prediction, that is, to deliver predictions on some but not necessarily all class labels. This option is useful in cases of uncertainty, where the learner does not feel confident enough about the entire label set. Adopting a decision-theoretic perspective, we propose a formal framework of MLC with partial abstention, which builds on two main building blocks: first, the extension of the underlying MLC loss functions to accommodate abstention in a proper way, and second, the problem of optimal prediction, that is, finding the Bayes-optimal prediction minimizing this generalized loss in expectation. It is well known that different (generalized) loss functions may have different risk-minimizing predictions, and finding the Bayes predictor typically comes down to solving a computationally complex optimization problem. In the most general case, given a prediction of the (conditional) joint distribution of possible labelings, the minimizer of the expected loss needs to be found over a number of candidates which is exponential in the number of class labels. We elaborate on properties of risk minimizers for several commonly used (generalized) MLC loss functions, show them to have a specific structure, and leverage this structure to devise efficient methods for computing Bayes predictors. Experimentally, we show MLC with partial abstention to be effective in the sense of reducing loss when being allowed to abstain.
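For the plain Hamming loss, the Bayes-optimal MLC prediction decomposes over labels: predict label i positive iff its marginal probability p_i exceeds 1/2. The sketch below uses one simple abstention-extended Hamming loss, a fixed cost c per abstained label, as an illustrative instantiation rather than the paper's full family of losses; under it, abstaining on a label is optimal exactly when the expected error min(p, 1−p) exceeds c.

```python
def predict_with_abstention(marginals, c):
    """Per-label Bayes prediction under Hamming loss with a fixed
    abstention cost c per label; None means 'abstain'."""
    out = []
    for p in marginals:
        if min(p, 1 - p) > c:      # too uncertain: abstaining is cheaper
            out.append(None)
        else:
            out.append(1 if p > 0.5 else 0)
    return out

print(predict_with_abstention([0.9, 0.55, 0.1, 0.48], c=0.2))
# [1, None, 0, None]
```

Non-decomposable losses (e.g. set-valued or rank-based ones) do not split per label like this, which is where the paper's structural results on risk minimizers come in.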


2021 ◽  
Vol 5 (4) ◽  
pp. 1-25
Author(s):  
Colin Shea-Blymyer ◽  
Houssam Abbas

In this article, we develop a formal framework for automatic reasoning about the obligations of autonomous cyber-physical systems, including their social and ethical obligations. Obligations, permissions, and prohibitions are distinct from a system's mission, and are a necessary part of specifying advanced, adaptive AI-equipped systems. They need a dedicated deontic logic of obligations to formalize them. Most existing deontic logics lack corresponding algorithms and system models that permit automatic verification. We demonstrate how a particular deontic logic, Dominance Act Utilitarianism (DAU) [23], is a suitable starting point for formalizing the obligations of autonomous systems like self-driving cars. We demonstrate its usefulness by formalizing a subset of Responsibility-Sensitive Safety (RSS) in DAU; RSS is an industrial proposal for how self-driving cars should and should not behave in traffic. We show that certain logical consequences of RSS are undesirable, indicating a need to further refine the proposal. We also demonstrate how obligations can change over time, which is necessary for long-term autonomy. We then demonstrate a model-checking algorithm for DAU formulas on weighted transition systems and illustrate it by model-checking obligations of a self-driving car controller from the literature.
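The dominance idea behind DAU can be caricatured in a few lines. This toy uses interval dominance over invented acts and utilities; the logic's actual semantics is over weighted transition systems and is considerably richer.

```python
# Each available act has a set of possible utilities. Act a
# (interval-)dominates act b when a's worst outcome is at least b's
# best; an act counts as "obligatory" when it dominates every
# alternative. Acts and numbers are invented for illustration.

def dominates(a, b):
    return min(a) >= max(b)

def obligatory(acts):
    return [name for name, outs in acts.items()
            if all(dominates(outs, other)
                   for o, other in acts.items() if o != name)]

acts = {"brake": {3, 4}, "swerve": {0, 2}, "accelerate": {-5, 1}}
print(obligatory(acts))  # ['brake']
```

A model checker for DAU, as in the article, evaluates such obligation statements against all histories of a weighted transition system rather than against flat outcome sets.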

