Some Things are Easier for the Dumb and the Bright Ones (Beware the Average!)

Author(s):  
Wojciech Jamroga ◽  
Michał Knapik

Model checking strategic abilities in multi-agent systems is hard, especially for agents with partial observability of the state of the system. In that case, the complexity ranges from NP-complete to undecidable, depending on the precise syntax and the semantic variant. That, however, is the worst-case complexity, and the problem may well be easier when restricted to particular subclasses of inputs. In this paper, we look at the verification of models with "extreme" epistemic structure and identify several special cases for which model checking is easier than in general. We also prove that, in the other cases, no gain is possible even if the agents have almost full (or almost nil) observability. To prove the latter kind of results, we develop generic techniques that may also be useful outside of this study.

2020 ◽  
Vol 34 (2) ◽  
Author(s):  
Riccardo De Masellis ◽  
Valentin Goranko

Abstract We develop a logic-based framework for formal specification and algorithmic verification of homogeneous and dynamic concurrent multi-agent transition systems. Homogeneity means that all agents have the same available actions at any given state and the actions have the same effects regardless of which agents perform them. The state transitions are therefore determined only by the vector of numbers of agents performing each action and are specified symbolically, by means of conditions on these numbers definable in Presburger arithmetic. The agents are divided into controllable (by the system supervisor/controller) and uncontrollable, representing the environment or adversary. Dynamicity means that the numbers of controllable and uncontrollable agents may vary throughout the system evolution, possibly at every transition. As a language for formal specification we use a suitably extended version of Alternating-time Temporal Logic, where one can specify properties of the type "a coalition of (at least) n controllable agents can ensure against (at most) m uncontrollable agents that any possible evolution of the system satisfies a given objective $\gamma$", where $\gamma$ is specified again as a formula of that language and each of n and m is either a fixed number or a variable that can be quantified over. We provide formal semantics for our logic $\mathcal{L}_{\textsc{hdmas}}$ and define a normal form for its formulae. We then prove that every formula in $\mathcal{L}_{\textsc{hdmas}}$ is equivalent in the finite to one in normal form, and we develop an algorithm for global model checking of formulae in normal form in finite HDMAS models, which invokes model checking the truth of Presburger formulae. We establish worst-case complexity estimates for the model checking algorithm and illustrate it on a running example.
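The homogeneity property described in the abstract can be illustrated with a minimal sketch (all names and the particular guard are hypothetical, not taken from the paper): a transition's guard is a linear, Presburger-definable condition on the count vector of actions, so the successor state depends only on how many agents perform each action, never on which agents do.

```python
def guard(counts):
    """Hypothetical Presburger-definable guard: fires iff at least
    twice as many agents choose 'act' as choose 'idle'."""
    return counts["act"] >= 2 * counts["idle"]

def step(state, counts):
    """Successor determined only by the count vector of actions,
    not by the identities of the agents performing them."""
    return "goal" if guard(counts) else state

# Two joint actions with identical count vectors yield the same successor,
# regardless of which individual agents chose which action.
assert step("s0", {"act": 4, "idle": 2}) == "goal"
assert step("s0", {"act": 1, "idle": 2}) == "s0"
```

Dynamicity would then amount to letting the totals in the count vector change between transitions.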


Author(s):  
Simona Cocco ◽  
Rémi Monasson

The computational effort needed to deal with large combinatorial structures varies considerably with the task to be performed and the resolution procedure used [425]. The worst-case complexity of a decision or optimization problem is defined as the time required by the best algorithm to treat any possible input to the problem. For instance, the worst-case complexity of the problem of sorting a list of n numbers scales as n log n: there exist several algorithms that can order any list in at most ~ n log n elementary operations, and none with asymptotically fewer operations. Unfortunately, the worst-case complexities of many important computational problems, called NP-complete, are not known. Partitioning a list of n numbers in two sets with equal partial sums is one among hundreds of known NP-complete problems. It is a fundamental conjecture of theoretical computer science that there exists no algorithm capable of partitioning any list of length n, or of solving any other NP-complete problem with inputs of size n, in a time bounded by a polynomial of n. Therefore, when trying to solve such a problem exactly, one necessarily uses algorithms that may take exponential time on some inputs. Quantifying how "frequent" these hard inputs are for a given algorithm is the question answered by the analysis of algorithms. We will present an overview of recent work by physicists to address this point, and more precisely to characterize the average performance—hereafter simply called complexity—of a given algorithm over a distribution of inputs to a computational problem. The history of algorithm analysis by physical methods and ideas is at least as old as the use of computers by physicists. One well-established chapter in this history is the analysis of Monte Carlo sampling algorithms for statistical mechanics models.
It is well known that phase transitions, that is, abrupt changes in the physical properties of the model, can imply a dramatic increase in the time necessary for the sampling procedure. This phenomenon is commonly known as critical slowing down. The physicist's insight comes from the analogy between the dynamics of algorithms and the physical dynamics of the system. That analogy is quite natural: in fact many algorithms mimic the physical dynamics.
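The number-partitioning problem mentioned above admits a minimal brute-force sketch (my own illustration, not an algorithm from the text): enumerate subsets and test whether any reaches half the total sum, which takes exponential time in the worst case, as expected for an NP-complete problem.

```python
from itertools import combinations

def can_partition(nums):
    """Exhaustively check whether nums splits into two subsets with
    equal sums. Enumerates up to 2^len(nums) subsets in the worst case."""
    total = sum(nums)
    if total % 2:          # an odd total can never split evenly
        return False
    half = total // 2
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == half:
                return True
    return False

assert can_partition([3, 1, 1, 2, 2, 1])   # e.g. {3, 2} vs {1, 1, 2, 1}
assert not can_partition([1, 1, 3])        # total is odd
```

Dynamic programming solves this in pseudo-polynomial time, but no algorithm polynomial in the input length is known, which is exactly the gap the average-case analysis discussed here probes.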


10.29007/4t8s ◽  
2018 ◽  
Author(s):  
Sagi Bazinin ◽  
Guy Shani

QDec-POMDPs are a qualitative alternative to stochastic Dec-POMDPs for goal-oriented planning in cooperative partially observable multi-agent environments. Although QDec-POMDPs share the same worst-case complexity as Dec-POMDPs, previous research has shown an ability to scale up to larger domains while producing high-quality plan trees. A key difficulty in distributed execution is the need to construct a joint plan tree branching on the combinations of observations of all agents. In this work, we suggest an iterative algorithm, IMAP, that plans for one agent at a time, taking into consideration collaboration constraints about the action execution of previous agents and generating new constraints for the next agents. We explain how these constraints are generated and handled, and we describe a backtracking mechanism for revising constraints that cannot be met. We provide experimental results on multi-agent planning domains, showing our methods to scale to much larger problems with several collaborating agents and huge state spaces.


2021 ◽  
Vol 35 (2) ◽  
Author(s):  
Yehia Abd Alrahman ◽  
Nir Piterman

Abstract We propose a formalism to model and reason about reconfigurable multi-agent systems. In our formalism, agents interact and communicate in different modes so that they can pursue joint tasks; agents may dynamically synchronize, exchange data, adapt their behaviour, and reconfigure their communication interfaces. Inspired by existing multi-robot systems, we represent a system as a set of agents (each with local state), executing independently and influencing each other only by means of message exchange. Agents are able to sense their local states and, partially, their surroundings. We extend LTL to be able to reason explicitly about the intentions of agents in the interaction and their communication protocols. We also study the complexity of satisfiability and model checking for this extension.


2020 ◽  
Vol 34 (05) ◽  
pp. 7071-7078
Author(s):  
Francesco Belardinelli ◽  
Alessio Lomuscio ◽  
Emily Yu

We study the problem of verifying multi-agent systems under the assumption of bounded recall. We introduce the logic CTLKBR, a bounded-recall variant of the temporal-epistemic logic CTLK. We define and study the model checking problem against CTLK specifications under incomplete information and bounded recall and present complexity upper bounds. We present an extension of the BDD-based model checker MCMAS implementing model checking under bounded recall semantics and discuss the experimental results obtained.


2008 ◽  
Vol 195 ◽  
pp. 133-149
Author(s):  
Mario Benevides ◽  
Carla Delgado ◽  
Carlos Pombo ◽  
Luis Lopes ◽  
Ricardo Ribeiro

2015 ◽  
Vol 10 (4) ◽  
pp. 699-708 ◽  
Author(s):  
M. Dodangeh ◽  
L. N. Vicente ◽  
Z. Zhang
