descriptive complexity
Recently Published Documents


TOTAL DOCUMENTS: 128 (FIVE YEARS: 27)

H-INDEX: 12 (FIVE YEARS: 1)

2022, Vol. 69 (1), pp. 1-83
Author(s): Mark Kaminski, Egor V. Kostylev, Bernardo Cuenca Grau, Boris Motik, Ian Horrocks

Motivated by applications in declarative data analysis, in this article we study Datalog_Z, an extension of Datalog with stratified negation and arithmetic functions over integers. This language is known to be undecidable, so we present the fragment of limit Datalog_Z programs, which is powerful enough to naturally capture many important data analysis tasks. In limit Datalog_Z, all intensional predicates with a numeric argument are limit predicates that keep maximal or minimal bounds on numeric values. We show that reasoning in limit Datalog_Z is decidable if a linearity condition restricting the use of multiplication is satisfied. In particular, limit-linear Datalog_Z is complete for Δ^EXP_2 and captures Δ^P_2 over ordered datasets in the sense of descriptive complexity. We also provide a comprehensive study of several fragments of limit-linear Datalog_Z. We show that semi-positive limit-linear programs (i.e., programs where negation is allowed only in front of extensional atoms) capture coNP over ordered datasets; furthermore, reasoning becomes coNEXP-complete in combined and coNP-complete in data complexity, where the lower bounds hold already for negation-free programs. To satisfy the requirements of data-intensive applications, we also propose an additional stability requirement, which causes the complexity of reasoning to drop to EXP in combined and to P in data complexity, thus matching the bounds for usual Datalog. Finally, we compare our formalisms with the languages underpinning existing Datalog-based approaches to data analysis and show that core fragments of these languages can be encoded as limit programs; this allows us to transfer decidability and complexity upper bounds from limit programs to those formalisms. Our article thus provides a unified logical framework for declarative data analysis that can serve as a basis for understanding how the key constructs available in existing languages affect expressive power and computational complexity.
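The limit semantics described in the abstract can be illustrated with a toy example. The following Python sketch is an illustration only: the single-source shortest-path program, the predicate names, and the naive bottom-up evaluation are assumptions, not the article's reasoning procedure. Here dist acts as a min limit predicate, so for every node only the smallest derived bound is kept.

```python
# A minimal sketch of min-limit semantics, assuming a toy shortest-path program.
# Not the reasoning procedure studied in the article.

def min_limit_fixpoint(edges, source):
    """edges: dict (u, v) -> non-negative weight; returns the minimal dist bound per node."""
    dist = {source: 0}                              # fact: dist(source, 0)
    changed = True
    while changed:                                  # naive bottom-up iteration
        changed = False
        for (u, v), w in edges.items():
            # rule: dist(v, d + w) :- edge(u, v, w), dist(u, d)
            if u in dist:
                candidate = dist[u] + w
                if candidate < dist.get(v, float("inf")):
                    dist[v] = candidate             # limit semantics: keep only the minimum
                    changed = True
    return dist

edges = {("a", "b"): 3, ("b", "c"): 2, ("a", "c"): 10}
print(min_limit_fixpoint(edges, "a"))               # {'a': 0, 'b': 3, 'c': 5}
```

On this input the bounds only ever tighten, so the iteration stops; the article's decidability and complexity results concern reasoning over such limit programs in general, which this sketch does not address.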


2021, pp. 1-22
Author(s): Dylan Airey, Steve Jackson, Bill Mance

2021
Author(s): Suparna Mukherjee, Anthony Hennig, Taylan G. Topcu, Zoe Szajnfarber

Abstract. Decomposition is a dominant design strategy because it enables complex problems to be broken up into more manageable modules. However, although it is well known that complex systems are rarely fully decomposable, much of the decomposition literature is framed around reordering or clustering processes that optimize an objective function to yield a module assignment. As illustrated in this study, these approaches overlook the fact that decoupling partially decomposable modules can require significant additional design work, with associated consequences that introduce considerable information into the design space. This paper draws on detailed empirical evidence from a NASA space robotics field experiment to elaborate the mechanisms through which the process of decomposing can add information, and with it descriptive complexity, to the problem space. Contrary to widely held expectations, we show that complexity can increase substantially when natural system modules are fully decoupled from one another to support parallel design. We explain this phenomenon through two mechanisms: interface creation and functional allocation. These findings have implications for the ongoing discussion of optimal module identification as part of the decomposition process. We contend that the sometimes significant costs of the later stages of design decomposition are not adequately considered in existing methods. With this work, we lay a foundation for valuing these performance, schedule, and complexity costs earlier in the decomposition process.


2021, Vol. 64 (5), pp. 98-105
Author(s): Martin Grohe, Daniel Neuen

We investigate the interplay between the graph isomorphism problem, logical definability, and structural graph theory on a rich family of dense graph classes: graph classes of bounded rank width. We prove that the combinatorial Weisfeiler-Leman algorithm of dimension (3k + 4) is a complete isomorphism test for the class of all graphs of rank width at most k. A consequence of our result is the first polynomial-time canonization algorithm for graphs of bounded rank width. Our second main result addresses an open problem in descriptive complexity theory: we show that fixed-point logic with counting expresses precisely the polynomial-time properties of graphs of bounded rank width.
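For readers unfamiliar with the Weisfeiler-Leman family, here is a small Python sketch of its simplest member, 1-dimensional WL (color refinement). It is only meant to convey the refinement idea; the article's isomorphism test is the k-dimensional variant of dimension 3k + 4, which refines colors of vertex tuples rather than of single vertices, and the function name below is hypothetical.

```python
# Illustrative sketch only: 1-dimensional Weisfeiler-Leman (color refinement),
# not the k-dimensional algorithm used in the article.

def wl1_colors(adj):
    """adj: dict node -> set of neighbors. Returns the stable coloring."""
    colors = {v: 0 for v in adj}                   # start with a uniform color
    for _ in range(len(adj)):                      # at most |V| rounds are needed
        # new color of v = old color of v plus the multiset of neighbor colors
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # compress signatures back into small integer color names
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        refined = {v: relabel[signatures[v]] for v in adj}
        if refined == colors:                      # coloring is stable
            break
        colors = refined
    return colors

# on a three-vertex path, the endpoints and the middle vertex get different colors
path = {0: {1}, 1: {0, 2}, 2: {1}}
print(wl1_colors(path))   # e.g. {0: 0, 1: 1, 2: 0}
```

Comparing the resulting color histograms distinguishes many, but not all, pairs of non-isomorphic graphs, which is exactly why higher dimensions are needed for classes such as bounded rank width.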


Algorithms, 2021, Vol. 14 (3), pp. 96
Author(s): Max Bannach, Till Tantau

Color coding is an algorithmic technique used in parameterized complexity theory to detect “small” structures inside graphs. The idea is to derandomize algorithms that first randomly color a graph and then search for an easily detectable, small color pattern. We transfer color coding to the world of descriptive complexity theory by characterizing, purely in terms of the syntactic structure of the describing formulas, when the powerful second-order quantifiers representing a random coloring can be replaced by equivalent, simple first-order formulas. Building on this result, we identify syntactic properties of first-order quantifiers that can be eliminated from formulas describing parameterized problems. The result applies to many packing and embedding problems, but also to the long path problem. Together with a new result on the parameterized complexity of formula families involving only a fixed number of variables, we obtain that many problems lie in FPT simply because of the way they are commonly described using logical formulas.
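As a concrete illustration of the color-coding technique summarized above (and not of the paper's logical characterization), the following sketch detects a simple path on k vertices: vertices are colored uniformly at random with k colors, and a dynamic program over color sets then searches for a "colorful" path that uses each color at most once. The function names and the trial count are illustrative assumptions.

```python
# Hedged sketch of the color-coding idea: random coloring plus a dynamic
# program over color sets; repeating the coloring boosts the success probability.

import random

def has_colorful_path(adj, colors, k):
    """True if some path on k vertices uses k distinct colors (hence is simple)."""
    # reachable[v] = color sets realizable by colorful paths ending at v
    reachable = {v: {frozenset([colors[v]])} for v in adj}
    for _ in range(k - 1):                       # extend paths one vertex at a time
        extended = {v: set() for v in adj}
        for v in adj:
            for u in adj[v]:                     # symmetric adj for undirected graphs
                for used in reachable[u]:
                    if colors[v] not in used:    # keep the path colorful
                        extended[v].add(used | {colors[v]})
        reachable = extended
    return any(reachable[v] for v in adj)

def color_coding_path(adj, k, trials=200):
    """Detects a simple k-vertex path with high probability."""
    for _ in range(trials):
        colors = {v: random.randrange(k) for v in adj}
        if has_colorful_path(adj, colors, k):
            return True
    return False

# a 4-cycle contains a path on 3 vertices but none on 5
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(color_coding_path(cycle, 3), color_coding_path(cycle, 5))  # expected: True False
```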


2021, Vol. 25 (2), pp. 1103-1115
Author(s): Elnaz Azmi, Uwe Ehret, Steven V. Weijs, Benjamin L. Ruddell, Rui A. P. Perdigão

Abstract. One of the main objectives of the scientific enterprise is the development of well-performing yet parsimonious models for all natural phenomena and systems. In the 21st century, scientists usually represent their models, hypotheses, and experimental observations using digital computers. Measuring the performance and parsimony of computer models is therefore a key theoretical and practical challenge for 21st-century science. “Performance” here refers to a model's ability to reduce predictive uncertainty about an object of interest. “Parsimony” (or complexity) comprises two aspects: descriptive complexity (the size of the model itself, which can be measured by the disk space it occupies) and computational complexity (the computational effort the model requires to provide output). Descriptive complexity is related to inference quality and generality; computational complexity is often a practical and economic concern when computing resources are limited. In this context, this paper has two distinct but related goals. The first is to propose a practical method of measuring computational complexity with the utility software “Strace”, which counts the total number of memory visits while running a model on a computer. The second goal is to propose the “bit by bit” method, which combines measuring computational complexity with “Strace” and measuring model performance as information loss relative to observations, both in bits. For demonstration, we apply the “bit by bit” method to watershed models representing a wide diversity of modelling strategies (artificial neural network, auto-regressive, process-based, and others). We demonstrate that computational complexity as measured by “Strace” is sensitive to all aspects of a model, such as the size of the model itself, the input data it reads, its numerical scheme, and its time stepping. We further demonstrate that, for each model, the bit counts for computational complexity exceed those for performance by several orders of magnitude, and that the differences among the models in both computational complexity and performance can be explained by their setup and are in accordance with expectations. We conclude that measuring computational complexity with “Strace” is practical, and it is also general in the sense that it can be applied to any model that can be run on a digital computer. We further conclude that the “bit by bit” approach is general in the sense that it measures two key aspects of a model in a single common unit, the bit. We suggest that it can be enhanced by additionally measuring a model's descriptive complexity, also in bits.
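The performance half of the “bit by bit” idea, information loss relative to observations measured in bits, can be sketched as follows. This is a simplified illustration under stated assumptions, not the authors' workflow: it uses an arbitrary binning and treats the model's marginal output histogram as its predictive distribution.

```python
# Hedged illustration: predictive uncertainty left by a model, scored as
# information loss in bits. Binning and the histogram-based predictive
# distribution are simplifying assumptions, not the paper's exact procedure.

import numpy as np

def information_loss_bits(observations, predictions, bins=10):
    """Average of -log2 of the probability the model's predictive histogram
    assigns to the bin in which each observation actually falls (in bits)."""
    edges = np.histogram_bin_edges(observations, bins=bins)
    pred_hist, _ = np.histogram(predictions, bins=edges)
    pred_prob = (pred_hist + 1e-9) / (pred_hist + 1e-9).sum()   # tiny floor avoids log(0)
    obs_bins = np.clip(np.digitize(observations, edges[:-1]) - 1, 0, bins - 1)
    return float(np.mean(-np.log2(pred_prob[obs_bins])))

rng = np.random.default_rng(0)
obs = rng.normal(size=1000)
close_model = obs + rng.normal(scale=0.1, size=1000)   # mimics the observations
crude_model = rng.uniform(-3, 3, size=1000)            # largely uninformative
print(information_loss_bits(obs, close_model))          # typically fewer bits lost
print(information_loss_bits(obs, crude_model))          # typically more bits lost
```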


Author(s): Flavio Ferrarotti, Senén González, José María Turull Torres, Jan Van den Bussche, Jonni Virtema

2021, Vol. 116, pp. 40-54
Author(s): Arnaud Durand, Anselm Haak, Juha Kontinen, Heribert Vollmer
