automated reasoning
Recently Published Documents


TOTAL DOCUMENTS: 497 (last five years: 76)

H-INDEX: 19 (last five years: 3)

2021, Vol 72, pp. 1307-1341
Author(s): Dominic Widdows, Kirsty Kitto, Trevor Cohen

In the decade since 2010, successes in artificial intelligence have been at the forefront of computer science and technology, and vector space models have solidified a position at the forefront of artificial intelligence. At the same time, quantum computers have become much more powerful, and announcements of major advances are frequently in the news. The mathematical techniques underlying both these areas have more in common than is sometimes realized. Vector spaces took a position at the axiomatic heart of quantum mechanics in the 1930s, and this adoption was a key motivation for the derivation of logic and probability from the linear geometry of vector spaces. Quantum interactions between particles are modelled using the tensor product, which is also used to express objects and operations in artificial neural networks. This paper describes some of these common mathematical areas, including examples of how they are used in artificial intelligence (AI), particularly in automated reasoning and natural language processing (NLP). Techniques discussed include vector spaces, scalar products, subspaces and implication, orthogonal projection and negation, dual vectors, density matrices, positive operators, and tensor products. Application areas include information retrieval, categorization and implication, modelling word-senses and disambiguation, inference in knowledge bases, decision making, and semantic composition. Some of these approaches can potentially be implemented on quantum hardware. Many of the practical steps in this implementation are in early stages, and some are already realized. Explaining some of the common mathematical tools can help researchers in both AI and quantum computing further exploit these overlaps, recognizing and exploring new directions along the way.
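As a minimal illustration of one of the techniques listed above, orthogonal projection and negation, the following sketch shows how an unwanted word sense can be projected out of a vector in a vector space model. The vectors and sense labels are invented for illustration; this is not code from the paper.

```python
# Illustrative sketch (not from the paper): "negation" as orthogonal projection
# in a vector space model. Removing the component that lies along an unwanted
# sense is one of the techniques the abstract refers to.
import numpy as np

def negate(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Return the component of `a` orthogonal to `b`, i.e. 'a NOT b'."""
    b_unit = b / np.linalg.norm(b)
    return a - np.dot(a, b_unit) * b_unit

# Toy 3-dimensional "word vectors" (invented for illustration).
suit_clothing = np.array([0.9, 0.1, 0.0])
suit_legal    = np.array([0.2, 0.0, 0.9])
suit          = suit_clothing + suit_legal    # ambiguous vector for "suit"

suit_not_legal = negate(suit, suit_legal)     # project out the legal sense
print(np.dot(suit_not_legal, suit_legal))     # ~0: orthogonal to the removed sense
```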


2021, Vol 12 (06), pp. 37-46
Author(s): Ruo Ando, Yoshiyasu Takefuji

A sliding puzzle is a combination puzzle in which a player slides pieces along specific routes on a board to reach a certain end configuration. In this paper, we propose a novel measurement of the complexity of 100 sliding puzzles using paramodulation, an inference method of automated reasoning. It turns out that by counting the number of clauses yielded by paramodulation, we can evaluate the difficulty of each puzzle. In the experiment, we generated 100 8-puzzles that passed a solvability check based on counting inversions. By doing this, we can distinguish the complexity of the 8-puzzles by the number of clauses generated with paramodulation. For example, the board [2,3,6,1,7,8,5,4, hole] is the easiest, with a score of 3008, and the board [6,5,8,7,4,3,2,1, hole] is the most difficult, with a score of 48653. Besides, we have succeeded in observing several layers of complexity (in the number of clauses generated) across the 100 puzzles. We conclude that the proposed method provides a new perspective on paramodulation complexity for sliding block puzzles.
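The solvability check by counting inversions that the authors mention can be sketched as follows. This is the standard parity test for the 8-puzzle, not the authors' own code; the two boards quoted in the abstract are used as examples.

```python
# Sketch of the standard solvability test for the 8-puzzle mentioned in the
# abstract (not the authors' code): a 3x3 board is solvable iff the number of
# inversions among the non-hole tiles is even.
def is_solvable(board):
    """`board` is a list of 9 entries: tiles 1-8 plus None for the hole."""
    tiles = [t for t in board if t is not None]
    inversions = sum(
        1
        for i in range(len(tiles))
        for j in range(i + 1, len(tiles))
        if tiles[i] > tiles[j]
    )
    return inversions % 2 == 0

# The two boards quoted in the abstract ("hole" written as None):
print(is_solvable([2, 3, 6, 1, 7, 8, 5, 4, None]))  # easiest board in the study
print(is_solvable([6, 5, 8, 7, 4, 3, 2, 1, None]))  # hardest board in the study
```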


Author(s): Yuliya Lierler

Abstract Constraint answer set programming, or CASP for short, is a hybrid approach in automated reasoning that brings together advances from distinct research areas such as answer set programming, constraint processing, and satisfiability modulo theories. CASP has demonstrated promising results, including the development of a multitude of solvers: acsolver, clingcon, ezcsp, idp, inca, dingo, mingo, aspmt2smt, clingo[l,dl], and ezsmt. It opens new horizons for declarative programming applications such as solving complex train scheduling problems. Systems designed to find solutions to constraint answer set programs can be grouped, according to their construction, into what we call integrational or translational approaches. The focus of this paper is an overview of the key ingredients in the design of constraint answer set solvers, drawing distinctions and parallels between integrational and translational approaches. The paper also provides a glimpse of the kind of programs its users develop, using a CASP encoding of the Traveling Salesman problem for illustration. In addition, we place the CASP technology on the map among its automated reasoning peers and discuss future possibilities for the development of CASP.


Author(s): Sarah Sigley, Olaf Beyersdorff

Abstract We investigate the proof complexity of modal resolution systems developed by Nalon and Dixon (J Algorithms 62(3–4):117–134, 2007) and Nalon et al. (in: Automated reasoning with analytic Tableaux and related methods—24th international conference, (TABLEAUX’15), pp 185–200, 2015), which form the basis of modal theorem proving (Nalon et al., in: Proceedings of the twenty-sixth international joint conference on artificial intelligence (IJCAI’17), pp 4919–4923, 2017). We complement these calculi with a new, tighter variant and show that proofs can be efficiently translated between all these variants, meaning that the calculi are equivalent from a proof complexity perspective. We then develop the first lower bound technique for modal resolution using Prover–Delayer games, which can be used to establish “genuine” modal lower bounds for the size of dag-like modal resolution proofs. We illustrate the technique by devising a new modal pigeonhole principle, which we demonstrate to require exponential-size proofs in modal resolution. Finally, we compare modal resolution to the modal Frege systems of Hrubeš (Ann Pure Appl Log 157(2–3):194–205, 2009) and obtain a “genuinely” modal separation.
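For reference, the classical propositional pigeonhole principle that such modal variants build on can be stated as follows; the paper's specific modal formulation is not reproduced here.

```latex
% Classical propositional pigeonhole principle PHP^{n+1}_n, stating that n+1
% pigeons cannot each be placed into one of n holes without a collision
% (for reference only; the paper's modal variant is not reproduced here).
\[
  \mathrm{PHP}^{n+1}_{n} \;:=\;
  \bigwedge_{i=1}^{n+1} \bigvee_{j=1}^{n} p_{i,j}
  \;\longrightarrow\;
  \bigvee_{1 \le i < i' \le n+1} \, \bigvee_{j=1}^{n} \bigl( p_{i,j} \wedge p_{i',j} \bigr)
\]
```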


Author(s): Koen Claessen, Ann Lillieström

Abstract We present a number of alternative ways of handling transitive binary relations that commonly occur in first-order problems, in particular equivalence relations, total orders, and transitive relations in general. We show how such relations can be discovered syntactically in an input theory, and how they can be expressed in alternative ways. We experimentally evaluate different such ways on problems from the TPTP, using resolution-based reasoning tools as well as instance-based tools. Our conclusions are that (1) it is beneficial to consider different treatments of binary relations as a user, and that (2) reasoning tools could benefit from using a preprocessor or even built-in support for certain types of binary relations.
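As a toy illustration of why special treatment of such relations can pay off (this is not the authors' tooling), ground facts of an equivalence relation can be compiled into a union-find structure instead of being axiomatized with reflexivity, symmetry, and transitivity:

```python
# Toy illustration (not the paper's tooling): instead of axiomatizing an
# equivalence relation E with reflexivity, symmetry and transitivity and
# letting a prover reason about it, ground facts are compiled into a
# union-find structure, so E(a, b) becomes "a and b have the same representative".
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

# Ground facts E(a,b), E(b,c), E(d,e) of some equivalence relation E.
uf = UnionFind()
for x, y in [("a", "b"), ("b", "c"), ("d", "e")]:
    uf.union(x, y)

print(uf.find("a") == uf.find("c"))  # True: follows by transitivity
print(uf.find("a") == uf.find("d"))  # False: not entailed by the facts
```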


2021
Author(s): Gaston K. Mazandu, Kenneth Opap, Funmilayo Makinde, Victoria Nembaware, Francis Agamah, ...

Abstract During the last decade, we have witnessed an exponential rise in datasets from heterogeneous sources. Ontologies play an essential role in consistently describing domain concepts and in data harmonization and integration, supporting large-scale integrative analysis and semantic interoperability in knowledge sharing. Several semantic similarity (SS) measures have been suggested to enable the integration of rich ontology structures into automated reasoning and inference. However, no tool exhaustively implements these measures, and existing tools are generally Gene Ontology-specific, do not implement several models suggested in the WordNet context, and are not equipped to deal properly with frequent ontology updates. We introduce a Python SS measure library (PySML), which tackles issues related to current SS tools, providing a portable and expandable tool to a broad computational audience. This empowers users to manipulate SS scores from several applications for any ontology version and file format. PySML is a flexible tool enabling the implementation of all existing semantic similarity models and resolving issues related to the computation, reproducibility and re-usability of SS scores.
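To make the notion of a node-based semantic similarity measure concrete, the following sketch computes a Wu-Palmer-style score over a toy is-a hierarchy. The ontology and function names are invented for illustration and do not reflect PySML's actual API.

```python
# Minimal sketch of a node-based semantic similarity measure over a toy
# ontology (a Wu-Palmer-style score). For illustration only; this does not
# reflect PySML's actual API.
TOY_ONTOLOGY = {            # child -> parents (a tiny is-a hierarchy)
    "binding": ["molecular_function"],
    "protein_binding": ["binding"],
    "dna_binding": ["binding"],
    "catalysis": ["molecular_function"],
    "molecular_function": [],
}

def ancestors(term):
    """Return the set containing `term` and all of its ancestors."""
    result, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in result:
            result.add(t)
            stack.extend(TOY_ONTOLOGY[t])
    return result

def depth(term):
    """Longest is-a path from `term` up to a root (roots have depth 0)."""
    parents = TOY_ONTOLOGY[term]
    return 0 if not parents else 1 + max(depth(p) for p in parents)

def wu_palmer(a, b):
    """2 * depth(deepest common ancestor) / (depth(a) + depth(b))."""
    common = ancestors(a) & ancestors(b)
    return 2 * max(depth(t) for t in common) / (depth(a) + depth(b))

print(wu_palmer("protein_binding", "dna_binding"))  # 0.5: share "binding"
print(wu_palmer("protein_binding", "catalysis"))    # 0.0: only share the root
```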


Mathematics, 2021, Vol 9 (16), pp. 1964
Author(s): Zoltán Kovács, Tomas Recio, Luis F. Tabera, M. Pilar Vélez

We report, through different examples, on the current development in GeoGebra, a widespread dynamic geometry software package, of geometric automated reasoning tools based on computational algebraic geometry algorithms. We then introduce and analyze the degeneracy conditions that so often arise in the context of automated deduction in geometry, proposing two different ways of dealing with them. One is to work with the saturation of the hypotheses ideal with respect to the ring of geometrically independent variables, as a way to handle the statement globally over all non-degenerate components. The second is to reformulate the given hypotheses ideal, considering the independent variables as invertible parameters, and to develop and exploit the specific properties of this zero-dimensional case in order to analyze the truth of the statement individually over the different non-degenerate components.
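As a minimal sketch of the algebraic machinery involved (not GeoGebra's implementation), the following example checks a simple statement, that the circumcenter of a triangle is equidistant from all three vertices, by testing ideal membership of the thesis with a Groebner basis computation in SymPy via the Rabinowitsch trick. The coordinates and variable names are chosen for illustration.

```python
# Minimal sketch (not GeoGebra's implementation) of proving a geometric
# statement with computational algebraic geometry: the thesis T follows from
# the hypotheses h1, h2 iff 1 lies in the ideal generated by h1, h2 and
# 1 - t*T (Rabinowitsch trick), which a Groebner basis computation detects.
from sympy import symbols, groebner

u1, u2, u3, x1, x2, t = symbols('u1 u2 u3 x1 x2 t')

# Free points A=(0,0), B=(u1,0), C=(u2,u3); O=(x1,x2) is the circumcenter.
h1 = (x1 - u1)**2 + x2**2 - (x1**2 + x2**2)                # |OB|^2 = |OA|^2
h2 = (x1 - u2)**2 + (x2 - u3)**2 - (x1**2 + x2**2)         # |OC|^2 = |OA|^2
T  = (x1 - u1)**2 + x2**2 - ((x1 - u2)**2 + (x2 - u3)**2)  # thesis: |OB|^2 = |OC|^2

G = groebner([h1, h2, 1 - t*T], x1, x2, t, u1, u2, u3, order='grevlex')
print(G.exprs)  # [1]: the thesis holds (no non-degeneracy condition is even needed here)
```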


Author(s): Paolo Morettin, Pedro Zuidberg Dos Martires, Samuel Kolb, Andrea Passerini

Real-world decision making problems often involve both discrete and continuous variables and require a combination of probabilistic and deterministic knowledge. Stimulated by recent advances in automated reasoning technology, hybrid (discrete + continuous) probabilistic reasoning with constraints has emerged as a lively and fast-growing research field. In this paper we provide a survey of existing techniques for hybrid probabilistic inference with logic and algebraic constraints. We leverage weighted model integration as a unifying formalism and discuss the different paradigms that have been used as well as the expressivity-efficiency trade-offs that have been investigated. We conclude the survey with a comparative overview of existing implementations and a critical discussion of open challenges and promising research directions.
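As a minimal sketch of the weighted model integration formalism on a toy problem (one Boolean and one continuous variable, with an invented constraint and weight function), the integral can be computed by enumerating the Boolean assignments and integrating symbolically; real WMI solvers are far more sophisticated.

```python
# Minimal sketch of weighted model integration (WMI) on a toy problem, by
# brute-force enumeration of the Boolean variable plus symbolic integration.
# The constraint and weight function are invented for illustration.
from sympy import Symbol, integrate

x = Symbol('x')

def toy_wmi():
    total = 0
    for b in (True, False):
        # Theory constraint: 0 <= x <= 2, and additionally x >= 1 when b holds.
        lower, upper = (1, 2) if b else (0, 2)
        # Weight: 2 for b, 1 for not-b, times an (unnormalized) density x over x.
        weight = (2 if b else 1) * x
        total += integrate(weight, (x, lower, upper))
    return total

print(toy_wmi())  # 2*(2**2 - 1**2)/2 + 1*(2**2 - 0)/2 = 3 + 2 = 5
```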

