Connections Between Artificial Intelligence and Computational Complexity and the Complexity of Graphs

2013 ◽  
pp. 17-40
Author(s):  
Ángel Garrido


Author(s):  
Stephen M. Majercik

Stochastic satisfiability (SSAT) is an extension of satisfiability (SAT) that merges two important areas of artificial intelligence: logic and probabilistic reasoning. Initially suggested by Papadimitriou, who called it a “game against nature”, SSAT is interesting both from a theoretical perspective, since it is complete for PSPACE, an important complexity class, and from a practical perspective, since a broad class of probabilistic planning problems can be encoded and solved as SSAT instances. This chapter describes SSAT and its variants, their computational complexity, applications of SSAT, analytical results, algorithms and empirical results, related work, and directions for future work.
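To make the semantics concrete, here is a minimal sketch of naive SSAT evaluation in Python. It is an exponential-time textbook evaluator, not one of the solvers the chapter surveys; the representation (a quantifier prefix over a CNF, with randomized variables true with probability 0.5) is our own choice.

```python
def clause_sat(clause, asg):
    # A clause is a list of nonzero ints: +v means variable v true, -v false.
    return any(asg[abs(lit)] == (lit > 0) for lit in clause)

def ssat_value(prefix, cnf, asg=None):
    """Value of an SSAT instance: maximize over existential ('E') choices,
    average over randomized ('R') choices (true with probability 0.5)."""
    asg = asg or {}
    if not prefix:
        return 1.0 if all(clause_sat(c, asg) for c in cnf) else 0.0
    (q, v), rest = prefix[0], prefix[1:]
    vals = [ssat_value(rest, cnf, {**asg, v: b}) for b in (False, True)]
    return max(vals) if q == 'E' else 0.5 * sum(vals)

# E x1 R x2 : (x1 or x2) and (x1 or not x2) has value 1.0,
# since x1 = True satisfies both clauses whatever x2 turns out to be.
print(ssat_value([('E', 1), ('R', 2)], [[1, 2], [1, -2]]))  # 1.0
```

The instance is "satisfiable" at threshold θ when its value is at least θ, which is where the PSPACE-complete decision problem comes from.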


Author(s):  
Joao Teixeira

I examine some recent controversies involving the possibility of mechanical simulation of mathematical intuition. The first part presents the Lucas-Penrose position and recapitulates some basic logical and conceptual machinery (Gödel's proof, Hilbert's Tenth Problem and Turing's Halting Problem). The second part is devoted to the main outlines of Complexity Theory and to Bremermann's notions of transcomputability and the fundamental limit. The third part attempts to draw a connection between Complexity Theory and undecidability, focusing on a revised version of the Lucas-Penrose position in light of a priori physical limitations of computing machines. Finally, the last part derives some epistemological and philosophical implications of the relationship between Gödel's incompleteness theorem and Complexity Theory for the mind/brain problem in Artificial Intelligence, and discusses the compatibility of functionalism with a materialist theory of the mind.
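For readers who want the diagonalization spelled out, the following Python sketch restates Turing's halting argument, one of the results the paper recapitulates; `halts` is the hypothetical total decision procedure whose existence the argument refutes.

```python
def halts(program, data):
    """Hypothetical total procedure deciding whether program halts on data."""
    raise NotImplementedError("assumed for contradiction; cannot exist")

def diagonal(program):
    # Do the opposite of what `halts` predicts for `program` run on itself.
    if halts(program, program):
        while True:          # halts says it halts, so loop forever
            pass
    return "halted"          # halts says it loops, so halt immediately

# Running diagonal(diagonal) contradicts halts either way: if halts returns
# True, diagonal loops forever; if False, it halts. Hence no such halts exists.
```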


Author(s):  
HAI-YEN HAU

Shafer’s theory of evidence has been used to deal with uncertainty in many artificial intelligence applications. In this paper, we show that in a hierarchically structured hypothesis space, any belief function whose focal elements are nodes in the hierarchy is a separable support function. We propose an algorithm that decomposes such a separable support function into simple support functions, and show that the computational complexity of this decomposition algorithm is O(N²). Applications of the decomposition of separable support functions to the data fusion problem and to reasoning about the control problem are discussed.
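For concreteness, the sketch below (our illustration in Python, not Hau's O(N²) decomposition algorithm) shows the objects involved: a simple support function puts mass s on its focus and 1 − s on the whole frame, and a separable support function is a Dempster combination of such functions.

```python
def simple_support(frame, focus, s):
    """Mass function of a simple support function focused on `focus`."""
    return {frozenset(focus): s, frozenset(frame): 1.0 - s}

def dempster_combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalize conflict away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Hypothetical frame: two pieces of evidence at different levels of a hierarchy.
frame = {'flu', 'cold', 'allergy'}
m1 = simple_support(frame, {'flu', 'cold'}, 0.8)  # evidence for an infection
m2 = simple_support(frame, {'flu'}, 0.6)          # evidence specifically for flu
for focal, mass in dempster_combine(m1, m2).items():
    print(sorted(focal), round(mass, 3))
```

Hau's result runs this construction in reverse: given a separable support function over the hierarchy, recover the simple support functions whose combination it is.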


2003 ◽  
Vol 358 (1435) ◽  
pp. 1293-1309 ◽  
Author(s):  
Jean-Daniel Zucker

In artificial intelligence, abstraction is commonly used to account for the use of various levels of detail in a given representation language, or for the ability to change from one level to another while preserving useful properties. Abstraction has been studied mainly in problem solving, theorem proving, knowledge representation (in particular for spatial and temporal reasoning) and machine learning. In such contexts, abstraction is defined as a mapping between formalisms that reduces the computational complexity of the task at hand. By analysing the notion of abstraction from an information-quantity point of view, we pinpoint the differences between, and the complementary roles of, reformulation and abstraction in any representation change. We contribute to extending the existing semantic theories of abstraction so that they are grounded in perception, where the notion of information quantity is easier to characterize formally. In the author's view, abstraction is best represented using abstraction operators, as they provide semantics for classifying different abstractions and support the automation of representation changes. The usefulness of a grounded theory of abstraction is illustrated in the cartography domain. Finally, the importance of explicitly representing abstraction for designing more autonomous and adaptive systems is discussed.
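As a concrete toy in the spirit of the paper's cartographic illustration (the operator and representation are our own, hypothetical choices), the following Python snippet implements an abstraction operator that coarsens an occupancy grid by 2×2 blocks: detail is lost, but any search over the map shrinks by a factor of four, which is the complexity-reducing role the paper assigns to abstraction.

```python
def coarsen(grid):
    """Abstraction operator: merge 2x2 blocks of a square occupancy grid.
    A coarse cell is occupied (1) if any of its four fine cells is."""
    n = len(grid)
    return [[int(any(grid[2*r + i][2*c + j] for i in (0, 1) for j in (0, 1)))
             for c in range(n // 2)] for r in range(n // 2)]

fine = [[0, 0, 1, 0],
        [0, 0, 0, 0],
        [1, 0, 0, 0],
        [0, 1, 0, 0]]
print(coarsen(fine))   # [[0, 1], [1, 0]]
```

The operator preserves a useful property for path planning (a free coarse cell is free at the fine level) while discarding information that is irrelevant at the coarser scale.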


2013 ◽  
Vol 09 (02) ◽  
pp. 183-205 ◽  
Author(s):  
L. I. PERLOVSKY ◽  
R. ILIN

Computing with words (CWW) is considered in the context of natural language functioning, unifying language with thinking. Previous attempts at modeling natural languages, as well as thinking processes, in artificial intelligence have been hampered by computational complexity. To overcome computational complexity we use dynamic logic (DL), an extension of fuzzy logic describing fuzzy-to-crisp transitions. We suggest a possible architecture motivated by mathematical and neural considerations. We discuss the reasons why CWW has to be modeled jointly with thinking and propose an architecture consistent with brain neural structure and with a wealth of psychological knowledge. The proposed architecture implies the existence of relationships between languages and cultures. We discuss these implications for the further evolution of English and Chinese cultures, and for the cultural effects of interactions between natural languages and CWW.
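The following toy Python sketch (our illustration, not the authors' architecture) conveys the dynamic-logic idea of a fuzzy-to-crisp transition: two one-dimensional "concept models" compete for data with initially vague associations, and the fuzziness parameter is annealed so the associations sharpen as the models improve.

```python
import math, random

random.seed(0)
data = [random.gauss(-2.0, 0.3) for _ in range(50)] + \
       [random.gauss(3.0, 0.3) for _ in range(50)]
means = [0.5, -0.5]        # vague initial concept models

sigma = 4.0                # high initial fuzziness
while sigma > 0.3:
    # fuzzy association weights: similarity of each point to each model
    weights = []
    for x in data:
        sims = [math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) for mu in means]
        z = sum(sims)
        weights.append([s / z for s in sims])
    # re-estimate each model from its (soft) share of the data
    for k in range(len(means)):
        wk = [w[k] for w in weights]
        means[k] = sum(w * x for w, x in zip(wk, data)) / sum(wk)
    sigma *= 0.8           # the fuzzy-to-crisp transition

print([round(mu, 2) for mu in means])   # should end up near -2 and 3
```

Starting crisp instead (small sigma) would lock in the poor initial models; the anneal from fuzzy to crisp is what lets vague representations converge before they commit.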


Author(s):  
Przemysław Andrzej Wałęga

Temporal reasoning constitutes one of the main topics within the field of Artificial Intelligence. Particularly interesting are interval-based methods, in which time intervals are treated as the basic ontological objects, as opposed to point-based methods, in which time points are basic. The former approach is more expressive and seems more appropriate for applications such as natural language analysis or the verification of real-time processes. My research concerns the classical interval-based logic, namely Halpern-Shoham logic (HS). In particular, my investigation continues the recently initiated search for well-behaved HS fragments, i.e., fragments that are expressive enough for practical applications yet of low computational complexity, obtained by imposing syntactic restrictions on the use of propositional connectives in their languages.
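To fix intuitions, the snippet below (a minimal sketch; the function names are ours) encodes the interval ontology underlying HS: intervals are pairs (a, b) with a < b, and each HS modality quantifies over intervals standing in one of the Allen-style relations to the current one.

```python
def meets(i, j):        # <A>: j starts exactly where i ends
    return i[1] == j[0]

def begins(i, j):       # <B>: j is a proper prefix of i
    return i[0] == j[0] and j[1] < i[1]

def ends(i, j):         # <E>: j is a proper suffix of i
    return i[1] == j[1] and i[0] < j[0]

def during(i, j):       # <D>: j lies strictly inside i
    return i[0] < j[0] and j[1] < i[1]

def later(i, j):        # <L>: j begins strictly after i ends
    return i[1] < j[0]

print(meets((0, 5), (5, 8)))    # True: (5, 8) starts where (0, 5) ends
print(during((0, 5), (1, 3)))   # True: (1, 3) is strictly inside (0, 5)
```

Restricting which modalities and propositional connectives may appear is exactly the kind of syntactic restriction that carves out the tractable fragments the abstract mentions.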


Author(s):  
Tashfin Ansari ◽  
Dr. Almas Siddiqui ◽  
Awasthi G. K

Artificial Intelligence (AI) and Machine Learning (ML) are rapidly becoming areas of interest for many researchers. ML is the field of computer science that gives computers the ability to learn without being explicitly programmed. This work focuses on the standard k-means clustering algorithm and analyses its shortcomings. In our modified algorithm, the distance from each data object to every cluster centre is not recalculated in every iteration, which makes clustering efficient. We improve the standard k-means algorithm by storing some information in each iteration, to be used in the next iteration; this avoids repeatedly computing the distance from each data object to each cluster centre, saving running time. Experimental results show faster clustering and improved accuracy, reducing the computational complexity of k-means. The experiments use the Iris dataset obtained from Kaggle.
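A minimal Python sketch of the bookkeeping the abstract describes (our reading of it, in the style of Fahim et al.'s enhanced k-means, not the authors' exact code): each point caches its assigned centre and distance, and the full distance scan is skipped when the point has not moved away from its centre.

```python
import math, random

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def fast_kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centres = random.sample(points, k)
    label = [min(range(k), key=lambda c: dist(p, centres[c])) for p in points]
    d_old = [dist(p, centres[l]) for p, l in zip(points, label)]
    for _ in range(iters):
        # update each centre as the mean of its assigned points
        for c in range(k):
            members = [p for p, l in zip(points, label) if l == c]
            if members:
                centres[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
        # reassign, skipping the full distance scan where possible
        for i, p in enumerate(points):
            d_new = dist(p, centres[label[i]])
            if d_new <= d_old[i]:
                d_old[i] = d_new   # no farther than before: keep the label
            else:
                label[i] = min(range(k), key=lambda c: dist(p, centres[c]))
                d_old[i] = dist(p, centres[label[i]])
    return centres, label

pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)] + \
      [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(50)]
centres, _ = fast_kmeans(pts, k=2)
print([tuple(round(x, 1) for x in c) for c in centres])
```

The skip is a heuristic: a point that has not moved away from its old centre is assumed to have no closer centre elsewhere, trading a little exactness for a large reduction in distance computations.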


2011 ◽  
pp. 83-93
Author(s):  
Rita M.R. Pizzi

The advances of artificial intelligence (AI) have renewed interest in the mind-body problem, the ancient philosophical debate on the nature of mind and its relationship with the brain. The new version of the mind-body problem concerns the relationship between computational complexity and self-aware thought. The traditional controversy between strong and weak AI will not be settled until we are able to build a robot evolved enough to let us verify its perceptions, its qualitative sensations, and its introspective thoughts. However, an alternative path can be followed: progress in micro-, nano-, and biotechnology allows us to create the first bionic creatures, composed of biological cells connected to electronic devices. Creating an artificial brain with a biological structure could make it possible to verify whether it possesses properties that an electronic one lacks, comparing the two at the same level of complexity.


Philosophies ◽  
2020 ◽  
Vol 5 (4) ◽  
pp. 37
Author(s):  
Attila Egri-Nagy ◽  
Antti Törmänen

The game of Go was the last great challenge for artificial intelligence in abstract board games. AlphaGo was the first system to achieve superhuman play, and subsequent implementations further improved the state of the art. As in chess, the fall of the human world champion did not lead to the end of the game. Now, there is renewed interest in the game due to new questions that have emerged from this development. How far are we from perfect play? Can humans catch up? How compressible is Go knowledge? What is the computational complexity of a perfect player? How much energy is really needed to play the game optimally? Here, we investigate these and related questions with respect to the special properties of Go (meaningful draws and extreme combinatorial complexity). Since traditional board games play an important role in human culture, our analysis is relevant in a broader context. What happens in the game world could forecast our relationship with AI entities, their explainability, and their usefulness.
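As a back-of-envelope illustration of the "extreme combinatorial complexity" the abstract invokes: every point of a 19×19 board is empty, black, or white, so 3^361 bounds the number of board configurations (the exact count of legal positions, roughly 2.08 × 10^170, was computed by Tromp in 2016).

```python
# Crude upper bound on Go board configurations: 3 states per point, 361 points.
digits = len(str(3 ** 361))
print(f"3^361 has {digits} digits")   # 173 digits, i.e. about 1.7 x 10^172
```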

