symbolic ai
Recently Published Documents


TOTAL DOCUMENTS: 35 (FIVE YEARS: 19)
H-INDEX: 5 (FIVE YEARS: 2)

2021
Author(s): Luciano Serafini, Artur d’Avila Garcez, Samy Badreddine, Ivan Donadello, Michael Spranger, ...

The recent availability of large-scale data combining multiple data modalities has opened various research and commercial opportunities in Artificial Intelligence (AI). Machine Learning (ML) has achieved important results in this area, mostly by adopting a sub-symbolic distributed representation. It is now generally accepted that such purely sub-symbolic approaches can be data inefficient and struggle with extrapolation and reasoning. By contrast, symbolic AI is based on rich, high-level representations, ideally built from human-readable symbols. Despite being more explainable and more successful at reasoning, symbolic AI usually struggles when faced with incomplete knowledge, inaccurate or large data sets, and combinatorial knowledge. Neurosymbolic AI attempts to benefit from the strengths of both approaches, combining reasoning over complex representations of knowledge with efficient learning from multiple data modalities. Hence, neurosymbolic AI seeks to ground rich knowledge in efficient sub-symbolic representations, and to explain sub-symbolic representations and deep learning by offering high-level symbolic descriptions of such learning systems. Logic Tensor Networks (LTN) are a neurosymbolic AI system for querying, learning, and reasoning with rich data and abstract knowledge. LTN introduces Real Logic, a fully differentiable first-order language with concrete semantics, such that every symbolic expression has an interpretation grounded onto real numbers in the domain. In particular, LTN converts Real Logic formulas into computational graphs that enable gradient-based optimization. This chapter presents the LTN framework and illustrates its use on knowledge completion tasks, grounding the relational predicates (symbols) into a concrete interpretation (vectors and tensors). It then investigates the use of LTN for semi-supervised learning, embedding learning, and reasoning.
LTN has recently been applied to many important AI tasks, including semantic image interpretation, ontology learning and reasoning, and reinforcement learning; these applications use LTN for supervised classification, data clustering, semi-supervised learning, embedding learning, reasoning, and query answering. The chapter presents some of the main recent applications of LTN before analyzing results in the context of related work and discussing the next steps for neurosymbolic AI and LTN-based AI models.
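To make the Real Logic idea concrete, the following is a minimal, self-contained sketch (not the actual LTN library API; the predicates, weights, and feature vectors are hypothetical) of grounding predicates as differentiable models and computing the truth degree of a universally quantified implication with common fuzzy-logic operators:

```python
import math

# Illustrative sketch of Real Logic: predicates are grounded as real-valued
# differentiable functions, and logical connectives/quantifiers become
# smooth operations, so the satisfaction level is amenable to gradients.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predicate(weights, x):
    """Ground a unary predicate as a parameterized model: features -> [0, 1]."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)))

def implies(a, b):
    """Reichenbach implication 1 - a + a*b, one common fuzzy choice."""
    return 1.0 - a + a * b

def forall(truth_degrees, p=2):
    """Smooth universal quantifier: 1 minus a generalized mean of the errors."""
    n = len(truth_degrees)
    err = (sum((1.0 - t) ** p for t in truth_degrees) / n) ** (1.0 / p)
    return 1.0 - err

# Hypothetical feature vectors for three individuals and predicate weights.
people = [[1.0, 0.2], [0.5, -0.4], [-1.0, 0.9]]
w_smokes, w_cancer = [2.0, -1.0], [1.5, 0.5]

# Satisfaction of  forall x: Smokes(x) -> Cancer(x)  over the sample.
sat = forall([implies(predicate(w_smokes, x), predicate(w_cancer, x))
              for x in people])
print(round(sat, 3))  # a satisfaction level in [0, 1]
```

In the real system, `sat` would be maximized by gradient ascent over the predicate parameters, which is what lets knowledge act as a training objective.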


2021
Author(s): Adnan Darwiche

Tractable Boolean and arithmetic circuits have been studied extensively in AI for over two decades now. These circuits were initially proposed as “compiled objects” meant to facilitate logical and probabilistic reasoning, as they permit various types of inference to be performed in linear time and in a feed-forward fashion, like neural networks. In more recent years, the role of tractable circuits has expanded significantly, as they have become a computational and semantic backbone for approaches that aim to integrate knowledge, reasoning, and learning. In this chapter, we review the foundations of tractable circuits and some associated milestones, focusing on the core properties and techniques that make them particularly useful for the broad aims of neuro-symbolic AI.
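The "linear time, feed-forward" claim can be illustrated with a tiny sketch (the circuit encoding and helper names are hypothetical, not from the chapter): a smooth, decomposable arithmetic circuit for (A or B) is evaluated bottom-up once, and the same pass does model counting with unit weights or probability computation with probabilistic weights:

```python
# Evaluate a tractable arithmetic circuit bottom-up in a single pass.
# Nodes: ("lit", var, polarity), ("mul", child...), ("add", child...).

def evaluate(node, weights):
    kind = node[0]
    if kind == "lit":
        _, var, pol = node
        return weights[(var, pol)]
    children = [evaluate(c, weights) for c in node[1:]]
    if kind == "mul":           # decomposable product node
        r = 1
        for v in children:
            r *= v
        return r
    if kind == "add":           # smooth sum node
        return sum(children)
    raise ValueError(kind)

# (A or B) compiled as the smooth, decomposable circuit
#   A*(B + notB) + notA*B
circuit = ("add",
           ("mul", ("lit", "A", True),
                   ("add", ("lit", "B", True), ("lit", "B", False))),
           ("mul", ("lit", "A", False), ("lit", "B", True)))

# Unit weights turn evaluation into model counting.
count = evaluate(circuit, {("A", True): 1, ("A", False): 1,
                           ("B", True): 1, ("B", False): 1})
print(count)  # 3 models of (A or B) over {A, B}
```

Swapping the unit weights for literal probabilities (e.g. 0.5 each) makes the identical pass compute P(A or B) = 0.75, which is the weighted-model-counting view the chapter builds on.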


2021
Author(s): Robin Manhaeve, Giuseppe Marra, Thomas Demeester, Sebastijan Dumančić, Angelika Kimmig, ...

There is a broad consensus that both learning and reasoning are essential to achieve true artificial intelligence. This has put the quest for neural-symbolic artificial intelligence (NeSy) high on the research agenda. In the past decade, neural networks have driven great advances in the field of machine learning. The two most prominent frameworks for reasoning, in turn, are logic and probability. While in the past they were studied by separate communities, a significant number of researchers have been working towards their integration, cf. the area of statistical relational artificial intelligence (StarAI). Generally, NeSy systems integrate logic with neural networks. However, probability theory has already been integrated with both logic (cf. StarAI) and neural networks. It therefore makes sense to consider the integration of all three: logic, neural networks, and probability. In this chapter, we first consider these three base paradigms separately. Then, we look at the well-established integrations, NeSy and StarAI. Next, we consider the integration of all three paradigms as Neural Probabilistic Logic Programming, exemplified by the DeepProbLog framework. Finally, we discuss the limitations of the state of the art and consider future directions based on the parallels between StarAI and NeSy.
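The core mechanism behind neural probabilistic logic programming can be sketched in a few lines (this is an illustrative toy, not the DeepProbLog API; the classifier outputs are hypothetical). A "neural predicate" maps each input image to a distribution over digits, and the probability of a logical query such as addition(X, Y, S) is the sum over all proofs, i.e. over digit pairs whose sum matches:

```python
# Toy neural-probabilistic-logic inference: combine neural predicate
# probabilities with logical proof enumeration.

def query_addition(p_x, p_y, target_sum):
    """P(addition) = sum over proofs: digit pairs (i, j) with i + j == target."""
    return sum(p_x[i] * p_y[j]
               for i in range(10) for j in range(10)
               if i + j == target_sum)

# Hypothetical softmax outputs of a digit classifier on two images.
p_img1 = [0.0] * 10
p_img2 = [0.0] * 10
p_img1[3], p_img1[5] = 0.7, 0.3   # probably a 3, maybe a 5
p_img2[4], p_img2[2] = 0.6, 0.4   # probably a 4, maybe a 2

# Proofs of sum 7: (3, 4) and (5, 2).
print(query_addition(p_img1, p_img2, 7))  # ~0.54 = 0.7*0.6 + 0.3*0.4
```

Because the query probability is a differentiable function of the classifier outputs, the logic program can backpropagate a loss on the query all the way into the neural network, which is the key idea the chapter develops.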


SoftwareX
2021, Vol 16, pp. 100817
Author(s): Giovanni Ciatto, Roberta Calegari, Andrea Omicini

Computers
2021, Vol 10 (11), pp. 154
Author(s): Alfonso Ortega, Julian Fierrez, Aythami Morales, Zilong Wang, Marina de la Cruz, ...

Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods become crucial. Inductive logic programming (ILP) is a subfield of symbolic AI that aims to automatically learn declarative theories about the processing of data. Learning from interpretation transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step towards a general methodology for incorporating accurate declarative explanations into classic machine learning, by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool, generated with machine learning methods, for ranking Curricula Vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applied to other domains. To check its ability to cope with other domains regardless of the machine learning paradigm used, we have run a preliminary test of the expressiveness of LFIT on a real dataset of adult incomes taken from the US census, modeling income level as a function of the remaining attributes, to verify whether LFIT can provide a logical theory to support and explain to what extent higher incomes are biased by gender and ethnicity.
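The "learning from interpretation transitions" idea can be sketched as follows (a toy illustration of the general principle, not the actual LFIT algorithm; the transition system and variable names are hypothetical): given observed transitions between Boolean states, keep every small rule body that consistently predicts a variable being true in the next state:

```python
from itertools import combinations

# Toy LFIT-style rule extraction from Boolean state transitions.

def learn_rules(transitions, variables, max_body=2):
    """Return rules (head, body) where, whenever body holds in a state s,
    head is true in the observed successor of s."""
    literals = [(v, True) for v in variables] + [(v, False) for v in variables]
    rules = []
    for head in variables:
        for size in range(1, max_body + 1):
            for body in combinations(literals, size):
                fires = [nxt for s, nxt in transitions
                         if all(s[v] == val for v, val in body)]
                if fires and all(nxt[head] for nxt in fires):
                    rules.append((head, body))
    return rules

# Hypothetical transitions of a 2-variable system where b(t+1) = a(t).
transitions = [
    ({"a": True,  "b": False}, {"a": True,  "b": True}),
    ({"a": False, "b": True},  {"a": False, "b": False}),
    ({"a": True,  "b": True},  {"a": True,  "b": True}),
]
rules = learn_rules(transitions, ["a", "b"], max_body=1)
print(rules)  # includes the rule b <- a, recovered from the data
```

The real technique produces a complete and minimal propositional theory rather than this brute-force candidate filter, but the input/output contract is the same: observed transitions in, human-readable rules out, which is what makes it usable as a declarative explanation layer over a black-box model.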


2021, Vol 2042 (1), pp. 012018
Author(s): Gilles Morel

Smart building and smart city specialists agree that complex, innovative use cases, especially those using cross-domain and multi-source data, need to make use of Artificial Intelligence (AI). However, today’s AI mainly concerns machine learning and artificial neural networks (deep learning), whereas the first forty years of the discipline (the last decades of the 20th century) were essentially focused on a knowledge-based approach, which is still relevant today for some tasks. In this article we advocate a merging of these two AI trends – an approach known as neuro-symbolic AI – for the smart city, and point the way towards a complete integration of the two technologies, compatible with standard software.


Author(s): Michael Sioutis, Diedrich Wolter

Qualitative Spatial & Temporal Reasoning (QSTR) is a major field of study in Symbolic AI that deals with the representation of, and reasoning over, spatio-temporal information in an abstract, human-like manner. We survey the current status of QSTR from the viewpoint of reasoning approaches, and identify future challenges that, once overcome, we believe will allow the field to meet the demands of, and adapt to, real-world, dynamic, and time-critical applications in highly active areas such as machine learning and data mining.
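A minimal example of the kind of reasoning QSTR studies (a sketch of one standard approach, the point algebra with path consistency, rather than anything specific to this survey): constraints between time points are sets of base relations {<, =, >}, and a propagation pass prunes them via the composition table.

```python
from itertools import product

# Path consistency in the qualitative point algebra.

COMP = {  # composition table for base relations r1;r2
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): {"<", "=", ">"},
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): {"<", "=", ">"}, (">", "="): {">"}, (">", ">"): {">"},
}

def compose(r1, r2):
    out = set()
    for a, b in product(r1, r2):
        out |= COMP[(a, b)]
    return out

def path_consistency(n, constraints):
    """Refine constraints[(i, j)] until no triangle i-k-j prunes anything.
    An empty relation set would signal inconsistency (not handled here)."""
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            if i == j or i == k or k == j:
                continue
            refined = constraints[(i, j)] & compose(constraints[(i, k)],
                                                    constraints[(k, j)])
            if refined != constraints[(i, j)]:
                constraints[(i, j)] = refined
                changed = True
    return constraints

# Three time points with A < B and B < C; everything else unconstrained.
ALL = {"<", "=", ">"}
c = {(i, j): set(ALL) for i, j in product(range(3), repeat=2) if i != j}
c[(0, 1)], c[(1, 0)] = {"<"}, {">"}
c[(1, 2)], c[(2, 1)] = {"<"}, {">"}
path_consistency(3, c)
print(c[(0, 2)])  # {'<'}: A < C is derived
```

Richer calculi such as Allen's interval algebra work the same way, just with a larger composition table, which is why reasoning approaches and their scalability are the survey's focus.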


Author(s): Yixin Zhang, Joe McCalmon, Ashley Peake, Sarra Alqahtani, Paul Pauca
