learning to reason
Recently Published Documents

TOTAL DOCUMENTS: 64 (FIVE YEARS: 12)
H-INDEX: 14 (FIVE YEARS: 1)

2021
Author(s): Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, Tim Rocktäschel

Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity: they need to consider all possible proof paths for explaining a goal, which renders them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule-selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests the systematic generalisation of neural models by training them to reason over smaller graphs and evaluating them on larger ones. Finally, CTPs achieve better link-prediction results than other neuro-symbolic models on standard benchmarks, while remaining explainable. All source code and datasets are available online at https://github.com/uclnlp/ctp.
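The rule-selection idea can be made concrete with a small sketch: instead of enumerating every rule (as a plain NTP would), a learned map generates a rule's body-atom embeddings conditioned on the goal, and the goal is scored by NTP-style soft unification against known facts. This is an illustrative NumPy sketch under assumed embedding representations; the names `select_rule` and `proof_score` are ours, not the paper's, and real CTPs train these maps end-to-end by back-propagation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # illustrative embedding size

# Hypothetical goal-conditioned rule selector: a linear map produces the
# embeddings of the two body atoms of a generated rule from the goal.
W_body1 = rng.normal(size=(DIM, DIM)) * 0.1
W_body2 = rng.normal(size=(DIM, DIM)) * 0.1

def select_rule(goal_emb):
    """Return embeddings of the two body atoms of a goal-conditioned rule."""
    return W_body1 @ goal_emb, W_body2 @ goal_emb

def proof_score(goal_emb, fact_embs):
    """Score the goal by unifying each generated body atom with its closest
    known fact, using an RBF kernel as in NTP-style soft unification."""
    body1, body2 = select_rule(goal_emb)
    def best(atom):
        return max(np.exp(-np.sum((atom - f) ** 2)) for f in fact_embs)
    # A rule proves the goal only as strongly as its weakest body atom.
    return min(best(body1), best(body2))

goal = rng.normal(size=DIM)
facts = [rng.normal(size=DIM) for _ in range(5)]
score = proof_score(goal, facts)
print(0.0 < score <= 1.0)  # soft-proof scores lie in (0, 1]
```

Because only the generated rule is followed, the proof search avoids branching over the whole rule set, which is the source of the scalability claim.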


Author(s): Ionela G. Mocanu

Since knowledge engineering is an inherently challenging and somewhat unbounded task, machine learning has been widely proposed as an alternative. In real-world scenarios, we often need to explicitly model multiple agents, where intelligent agents act towards achieving goals either by coordinating with other agents or, in a competitive context, by anticipating the opponents' moves. We consider the knowledge acquisition problem in which agents have knowledge about the world and about other agents, and then acquire new knowledge (about both) in service of answering queries. We propose a model of implicit learning, or more generally learning to reason, which bypasses the intractable step of producing an explicit representation of the learned knowledge. We show that polynomial-time learnability results can be obtained when knowledge bases and observations are restricted to conjunctions of modal literals.
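The restricted hypothesis class can be illustrated concretely: a conjunction of modal literals is evaluated at a world of a Kripke model. The sketch below is ours, not the paper's algorithm (whose point is precisely to avoid building such explicit representations); the data structures and names are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModalLiteral:
    prop: str
    negated: bool = False  # ¬p
    boxed: bool = False    # □p ("the agent knows p")

def holds(lit, world, accessible, valuation):
    """Truth of one modal literal at `world`.
    `accessible[w]` lists the worlds the agent considers possible at w;
    `valuation[w]` is the set of propositions true at w."""
    if lit.boxed:
        value = all(lit.prop in valuation[v] for v in accessible[world])
    else:
        value = lit.prop in valuation[world]
    return value != lit.negated

def conjunction_holds(lits, world, accessible, valuation):
    return all(holds(l, world, accessible, valuation) for l in lits)

# Two-world model: the agent cannot distinguish w0 from w1.
accessible = {"w0": ["w0", "w1"], "w1": ["w0", "w1"]}
valuation = {"w0": {"p", "q"}, "w1": {"p"}}

# p ∧ □p holds at w0 (p is true in every accessible world)...
print(conjunction_holds([ModalLiteral("p"), ModalLiteral("p", boxed=True)],
                        "w0", accessible, valuation))  # True
# ...but □q fails, since q is false at the indistinguishable world w1.
print(holds(ModalLiteral("q", boxed=True), "w0", accessible, valuation))  # False
```

Each literal is checked independently and conjunction is just `all(...)`, which is what makes this fragment amenable to polynomial-time learnability arguments.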


Author(s): Yuqian Jiang

Despite recent progress in AI and robotics research, especially in learned robot skills, significant challenges remain in building robust, scalable, general-purpose systems for service robots. This Ph.D. research aims to combine symbolic planning and reinforcement learning to reason about high-level robot tasks and adapt to the real world. We will introduce task-planning algorithms that adapt to the environment and to other agents, as well as reinforcement-learning methods that are practical for service robot systems. Taken together, this work will be a significant step towards creating general-purpose service robots.


2021, pp. 1-17
Author(s): Dor Abrahamson

What evolutionary account explains our capacity to reason mathematically? Identifying the biological provenance of mathematical thinking would bear on education, because we could then design learning environments that simulate ecologically authentic conditions for leveraging this universal phylogenetic inclination. The ancient mechanism co-opted for mathematical activity, I propose, is our fundamental organismic capacity to improve our sensorimotor engagement with the environment by detecting, generating, and maintaining goal-oriented perceptual structures that regulate action, whether actual or imaginary. The phenomenology of grasping a mathematical notion is literally that: gripping the environment in a new way that promotes interaction. To argue for the plausibility of this thesis, I first survey the embodiment literature to implicate cognition as constituted in perceptuomotor engagement. I then summarize findings from a design-based research project investigating the relations between learning to move in new ways and learning to reason mathematically about these conceptual choreographies. The project thus proposes educational implications of enactivist evolutionary biology.


2021
Author(s): Casper Hansen, Christian Hansen, Lucas Chaves Lima

2020, Vol. 34 (04), pp. 3097-3104
Author(s): Ralph Abboud, Ismail Ceylan, Thomas Lukasiewicz

Weighted model counting (WMC) has emerged as a prevalent approach for probabilistic inference. In its most general form, WMC is #P-hard. Weighted DNF counting (weighted #DNF) is a special case in which approximations with probabilistic guarantees can be obtained in O(nm) time, where n denotes the number of variables and m the number of clauses of the input DNF; even this, however, is not scalable in practice. In this paper, we propose a neural model counting approach for weighted #DNF that combines approximate model counting with deep learning, and accurately approximates model counts in linear time when the clause width is bounded. We conduct experiments to validate our method, and show that our model learns and generalizes very well to large-scale #DNF instances.
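The O(nm) guarantee referred to above is that of the classical Karp-Luby Monte Carlo estimator for (unweighted) #DNF, which the neural approach builds on; a minimal sketch of that baseline, not of the paper's model, follows. Clause representation and function names are ours.

```python
import random

def karp_luby_dnf_count(clauses, n_vars, samples=20000, seed=0):
    """Karp-Luby estimator for the number of satisfying assignments of a
    DNF formula. Each clause is a dict {var_index: required_bool}."""
    rng = random.Random(seed)
    # Assignments satisfying clause c: free variables are unconstrained.
    weights = [2 ** (n_vars - len(c)) for c in clauses]
    total = sum(weights)
    hits = 0
    for _ in range(samples):
        # Pick a clause with probability proportional to its weight.
        i = rng.choices(range(len(clauses)), weights=weights)[0]
        # Sample uniformly among the assignments satisfying clause i.
        assignment = {v: rng.random() < 0.5 for v in range(n_vars)}
        assignment.update(clauses[i])
        # Count the sample only if i is the FIRST clause it satisfies,
        # so each satisfying assignment contributes exactly once.
        first = min(j for j, c in enumerate(clauses)
                    if all(assignment[v] == val for v, val in c.items()))
        hits += (first == i)
    return total * hits / samples

# x0 OR x1 over two variables has exactly 3 satisfying assignments.
est = karp_luby_dnf_count([{0: True}, {1: True}], n_vars=2)
print(round(est, 1))  # converges to the true count, 3
```

Each sample costs O(m) clause checks on an assignment of n variables, which is where the O(nm) per-estimate cost, and the motivation for a cheaper learned approximation, comes from.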


2020
Author(s): Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, Yejin Choi

2020
Author(s): Nikhil Verma, Abhishek Sharma, Dhiraj Madan, Danish Contractor, Harshit Kumar, ...
