Commonsense Reasoning Using Theorem Proving and Machine Learning

Author(s):  
Sophie Siebert ◽  
Claudia Schon ◽  
Frieder Stolzenburg


Author(s):  
Xenia Naidenova

The purpose of this chapter is to demonstrate the possibility of transforming a large class of machine learning algorithms into commonsense reasoning processes based on well-known deductive and inductive logical rules. The concept of a good classification (diagnostic) test for a given set of positive examples lies at the basis of our approach to machine learning problems. The task of inferring all good diagnostic tests is formulated as searching for the best approximations of a given classification (a partitioning) of a given set of examples. Lattice theory is used as the mathematical language for constructing good classification tests. The algorithms for inferring good tests are decomposed into subtasks and operations that accord with the main rules of human commonsense reasoning.
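To make the notion concrete, here is a minimal Python sketch of what a good diagnostic test looks like operationally: a set of attributes on which no negative example agrees with any positive example. The function names and toy data are invented for illustration; the chapter's actual algorithms search the lattice of attribute subsets far more cleverly than this brute-force enumeration.

```python
from itertools import combinations

def is_test(attrs, positives, negatives):
    """A set of attributes is a test for the positive class if no negative
    example agrees with any positive example on all of those attributes."""
    pos_proj = {tuple(ex[a] for a in attrs) for ex in positives}
    neg_proj = {tuple(ex[a] for a in attrs) for ex in negatives}
    return pos_proj.isdisjoint(neg_proj)

def minimal_good_tests(attributes, positives, negatives):
    """Enumerate attribute subsets smallest-first, keeping those that are
    tests and contain no smaller test already found (irredundancy)."""
    found = []
    for k in range(1, len(attributes) + 1):
        for subset in combinations(attributes, k):
            if any(set(t) <= set(subset) for t in found):
                continue  # a smaller test is already contained in this subset
            if is_test(subset, positives, negatives):
                found.append(subset)
    return found

# Toy data: objects described by two attributes.
positives = [{"cap": "flat", "odor": "almond"},
             {"cap": "bell", "odor": "almond"}]
negatives = [{"cap": "flat", "odor": "foul"},
             {"cap": "bell", "odor": "foul"}]
print(minimal_good_tests(["cap", "odor"], positives, negatives))
```

Here "odor" alone separates the classes, while "cap" does not, so the only minimal good test is `("odor",)`.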


10.29007/lt5p ◽  
2019 ◽  
Author(s):  
Sophie Siebert ◽  
Frieder Stolzenburg

Commonsense reasoning is an everyday task that is intuitive for humans but hard to implement for computers. It requires large knowledge bases to draw the required data from, yet this data is often incomplete or even inconsistent. While machine learning algorithms perform rather well on these tasks, their reasoning process remains a black box. To close this gap, our system CoRg aims to be an explainable and well-performing system, consisting of both an explainable deductive derivation process and a machine learning part. We conduct our experiments on the COPA question-answering benchmark using the ontologies WordNet, Adimen-SUMO, and ConceptNet. The knowledge is fed into the theorem prover Hyper, and in the end the constructed models are analyzed using machine learning algorithms to derive the most probable answer.
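As a rough illustration of the final step, the following Python sketch scores answer candidates by the overlap of their derived models with the premise's model and picks the highest-scoring one. This is a deliberate simplification: CoRg feeds the Hyper-derived models into trained machine learning models rather than a fixed similarity measure, and the predicates below are invented placeholders.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of derived facts."""
    return len(a & b) / len(a | b) if a | b else 0.0

def choose_answer(premise_model, candidate_models):
    """Pick the index of the candidate whose derived model overlaps most
    with the premise's model, plus all scores for inspection."""
    scores = [jaccard(premise_model, m) for m in candidate_models]
    return max(range(len(scores)), key=scores.__getitem__), scores

# Invented toy models: sets of ground facts a prover might derive.
premise = {"rain(x)", "wet(ground)"}
candidates = [{"wet(ground)", "umbrella(y)"},
              {"sun(x)", "dry(ground)"}]
best, scores = choose_answer(premise, candidates)
print(best, scores)
```

The first candidate shares a fact with the premise model and the second shares none, so the sketch selects candidate 0.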


Author(s):  
Abhishek Sharma ◽  
Keith M. Goolsbey

Cognitive systems must reason with large bodies of general knowledge to perform complex tasks in the real world. However, due to the intractability of reasoning in large, expressive knowledge bases (KBs), many AI systems have limited reasoning capabilities. Successful cognitive systems have used a variety of machine learning and axiom selection methods to improve inference. In this paper, we describe a search heuristic that uses a Monte-Carlo simulation technique to choose inference steps. We test the efficacy of this approach on a very large and expressive KB, Cyc. Experimental results on hundreds of queries show that this method is highly effective in reducing inference time and improving question-answering (Q/A) performance.
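The idea of choosing inference steps by Monte-Carlo simulation can be sketched as follows: estimate each candidate step's promise by random playouts from the resulting state and expand the most promising one. This is a generic sketch, not Cyc's actual heuristic; the state representation, step generator, and parameters are all placeholders.

```python
import random

def monte_carlo_choose(state, steps, apply_step, is_goal,
                       rollouts=200, depth=8, rng=None):
    """Pick the candidate inference step whose random playouts most often
    reach a goal state within the depth limit."""
    rng = rng or random.Random(0)

    def rollout(s):
        for _ in range(depth):
            if is_goal(s):
                return 1.0
            nxt = steps(s)
            if not nxt:
                return 0.0  # dead end: no applicable inference steps
            s = apply_step(s, rng.choice(nxt))
        return 1.0 if is_goal(s) else 0.0

    values = {c: sum(rollout(apply_step(state, c))
                     for _ in range(rollouts)) / rollouts
              for c in steps(state)}
    return max(values, key=values.get)

# Toy "proof search": the state is a distance to the goal, steps move it +/-1.
best = monte_carlo_choose(3,
                          steps=lambda s: [-1, 1],
                          apply_step=lambda s, c: s + c,
                          is_goal=lambda s: s == 0)
print(best)
```

Playouts starting closer to the goal succeed more often, so the simulation favors the step that moves toward it.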


2014 ◽  
Vol 53 (2) ◽  
pp. 141-172 ◽  
Author(s):  
James P. Bridge ◽  
Sean B. Holden ◽  
Lawrence C. Paulson

Author(s):  
Xenia Naidenova

One of the most important tasks in database technology is to combine the following activities: data mining, i.e., inferring knowledge from data, and query processing, i.e., reasoning over acquired knowledge. Solving this task requires a logical language with unified syntax and semantics for integrating deductive (knowledge-using) and inductive (knowledge-acquiring) reasoning. In this paper, we propose a unified model of commonsense reasoning. We also demonstrate that a large class of inductive machine learning (ML) algorithms can be transformed into commonsense reasoning processes based on well-known deductive and inductive logical rules. The concept of a good classification (diagnostic) test (Naidenova & Polegaeva, 1986) is the basis of our approach to combining deductive and inductive reasoning.
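A minimal Python illustration of the induction/deduction pairing described here: the inductive step generalizes positive examples to their common attribute values (a crude stand-in for good-test construction), and the deductive step applies the resulting rule to classify a new example. The toy data and function names are invented for illustration.

```python
def induce_rule(positives):
    """Inductive step: generalize positive examples to the attribute-value
    pairs they all share (a simplest-case generalization)."""
    common = set(positives[0].items())
    for ex in positives[1:]:
        common &= set(ex.items())
    return dict(common)

def deduce(rule, example):
    """Deductive step: the rule fires iff the example satisfies
    every condition in it."""
    return all(example.get(attr) == val for attr, val in rule.items())

positives = [{"shape": "round", "color": "red", "size": "big"},
             {"shape": "round", "color": "red", "size": "small"}]
rule = induce_rule(positives)   # size varies, so it drops out of the rule
print(rule)
print(deduce(rule, {"shape": "round", "color": "red", "size": "medium"}))
```

The induced rule keeps only "shape" and "color", so a new round red object of any size is classified as positive.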

