Relational Reasoning Using Neural Networks: A Survey

Author(s):  
Anil Audumbar Pise ◽  
Hima Vadapalli ◽  
Ian Sanders


Author(s):  
Luís C. Lamb ◽  
Artur d’Avila Garcez ◽  
Marco Gori ◽  
Marcelo O.R. Prates ◽  
Pedro H.C. Avelar ◽  
...  

Neural-symbolic computing has become the subject of interest of both academic and industry research laboratories. Graph Neural Networks (GNNs) have been widely used in relational and symbolic domains, with applications in combinatorial optimization, constraint satisfaction, relational reasoning and other scientific domains. The need for improved explainability, interpretability and trust of AI systems in general demands principled methodologies, as suggested by neural-symbolic computing. In this paper, we review the state-of-the-art on the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing.
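As a rough illustration of the relational computation a GNN performs, the sketch below implements a single message-passing layer in PyTorch. The class name, tensor shapes, and MLP sizes are illustrative assumptions and do not correspond to the architecture of any specific system reviewed in the paper.

```python
# Minimal message-passing GNN layer (illustrative sketch only; not the
# architecture of any surveyed system).
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, node_dim: int, hidden_dim: int):
        super().__init__()
        # Message function: combines a node's state with each neighbour's state.
        self.message_mlp = nn.Sequential(
            nn.Linear(2 * node_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, node_dim),
        )
        # Update function: merges a node's state with its aggregated messages.
        self.update_mlp = nn.Sequential(
            nn.Linear(2 * node_dim, node_dim), nn.ReLU(),
        )

    def forward(self, nodes: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # nodes: (num_nodes, node_dim); adjacency: (num_nodes, num_nodes) with 0/1 entries.
        num_nodes, _ = nodes.shape
        senders = nodes.unsqueeze(0).expand(num_nodes, -1, -1)    # neighbour states
        receivers = nodes.unsqueeze(1).expand(-1, num_nodes, -1)  # each node's own state
        messages = self.message_mlp(torch.cat([receivers, senders], dim=-1))
        # Mask out non-edges, then sum incoming messages per node.
        aggregated = (messages * adjacency.unsqueeze(-1)).sum(dim=1)
        return self.update_mlp(torch.cat([nodes, aggregated], dim=-1))
```

Stacking several such layers lets information propagate along relations in the graph, which is the mechanism GNN-based approaches exploit for constraint satisfaction and relational reasoning problems.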


Author(s):  
Jessica A. F. Thompson

Much of the controversy evoked by the use of deep neural networks as models of biological neural systems amounts to debates over what constitutes scientific progress in neuroscience. In order to discuss what constitutes scientific progress, one must have a goal in mind (progress towards what?). One such long-term goal is to produce scientific explanations of intelligent capacities (e.g., object recognition, relational reasoning). I argue that the most pressing philosophical questions at the intersection of neuroscience and artificial intelligence are ultimately concerned with defining the phenomena to be explained and with what constitutes a valid explanation of such phenomena. I propose that a foundation in the philosophy of scientific explanation and understanding can scaffold future discussions about how an integrated science of intelligence might progress. Towards this vision, I review relevant theories of scientific explanation and discuss strategies for unifying the scientific goals of neuroscience and AI.


2021 ◽  
Author(s):  
Jessica Anne Farrell Thompson

Much of the controversy evoked by the use of deep neural networks (DNNs) as models of biological neural systems amounts to debates over what constitutes scientific progress in neuroscience. In order to discuss what constitutes scientific progress, one must have a goal in mind (progress towards what?). One such long-term goal is to produce scientific explanations of intelligent capacities (e.g., object recognition, relational reasoning). I argue that the most pressing philosophical questions at the intersection of neuroscience and artificial intelligence are ultimately concerned with defining the phenomena to be explained and with what constitutes a valid explanation of such phenomena. As such, I propose that a foundation in the philosophy of scientific explanation and understanding can scaffold future discussions about how an integrated science of intelligence might progress. Towards this vision, I review several of the most relevant theories of scientific explanation and begin to outline candidate forms of explanation for neural and cognitive phenomena.


2021 ◽  
Author(s):  
Guillermo Puebla ◽  
Jeffrey Bowers

Same-different visual reasoning is a basic skill central to abstract combinatorial thought. This fact has led neural network researchers to test same-different classification on deep convolutional neural networks (DCNNs), which has resulted in a controversy regarding whether this skill is within the capacity of these models. However, most tests of same-different classification rely on test images drawn from the same pixel-level distribution as the training images, rendering the results inconclusive. In this study we tested relational same-different reasoning in DCNNs. In a series of simulations we show that models based on the ResNet-50 architecture are capable of visual same-different classification, but only when the test images are similar to the training images at the pixel level. In contrast, when the testing and training images differ even subtly, the performance of DCNNs drops substantially. This is true even when the DCNNs' training regime is augmented with images from new versions of the same-different task or through multi-task learning on the test images. Furthermore, we show that the Relation Network, a deep learning architecture specifically designed to tackle visual relational reasoning problems, suffers from the same kinds of limitations as ResNet-50 classifiers.
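To make the experimental setup concrete, the sketch below shows one plausible way to build and evaluate a ResNet-50-based same/different classifier in PyTorch. The function names, data loaders, and weight initialization are assumptions for illustration, not the authors' actual code or materials.

```python
# Sketch of a ResNet-50 backbone with a binary ("same" vs. "different") head.
# The critical manipulation in such studies is whether the evaluation set matches
# the training images at the pixel level or only at the relational level.
import torch
import torch.nn as nn
from torchvision import models

def build_same_different_classifier() -> nn.Module:
    backbone = models.resnet50(weights=None)              # could also start from pretrained weights
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # two classes: same / different
    return backbone

@torch.no_grad()
def accuracy(model: nn.Module, loader) -> float:
    # Evaluate on a held-out set (in-distribution or a new version of the task).
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        predictions = model(images).argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.numel()
    return correct / total
```

Comparing the accuracy on a pixel-level-matched test set against a test set that preserves only the same/different relation is what distinguishes genuine relational generalization from surface-level fitting.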


2019 ◽  
Vol 42 ◽  
Author(s):  
Daniel J. Povinelli ◽  
Gabrielle C. Glorioso ◽  
Shannon L. Kuznar ◽  
Mateja Pavlic

Abstract. Hoerl and McCormack demonstrate that although animals possess a sophisticated temporal updating system, there is no evidence that they also possess a temporal reasoning system. This important case study is directly related to the broader claim that although animals are manifestly capable of first-order (perceptually based) relational reasoning, they lack the capacity for higher-order, role-based relational reasoning. We argue that this distinction applies to all domains of cognition.


2020 ◽  
Vol 36 (2) ◽  
pp. 296-302 ◽  
Author(s):  
Luke J. Hearne ◽  
Damian P. Birney ◽  
Luca Cocchi ◽  
Jason B. Mattingley

Abstract. The Latin Square Task (LST) is a relational reasoning paradigm developed by Birney, Halford, and Andrews (2006). Previous work has shown that the LST elicits typical reasoning complexity effects, such that increases in complexity are associated with decrements in task accuracy and increases in response times. Here we modified the LST for use in functional brain imaging experiments, in which presentation durations must be strictly controlled, and assessed its validity and reliability. Modifications included presenting the components within each trial serially, such that the reasoning and response periods were separated. In addition, the inspection time for each LST problem was constrained to five seconds. We replicated previous findings of higher error rates and slower response times with increasing relational complexity and observed relatively large effect sizes (ηp² > 0.70, r > .50). Moreover, measures of internal consistency and test-retest reliability confirmed the stability of the LST within and across separate testing sessions. Interestingly, we found that limiting the inspection time for individual problems in the LST had little effect on accuracy relative to the unconstrained times used in previous work, a finding that is important for future brain imaging experiments aimed at investigating the neural correlates of relational reasoning.
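For concreteness, the sketch below lays out the serial trial structure described in the abstract, with a fixed five-second inspection window. Only that window comes from the abstract; every other duration, name, and the `present`/`collect_response` callbacks are hypothetical placeholders, not the published protocol.

```python
# Schematic of one serially presented LST trial: the reasoning (inspection) period
# is separated from the response period, and inspection time is capped at 5 s.
def run_lst_trial(problem, present, collect_response):
    """`present(stimulus, duration_s)` and `collect_response(timeout_s)` are assumed
    to be supplied by the experiment software (e.g., a PsychoPy wrapper)."""
    present("fixation_cross", duration_s=1.0)   # placeholder duration
    present(problem, duration_s=5.0)            # fixed 5 s inspection window (reasoning period)
    present("blank_delay", duration_s=0.5)      # placeholder duration
    return collect_response(timeout_s=4.0)      # separate response period; placeholder timeout
```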

