symbol grounding problem
Recently Published Documents


TOTAL DOCUMENTS: 51 (five years: 7)
H-INDEX: 9 (five years: 1)

2021 ◽  
Author(s):  
Masataro Asai ◽  
Hiroshi Kajino ◽  
Alex Fukunaga ◽  
Christian Muise

Symbolic systems require hand-coded symbolic representations as input, resulting in a knowledge acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, its knowledge is encoded in a subsymbolic representation that is incompatible with symbolic systems. To bridge the gap between the two fields, one has to solve the Symbol Grounding Problem: the question of how a machine can generate symbols automatically. We discuss our recent work, Latplan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of the transitions allowed in the environment (training inputs), Latplan learns a complete propositional PDDL action model of the environment. Later, when a pair of images representing the initial and goal states (planning inputs) is given, Latplan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution. We discuss several key ideas that made Latplan possible, which we hope will extend to many other symbolic paradigms beyond classical planning.
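The following is a minimal sketch, not the Latplan implementation. It assumes a discrete autoencoder has already mapped each image to a binary latent vector of propositions, and illustrates only how propositional preconditions and add/delete effects could then be read off from observed latent transitions; the function names and toy data are hypothetical.

    # Minimal sketch (not the Latplan code): derive STRIPS-style propositional
    # actions from binary latent transitions produced by a discrete autoencoder.
    import numpy as np
    from collections import defaultdict

    def extract_actions(pre_states, post_states):
        """pre_states, post_states: (N, F) arrays of 0/1 latent propositions."""
        schemas = defaultdict(list)
        for pre, post in zip(pre_states, post_states):
            add = tuple(np.flatnonzero((post == 1) & (pre == 0)))     # bits turned on
            delete = tuple(np.flatnonzero((post == 0) & (pre == 1)))  # bits turned off
            schemas[(add, delete)].append(pre)                        # group by effect
        actions = []
        for (add, delete), pres in schemas.items():
            pres = np.array(pres)
            actions.append({
                "pre+": np.flatnonzero(pres.min(axis=0) == 1).tolist(),  # always true before
                "pre-": np.flatnonzero(pres.max(axis=0) == 0).tolist(),  # always false before
                "add":  list(add),
                "del":  list(delete),
            })
        return actions

    # Toy usage: 4 latent propositions, two observed transitions sharing one effect.
    pre  = np.array([[1, 0, 0, 1], [1, 0, 1, 1]])
    post = np.array([[0, 1, 0, 1], [0, 1, 1, 1]])
    print(extract_actions(pre, post))

The dictionaries produced this way correspond to propositional actions with positive/negative preconditions and add/delete lists, i.e., the kind of PDDL action model the abstract describes.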


2020 ◽  
Vol 30 (3) ◽  
pp. 325-347
Author(s):  
Holger Lyre

Abstract: The goal of the paper is to develop and propose a general model of the state space of AI. Given the breathtaking progress in AI research and technologies in recent years, such conceptual work is of substantial theoretical interest. The present AI hype is mainly driven by the triumph of deep learning neural networks. As the distinguishing feature of such networks is their ability to self-learn, self-learning is identified as one important dimension of the AI state space. A second dimension is generalization: the ability to move from specific to more general types of problems. A third dimension is semantic grounding. Our overall analysis connects to a number of known foundational issues in the philosophy of mind and cognition: the blockhead objection, the Turing test, the symbol grounding problem, the Chinese room argument, and use theories of meaning. It is finally argued that the dimension of grounding decomposes into three sub-dimensions, and that the dimension of self-learning turns out to be only one of a whole range of “self-x-capacities” (based on ideas of organic computing) that span the self-x-subspace of the full AI state space.
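As a purely illustrative sketch (the paper is conceptual and defines no code), a point in the proposed state space could be represented as a record over the named dimensions. The abstract does not name the three grounding sub-dimensions or the other self-x capacities, so those appear only as unnamed placeholders; all identifiers and values below are hypothetical.

    # Illustrative sketch only: a point in the AI "state space" described above.
    # Dimension names beyond those in the abstract are placeholders, not the paper's.
    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    @dataclass
    class AIStateSpacePoint:
        generalization: float                       # specific -> more general problem types
        grounding: Tuple[float, float, float]       # decomposes into three sub-dimensions
        self_x: Dict[str, float] = field(default_factory=dict)  # self-learning is one of several

    # Example: a deep-learning system strong on self-learning, weak on grounding.
    dl_system = AIStateSpacePoint(
        generalization=0.3,
        grounding=(0.1, 0.1, 0.1),
        self_x={"self-learning": 0.9},
    )
    print(dl_system)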


Algorithms ◽  
2020 ◽  
Vol 13 (7) ◽  
pp. 175 ◽  
Author(s):  
Eric Dietrich ◽  
Chris Fields

The open-domain Frame Problem is the problem of determining what features of an open task environment need to be updated following an action. Here we prove that the open-domain Frame Problem is equivalent to the Halting Problem and is therefore undecidable. We discuss two other open-domain problems closely related to the Frame Problem, the system identification problem and the symbol-grounding problem, and show that they are similarly undecidable. We then reformulate the Frame Problem as a quantum decision problem, and show that it is undecidable by any finite quantum computer.
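As a schematic illustration only, not the authors' construction: undecidability results of this kind are standardly obtained by reduction from the Halting Problem, along the following lines.

    % Schematic reduction sketch (hedged; not the paper's proof).
    % Given a Turing machine M and input w, construct an open task environment E_{M,w}
    % and an action a such that
    \[
      \text{some feature } f \text{ of } E_{M,w} \text{ must be updated after } a
      \;\Longleftrightarrow\;
      M \text{ halts on } w .
    \]
    % A decider for the open-domain Frame Problem applied to (E_{M,w}, a, f) would
    % then decide whether M halts on w, contradicting the undecidability of halting.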


2020 ◽  
Vol 07 (01) ◽  
pp. 73-82
Author(s):  
Pentti O. A. Haikonen

The popular expectation is that Artificial Intelligence (AI) will soon surpass the capacities of the human mind and Strong Artificial General Intelligence (AGI) will replace the contemporary Weak AI. However, certain fundamental issues have to be addressed before this can happen. There can be no intelligence without understanding, and there can be no understanding without grasping meanings. Contemporary computers manipulate symbols without their meanings; meanings are not incorporated in the computations. This leads to the Symbol Grounding Problem: how could meanings be incorporated? The use of self-explanatory sensory information has been proposed as a possible solution. However, self-explanatory information can only be used in neural network machines that are different from existing digital computers and traditional multilayer neural networks. In humans, self-explanatory information has the form of qualia. To have reportable qualia is to be phenomenally conscious. This leads to the hypothesis of an unavoidable connection between the solution of the Symbol Grounding Problem and consciousness. If, in general, self-explanatory information equals qualia, then machines that utilize self-explanatory information would be conscious.


2019 ◽  
Author(s):  
Stevan Harnad

Brette (2019) criticizes the notion of neural coding because it seems to entail that neural signals need to be “decoded” by or for some receiver in the head. If that were so, then neural coding would indeed be homuncular (Brette calls it “dualistic”), requiring an entity to decipher the code. But I think Brette’s plea to think instead in terms of complex, interactive causal throughput is preaching to the converted. Turing (not Shannon) has already shown the way. In any case, the metaphor of neural coding has little to do with the symbol grounding problem.


2019 ◽  
Author(s):  
Simone Viganò ◽  
Valentina Borghesani ◽  
Manuela Piazza

2019 ◽  
Vol 42 ◽  
Author(s):  
Stevan Harnad

Abstract: Brette criticizes the notion of neural coding because it seems to entail that neural signals need to be “decoded” by or for some receiver in the head. If that were so, then neural coding would indeed be homuncular (Brette calls it “dualistic”), requiring an entity to decipher the code. But I think Brette's plea to think instead in terms of complex, interactive causal throughput is preaching to the converted. Turing (not Shannon) has already shown the way. In any case, the metaphor of neural coding has little to do with the symbol grounding problem.


Author(s):  
Martin V. Butz ◽  
Esther F. Kutter

With the motivation to develop computational and algorithmic levels of understanding of how the mind comes into being, this chapter considers computer science, artificial intelligence, and cognitive systems perspectives. Questions are addressed such as what ‘intelligence’ may actually be, and how and when an artificial system may be considered intelligent and to have a mind of its own. May it even be alive? Out of these considerations, the chapter derives three fundamental problems for cognitive systems: the symbol grounding problem, the frame problem, and the binding problem. We show that symbol-processing artificial systems cannot solve these problems satisfactorily. Neural networks and embodied systems offer alternatives. Moreover, biological observations and studies with embodied robotic systems imply that behavioral capabilities can foster and facilitate the development of suitably abstracted, symbolic structures. We finally consider Alan Turing’s question “Can machines think?” and emphasize that such machines must at least solve the three fundamental cognitive systems problems considered here. The rest of the book addresses how the human brain, equipped with a suitably structured body and body–brain interface, manages to solve these problems, and thus to develop a mind.

