Symbol Grounding
Recently Published Documents


TOTAL DOCUMENTS: 130 (five years: 12)
H-INDEX: 17 (five years: 1)

2021 · Author(s): Masataro Asai, Hiroshi Kajino, Alex Fukunaga, Christian Muise

Symbolic systems require hand-coded symbolic representations as input, resulting in a knowledge-acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, its knowledge is encoded in a subsymbolic representation that is incompatible with symbolic systems. To bridge the gap between the two fields, one has to solve the Symbol Grounding problem: how can a machine generate symbols automatically? We discuss our recent work, Latplan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of the transitions allowed in the environment (training inputs), Latplan learns a complete propositional PDDL action model of the environment. Later, when given a pair of images representing the initial and goal states (planning inputs), Latplan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution. We discuss several key ideas that made Latplan possible and that we hope will extend to many other symbolic paradigms beyond classical planning.
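The pipeline described in this abstract can be made concrete with a toy sketch. The following Python snippet is a minimal, hypothetical analogue of the Latplan pipeline, not the authors' implementation: a thresholding "encoder" stands in for the learned State Autoencoder, observed image pairs yield a set of symbolic transitions in place of a learned PDDL action model, and breadth-first search stands in for a classical planner. All function names are illustrative.

```python
import numpy as np
from collections import deque

def encode(image, threshold=0.5):
    # Hypothetical stand-in for Latplan's learned State Autoencoder:
    # map an image to a propositional (binary) latent state.
    return tuple(int(v > threshold) for v in np.asarray(image).flatten())

def learn_transitions(image_pairs):
    # Collect symbolic transitions from unlabeled before/after image
    # pairs; a toy analogue of Latplan's action-model acquisition.
    return {(encode(a), encode(b)) for a, b in image_pairs}

def plan(transitions, init_img, goal_img):
    # Breadth-first search in the symbolic latent space.
    init, goal = encode(init_img), encode(goal_img)
    parent = {init: None}
    frontier = deque([init])
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]  # latent-state sequence from init to goal
        for pre, post in transitions:
            if pre == state and post not in parent:
                parent[post] = state
                frontier.append(post)
    return None  # no plan found

# Toy usage: three 2x2 "images" linked by two observed transitions.
imgs = [np.array([[0, 0], [0, 0]]),
        np.array([[1, 0], [0, 0]]),
        np.array([[1, 1], [0, 0]])]
pairs = [(imgs[0], imgs[1]), (imgs[1], imgs[2])]
print(plan(learn_transitions(pairs), imgs[0], imgs[2]))
```

The design point this sketch preserves is that planning happens entirely over discrete latent states: once images are binarized, the search never touches pixels again.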


2021 · Vol 6 · Author(s): Lei Yuan, Richard Prather, Kelly Mix, Linda Smith

Few questions have had as enduring an effect in cognitive science as the question of "symbol grounding": do human-invented symbol systems have to be grounded in physical objects to gain meaning? This question has strongly influenced research and practice in education involving the use of physical models and manipulatives. However, the evidence on the effectiveness of physical models is mixed. We suggest that rethinking physical models in terms of analogies, rather than groundings, offers useful insights. Three experiments with 4- to 6-year-old children showed that they can learn how written multi-digit numbers are named and how they are used to represent relative magnitudes from exposure either to a few pairs of written multi-digit numbers and their corresponding names, or to multi-digit number names and corresponding physical models made up of simple shapes (e.g., big-medium-small discs). However, they failed to learn with traditional mathematical manipulatives (i.e., base-10 blocks, an abacus) that provide a more complete grounding of base-10 principles. These findings have implications for place-value instruction in schools and for determining principles to guide the use of physical models.


Algorithms · 2020 · Vol 13 (7) · pp. 175 · Author(s): Eric Dietrich, Chris Fields

The open-domain Frame Problem is the problem of determining what features of an open task environment need to be updated following an action. Here we prove that the open-domain Frame Problem is equivalent to the Halting Problem and is therefore undecidable. We discuss two other open-domain problems closely related to the Frame Problem, the system identification problem and the symbol-grounding problem, and show that they are similarly undecidable. We then reformulate the Frame Problem as a quantum decision problem, and show that it is undecidable by any finite quantum computer.
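The flavor of such an undecidability result can be conveyed with a standard reduction shape (a hedged illustration only; the paper's actual construction may differ). Assume a hypothetical total decider FRAME(E, a) that returns exactly the set of features of an open environment E that change after action a; the names FRAME, E_P, and f_P below are illustrative, not the authors' notation.

```latex
% Illustrative reduction sketch (not the authors' proof). Given any
% program P, construct an open environment E_P containing a feature f_P
% that action a changes if and only if P halts on empty input. Then
\[
  \mathrm{HALT}(P) \iff f_P \in \mathrm{FRAME}(E_P, a),
\]
% so a total decider for the open-domain Frame Problem would yield a
% decider for the Halting Problem, contradicting Turing's theorem.
```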


2020 · Vol 07 (01) · pp. 73-82 · Author(s): Pentti O. A. Haikonen

The popular expectation is that Artificial Intelligence (AI) will soon surpass the capacities of the human mind and that Strong Artificial General Intelligence (AGI) will replace contemporary Weak AI. However, certain fundamental issues have to be addressed before this can happen. There can be no intelligence without understanding, and there can be no understanding without grasping meanings. Contemporary computers manipulate symbols without meanings; meanings are not incorporated in the computations. This leads to the Symbol Grounding Problem: how could meanings be incorporated? The use of self-explanatory sensory information has been proposed as a possible solution. However, self-explanatory information can only be used in neural network machines that differ from existing digital computers and traditional multilayer neural networks. In humans, self-explanatory information takes the form of qualia. To have reportable qualia is to be phenomenally conscious. This leads to the hypothesis of an unavoidable connection between the solution of the Symbol Grounding Problem and consciousness: if, in general, self-explanatory information equals qualia, then machines that utilize self-explanatory information would be conscious.
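The claim that computers manipulate symbols without meanings can be made concrete with a toy example (a hypothetical illustration, not from the paper): the program below "answers" questions by pure table lookup, with no access to anything the symbols stand for.

```python
# Hypothetical illustration of ungrounded symbol manipulation: the
# mapping below is purely formal; nothing in the program connects
# the strings to the things they denote.
rules = {
    "What color is the sky?": "Blue.",
    "What is 2 + 2?": "4.",
}

def respond(question: str) -> str:
    # Pure symbol shuffling: correct-looking output, no understanding.
    return rules.get(question, "I do not know.")

print(respond("What color is the sky?"))  # prints "Blue."
```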


Author(s): Mia Šetić Beg, Jakov Čičko, Dražen Domijan

2019 · Author(s): Stevan Harnad

Brette (2019) criticizes the notion of neural coding because it seems to entail that neural signals need to be “decoded” by or for some receiver in the head. If that were so, then neural coding would indeed be homuncular (Brette calls it “dualistic”), requiring an entity to decipher the code. But I think Brette’s plea to think instead in terms of complex, interactive causal throughput is preaching to the converted. Turing (not Shannon) has already shown the way. In any case, the metaphor of neural coding has little to do with the symbol grounding problem.

