The Nature of Information, Semantics, and Effectiveness for Artificial Intelligence and Cognition
This manuscript puts forward claims intended to address foundational gaps in the understanding of Cognition and Artificial General Intelligence (AGI), including the nature of Emergence, Semantics, and Information, and it proposes criteria for assessing true understanding in AI models. It describes how symbolic reasoning conceptualizes phenomena: without a subsymbolic perceptual level to generate concepts, there is no symbol grounding. Grounding requires dynamics outside of its own symbolization, and it forms the set of symbols used at the conceptual level; it is claimed that this role explains Semantics. This approach leads naturally to established research on Conceptual Spaces and has implications for Semantic Vector Spaces learned via Neural Embedding methods, as well as for Information Theories. It is claimed that Semantic Processes form Shannon-like microstates and macrostates, while Effective Processes constrain Semantic Processes; unlike in existing Semantic Information Theories, Semantic Processes are pre-informational. These claims also provide perspective on the Mind. It is natural to conflate percepts with the modified versions necessarily created when they are conceptualized through explication, and the ‘Hard Problem of Consciousness’ is related to this Percept/Concept distinction: Concepts are always subject to Eliminative Materialism, whereas the nonconceptual properties of Percepts cannot be eliminated. Intrinsic and Extrinsic Emergence are distinguished; it is common to assume that extrinsic emergent properties are intrinsic to the systems evoking them, which presents a challenge for proving intrinsic emergence in AI. Nevertheless, criteria are proposed for claiming that a theoretical system intrinsically processes information and grounds symbols, and by leveraging the functional properties of Grounding, these criteria can be considered for actual systems.
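One way to picture the claimed relation between microstates and macrostates is coarse-graining in the Shannon sense: many fine-grained states are grouped under a single coarse label, and entropy is computed over either level. The sketch below is purely illustrative (the state labels, probabilities, and partition are assumptions, not taken from the manuscript); it shows that a macrostate distribution obtained by summing microstate probabilities can carry less Shannon information than the microstate distribution it coarse-grains.

```python
from collections import defaultdict
from math import log2

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Hypothetical microstate distribution (labels and probabilities are illustrative).
microstates = {"a1": 0.25, "a2": 0.25, "b1": 0.25, "b2": 0.25}

# A macrostate coarse-grains microstates: here "A" = {a1, a2}, "B" = {b1, b2}.
partition = {"a1": "A", "a2": "A", "b1": "B", "b2": "B"}

# Macrostate probability is the sum of its microstates' probabilities.
macrostates = defaultdict(float)
for micro, p in microstates.items():
    macrostates[partition[micro]] += p

h_micro = shannon_entropy(microstates.values())  # 4 equiprobable states -> 2.0 bits
h_macro = shannon_entropy(macrostates.values())  # 2 equiprobable states -> 1.0 bit

# Coarse-graining never increases entropy: H(macro) <= H(micro).
```

Which partition is chosen is exactly the kind of question the manuscript's Semantic Processes are meant to address: the grouping itself is not fixed by Shannon's formalism.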