Conscious Mind, Resonant Brain
Published by Oxford University Press
ISBN: 9780190070557, 9780190070588

Author(s):  
Stephen Grossberg

The distinction between seeing and knowing, and why our brains even bother to see, are discussed using vivid perceptual examples, including image features without visible qualia that can nonetheless be consciously recognized. The work of Helmholtz and Kanizsa exemplifies these issues, including examples of the paradoxical facts that “all boundaries are invisible”, and that brighter objects look closer. Why we do not see the big holes in, and occluders of, our retinas that block light from reaching our photoreceptors is explained, leading to the realization that essentially all percepts are visual illusions. Why they often look real is also explained. The computationally complementary properties of boundary completion and surface filling-in are introduced and their unifying explanatory power is illustrated, including the claim that “all conscious qualia are surface percepts”. Neon color spreading provides a vivid example, as do self-luminous, glary, and glossy percepts. How brains embody general-purpose self-organizing architectures for solving modal problems, more general than AI algorithms, but less general than digital computers, is described. New concepts and mechanisms of such architectures are explained, including hierarchical resolution of uncertainty. Examples from the visual arts and technology are described to illustrate them, including paintings by Baer, Banksy, Bleckner, da Vinci, Gene Davis, Hawthorne, Hensche, Matisse, Monet, Olitski, Seurat, and Stella. Paintings by different artists and artistic schools instinctively emphasize some brain processes over others. These choices exemplify their artistic styles. The roles of perspective, T-junctions, and end gaps are used to explain how 2D pictures can induce percepts of 3D scenes.


Author(s):  
Stephen Grossberg

A historical overview is given of interdisciplinary work in physics and psychology by some of the greatest nineteenth-century scientists, and of why the fields split, leading to a century of ferment before the current scientific revolution in mind-brain sciences began to understand how we autonomously adapt to a changing world. New nonlinear, nonlocal, and nonstationary intuitions and laws are needed to understand how brains make minds. Helmholtz’s work on vision illustrates why he left psychology. His concept of unconscious inference presaged modern ideas about learning, expectation, and matching that this book scientifically explains. The fact that brains are designed to control behavioral success has profound implications for the methods and models that can unify mind and brain. Backward learning in time, and serial learning, illustrate why neural networks are a natural language for explaining brain dynamics, including the correct functional stimuli and laws for short-term memory (STM), medium-term memory (MTM), and long-term memory (LTM) traces. In particular, brains process spatial patterns of STM and LTM, not just individual traces. A thought experiment leads to universal laws for how neurons, and more generally all cellular tissues, process distributed STM patterns in cooperative-competitive networks without experiencing contamination by noise or pattern saturation. The chapter illustrates how thinking this way leads to unified and principled explanations of huge databases. A brief history of the binary, linear, and continuous-nonlinear sources of neural models is given, including their advantages and disadvantages and how models like Deep Learning and the author’s contributions fit into this history.
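
To make the STM laws concrete, here is a minimal Python sketch of the shunting on-center off-surround network that the thought experiment motivates; the parameter names and values (A, B, dt, steps) are illustrative assumptions, not taken from the book. At equilibrium each activity settles at x_i = B·I_i/(A + Σ_k I_k), so the network stores the relative input pattern without saturating as total input intensity grows.

```python
# A minimal sketch of a shunting on-center off-surround STM network:
#   dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum_{k != i} I_k
# Parameter values here are illustrative assumptions.
import numpy as np

def shunting_stm(I, A=1.0, B=1.0, dt=0.01, steps=5000):
    """Integrate the shunting network to equilibrium for input pattern I."""
    x = np.zeros_like(I, dtype=float)
    for _ in range(steps):
        surround = I.sum() - I                    # off-surround inhibition from all other inputs
        dx = -A * x + (B - x) * I - x * surround  # shunting STM law
        x += dt * dx
    return x

pattern = np.array([0.1, 0.3, 0.6])               # relative "reflectances"
for scale in (1.0, 10.0, 100.0):                  # total intensity grows 100-fold
    x_eq = shunting_stm(scale * pattern)
    print(scale, x_eq / x_eq.sum())               # normalized stored pattern stays the same
```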


Author(s):  
Stephen Grossberg

This far-ranging chapter provides unified explanations of data about audition, speech, and language, and the general cognitive processes that they specialize. The ventral What stream and dorsal Where cortical stream in vision have analogous ventral sound-to-meaning and dorsal sound-to-action streams in audition. Circular reactions for learning to reach using vision are homologous to circular reactions for learning to speak using audition. VITE circuits control arm movement properties of synergy, synchrony, and speed. Volitional basal ganglia GO signals choose which limb to move and how fast it moves. VAM models use a circular reaction to calibrate VITE circuit signals. VITE is joined with the FLETE model to compensate for variable loads, unexpected perturbations, and obstacles. Properties of cells in cortical areas 4 and 5, spinal cord, and cerebellum are quantitatively simulated. Motor equivalent reaching using clamped joints or tools arises from circular reactions that learn representations of space around an actor. Homologous circuits model motor-equivalent speech production, including coarticulation. Stream-shroud resonances play the role for audition that surface-shroud resonances play in vision. They support auditory consciousness and speech production. Strip maps and spectral-pitch resonances cooperate to solve the cocktail party problem whereby humans track voices of speakers in noisy environments with multiple sources. Auditory streaming and speaker normalization use networks with similar designs. Item-Order-Rank working memories and Masking Field networks temporarily store sequences of events while categorizing them into list chunks. Analog numerical representations and place-value number systems emerge from phylogenetically earlier Where and What stream spatial and categorical processes.
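
Because the chapter leans on VITE’s difference-vector computation, the following Python sketch may help; it is my own toy illustration of a VITE-style circuit, not the published model’s equations, and the gains and ramping GO profile are invented for demonstration.

```python
# A minimal sketch of a VITE-style circuit: a target position T, a present
# position P, a difference vector V, and a volitional GO signal that gates
# how fast P integrates toward T. All parameter values are assumptions.
import numpy as np

def vite(target, present, go_amplitude=4.0, gamma=25.0, dt=0.001, t_end=1.5):
    T = np.asarray(target, dtype=float)
    P = np.asarray(present, dtype=float)
    V = np.zeros_like(P)
    for step in range(int(t_end / dt)):
        t = step * dt
        G = go_amplitude * t / (t + 0.2)   # hypothetical slowly ramping GO signal
        V += dt * gamma * ((T - P) - V)    # difference vector tracks target minus present position
        P += dt * G * V                    # present position integrates the GO-gated difference vector
    return P

print(vite(target=[0.4, 0.7, 0.1], present=[0.0, 0.0, 0.0]))  # ends close to the target
```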


Author(s):  
Stephen Grossberg

Multiple paradoxical visual percepts are explained using boundary completion and surface filling-in properties, including discounting the illuminant; brightness constancy, contrast, and assimilation; the Craik-O’Brien-Cornsweet Effect; and Glass patterns. Boundaries act as both generators and barriers to filling-in using specific cooperative and competitive interactions. Oriented local contrast detectors, like cortical simple cells, create uncertainties that are resolved using networks of simple, complex, and hypercomplex cells, leading to unexpected insights such as why Roman typeface letter fonts use serifs. Further uncertainties are resolved by interactions with bipole grouping cells. These simple-complex-hypercomplex-bipole networks form a double filter and grouping network that provides unified explanations of texture segregation, hyperacuity, and illusory contour strength. Discounting the illuminant suppresses illumination contaminants so that feature contours can hierarchically induce surface filling-in. These three hierarchical resolutions of uncertainty explain neon color spreading. Why groupings do not penetrate occluding objects is explained, as are percepts of da Vinci stereopsis, the Koffka-Benussi and Kanizsa-Minguzzi rings, and pictures by graffiti artists and Mooney faces. The property of analog coherence is achieved by laminar neocortical circuits. Variations of a shared canonical laminar circuit have explained data about vision, speech, and cognition. The FACADE theory of 3D vision and figure-ground separation explains much more data than a Bayesian model can. The same cortical process that assures consistency of boundary and surface percepts, despite their complementary laws, also explains how figure-ground separation is triggered. It is also explained how cortical areas V2 and V4 regulate seeing and recognition without forcing all occluders to look transparent.
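
The claim that boundaries act as barriers to filling-in can be illustrated with a toy one-dimensional diffusion sketch; the permeability gating, decay, and diffusion constants below are invented for illustration rather than taken from FACADE.

```python
# A minimal sketch of boundary-gated surface filling-in: a feature-contour
# input diffuses between neighboring surface cells, but the permeability
# between two cells drops where a boundary sits between them.
import numpy as np

def fill_in(features, boundaries, decay=0.05, diffusion=5.0, dt=0.005, steps=4000):
    """features: feature-contour input per cell; boundaries[i]: boundary between cells i and i+1."""
    S = np.zeros_like(features, dtype=float)
    perm = diffusion / (1.0 + 1000.0 * boundaries)  # a strong boundary nearly blocks diffusion
    for _ in range(steps):
        flow = perm * (S[1:] - S[:-1])              # diffusive flow across each junction
        dS = -decay * S + features
        dS[:-1] += flow                             # inflow from the right neighbor
        dS[1:] -= flow                              # equal and opposite outflow
        S += dt * dS
    return S

feat = np.zeros(10); feat[2] = 1.0                  # one feature-contour input at cell 2
bnd = np.zeros(9);  bnd[5] = 1.0                    # a boundary between cells 5 and 6
print(np.round(fill_in(feat, bnd), 2))              # activity spreads up to the boundary, barely past it
```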


Author(s):  
Stephen Grossberg

This chapter explains fundamental differences between seeing and recognition, notably how and why our brains use conscious seeing to control actions like looking and reaching, while we learn both view-, size-, and position-specific object recognition categories, and view-, size-, and position-invariant object recognition categories, as our eyes search a scene during active vision. The dorsal Where cortical stream and the ventral What cortical stream interact to regulate invariant category learning by solving the View-to-Object Binding problem whereby inferotemporal, or IT, cortex associates only views of a single object with its learned invariant category. Feature-category resonances between V2/V4 and IT support category recognition. Symptoms of visual agnosia emerge when IT is lesioned. V2 and V4 interact to enable amodal completion of partially occluded objects behind their occluders, without requiring that all occluders look transparent. V4 represents the unoccluded surfaces of opaque objects and triggers a surface-shroud resonance with posterior parietal cortex, or PPC, that renders surfaces consciously visible, and enables them to control actions. Clinical symptoms of visual neglect emerge when PPC is lesioned. A unified explanation is given of data about visual crowding, situational awareness, change blindness, motion-induced blindness, visual search, perceptual stability, and target swapping. Although visual boundaries and surfaces obey computationally complementary laws, feedback between boundaries and surfaces ensures their consistency and initiates figure-ground separation, while commanding our eyes to foveate sequences of salient features on object surfaces, thereby triggering invariant category learning. What-to-Where stream interactions enable Where’s Waldo searches for desired objects in cluttered scenes.
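
A toy sketch of the View-to-Object Binding idea is given below. It is only schematic: the object identity that the code takes as an input is exactly what a persistent attentional shroud is hypothesized to supply during real viewing, and all names in the code are illustrative assumptions.

```python
# A minimal sketch of View-to-Object Binding: while attention (the shroud)
# stays locked on one object, every view-specific category activated during
# that fixation sequence is bound to the same invariant object category;
# when attention shifts to a new object, a new invariant category is recruited.

def bind_views_to_objects(fixation_sequence):
    """fixation_sequence: list of (object_id, view_id) pairs in viewing order."""
    view_to_invariant = {}      # learned associations: view category -> invariant category
    invariant_of_object = {}    # which invariant category each object recruited
    current_object = None
    for object_id, view_id in fixation_sequence:
        if object_id != current_object:                 # shroud collapses, attention shifts
            current_object = object_id
            invariant_of_object.setdefault(object_id, len(invariant_of_object))
        # While the shroud persists, this view is bound to the same invariant category.
        view_to_invariant[(object_id, view_id)] = invariant_of_object[object_id]
    return view_to_invariant

views = [("cup", "left"), ("cup", "front"), ("cup", "right"),
         ("phone", "front"), ("phone", "back"), ("cup", "top")]
print(bind_views_to_objects(views))   # all cup views share one category, phone views another
```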


Author(s):  
Stephen Grossberg

This chapter explains how humans and other animals learn to navigate in space. Both reaching and route-based navigation use difference vector computations. Route navigation learns a labeled graph of angles and distances moved. Spatial navigation requires neurons to learn navigable spaces that can be many meters in size. This is again accomplished by a spectrum of cells. Such spectral spacing supports learning of medial entorhinal grid cells and hippocampal place cells. The model responds to realistic rat navigational trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales, and place cells with one or more firing fields, that match neurophysiological data about their development in juvenile rats. Both grid and place cells develop in a hierarchy of self-organizing maps by detecting, learning, and remembering the most frequent and energetic co-occurrences of their inputs. Parsimonious properties of the model include: similar ring attractor mechanisms process the linear and angular path integration inputs that drive map learning; the same self-organizing map mechanisms can learn both grid cell and place cell receptive fields; and the learning of the dorsoventral organization of multiple grid cell modules through medial entorhinal cortex to hippocampus uses a gradient of rates that is homologous to the rate gradient that drives adaptively timed learning at multiple rates through lateral entorhinal cortex to hippocampus (‘neural relativity’). The model clarifies how top-down hippocampal-to-entorhinal ART attentional mechanisms stabilize map learning, simulates how hippocampal, septal, or acetylcholine inactivation disrupts grid cells, and explains data about theta, beta, and gamma oscillations.
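
The self-organizing-map learning invoked here can be illustrated with a small competitive-learning sketch; the category count, learning rate, and toy inputs are invented for illustration and are not the grid- or place-cell model’s actual inputs or parameters.

```python
# A minimal sketch of self-organizing-map (competitive, winner-learns) category
# learning: category cells compete, the most active cell wins, and its adaptive
# weights track the most frequent and energetic co-occurrences of its inputs.
import numpy as np

rng = np.random.default_rng(0)

def som_learn(inputs, n_categories=4, rate=0.1, epochs=20):
    dim = inputs.shape[1]
    W = rng.random((n_categories, dim))            # adaptive weights (LTM traces)
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(epochs):
        for I in inputs:
            activation = W @ I                     # bottom-up filtering
            J = int(np.argmax(activation))         # competition picks a winner
            W[J] += rate * (I - W[J])              # only the winner's weights learn
    return W

# Toy inputs drawn from a few recurring patterns plus noise.
protos = rng.random((4, 6))
data = np.vstack([p + 0.05 * rng.standard_normal(6) for p in protos for _ in range(50)])
print(np.round(som_learn(data), 2))                # weight rows settle near recurring patterns
```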


Author(s):  
Stephen Grossberg

This chapter begins an analysis of how we see changing visual images and scenes. It explains why moving objects do not create unduly persistent trails, or streaks, of visual images that could interfere with our ability to see what is there after they pass by. It does so by showing how the circuits already described for static visual form perception automatically reset themselves in response to changing visual cues, and thereby prevent undue persistence, when they are augmented with habituative transmitter gates, or MTM traces. The MTM traces gate specific connections among the hypercomplex cells that control completion of static boundaries. These MTM-gated circuits embody gated dipoles whose rebound properties automatically reset boundaries at appropriate times in response to changing visual inputs. A tradeoff between boundary resonance and reset is clarified by this analysis. This kind of resonance and reset cycle shares many properties with the resonance and reset cycle that controls the learning of recognition categories in Adaptive Resonance Theory. The MTM-gated circuits quantitatively explain the main properties of visual persistence that do occur, including persistence of real and illusory contours, persistence after offset of oriented adapting stimuli, and persistence due to spatial competition. Psychophysical data about afterimages and residual traces are also explained by the same mechanisms.
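
A gated dipole with habituative transmitter gates can be sketched in a few lines; the equations below follow the general habituative-gate idea rather than the book’s exact circuit, and all parameter values are illustrative assumptions.

```python
# A minimal sketch of a gated dipole with habituative transmitter gates (MTM
# traces): while a phasic input J is on, the ON channel dominates; at input
# offset the depleted ON transmitter lets the tonically aroused OFF channel
# rebound transiently, which is the reset event discussed in the chapter.

def gated_dipole(t_end=4.0, dt=0.001, J_on=1.0, tonic=0.5, eps=0.3, lam=2.0):
    n = int(t_end / dt)
    z_on, z_off = 1.0, 1.0                        # transmitter gates start full
    out = []
    for step in range(n):
        t = step * dt
        J = J_on if 1.0 <= t < 2.0 else 0.0       # phasic input on for one second
        s_on, s_off = tonic + J, tonic            # arousal + phasic vs. arousal alone
        z_on += dt * (eps * (1 - z_on) - lam * s_on * z_on)    # MTM habituation, ON gate
        z_off += dt * (eps * (1 - z_off) - lam * s_off * z_off)  # MTM habituation, OFF gate
        rebound = max(0.0, s_off * z_off - s_on * z_on)          # rectified OFF-minus-ON output
        out.append((t, rebound))
    return out

for t, r in gated_dipole()[::500]:
    print(f"t={t:.1f}s  OFF-rebound={r:.3f}")     # transient rebound appears just after offset at t=2
```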


Author(s):  
Stephen Grossberg

This final chapter discusses far-ranging implications of the discoveries that this book describes, including lessons about how to live more fulfilling lives, how perplexing aspects of the human condition arise, and how ethical value systems and religious beliefs are sustained. Principles of our brains’ self-organizing measurement process generalize to all cellular biological organisms, and are shaped by the physical world with which our brains ceaselessly communicate and adapt. In particular, our brains’ complementary computing, uncertainty principles, and resonance have analogs in the laws of the physical world that has shaped them. A universal computational code for mental life enables a lifetime of experiences to cohere in an emerging sense of self. Complementary computing and hierarchical resolution of uncertainty require conscious states to select effective actions, and thus actively engage us in the ceaseless brain-environment perception-cognition-emotion-action feedback loop that drives brain self-organization to adapt to a changing world. Actions that lead to errors can be corrected using cognitive and cognitive-emotional processes to discover a better understanding of environmental causes and the physical laws that shape them. Symmetry-breaking between approach and avoidance outcomes in cognition and emotion provides a biological basis for morality and religion, with positive emotions facilitating sustainable motivations and empathy, and with the same opponent mechanisms also capable of causing negative experiences like learned helplessness, self-punitive behaviors, fetishes, and the motivations to commit evil acts. A universal developmental code uses similar STM and LTM laws for brain development, adult learning, gastrulation, organ size increases that preserve tissue form, Hydra regeneration, slime mold aggregation, and Rhodnius cuticles.


Author(s):  
Stephen Grossberg

The cerebral cortex computes the highest forms of biological intelligence in all sensory and cognitive modalities. Neocortical cells are organized into circuits that form six cortical layers in all cortical areas that carry out perception and cognition. Variations in cell properties within these layers and their connections have been used to classify the cerebral cortex into more than fifty divisions, or areas, to which distinct functions have been attributed. Why the cortex has a laminar organization for the control of behavior has, however, remained a mystery until recently. Also mysterious has been how variations on this ubiquitous laminar cortical design can give rise to so many different types of intelligent behavior. This chapter explains how Laminar Computing contributes to biological intelligence, and how layered circuits of neocortical cells support all the various kinds of higher-order biological intelligence, including vision, language, and cognition, using variations of the same canonical laminar circuit. This canonical circuit can be used in general-purpose VLSI chips that can be specialized to carry out different kinds of biological intelligence, and seamlessly joined together to control autonomous adaptive algorithms and mobile robots. These circuits show how preattentive automatic bottom-up processing and attentive task-selective top-down processing are joined together in the deeper cortical layers to form a decision interface. Here, bottom-up and top-down constraints cooperate and compete to generate the best decisions, by combining properties of fast feedforward and feedback processing, analog and digital computing, and preattentive and attentive learning, including laminar ART properties such as analog coherence.
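
The decision interface where driving bottom-up inputs meet modulatory top-down expectations can be illustrated with a toy sketch in the spirit of the ART matching rule; the gain and inhibition parameters are invented, and the code is schematic rather than the laminar circuit itself.

```python
# A minimal sketch of attentive matching at a decision interface: bottom-up
# inputs are driving, top-down expectations are modulatory (an on-center that
# cannot fire cells by itself) plus a nonspecific off-surround that suppresses
# unmatched features. Parameter values are assumptions.
import numpy as np

def decision_interface(bottom_up, top_down, gain=2.0, inhibition=0.4):
    net = bottom_up * (1.0 + gain * top_down) - inhibition * top_down.sum()
    return np.maximum(0.0, net)

features = np.array([1.0, 1.0, 0.0, 0.0])      # bottom-up evidence for features 0 and 1
expectation = np.array([1.0, 0.0, 1.0, 0.0])   # top-down expectation of features 0 and 2
print(decision_interface(features, expectation))      # matched feature 0 enhanced, unmatched 1 suppressed
print(decision_interface(np.zeros(4), expectation))   # top-down alone cannot fire cells (all zeros)
```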


Author(s):  
Stephen Grossberg

This chapter explains how humans and other animals learn to adaptively time their behaviors to match external environmental constraints. It hereby explains how nerve cells learn to bridge long time intervals of hundreds of milliseconds, or even several seconds, and thereby associate events that are separated in time. This is accomplished by a spectrum of cells that each respond in overlapping time intervals and whose population response can bridge intervals much larger than any individual cell can. Such spectral timing occurs in circuits that include the lateral entorhinal cortex and hippocampal cortex. Trace conditioning, in which the conditioned stimulus (CS) and unconditioned stimulus (US) are separated in time, requires the hippocampus, whereas delay conditioning, in which they overlap, does not. The Weber law observed in trace conditioning naturally emerges from spectral timing dynamics, as later confirmed by data about hippocampal time cells. Hippocampal adaptive timing enables a cognitive-emotional resonance to be sustained long enough to become conscious of its feeling and its causal event, and to support BDNF-modulated memory consolidation. Spectral timing supports balanced exploratory and consummatory behaviors whereby restless exploration for immediate gratification is replaced by adaptively timed consummation. During expected disconfirmations of reward, orienting responses are inhibited until an adaptively timed response is released. Hippocampally-mediated incentive motivation supports timed responding via the cerebellum. mGluR regulates adaptive timing in the hippocampus, cerebellum, and basal ganglia. Breakdowns of mGluR and dopamine modulation cause symptoms of autism and Fragile X syndrome. Inter-personal circular reactions enable social cognitive capabilities, including joint attention and imitation learning, to develop.
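
A minimal sketch of the spectral-timing idea follows; it loosely mimics the mechanism described here (a spectrum of rates with transmitter-gated responses) rather than reproducing the book’s equations, and all parameter values are assumptions.

```python
# A minimal sketch of spectral timing: each cell in a spectrum has its own
# rate, its activation rises at that rate after stimulus onset, and a
# habituating transmitter gates its output, so the gated signal peaks at a
# rate-dependent delay. A weighted population of such cells can bridge
# intervals far longer than any single cell's response.
import numpy as np

def spectral_responses(rates, t_end=3.0, dt=0.001, c=0.5, d=4.0):
    n = int(t_end / dt)
    times = np.arange(n) * dt
    gated = np.zeros((len(rates), n))
    for i, r in enumerate(rates):
        x, y = 0.0, 1.0                              # activation and transmitter gate
        for k in range(n):
            x += dt * r * (1.0 - x)                  # activation rises at the cell's rate
            y += dt * (c * (1.0 - y) - d * x * y)    # transmitter habituates as x grows
            gated[i, k] = x * y                      # gated signal peaks at a rate-dependent time
    return times, gated

rates = np.array([8.0, 4.0, 2.0, 1.0, 0.5, 0.25])
times, gated = spectral_responses(rates)
for r, g in zip(rates, gated):
    print(f"rate={r:<5} peak at t={times[np.argmax(g)]:.2f}s")   # slower cells peak later
```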

