Vast Topological Learning and Sentient AGI

2021 ◽  
Vol 08 (01) ◽  
pp. 81-111
Author(s):  
Stephen L. Thaler

A novel form of neurocomputing allows machines to generate new concepts along with their anticipated consequences, all encoded as chained associative memories. Knowledge is accumulated by the system through direct experience as network chaining topologies form in response to various environmental input patterns. Thereafter, random disturbances to the connections joining these nets promote the formation of alternative chaining topologies representing novel concepts. The resulting ideational chains are then reinforced or weakened as they incorporate nets containing memories of impactful events or things. Such encodings of entities, actions, and relationships as geometric forms composed of artificial neural nets may well suggest how the human brain summarizes and appraises the states of nearly a hundred billion cortical neurons. It may also be the paradigm that allows the scaling of synthetic neural systems to brain-like proportions to achieve sentient artificial general intelligence (SAGI).
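
To make the described mechanism concrete, here is a toy sketch (Python, entirely illustrative; the memory names, impact scores, and update rule are assumptions, not Thaler's implementation) of the cycle the abstract outlines: perturb the connections between stored memories, sample the alternative chains that emerge, and reinforce the chain whose constituent memories are most impactful.

```python
# Toy sketch of the chaining-and-perturbation idea described above (illustrative
# only). Memories are abstract nodes, each with a hypothetical "impact" score;
# links between them stand in for the inter-net connections that random
# disturbances reshape.
import random

random.seed(0)

memories = {"flame": 0.9, "paper": 0.4, "water": 0.7, "alarm": 0.8, "desk": 0.1}
# Initial "experienced" chain topology: uniform link weights between memories.
links = {(a, b): 0.5 for a in memories for b in memories if a != b}

def perturb(links, noise=0.3):
    """Randomly disturb inter-memory connections to expose alternative chains."""
    return {k: max(0.0, w + random.uniform(-noise, noise)) for k, w in links.items()}

def sample_chain(links, start, length=3):
    """Greedily follow the strongest (perturbed) links to form a candidate chain."""
    chain, current = [start], start
    for _ in range(length - 1):
        nxt = max((b for (a, b) in links if a == current and b not in chain),
                  key=lambda b: links[(current, b)], default=None)
        if nxt is None:
            break
        chain.append(nxt)
        current = nxt
    return chain

def appraise(chain):
    """Score a chain by the average impact of the memories it incorporates."""
    return sum(memories[m] for m in chain) / len(chain)

# Generate candidate "ideas" and reinforce the links of the best-appraised one.
candidates = [sample_chain(perturb(links), start="flame") for _ in range(5)]
best = max(candidates, key=appraise)
for a, b in zip(best, best[1:]):
    links[(a, b)] += 0.1   # reinforcement of a consequential ideational chain
print(best, round(appraise(best), 2))
```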

Science ◽  
1989 ◽  
Vol 243 (4890) ◽  
pp. 481-482 ◽  
Author(s):  
L Roberts

2019 ◽  
Author(s):  
Jason A. Avery ◽  
Alexander G. Liu ◽  
John E. Ingeholm ◽  
Cameron D. Riddell ◽  
Stephen J. Gotts ◽  
...  

SUMMARY: In the mammalian brain, the insula is the primary cortical substrate involved in the perception of taste. Recent imaging studies in rodents have identified a gustotopic organization in the insula, whereby distinct insula regions are selectively responsive to one of the five basic tastes. However, numerous studies in monkeys have reported that gustatory cortical neurons are broadly tuned to multiple tastes, and tastes are not represented in discrete spatial locations. Neuroimaging studies in humans have thus far been unable to discern between these two models, though this may be due to the relatively low spatial resolution employed in taste studies to date. In the present study, we examined the spatial representation of taste within the human brain using ultra-high resolution functional magnetic resonance imaging (fMRI) at high magnetic field strength (7-Tesla). During scanning, participants tasted sweet, salty, sour and tasteless liquids, delivered via a custom-built MRI-compatible tastant-delivery system. Our univariate analyses revealed that all tastes (vs. tasteless) activated primary taste cortex within the bilateral dorsal mid-insula, but no brain region exhibited a consistent preference for any individual taste. However, our multivariate searchlight analyses were able to reliably decode the identity of distinct tastes within those mid-insula regions, as well as in brain regions involved in affect and reward, such as the striatum, orbitofrontal cortex, and amygdala. These results suggest that taste quality is not represented topographically, but by a combinatorial spatial code, both within primary taste cortex and in regions involved in processing the hedonic and aversive properties of taste.
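
As a rough illustration of the multivariate decoding logic referred to above, the following sketch (synthetic voxel patterns and scikit-learn; placeholder dimensions and labels, not the study's actual searchlight pipeline) shows how taste identity can be classified from a distributed spatial code even when no single voxel prefers one taste.

```python
# Minimal illustration of multivariate decoding of taste identity from voxel
# patterns (synthetic data; placeholder dimensions, not the study's pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_taste, n_voxels = 40, 200
tastes = ["sweet", "salty", "sour"]

# Each taste gets a weak, distributed (combinatorial) activation pattern,
# rather than a single dedicated "hot spot" voxel.
patterns = rng.normal(0, 0.5, size=(len(tastes), n_voxels))
X = np.vstack([p + rng.normal(0, 1.0, size=(n_trials_per_taste, n_voxels))
               for p in patterns])
y = np.repeat(tastes, n_trials_per_taste)

# Cross-validated linear classifier: above-chance accuracy indicates that
# taste identity is decodable from the spatial pattern even though no single
# voxel is selective for one taste.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = {1/len(tastes):.2f})")
```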


1995 ◽  
Vol 74 (3) ◽  
pp. 1167-1178 ◽  
Author(s):  
D. Regan ◽  
P. He

1. We searched for a neurophysical correlate of preattentive texture discrimination by recording magnetic and electric evoked responses from the human brain during the first few hundred milliseconds following the presentation of texture-defined (TD) checkerboard form. The only two textons that changed when the TD checkerboard appeared or disappeared were the local orientation and line termination textons. (Textons are conspicuous local features within a texture pattern). 2. Our evidence that the magnetic response to TD form cannot be explained in terms of responses to the two associated textons is as follows: 1) by dissociating the two responses we showed that the magnetic response to TD form is almost entirely independent of the magnetic response to the local orientation texton; 2) a further distinction between the two responses is that their distributions over the head are different; and 3) the magnetic response to TD form differs from the magnetic response to the line termination texton in both distribution over the head and waveform. We conclude that this evidence identifies the existence of a brain response correlate of preattentive texture discrimination. 3. We also recorded brain responses to luminance-defined (LD) checkerboard form. Our grounds for concluding that magnetic brain responses to the onset of checkerboard form are generated by different and independent neural systems for TD and LD form are as follows: 1) magnetic responses to the onset of TD form and LD form had different distributions over the skull, had different waveforms, and depended differently on check size; and 2) the waveform of the response to superimposed TD and LD checks closely approximated the linear sum of responses to TD checks and LD checks alone. 4. One possible explanation for the observed differences between the magnetic and electric evoked responses is that responses to both onset and offset of TD form predominantly involve neurons aligned parallel to the skull, whereas that is not the case for responses to LD form.
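
A minimal numerical sketch of the linear-summation test mentioned in point 3, using synthetic waveforms rather than the authors' recordings: the response to superimposed TD + LD checks is compared against the sum of the separately measured responses.

```python
# Synthetic illustration of the linear-summation test: does the evoked response
# to superimposed TD + LD checks approximate the sum of the separate responses?
import numpy as np

t = np.linspace(0, 0.4, 400)                       # 0-400 ms epoch (in seconds)
def evoked(latency, width, amp):
    """Idealized evoked-response waveform (Gaussian bump, purely illustrative)."""
    return amp * np.exp(-((t - latency) ** 2) / (2 * width ** 2))

resp_td = evoked(0.18, 0.03, 1.0)                  # response to TD form alone
resp_ld = evoked(0.10, 0.02, 1.5)                  # response to LD form alone
noise = np.random.default_rng(0).normal(0, 0.05, t.size)
resp_both = resp_td + resp_ld + noise              # "measured" response to TD + LD

predicted = resp_td + resp_ld                      # linear-sum prediction
residual_rms = np.sqrt(np.mean((resp_both - predicted) ** 2))
r = np.corrcoef(resp_both, predicted)[0, 1]
print(f"correlation with linear sum: {r:.3f}, residual RMS: {residual_rms:.3f}")
# A correlation near 1 and a small residual are consistent with independent,
# additive TD and LD response generators.
```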


1997 ◽  
Vol 9 (2) ◽  
pp. 279-304 ◽  
Author(s):  
Wolfgang Maass

We show that networks of relatively realistic mathematical models for biological neurons can, in principle, simulate arbitrary feedforward sigmoidal neural nets in a way that has previously not been considered. This new approach is based on temporal coding by single spikes (respectively, by the timing of synchronous firing in pools of neurons) rather than on the traditional interpretation of analog variables in terms of firing rates. The resulting new simulation is substantially faster and hence more consistent with experimental results about the maximal speed of information processing in cortical neural systems. As a consequence, we can show that networks of noisy spiking neurons are “universal approximators” in the sense that they can approximate, with regard to temporal coding, any given continuous function of several variables. This result holds for a fairly large class of schemes for coding analog variables by firing times of spiking neurons. This new proposal for the possible organization of computations in networks of spiking neurons has some interesting consequences for the type of learning rules that would be needed to explain the self-organization of such networks. Finally, the fast and noise-robust implementation of sigmoidal neural nets by temporal coding points to possible new ways of implementing feedforward and recurrent sigmoidal neural nets with pulse-stream VLSI.
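
The following toy sketch illustrates the basic temporal-coding idea (an analog value represented by how early a spike occurs); the constants and function names are assumptions for illustration, not Maass's formal construction.

```python
# Toy sketch of temporal coding: an analog value x in [0, 1] is encoded by how
# early a neuron fires relative to a reference time (earlier spike = larger
# value), and decoded back from the spike time. Constants are illustrative.
import numpy as np

T_REF = 20.0   # reference time (ms) at which x = 0 would fire
C = 10.0       # ms of advance per unit of analog value

def encode(x):
    """Firing time for analog value x: larger x fires earlier."""
    return T_REF - C * np.asarray(x, dtype=float)

def decode(t_spike):
    """Recover the analog value from the spike time."""
    return (T_REF - np.asarray(t_spike, dtype=float)) / C

x = np.array([0.0, 0.25, 0.8, 1.0])
t = encode(x)
print("values:      ", x)
print("spike times: ", t)          # [20., 17.5, 12., 10.]
print("decoded:     ", decode(t))  # recovers the original values

# In Maass's scheme, a downstream spiking neuron with suitably chosen synaptic
# delays and weights can effectively compute a weighted sum of such inputs
# within a single spike's worth of time, which is what makes the simulation of
# sigmoidal gates fast.
```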


NeuroImage ◽  
2005 ◽  
Vol 26 (3) ◽  
pp. 965-972 ◽  
Author(s):  
Rex E. Jung ◽  
Richard J. Haier ◽  
Ronald A. Yeo ◽  
Laura M. Rowland ◽  
Helen Petropoulos ◽  
...  

1992 ◽  
Vol 4 (4) ◽  
pp. 299-300 ◽  
Author(s):  
Gordon G. Globus

The near universally accepted theory that the brain processes information persists in current neural network theory where there is "subsymbolic" computation (Smolensky, 1988) on distributed representations. This theory of brain information processing may suffice for simplifying models simulated in silicon but not for living neural nets where there is ongoing chemical tuning of the input/output transfer function at the nodes, connection weights, network parameters, and connectivity. Here the brain continually changes itself as it intersects with information from the outside. An alternative theory to information processing is developed in which the brain permits and supports "participation" of self and other as constraints on the dynamically evolving, self-organizing whole. The noncomputational process of "differing and deferring" in nonlinear dynamic neural systems is contrasted with Black's (1991) account of molecular information processing. State hyperspace for the noncomputational process of nonlinear dynamical systems, unlike classical systems, has a fractal dimension. The noncomputational model is supported by suggestive evidence for fractal properties of the brain.
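
As a concrete anchor for the claim that the relevant state space has a fractal dimension, the sketch below estimates a box-counting dimension for a standard strange attractor (the Hénon map, used here only as a stand-in; this is a generic estimator, not anything from the cited commentary).

```python
# Box-counting estimate of fractal dimension for a sample strange attractor
# (the Hénon map, used purely as a stand-in for a fractal state-space set).
import numpy as np

# Generate Hénon attractor points: x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n
a, b = 1.4, 0.3
x, y = 0.0, 0.0
pts = []
for i in range(20000):
    x, y = 1 - a * x * x + y, b * x
    if i > 100:                      # discard the initial transient
        pts.append((x, y))
pts = np.array(pts)
pts = (pts - pts.min(axis=0)) / (pts.max(axis=0) - pts.min(axis=0))  # to [0,1]^2

# Count occupied boxes at several scales and fit log N(eps) against log(1/eps).
eps_list = [1/8, 1/16, 1/32, 1/64, 1/128]
counts = [len({tuple(c) for c in np.floor(pts / e).astype(int)}) for e in eps_list]
slope = np.polyfit(np.log(1 / np.array(eps_list)), np.log(counts), 1)[0]
print(f"estimated box-counting dimension: {slope:.2f}")   # non-integer, ~1.2-1.3
```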


2008 ◽  
Vol 17 (03) ◽  
pp. 555-567 ◽  
Author(s):  
STEVEN GUTSTEIN ◽  
OLAC FUENTES ◽  
ERIC FREUDENTHAL

Knowledge transfer is widely held to be a primary mechanism that enables humans to quickly learn new complex concepts when given only small training sets. In this paper, we apply knowledge transfer to deep convolutional neural nets, which we argue are particularly well suited for knowledge transfer. Our initial results demonstrate that components of a trained deep convolutional neural net can constructively transfer information to another such net. Furthermore, this transfer is accomplished in such a way that one can envision creating a net that could learn new concepts throughout its lifetime. The experiments we performed involved training a Deep Convolutional Neural Net (DCNN) on a large training set containing 20 different classes of handwritten characters from the NIST Special Database 19. This net was then used as a foundation for training a new net on a set of 20 different character classes from the same database. The new net kept the bottom layers of the old net (i.e., those nearest to the input) and only allowed the top layers to train on the new character classes. We purposely used small training sets for the new net, forcing it to rely as much as possible upon transferred knowledge, rather than upon a large and varied training set, to learn the new set of handwritten characters. Our results show a clear advantage in relying upon transferred knowledge to learn new tasks when given small training sets, provided the new tasks are sufficiently similar to the previously mastered ones. However, this advantage decreases as training sets increase in size.
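
A minimal sketch of the transfer scheme described above, in PyTorch: freeze the bottom (input-side) layers of a previously trained net and retrain only the top layers on the new character classes. The architecture, layer sizes, and hyperparameters are placeholders, not the paper's.

```python
# Minimal PyTorch sketch of the transfer scheme: keep (freeze) the bottom
# convolutional layers of a net trained on one set of character classes, and
# retrain only the top layers on a new set of classes.
import torch
import torch.nn as nn

class SmallDCNN(nn.Module):
    def __init__(self, n_classes=20):
        super().__init__()
        self.features = nn.Sequential(             # "bottom" layers, nearest input
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(           # "top" layers, retrained
            nn.Flatten(), nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# 1) Suppose `old_net` was trained on the first 20 character classes.
old_net = SmallDCNN(n_classes=20)

# 2) Build a new net for 20 *different* classes, reusing the old bottom layers.
new_net = SmallDCNN(n_classes=20)
new_net.features.load_state_dict(old_net.features.state_dict())
for p in new_net.features.parameters():
    p.requires_grad = False                        # freeze transferred layers

# 3) Only the top layers receive gradient updates on the small new training set.
optimizer = torch.optim.SGD(new_net.classifier.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

xb = torch.randn(8, 1, 28, 28)                     # stand-in for a small batch
yb = torch.randint(0, 20, (8,))
loss = criterion(new_net(xb), yb)
loss.backward()
optimizer.step()
```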


2019 ◽  
Author(s):  
Daniele Linaro ◽  
Ben Vermaercke ◽  
Ryohei Iwata ◽  
Arjun Ramaswamy ◽  
Brittany A. Davis ◽  
...  

Summary: How neural circuits develop in the human brain has remained almost impossible to study at the neuronal level. Here we investigate human cortical neuron development, plasticity and function, using a mouse/human chimera model in which xenotransplanted human cortical pyramidal neurons integrate as single cells into the mouse cortex. Combined neuronal tracing, electrophysiology, and in vivo structural and functional imaging revealed that the human neurons develop morphologically and functionally following a prolonged developmental timeline, revealing the cell-intrinsic retention of juvenile properties of cortical neurons as an important mechanism underlying human brain neoteny. Following maturation, human neurons transplanted in the visual cortex display tuned responses to visual stimuli that are similar to those of mouse neurons, indicating capacity for physiological synaptic integration of human neurons in mouse cortical circuits. These findings provide new insights into human neuronal development, and open novel experimental avenues for the study of human neuronal function and diseases.

Highlights:
- Coordinated morphological and functional maturation of ESC-derived human cortical neurons transplanted in the mouse cortex.
- Transplanted neurons display prolonged juvenile features indicative of intrinsic species-specific neoteny.
- Transplanted neurons develop elaborate dendritic arbors, stable spine patterns and long-term synaptic plasticity.
- In the visual cortex, transplanted neurons display tuned visual responses that resemble those of the host cortical neurons.

