Artificial neural networks: powerful tools for modeling chaotic behavior in the nervous system

Author(s):  
Malihe Molaie
Razieh Falahian
Shahriar Gharibzadeh
Sajad Jafari
Julien C. Sprott
Qui Parle
2021
Vol 30 (1)
pp. 159-184
Author(s):
Matteo Pasquinelli

Abstract: It was not a cybernetician but a neoliberal economist who provided the first systematic treatise on connectionism or, as it would later be known, the paradigm of artificial neural networks. In his 1952 book The Sensory Order, Friedrich Hayek put forward a connectionist theory of the mind already far more sophisticated than the theory of symbolic artificial intelligence, whose birth is customarily celebrated with the 1956 Dartmouth workshop. In this text Hayek provided a synthesis of Gestalt principles and considerations of artificial neural networks, even speculating about the possibility of a machine fulfilling a function similar to that of “the nervous system as an instrument of classification,” auguring what we today call a “classifier algorithm.” This article shows how Hayek’s connectionist theory of the mind was used to shore up a specific and ideological view of the market, and it schematically reconstructs Hayek’s line of argumentation from his economic paradigm backward to his theory of cognition. Ultimately, in Hayek’s interpretation, connectionism provides a relativist cognitive paradigm that justifies the “methodological individualism” of neoliberalism.


Author(s):  
Emilio Del-Moral-Hernandez

Artificial Neural Networks have proven, over the last four decades, to be an important tool for modelling the functional structures of the nervous system, as well as for modelling non-linear and adaptive systems in general, both biological and non-biological (Haykin, 1999). They have also become a powerful biologically inspired general computing framework, particularly important for solving non-linear problems with little available formalization and structure. At the same time, methods from the areas of complex systems and non-linear dynamics have been shown to be useful for understanding phenomena in brain activity and nervous system activity in general (Freeman, 1992; Kelso, 1995). At the intersection of these two areas, the development of artificial neural networks employing rich dynamics is a growing subject, in both theory and practice. In particular, model neurons with rich bifurcation and chaotic dynamics have been developed in recent decades, for modelling complex phenomena in biology as well as for application in neuro-like computing. Some models that deserve attention in this context are those developed by Kazuyuki Aihara (1990), Nagumo and Sato (1972), Walter Freeman (1992), K. Kaneko (2001), and Nabil Farhat (1994), among others. The following topics develop the subject of Chaotic Neural Networks, presenting several of the important models of this class and briefly discussing associated tools of analysis and typical target applications.
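The abstract surveys these models rather than stating their equations, but the Aihara chaotic neuron it cites is compact enough to sketch. The Python snippet below is a minimal illustration, not a reproduction of any cited result: an internal state y with exponential decay k, a refractory (self-inhibition) term scaled by alpha, a bias a, and a steep sigmoid output f. The parameter values are illustrative assumptions, not values prescribed by the cited works.

```python
import math

def aihara_neuron(steps=500, k=0.7, alpha=1.0, a=0.8, eps=0.02, y0=0.1):
    """Iterate a single Aihara-style chaotic neuron map.

    Internal state: y(t+1) = k*y(t) - alpha*f(y(t)) + a
    Output:         x(t)   = f(y(t)),  f(u) = 1/(1 + exp(-u/eps))
    """
    def f(u):
        # Steep sigmoid output function (small eps -> nearly a step).
        return 1.0 / (1.0 + math.exp(-u / eps))

    y = y0
    outputs = []
    for _ in range(steps):
        y = k * y - alpha * f(y) + a   # decay + refractory term + bias
        outputs.append(f(y))
    return outputs

xs = aihara_neuron()
print(xs[-10:])  # for many parameter settings the orbit never settles
```

Sweeping the bias a while holding the other parameters fixed yields the bifurcation diagrams typically used to analyse such neurons; this sketch is for orientation only.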


Author(s):  
Brian P. McLaughlin

Connectionism is an approach to computation that uses connectionist networks. A connectionist network is composed of information-processing units (or nodes); typically, many units process information simultaneously, giving rise to massively ‘parallel distributed processing’. Units process information only locally: they respond only to their specific input lines by changing or retaining their activation values; and they causally influence the activation values of their output units by transmitting amounts of activation along connections of various weights or strengths. As a result of such local unit processing, networks themselves can behave in rule-like ways to compute functions. The study of connectionist computation has grown rapidly since the early 1980s and now extends to every area of cognitive science. For the philosophy of psychology, the primary interest of connectionist computation is its potential role in the computational theory of cognition – the theory that cognitive processes are computational. Networks are employed in the study of perception, memory, learning and categorization; and it has been claimed that connectionism has the potential to yield an alternative to the classical view of cognition as rule-governed symbol manipulation.

Since cognitive capacities are realized in the central nervous system, perhaps the most attractive feature of the connectionist approach to cognitive modelling is the neural-like aspects of network architectures. The members of a certain family of connectionist networks, artificial neural networks, have proved to be a valuable tool for investigating information processing within the nervous system. In artificial neural networks, units are neuron-like; connections, axon-like; and the weights of connections function in ways analogous to synapses. Another attraction is that connectionist networks, with their units sensitive to varying strengths of multiple inputs, carry out in natural ways ‘multiple soft constraint satisfaction’ tasks – assessing the extent to which a number of non-mandatory, weighted constraints are satisfied. Tasks of this sort occur in motor control, early vision, memory, and in categorization and pattern recognition. Moreover, typical networks can re-programme themselves by adjusting the weights of the connections among their units, thereby engaging in a kind of ‘learning’; and they can do so even on the basis of the sorts of noisy and/or incomplete data people typically encounter.

The potential role of connectionist architectures in the computational theory of cognition is, however, an open question. One possibility is that cognitive architecture is a ‘mixed architecture’, with classical and connectionist modules. But the most widely discussed view is that cognitive architecture is thoroughly connectionist. The leading challenge to this view is that an adequate cognitive theory must explain high-level cognitive phenomena such as the systematicity of thought (someone who can think ‘The dog chases the cat’ can also think ‘The cat chases the dog’), its productivity (our ability to think a potential infinity of thoughts) and its inferential coherence (people can infer ‘p’ from ‘p and q’). It has been argued that a connectionist architecture could explain such phenomena only if it implements a classical, language-like symbolic architecture. Whether this is so, however, and, indeed, even whether there are such phenomena to be explained, are currently subjects of intense debate.
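As a concrete, deliberately generic illustration of the unit-level processing and weight adjustment described above, here is a minimal Python sketch of a single sigmoid unit trained with the delta rule on a noisy two-input categorization task. The task, data, and learning rate are illustrative assumptions, not an example from the entry itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# One sigmoid unit: activation = f(weights . inputs + bias).
w = rng.normal(scale=0.1, size=2)
b = 0.0
lr = 0.5

# An OR-like categorization task (illustrative data); targets are
# computed from the clean inputs, then the inputs are made noisy,
# as the entry emphasizes networks can tolerate.
X = rng.integers(0, 2, size=(200, 2)).astype(float)
targets = (X.sum(axis=1) > 0).astype(float)
X += rng.normal(scale=0.2, size=X.shape)

for epoch in range(50):
    for x, t in zip(X, targets):
        out = sigmoid(w @ x + b)
        err = t - out
        w += lr * err * x   # delta rule: purely local weight adjustment
        b += lr * err

for probe in ([0.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
    print(probe, round(float(sigmoid(w @ np.array(probe) + b)), 3))
```

After training, the unit's graded output reflects how strongly the weighted evidence supports category membership, a small-scale instance of weighing multiple soft constraints rather than applying a hard rule.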


2004
Vol 27 (5)
p. 700
Author(s):
Paul A. Koch
Gerry Leisman

Artificial neural networks have weaknesses as models of cognition. A conventional neural network has limited computational power. Localist representation is at least the equal of its competitors. We contend that locally connected neural networks are perfectly capable of storing and retrieving individual features, but the process of reconstruction must be explained by other means. We support the localist position but propose a “hybrid” model that can begin to explain cognition in anatomically plausible terms.
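The commentary does not specify its hybrid model, but the localist claim, that dedicated units can store and retrieve individual features, can be illustrated with a generic sketch. Below, each stored item has its own unit; its bottom-up weights equal the item's feature vector, a winner-take-all step picks the best-matching unit, and top-down weights reinstate the full pattern even from a partial cue. The patterns and structure are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Each stored item gets a dedicated ("localist") unit whose weights
# equal the item's feature vector.
patterns = np.array([
    [1, 0, 1, 0, 1],   # item A's features (illustrative)
    [0, 1, 0, 1, 1],   # item B
    [1, 1, 0, 0, 0],   # item C
], dtype=float)

def retrieve(cue):
    scores = patterns @ cue          # each item unit sums its weighted inputs
    winner = int(np.argmax(scores))  # winner-take-all among the localist units
    return winner, patterns[winner]  # top-down weights reinstate the features

# A partial (incomplete) cue still retrieves the full stored pattern.
cue = np.array([1, 0, 1, 0, 0], dtype=float)
idx, features = retrieve(cue)
print("item", "ABC"[idx], "->", features)
```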


Author(s):  
Kobiljon Kh. Zoidov
Svetlana V. Ponomareva
Daniel I. Serebryansky
...
