The physicality of representation

2021 ◽  
Author(s):  
Corey J. Maley

Representation is typically taken to be importantly separate from its physical implementation. This is exemplified in Marr's three-level framework, widely cited and often adopted in neuroscience. However, the separation between representation and physical implementation is not a necessary feature of information-processing systems. In particular, when it comes to analog computational systems, Marr's representational/algorithmic level and implementational level collapse into a single level. Insofar as analog computation is a better way of understanding neural computation than other notions, Marr's three-level framework must be amended into a two-level framework. However, far from being a problem or limitation, this sheds light on how to understand physical media as being representational, but without a separate, medium-independent representational level.

2019 ◽  
Vol 36 (2) ◽  
pp. 89-121 ◽  
Author(s):  
Luciana Parisi

As machines have become increasingly smart and have entangled human thinking with artificial intelligences, it seems no longer possible to distinguish among the levels of decision-making that occur in the newly formed space between critical reasoning, logical inference and sheer calculation. Since the 1980s, computational systems of information processing have evolved beyond purely deductive methods of decision, whereby results are already implicated in their premises, and have crucially shifted towards an adaptive practice of learning from data: an inductive method of retrieving information from the environment and establishing general premises. This shift in logical methods of decision-making does not simply concern technical apparatuses, but is a symptom of a transformation in logical thinking activated with and through machines. This article discusses the pioneering work of Katherine Hayles, whose study of the cybernetic and computational infrastructures of our culture particularly clarifies this epistemological transformation of thinking in relation to machines.


2017 ◽  
Vol 12 (1) ◽  
pp. 74-76 ◽  
Author(s):  
Christopher A. Miller

In this reaction to David Kaber's article in this volume, the author points to an inherent problem in applying any "levels" scheme to the continuous, multidimensional space of human–automation relationships and behaviors. Discretization inherently carves a continuous, analog space into discrete blocks that, it is claimed, can be treated homogeneously. The author provides a counterexample, using a common automated e-mail filtering system to show how applying a single "level-of-automation" category to the whole system (or even to information-processing stages of components within it) misrepresents and suppresses details about what the system is actually doing and how it interacts with human users. Discretization can be highly productive if it pares away confusing detail that distracts from underlying explanatory relationships, but, the author argues, not enough is known about human–automation interaction in all its variability to suppress detail effectively. Thus one needs the better models Kaber is calling for before one can create an effective levels-of-automation scheme, not vice versa.


2012 ◽  
Vol 32 (7) ◽  
pp. 1222-1232 ◽  
Author(s):  
Clare Howarth ◽  
Padraig Gleeson ◽  
David Attwell

The brain's energy supply determines its information processing power, and generates functional imaging signals. The energy used by the different subcellular processes underlying neural information processing has been estimated previously for the grey matter of the cerebral and cerebellar cortex. However, these estimates need reevaluating following recent work demonstrating that action potentials in mammalian neurons are much more energy efficient than was previously thought. Using this new knowledge, this paper provides revised estimates for the energy expenditure on neural computation in a simple model of the cerebral cortex and a detailed model of the cerebellar cortex. In the cerebral cortex, most signaling energy (50%) is used on postsynaptic glutamate receptors, 21% on action potentials, 20% on resting potentials, 5% on presynaptic transmitter release, and 4% on transmitter recycling. In the cerebellar cortex, excitatory neurons use 75% and inhibitory neurons 25% of the signaling energy, and most energy is used on information processing by non-principal neurons: Purkinje cells use only 15% of the signaling energy. The majority of cerebellar signaling energy is used on the maintenance of resting potentials (54%) and postsynaptic receptors (22%), while action potentials account for only 17% of the signaling energy use.
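The two budgets quoted in the abstract can be sanity-checked with a few lines of arithmetic. The sketch below simply restates the paper's percentages (no new data); it confirms the cerebral budget is exhaustive, while the three cerebellar figures cover only the largest sinks.

```python
# Signaling-energy budgets as quoted in the abstract (percent of signaling
# energy). These are restatements of the published figures, not new estimates.
cerebral = {
    "postsynaptic glutamate receptors": 50,
    "action potentials": 21,
    "resting potentials": 20,
    "presynaptic transmitter release": 5,
    "transmitter recycling": 4,
}
assert sum(cerebral.values()) == 100  # the cerebral breakdown is exhaustive

cerebellar = {
    "resting potentials": 54,
    "postsynaptic receptors": 22,
    "action potentials": 17,
}
# The cerebellar figures name only the three largest sinks; the remaining
# ~7% goes to other subcellular processes (transmitter release, recycling, etc.).
print(f"cerebellar subtotal: {sum(cerebellar.values())}%")
```

Running this prints `cerebellar subtotal: 93%`, making explicit that the cerebellar list, unlike the cerebral one, is not a complete partition.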


2020 ◽  
Author(s):  
Samantha P. Sherrill ◽  
Nicholas M. Timme ◽  
John M. Beggs ◽  
Ehren L. Newman

Cortical information processing requires synergistic integration of input. Understanding the determinants of synergistic integration (a form of computation) in cortical circuits is therefore a critical step in understanding the functional principles underlying cortical information processing. We established previously that synergistic integration varies directly with the strength of feedforward connectivity. How recurrent and feedback connectivity relate to synergistic integration remains unknown. To address this, we analyzed the spiking activity of hundreds of well-isolated neurons in organotypic cultures of mouse somatosensory cortex, recorded using a high-density 512-channel microelectrode array. We asked how empirically observed synergistic integration, quantified through partial information decomposition, varied with local functional network structure. Toward that end, local functional network structure was categorized into motifs with varying recurrent and feedback connectivity. We found that synergistic integration was elevated in motifs with greater recurrent connectivity and decreased in motifs with greater feedback connectivity. These results indicate that the directionality of local connectivity, beyond feedforward connections, has distinct influences on neural computation. Specifically, more upstream recurrence predicts greater downstream computation, but more feedback predicts lesser computation.
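The idea of synergistic integration can be illustrated with the textbook XOR example: each input alone carries no information about the output, yet the two inputs jointly determine it. The sketch below uses a simple "whole minus sum" synergy proxy computed from mutual information; this is an illustration of the concept, not the full partial information decomposition the authors apply to spiking data.

```python
import math
from collections import Counter
from itertools import product

def mutual_information(pairs):
    """I(A;B) in bits, estimated from a list of equiprobable (a, b) samples."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum(
        (c / n) * math.log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
        for (a, b), c in p_ab.items()
    )

# XOR: the output is fully determined by (x1, x2) jointly,
# but statistically independent of each input taken alone.
samples = [((x1, x2), x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]
joint = mutual_information(samples)                            # I(X1,X2 ; Y)
single1 = mutual_information([(x[0], y) for x, y in samples])  # I(X1 ; Y)
single2 = mutual_information([(x[1], y) for x, y in samples])  # I(X2 ; Y)

# "Whole minus sum" synergy proxy: positive only when the inputs
# must be combined to predict the output.
synergy = joint - single1 - single2
print(round(joint, 3), round(single1, 3), round(synergy, 3))  # → 1.0 0.0 1.0
```

For XOR the joint mutual information is 1 bit while each marginal term is 0, so all of the information is synergistic; this is the extreme case of the computation the motif analysis quantifies in cortical circuits.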


2009 ◽  
Vol 30 (2) ◽  
pp. 403-414 ◽  
Author(s):  
Clare Howarth ◽  
Claire M Peppiatt-Wildman ◽  
David Attwell

The brain's energy supply determines its information processing power, and generates functional imaging signals, which are often assumed to reflect principal neuron spiking. Using measured cellular properties, we analysed how energy expenditure relates to neural computation in the cerebellar cortex. Most energy is used on information processing by non-principal neurons: Purkinje cells use only 18% of the signalling energy. Excitatory neurons use 73% and inhibitory neurons 27% of the energy. Despite markedly different computational architectures, the granular and molecular layers consume approximately the same energy. The blood vessel area supplying glucose and O2 is spatially matched to energy consumption. The energy cost of storing motor information in the cerebellum was also estimated.


Author(s):  
Paul Smolensky

Is thought computation over ideas? Turing, and many cognitive scientists since, have assumed so, and formulated computational systems in which meaningful concepts are encoded by symbols which are the objects of computation. Cognition has been carved into parts, each a function defined over such symbols. This paper reports on a research program aimed at computing these symbolic functions without computing over the symbols. Symbols are encoded as patterns of numerical activation over multiple abstract neurons, each neuron simultaneously contributing to the encoding of multiple symbols. Computation is carried out over the numerical activation values of such neurons, which individually have no conceptual meaning. This is massively parallel numerical computation operating within a continuous computational medium. The paper presents an axiomatic framework for such a computational account of cognition, including a number of formal results. Within the framework, a class of recursive symbolic functions can be computed. Formal languages defined by symbolic rewrite rules can also be specified, the subsymbolic computations producing symbolic outputs that simultaneously display central properties of both facets of human language: universal symbolic grammatical competence and statistical, imperfect performance.
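The core move described here — encoding symbols as distributed activation patterns and computing over the numbers rather than the symbols — can be sketched in the style of Smolensky's tensor product representations: bind each filler (symbol) to a role (structural position) via an outer product and superpose the bindings. The dimensions, symbols, and random vectors below are invented for illustration; the paper's framework is far more general.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fillers: distributed activation patterns standing for symbols.
# Each of the 8 "neurons" participates in encoding every symbol.
dim = 8
fillers = {s: rng.normal(size=dim) for s in ["A", "B", "C"]}

# Roles: orthonormal vectors marking structural positions (slots 0-2).
roles = np.eye(3)

# Bind each filler to its role with an outer product, then superpose.
# No single unit encodes any one symbol, yet the whole remains decodable.
structure = sum(np.outer(fillers[s], roles[i])
                for i, s in enumerate(["A", "B", "C"]))

# Unbind: because the roles are orthonormal, multiplying the structure
# by a role vector recovers that slot's filler exactly.
recovered = structure @ roles[1]
print("slot 1 filler matches 'B':", bool(np.allclose(recovered, fillers["B"])))
```

With orthonormal roles the unbinding is exact, which is how a purely numerical, massively parallel medium can still compute a well-defined symbolic function; with merely linearly independent roles, recovery uses the dual basis instead.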

