Biophysics of Computation
Published By Oxford University Press

9780195104912, 9780197562338

Author(s):  
Christof Koch

This chapter represents something of a technical interlude. Having introduced the reader to both simplified and more complex compartmental single neuron models, we need to revisit terrain with which we are already somewhat familiar. In the following pages we reevaluate two important concepts we defined in the first few chapters: the somatic input resistance and the neuronal time constant. For passive systems, both are simple enough variables: Rin is the change in somatic membrane potential in response to a small sustained current injection divided by the amplitude of the current injection, while τm is the slowest time constant associated with the exponential charging or discharging of the neuronal membrane in response to a current pulse or step. However, because neurons express nonstationary and nonlinear membrane conductances, the measurement and interpretation of these two variables in active structures is not as straightforward as before. Having obtained a more sophisticated understanding of these issues, we will turn toward the question of the existence of a current, voltage, or charge threshold at which a biophysically faithful model of a cell triggers action potentials. We conclude with recent work that suggests how concepts from the subthreshold domain, like the input resistance or the average membrane potential, could be extended to the case in which the cell is discharging a stream of action potentials. This chapter is mainly for the cognoscenti or for those of us who need to make sense of experimental data by comparing them to theoretical models that usually fail to reflect reality adequately. In Sec. 3.4, we defined Kii(f) for passive cable structures as the voltage change at location i in response to a sinusoidal current injection of frequency f at the same location. Its dc component is also referred to as the input resistance or Rin.
Three difficulties render this definition of input resistance problematic in real cells: (1) most membranes, in particular at the soma, show voltage-dependent nonlinearities, (2) the associated ionic membrane conductances are time dependent and (3) instrumental aspects, such as the effect of the impedance of the recording electrode on Rin, add uncertainty to the measuring process.
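For the passive case, the two definitions above can be made concrete in a few lines. The sketch below (all parameter values are illustrative assumptions, not numbers from the text) recovers Rin from the steady-state response to a sustained current step and τm from the time taken to reach 1 − 1/e of that response:

```python
import math

# Hypothetical passive-patch parameters (illustrative values only).
R_in = 100e6      # input resistance, 100 MOhm
tau_m = 20e-3     # membrane time constant, 20 ms
I_inj = 0.1e-9    # 0.1 nA sustained current step

def v_step(t):
    """Charging curve of a passive RC patch in response to a current step."""
    return R_in * I_inj * (1.0 - math.exp(-t / tau_m))

# Rin: steady-state voltage change divided by the injected current.
v_ss = v_step(1.0)              # 1 s >> tau_m, effectively steady state
rin_est = v_ss / I_inj

# tau_m: time to reach (1 - 1/e), about 63.2%, of the steady-state response.
target = v_ss * (1.0 - 1.0 / math.e)
t, dt = 0.0, 1e-5
while v_step(t) < target:
    t += dt

print(f"estimated Rin   = {rin_est/1e6:.1f} MOhm")
print(f"estimated tau_m = {t*1e3:.2f} ms")
```

Both estimates recover the underlying parameters only because the model is passive; the chapter's point is precisely that these procedures become ambiguous once voltage- and time-dependent conductances enter.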


Author(s):  
Christof Koch

In Chap. 9 we introduced calcium ions and alluded to their crucial role in regulating the day-to-day life of neurons. The dynamics of the free intracellular calcium is controlled by a number of physical and chemical processes, foremost among them diffusion and binding to a host of different proteins, which serve as calcium buffers and as calcium sensors or triggers. Whereas buffers simply bind Ca2+ above some critical concentration, releasing it back into the cytoplasm when [Ca2+]i has been reduced below this level, certain proteins, such as calmodulin, change their conformation when they bind with Ca2+ ions, thereby activating or modulating enzymes, ionic channels, or other proteins. The calcium concentration inside the cell not only determines the degree of activation of calcium-dependent potassium currents but, much more importantly, is relevant for determining the changes in structure expressed in synaptic plasticity. As discussed in Chap. 13, it is these changes that are thought to underlie learning. Given the relevance of second messenger molecules, such as Ca2+, IP3, cyclic AMP and others, for the processes underlying growth, sensory adaptation, and the establishment and maintenance of synaptic plasticity, it is crucial that we have some understanding of the role that diffusion and chemical kinetics play in governing the behavior of these substances. Today, we have unprecedented access to the spatio-temporal dynamics of intracellular calcium in individual neurons using fluorescent calcium dyes, such as fura-2 or fluo-3, in combination with confocal or two-photon microscopy in the visible or in the infrared spectrum (Tsien, 1988; Tank et al., 1988; Hernández-Cruz, Sala, and Adams, 1990; Ghosh and Greenberg, 1995).
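The interplay of influx and buffering described above can be illustrated with a minimal model: a well-mixed compartment containing a single binding protein, integrated by forward Euler. All rate constants, concentrations, and the influx below are illustrative assumptions, not values from the text:

```python
# Minimal sketch of free Ca2+ interacting with one buffer in a well-mixed
# compartment (no diffusion). All parameter values are illustrative.
k_on = 5e8        # binding rate, 1/(M s)
k_off = 500.0     # unbinding rate, 1/s  (Kd = k_off/k_on = 1 uM)
ca0 = 50e-9       # resting free [Ca2+]i, 50 nM
buf_total = 100e-6

# Start the buffer in equilibrium with resting calcium.
ca = ca0
buf_bound = buf_total * ca0 / (ca0 + k_off / k_on)
buf_free = buf_total - buf_bound

influx = 1e-3     # Ca2+ influx, M/s (e.g., channels open for 2 ms)
dt = 1e-6
for _ in range(2000):                       # simulate 2 ms
    bind = (k_on * ca * buf_free - k_off * buf_bound) * dt
    ca += influx * dt - bind
    buf_free -= bind
    buf_bound += bind

entered = influx * 2e-3
rise = ca - ca0
print(f"Ca entered: {entered*1e9:.0f} nM, free Ca rise: {rise*1e9:.1f} nM")
```

For these (assumed) parameters the buffer captures the vast majority of the entering calcium, which is one reason why the free-calcium transient reported by indicator dyes is strongly attenuated relative to the underlying influx.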


Author(s):  
Christof Koch

Nerve cells are the targets of many thousands of excitatory and inhibitory synapses. An extreme case is the Purkinje cell in the primate cerebellum, which receives between one and two hundred thousand synapses onto dendritic spines from an equal number of parallel fibers (Braitenberg and Atwood, 1958; Llinas and Walton, 1998). In fact, this structure has a crystalline-like quality to it, with each parallel fiber making exactly one synapse onto a spine of a Purkinje cell. For neocortical pyramidal cells, the total number of afferent synapses is about an order of magnitude lower (Larkman, 1991). These numbers need to be compared against the connectivity in the central processing unit (CPU) of modern computers, where the gate of a typical transistor usually receives input from one, two, or three other transistors or connects to one, two, or three other transistor gates. The large number of synapses converging onto a single cell provides the nervous system with a rich substratum for implementing a very large class of linear and nonlinear neuronal operations. As we discussed in the introductory chapter, it is only the latter, such as multiplication or a threshold operation, that are responsible for “computing” in the nontrivial sense of information processing. It therefore becomes crucial to study the nature of the interaction among two or more synaptic inputs located in the dendritic tree. Here, we restrict ourselves to passive dendritic trees, that is, to dendrites that do not contain voltage-dependent membrane conductances. While such an assumption seemed reasonable 20 or even 10 years ago, we now know that the dendritic trees of many, if not most, cells contain significant nonlinearities, including the ability to generate fast or slow all-or-none electrical events, so-called dendritic spikes. Indeed, truly passive dendrites may be the exception rather than the rule in the nervous system. In Sec. 1.5, we studied this interaction for the membrane patch model.
With the addition of the dendritic tree, the nervous system has many more degrees of freedom to make use of, and the strength of the interaction depends on the relative spatial positioning, as we will see now. That this can be put to good use by the nervous system is shown by the following experimental observation and simple model.
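One simple form of synaptic interaction survives even in a single passive compartment: two excitatory conductance inputs sum sublinearly, because each drives the membrane toward the same reversal potential and thereby reduces the driving force available to the other. A minimal steady-state sketch, with illustrative conductance values not taken from the text:

```python
# Steady-state depolarization of one passive compartment receiving one or
# two excitatory conductance inputs. Values are illustrative; the point is
# the sublinear ("saturating") summation that dendritic placement modulates.
g_leak = 10e-9    # leak conductance, 10 nS
E_leak = -70e-3   # resting potential, -70 mV
E_syn = 0.0       # excitatory reversal potential
g_syn = 5e-9      # single synaptic conductance, 5 nS

def v_ss(g_exc):
    """Steady-state potential with total excitatory conductance g_exc."""
    return (g_leak * E_leak + g_exc * E_syn) / (g_leak + g_exc)

epsp_one = v_ss(g_syn) - E_leak          # response to one input
epsp_two = v_ss(2 * g_syn) - E_leak      # response to both together

print(f"one input:         {epsp_one*1e3:.1f} mV")
print(f"two inputs:        {epsp_two*1e3:.1f} mV")
print(f"linear prediction: {2*epsp_one*1e3:.1f} mV")
```

In a spatially extended tree the degree of this sublinearity depends on where the two synapses sit relative to each other, which is exactly the degree of freedom the chapter goes on to exploit.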


Author(s):  
Christof Koch

In the previous chapter, we briefly met some of the key actors of this book. In particular, we introduced the RC model of a patch of neuronal membrane and showed an instance where such a “trivial” model accounts reasonably well for the input-output properties of a neuron, as measured at its cell body. However, almost none of the excitatory synapses are made onto the cell body, contacting instead the very extensive dendritic arbor. As we will discuss in detail in Chap. 3, dendritic trees can be quite large, containing up to 98% of the entire neuronal surface area. We therefore need to understand the behavior of these extended systems having a cablelike structure. The basic equation governing the dynamics of the membrane potential in thin and elongated neuronal processes, such as axons or dendrites, is the cable equation. It originated in the middle of the last century in the context of calculations carried out by Lord Kelvin, who described the spread of potential along the submarine telegraph cable linking Great Britain and America. Around the turn of the century, Hermann and others formulated the concept of the Kernleiter, or core conductor, model to understand the flow of current in nerve axons. Such a core conductor can be visualized as a thin membrane or sheath surrounding a cylindrical and electrically conducting core of constant cross section placed in a solution of electrolytes. The study of the partial differential equations describing the evolution of the electrical potential in these structures gave rise to a body of theoretical knowledge termed cable theory. In the 1930s and 1940s concepts from cable theory were being applied to axonal fibers, in particular to the giant axon of the squid (Hodgkin and Rushton, 1946; Davis and Lorente de No, 1947). The application of cable theory to passive, spatially extended dendrites started in the late 1950s and blossomed in the 1960s and 1970s, primarily due to the work of Rall (1989).
In an appropriate gesture acknowledging his role in the genesis of quantitative modeling of single neurons, Segev, Rinzel, and Shepherd (1995) edited an annotated collection of his papers, to which we refer the interested reader. It also contains personal recollections from many of Rall's colleagues as well as accounts of the early history of this field.
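The cable equation referred to above takes, in its standard passive form (with V the deviation of the membrane potential from rest, x distance along the cable, d the cable diameter, R_m and C_m the specific membrane resistance and capacitance, and R_i the axial resistivity), the textbook shape:

```latex
\lambda^{2}\,\frac{\partial^{2} V}{\partial x^{2}}
  \;=\; \tau_{m}\,\frac{\partial V}{\partial t} \;+\; V,
\qquad
\lambda = \sqrt{\frac{d\,R_{m}}{4\,R_{i}}},
\qquad
\tau_{m} = R_{m}\,C_{m}.
```

Here λ is the steady-state space constant, the distance over which an applied voltage decays to 1/e of its value along an infinite cable, and τm the membrane time constant introduced earlier.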


Author(s):  
Christof Koch

The brain computes! This is accepted as a truism by the majority of neuroscientists engaged in discovering the principles employed in the design and operation of nervous systems. What is meant here is that any brain takes the incoming sensory data, encodes them into various biophysical variables, such as the membrane potential or neuronal firing rates, and subsequently performs a very large number of ill-specified operations, frequently termed computations, on these variables to extract relevant features from the input. The outcome of some of these computations can be stored for later access and will, ultimately, control the motor output of the animal in appropriate ways. The present book is dedicated to understanding in detail the biophysical mechanisms responsible for these computations. Its scope is the type of information processing underlying perception and motor control, occurring at the millisecond to fraction of a second time scale. When you look at a pair of stereo images trying to fuse them into a binocular percept, your brain is busily computing away trying to find the “best” solution. What are the computational primitives at the neuronal and subneuronal levels underlying this impressive performance, unmatched by any machine? Naively put and using the language of the electronic circuit designer, the book asks: “What are the diodes and the transistors of the brain?” and “What sort of operations do these elementary circuit elements implement?” Contrary to received opinion, nerve cells are considerably more complex than suggested by work in the neural network community, in which, moron-like, they are reduced to computing nothing but a thresholded sum of their inputs. We know, for instance, that individual nerve cells in the locust perform an operation akin to a multiplication. Given synapses, ionic channels, and membranes, how is this actually carried out? How do neurons integrate, delay, or change their output gain?
What are the relevant variables that carry information? The membrane potential? The concentration of intracellular Ca2+ ions? What is their temporal resolution? And how large is the variability of these signals that determines how accurately they can encode information? And what variables are used to store the intermediate results of these computations? And where does long-term memory reside? Natural philosophers and scientists in the western world have always compared the brain to the most advanced technology of the day.


Author(s):  
Christof Koch

As discussed in the introduction to this book, any (bio)physical mechanism that transforms some physical variable, such as the electrical potential across the membrane, in such a way that it can be mapped onto a meaningful formal mathematical operation, such as delay-and-correlate or convolution, can be treated as a computation. Traditionally only Vm, spike trains, and the firing rate f(t) have been thought to play this role in the computations performed by the nervous system. Due to the recent and widespread usage of high-resolution calcium-dependent fluorescent dyes, the concentration of free intracellular calcium [Ca2+]i in presynaptic terminals, dendrites, and cell bodies has been promoted into the exalted rank of a variable that can act as a short-term memory and that can be manipulated using buffers, calcium-dependent enzymes, and diffusion in ways that can be said to instantiate specific computations. But why stop here? Why not consider the vast number of signaling molecules that are localized to specific intra- or extracellular compartments to instantiate specific computations that can act over particular spatial and temporal scales? And what about the peptides and hormones that are released into large areas of the brain or that circulate in the bloodstream? In this penultimate chapter, we will acquaint the reader with several examples of computations that use such unconventional means. The computation in question constitutes a molecular switch that stores a few bits of information at each of the thousands of synapses on a typical cortical cell. In order to describe its principle of operation, it will be necessary to introduce the reader to some basic concepts in biochemistry. The ability of individual synapses to potentially store analog variables is important enough that this modest intellectual investment will pay off. (For an introduction to biochemistry, consult Stryer, 1995).
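The idea of a molecular switch can be caricatured in a few lines: a variable with sigmoidal positive feedback and linear decay has two stable states and can therefore hold one bit. The toy model below is purely illustrative, a generic bistable scheme and not the biochemical mechanism discussed in the chapter:

```python
# Toy bistable switch: activity a obeys da/dt = a^2/(a^2 + K^2) - a.
# With K = 0.4 the fixed points are a = 0 (stable, "off"), a = 0.2
# (unstable threshold), and a = 0.8 (stable, "on"). Purely illustrative.
K = 0.4

def settle(a, steps=20000, dt=0.01):
    """Integrate the switch dynamics to (near) steady state by forward Euler."""
    for _ in range(steps):
        a += (a * a / (a * a + K * K) - a) * dt
    return a

low = settle(0.1)    # starts below the threshold -> relaxes to "off"
high = settle(0.3)   # starts above it -> climbs to the "on" state

print(f"start 0.1 -> {low:.3f}")
print(f"start 0.3 -> {high:.3f}")
```

The essential property, shared with any switch worth the name, is that transient inputs that push the variable past the unstable point leave a persistent change behind, while smaller perturbations are forgotten.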


Author(s):  
Christof Koch

Now that we have quantified the behavior of the cell in response to current pulses and current steps as delivered by the physiologist's microelectrode, let us study the behavior of the cell responding to a more physiological input. For instance, a visual stimulus in the environment will activate cells in the retina and its target, neurons in the lateral geniculate nucleus. These, in turn, make on the order of 50 excitatory synapses onto the apical tree of a layer 5 pyramidal cell in primary visual cortex such as the one we use throughout the book, and about 100-150 synapses onto a layer 4 spiny stellate cell (Peters and Payne, 1993; Ahmed et al., 1994, 1996; Peters, Payne, and Rudd, 1994). All of these synapses will be triggered within a fraction of a millisecond (Alonso, Usrey, and Reid, 1996). Thus, any sensory input to a neuron is likely to activate on the order of 102 synapses, rather than one or two very specific synapses as envisioned in Chap. 5 in the discussion of synaptic AND-NOT logic. This chapter will reexamine the effect of synaptic input to a realistic dendritic tree. We will commence by considering a single synaptic input as a sort of baseline condition. This represents a rather artificial condition; but because the excitatory postsynaptic potential and current at the soma are frequently experimentally recorded and provide important insights into the situation prevailing in the presence of massive synaptic input, we will discuss them in detail. Next we will treat the case of many temporally dispersed synaptic inputs to a leaky integrate-and-fire model and to the passive dendritic tree of the pyramidal cell. In particular, we are interested in uncovering the exact relationship between the temporal input jitter and the output jitter. 
The bulk of this chapter deals with the effect of massive synaptic input onto the firing behavior of the cell, by making use of the convenient fiction that the detailed temporal arrangement of action potentials is irrelevant for neuronal information processing. This allows us to derive an analytical expression relating the synaptic input to the somatic current and ultimately to the output frequency of the cell.
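The rate-based shortcut described here can be sketched numerically: collapse the synaptic barrage into a mean somatic current and insert it into the standard leaky integrate-and-fire rate expression. All parameter values below are illustrative assumptions, not numbers from the text:

```python
import math

# Mean somatic current from a synaptic barrage, fed into a leaky
# integrate-and-fire cell. All values are illustrative.
N = 1000          # active excitatory synapses
rate_in = 10.0    # mean input rate per synapse, Hz
q = 50e-15        # charge delivered per EPSC, 50 fC
R = 100e6         # input resistance, 100 MOhm
tau = 20e-3       # membrane time constant, 20 ms
v_th = 15e-3      # spike threshold above rest, 15 mV
t_ref = 2e-3      # absolute refractory period, 2 ms

I_mean = N * rate_in * q        # mean current from the barrage

def lif_rate(i):
    """Steady firing rate of a leaky integrate-and-fire neuron."""
    if R * i <= v_th:
        return 0.0              # subthreshold: the cell never fires
    return 1.0 / (t_ref + tau * math.log(R * i / (R * i - v_th)))

print(f"mean current: {I_mean*1e9:.2f} nA")
print(f"output rate:  {lif_rate(I_mean):.1f} Hz")
```

The sharp threshold in the current-to-rate function is the model's caricature of spike initiation; the chapter's analytical treatment refines both the input and the output side of this mapping.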


Author(s):  
Christof Koch

In the previous chapters, we studied the spread of the membrane potential in passive or active neuronal structures and the interaction among two or more synaptic inputs. We have yet to give a full account of ionic channels, the elementary units underlying all of this dizzying variety of electrical signaling both within and between neurons. Ionic channels are individual proteins anchored within the bilipid membrane of neurons, glia, or other cells, and can be thought of as water-filled macromolecular pores that are permeable to particular ions. They can be exquisitely voltage sensitive, such as the fast sodium channel responsible for the sodium spike in the squid giant axon, or they can be relatively independent of voltage but dependent on the binding of some neurotransmitter, as is the case for most synaptic receptors, such as the acetylcholine receptor at the vertebrate neuromuscular junction or the AMPA and GABA synaptic receptors mediating excitation and inhibition in the central nervous system. Ionic channels are ubiquitous and provide the substratum for all biophysical phenomena underlying information processing, including mediating synaptic transmission, determining the membrane voltage, supporting action potential initiation and propagation, and, ultimately, linking changes in the membrane potential to effective output, such as the secretion of a neurotransmitter or hormone or the contraction of a muscle fiber. Individual ionic channels are amazingly specific. A typical potassium channel can distinguish a K+ ion with a 1.33 Å radius from a Na+ ion of 0.95 Å radius, selecting the former over the latter by a factor of 10,000. This single protein can do this selection at a rate of up to 100 million ions each second (Doyle et al., 1998). At the time of Hodgkin and Huxley’s seminal study in the early 1950s, two broad classes of transport mechanisms were competing as plausible ways for carrying ionic fluxes across the membrane: carrier molecules and pores.
At the time, no direct evidence for either one existed. It was not until the early 1970s that the fast ACh synaptic receptor and the Na channel were chemically isolated and purified and identified as proteins.
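The throughput figure quoted above follows from simple arithmetic: dividing a single-channel current by the elementary charge gives the flux of monovalent ions through one open pore, and a current of roughly 16 pA corresponds to about 10^8 ions per second. The currents picked below are illustrative:

```python
# Back-of-the-envelope check: single-channel current -> ion flux.
e_charge = 1.602e-19   # elementary charge, C

def ions_per_second(i_single):
    """Ion flux through one open channel carrying a monovalent ion."""
    return i_single / e_charge

for i_pA in (1.0, 5.0, 16.0):
    flux = ions_per_second(i_pA * 1e-12)
    print(f"{i_pA:5.1f} pA -> {flux:.2e} ions/s")
```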


Author(s):  
Christof Koch

So far, we worked under the convenient fiction that active, voltage-dependent membrane conductances are confined to the spike initiation zone at or close to the cell body and that the dendritic tree is essentially passive. Under the influence of one-dimensional passive cable theory, as refined by Rall and his school (Chaps. 2 and 3), the passive model of dendritic integration of synaptic inputs has become dominant and is taught in all the textbooks. Paradoxically, from the earliest days of intracellular recordings from the fat dendrites of spinal cord motoneurons with the aid of glass microelectrodes, active dendritic responses had been witnessed (Brock, Coombs, and Eccles, 1952; Eccles, Libet, and Young, 1958). Today, there exists overwhelming evidence for a host of voltage-dependent sodium and calcium conductances in the dendritic tree. In the following section we summarize the experimental evidence and discuss current biophysical modeling efforts focusing on the question of the existence and genesis of fast all-or-none electrical events in the dendrites. We then turn toward possible functional roles of active dendritic processing. One word of advice. It has been argued that linear cable theory as applied to dendrites and taught in the first chapters of this book is irrelevant in the face of all this evidence for active processing and can be relegated to the dustbin. However, this would be a mistake. Under many physiological conditions these nonlinearities will not be relevant. Even if they are, the resistive and capacitive cable properties of the dendrites profoundly influence the initiation and propagation of dendritic action potentials and other active phenomena. Thus, for a complete understanding of the events in active dendritic trees we need to be thoroughly versed in cable theory.
The issue of dendritic all-or-none electrical events must be seen as separate from the broader question of the existence and nature of active, that is, voltage-dependent, membrane conductances in the dendritic tree.


Author(s):  
Christof Koch

Some neurons throughout the animal kingdom respond to an intracellular current injection or to an appropriate sensory stimulus with a stereotypical sequence of two to five fast spikes riding upon a slow depolarizing envelope. The entire event, termed a burst, is over within 10-40 msec and is usually terminated by a profound afterhyperpolarization (AHP). Such bursting cells are not a random feature of a certain fraction of all cells but can be identified with specific neuronal subpopulations. What are the mechanisms generating this intrinsic firing pattern and what is its meaning? Bursting cells can easily be distinguished from a cell firing at a high maintained frequency by the fact that bursts will persist even at a low firing frequency. As illustrated by the thalamic relay cell of Fig. 9.4, some cells can switch between a mode in which they predominantly respond to stimuli via single, isolated spikes and one in which bursts are common. Because we believe that bursting constitutes a special manner of signaling important information, we devote a single, albeit small, chapter to this topic. In the following, we describe a unique class of cells that frequently signal with bursts, and we touch upon the possible biophysical mechanisms that give rise to bursting. We finish this excursion by focusing on a functional study of bursting cells in the electric fish and speculate about the functional relevance of burst firing. Neocortical cells are frequently classified according to their response to sustained current injections. While these distinctions are not all or none, there is broad agreement for three classes: regular spiking, fast spiking, and intrinsically bursting neurons (Connors, Gutnick, and Prince, 1982; McCormick et al., 1985; Connors and Gutnick, 1990; Agmon and Connors, 1992; Baranyi, Szente, and Woody, 1993; Núñez, Amzica, and Steriade, 1993; Gutnick and Crill, 1995; Gray and McCormick, 1996).
Additional cell classes have been identified (e.g., the chattering cells that fire bursts of spikes with interburst intervals ranging from 15 to 50 msec; Gray and McCormick, 1996), but whether or not they occur widely has not yet been settled. The cells of interest to us are the intrinsically bursting cells.
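The operational definition of a burst given above, a handful of fast spikes packed into a few tens of milliseconds, translates directly into a simple interspike-interval criterion for grouping spike times. The 8 ms cutoff and the spike times below are illustrative choices, not values from the text:

```python
# Group spike times into bursts: spikes separated by no more than
# `max_isi` belong to the same burst. Parameters are illustrative.
def find_bursts(spike_times, max_isi=8e-3, min_spikes=2):
    """Return bursts as lists of spike times (seconds)."""
    bursts, current = [], [spike_times[0]]
    for t in spike_times[1:]:
        if t - current[-1] <= max_isi:
            current.append(t)
        else:
            if len(current) >= min_spikes:
                bursts.append(current)
            current = [t]
    if len(current) >= min_spikes:
        bursts.append(current)
    return bursts

# Two three-spike bursts ~30 ms apart, then one isolated spike.
spikes = [0.000, 0.004, 0.009, 0.040, 0.044, 0.049, 0.120]
bursts = find_bursts(spikes)
print(f"{len(bursts)} bursts:", [len(b) for b in bursts])
```

Such interval-based criteria are what make it possible to distinguish intrinsic bursting from mere high-frequency tonic firing in recorded spike trains, since genuine bursts persist even when the overall firing rate is low.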

