Determination of Complex Reaction Mechanisms
Published by Oxford University Press
ISBN: 9780195178685, 9780197562277

Author(s): John Ross, Igor Schreiber, Marcel O. Vlad

The topic of this chapter may seem like a digression from methods and approaches to reaction mechanisms, but it is not; it is an introduction to them. We worked on both topics for some time and there is a basic connection. Think of an electronic device and ask: how are the logic functions of this device determined? Electronic inputs (voltages and currents) are applied and outputs are measured. A truth table is constructed, and from this table the logic functions of the device, and at times some of its components, may be inferred. The device is not subjected to the approach toward a chemical mechanism described in the previous chapter, that of taking the device apart and testing its simplest components. (That may have to be done sometimes but is to be avoided if possible.) Can such an approach be applied to chemical systems? We show this to be the case by discussing the implementation of logic and computational devices, both sequential machines such as a universal Turing machine (hand computers, laptops) and parallel machines, by means of macroscopic kinetics; by giving a brief comparison with neural networks; by showing the presence of such devices in chemical and biochemical reaction systems; and by presenting some confirming experiments. The next step is clear: if macroscopic chemical kinetics can carry out these electronic functions, then there are likely to be new approaches possible for the determination of complex reaction mechanisms, analogs of such determinations for electronic components. The discussion in the remainder of this chapter is devoted to illustrations of these topics; it can be skipped, except for the last paragraph, without loss of continuity with chapter 5 and beyond. A neuron is either on or off depending on the signals it has received. A chemical neuron is a similar device.
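
To make the analogy concrete, here is a minimal sketch (my own construction, not code from the chapter; the rate constants and the threshold are assumed values) of a mass-action "AND gate": an output species Z is produced only when both input species A and B are present, and thresholding its steady-state concentration reproduces the truth table obtained by probing inputs and reading outputs.

```python
# Hypothetical mass-action "AND gate" (illustrative values only):
# dZ/dt = k*A*B - d*Z, so the steady state is Z_ss = k*A*B/d.
k, d, threshold = 1.0, 1.0, 0.5

def output(A, B):
    Z_ss = k * A * B / d           # steady-state output concentration
    return int(Z_ss > threshold)   # gate is "on" if Z_ss exceeds the threshold

# Probe the inputs and read the outputs, as one would for an electronic device,
# and assemble the truth table.
for A in (0.0, 1.0):
    for B in (0.0, 1.0):
        print(int(A), int(B), "->", output(A, B))
```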


Author(s): John Ross, Igor Schreiber, Marcel O. Vlad

There is enormous interest in the biology of complex reaction systems, be it in metabolism, signal transduction, gene regulatory networks, protein synthesis, or many other processes. The field of the interpretation of experiments on such systems by application of the methods of information science, computer science, and biostatistics is called bioinformatics (see for a presentation of this subject). Part of it is an extension of the chemical approaches that we have discussed for obtaining information on the reaction mechanisms of complex chemical systems to complex biological and genetic systems. We present here a very brief introduction to this field, which is exploding with scientific and technical activity. No review is intended, only an indication of several approaches on the subject of our book, with apologies for the omission of vast numbers of publications. A few reminders: The entire complement of DNA molecules constitutes the genome, which consists of many genes. RNA is generated from DNA in a process called transcription; the RNA that codes for proteins is known as messenger RNA, abbreviated to mRNA. Other RNAs code for functional molecules such as transfer RNAs, ribosomal components, and regulatory molecules, or even have enzymatic function. Protein synthesis is regulated by many mechanisms, including those for transcription initiation, RNA splicing (in eukaryotes), mRNA transport, translation initiation, post-translational modifications, and degradation of mRNA. Proteins perform perhaps most cellular functions. Advances in microarray technology, with the use of cDNA or oligonucleotides immobilized in a predefined organization on a solid phase, have led to measurements of mRNA expression levels on a genome-wide scale (see chapter 3). The results of the measurements can be displayed on a plot in which a row represents one gene at various times, a column represents the whole set of genes at a given time, and time runs along the rows. The changes in expression levels, as measured by fluorescence, are indicated by colors, for example green for decreased expression, black for no change in expression, and red for increased expression. Responses in expression levels have been measured for various biochemical and physiological conditions. We turn now to a few methods of obtaining information on genomic networks from microarray measurements.
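
The color display described above can be sketched as follows; the gene-by-time matrix here is synthetic random data used only to show the plotting convention (green for decreased, black for unchanged, red for increased expression), not measurements from any experiment discussed here.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

rng = np.random.default_rng(0)
# Synthetic log2 expression ratios: 20 genes (rows) x 10 time points (columns).
data = rng.normal(0.0, 1.0, size=(20, 10))

# Green (decreased) -> black (no change) -> red (increased), as in microarray displays.
cmap = LinearSegmentedColormap.from_list("microarray", ["green", "black", "red"])

plt.imshow(data, cmap=cmap, vmin=-2, vmax=2, aspect="auto")
plt.xlabel("time point")
plt.ylabel("gene")
plt.colorbar(label="log2 expression ratio")
plt.show()
```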


Author(s): John Ross, Igor Schreiber, Marcel O. Vlad

The mathematical computational method of genetic algorithms is frequently useful in solving optimization problems in systems with many parameters, for example, a search for suitable parameters of a given problem that achieves a stated purpose. The method searches for these parameters in an efficient parallel way, and has some analogies with evolution. There are other optimization methods available, such as simulated annealing, but we shall use genetic algorithms. We shall present three different problems that give an indication of the diversity of applications. We begin with a very short primer on genetic algorithms, which can be omitted if the reader has some knowledge of this subject. Genetic algorithms (GAs) work with a coding of a parameter set, which in the field of chemical kinetics may consist of a number of parameters, such as rate coefficients; variables and constraints, such as concentrations; and other quantities such as chemical species. Binary coding for a parameter is done as follows. Suppose we have a rate coefficient of 9.08 × 10^−7; then if we write that rate coefficient as 10^−P, with −10 ≤ P ≤ 10, a binary coding with a string length of 16 bits is given by

P = 10 − 20R/(2^16 − 1)     (10.1)

where 0 ≤ R ≤ 2^16 − 1. Since P = 6.04 we have R = 12,970, or R = 0011001010101010 to the base 2. Thus the value of the rate coefficient is encoded in a single bit string, called a chromosome. For the solution of a given problem an optimization criterion must be chosen. With a given choice of parameters this criterion is calculated; the comparison of that calculation with the goal set for the criterion gives a fitness value for that set of parameters. If the fitness of an individual is adequate but not yet sufficient (both levels being set by prior choice), retain that individual for the next generation; reject individuals that fall below the adequate level. Select individuals for the next generation from a roulette wheel on which the slot size, and hence the selection probability, is proportional to the fitness value. Notice that genetic algorithms use probabilistic, not deterministic, transition rules.
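
A minimal sketch of the 16-bit encoding of eq. (10.1) and of roulette-wheel selection is given below; the function names and the use of NumPy are choices made for this illustration rather than code from the chapter.

```python
import numpy as np

N_BITS = 16

def encode(k):
    """Encode a rate coefficient k = 10**(-P), with -10 <= P <= 10, as a 16-bit string."""
    P = -np.log10(k)
    R = round((10.0 - P) * (2**N_BITS - 1) / 20.0)   # inverts eq. (10.1)
    return format(R, f"0{N_BITS}b")

def decode(bits):
    """Recover the rate coefficient from a 16-bit chromosome."""
    R = int(bits, 2)
    P = 10.0 - 20.0 * R / (2**N_BITS - 1)            # eq. (10.1)
    return 10.0 ** (-P)

def roulette_select(population, fitness, rng):
    """Pick one individual with probability proportional to its fitness (roulette wheel)."""
    p = np.asarray(fitness, dtype=float)
    return population[rng.choice(len(population), p=p / p.sum())]

chrom = encode(9.08e-7)
print(chrom, decode(chrom))   # ~9.08e-07, up to the 16-bit quantization error

rng = np.random.default_rng(1)
pop = [chrom, encode(1.0e-3), encode(2.5e-9)]
print(roulette_select(pop, [0.9, 0.4, 0.1], rng))
```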


Author(s): John Ross, Igor Schreiber, Marcel O. Vlad

Consider a chemical reaction system with many chemical species; it may be in a transient state but it is easier to think of it in a stable stationary state, not necessarily but usually away from equilibrium. We wish to probe the responses of the concentrations of the chemical species to a pulse perturbation of one of the chemical species. The pulse need not be small; it can be of arbitrary magnitude. This is analogous to providing a given input to one variable of an electronic system and measuring the outputs of the other variables. The method presented in this chapter gives causal connectivities of one reacting species with another as well as regulatory features of a reaction network. Much more will be said about the responses of chemical and other systems to pulses and other perturbations in chapter 12. The effects of small perturbations on reacting systems have been investigated in a number of studies, to which we return in chapters 9 and 13. Let us begin simply: Consider a series of first-order reactions as in fig. 5.1, which shows an unbranched chain of reversible reactions. We shall not be restricted to first-order reactions but can learn a lot from this example. Let there be an influx of k0 molecules of X1 and an outflow of k8 molecules of X8 per unit time. We assume that the reaction proceeds from left to right and hence the Gibbs free energy change for each step and for the overall reaction in that direction is negative. The mass action law for the kinetic equations, say that of X2, is

dX2/dt = k1 X1 + k−2 X3 − (k−1 + k2) X2     (5.1)

If all the time derivatives of the concentrations are zero, then the system is in a stationary state. Suppose we perturb that stationary state with an increase in X1 by an arbitrary amount and solve the kinetic equations numerically for the variations of the concentrations as a function of time, as the system returns to the stationary state. A plot of such a relaxation is shown in fig. 5.2.
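
Such a numerical pulse experiment can be sketched as follows for an unbranched reversible chain; the chain length, rate constants, and pulse size are assumed values chosen for illustration and are not those behind fig. 5.2.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Unbranched reversible chain X1 <-> X2 <-> ... <-> X8 with a constant influx to X1
# and a first-order outflow from X8.  All rate constants are assumed values.
n = 8
kf = np.full(n - 1, 1.0)     # forward rate constants
kr = np.full(n - 1, 0.5)     # reverse rate constants (kf > kr: reaction runs left to right)
k0, k8 = 1.0, 1.0            # influx to X1, outflow coefficient for X8

def rhs(t, x):
    dx = np.zeros(n)
    dx[0] += k0
    dx[-1] -= k8 * x[-1]
    for i in range(n - 1):
        flux = kf[i] * x[i] - kr[i] * x[i + 1]   # net flux through step i
        dx[i] -= flux
        dx[i + 1] += flux
    return dx

# Relax to the stationary state, then apply an arbitrary (not small) pulse to X1
# and follow the return of all concentrations to the stationary state.
x_ss = solve_ivp(rhs, (0, 500), np.ones(n), rtol=1e-10, atol=1e-12).y[:, -1]
x0 = x_ss.copy()
x0[0] += 2.0
sol = solve_ivp(rhs, (0, 60), x0, t_eval=np.linspace(0, 60, 601))
print(np.round(sol.y[:, -1] - x_ss, 4))   # the perturbation has largely decayed back
```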


Author(s): John Ross, Igor Schreiber, Marcel O. Vlad

Chemical kinetics as a science has existed for more than a century. It deals with the rates of reactions and the details of how a given reaction proceeds from reactants to products. In a chemical system with many chemical species, there are several questions to be asked: What species react with what other species? In what temporal order? With what catalysts? And with what results? The answers constitute the macroscopic reaction mechanism. The process can be described macroscopically by listing the reactants, intermediates, products, and all the elementary reactions and catalysts in the reaction system. The present book is a treatise and text on the determination of complex reaction mechanisms in chemistry and in chemical reaction systems that occur in chemical engineering, biochemistry, biology, biotechnology, and genomics. A basic knowledge of chemical kinetics is assumed. Several approaches are suggested for the deduction of information on the causal chemical connectivity of the species, on the elementary reactions among the species, and on the sequence of the elementary reactions that constitute the reaction pathway and the reaction mechanism. Chemical reactions occur by the collisions of molecules, and such an event is called an elementary reaction for specified reactant and product molecules. A balanced stoichiometric equation for an elementary reaction yields the number of each type of molecule according to conservation of atoms, mass, and charge. Figure 1.1 shows a relatively simple reaction mechanism for the decomposition of ozone by light, postulated to occur in a series of three elementary steps. (The details of collisions of molecules and bond rearrangements are not discussed.) All approaches are based on the measurements of the concentrations of chemical species in the whole reaction system, not on parts, as has been the practice. One approach is called the pulse method, in which a pulse of concentration of one or more species of arbitrary strength is applied to a reacting system and the responses of as many species as possible are measured. From these responses causal chemical connectivities may be inferred. The basic theory is explained, demonstrated on a model mechanism, and tested in an experiment on a part of glycolysis.
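
As a small illustration of the bookkeeping behind a balanced elementary step, the sketch below (an illustrative helper of my own, not part of the book) checks atom conservation for simple formulas, using the photolytic step O3 + hν → O2 + O as an example; charge and mass balance are not checked here.

```python
import re
from collections import Counter

def atom_count(formula):
    """Count atoms in a simple formula such as 'O3' or 'O2' (no parentheses handled)."""
    counts = Counter()
    for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(num) if num else 1
    return counts

def balanced(reactants, products):
    """True if an elementary step conserves atoms on both sides."""
    left = sum((atom_count(f) for f in reactants), Counter())
    right = sum((atom_count(f) for f in products), Counter())
    return left == right

# The photolytic step O3 + hv -> O2 + O (the photon carries no atoms).
print(balanced(["O3"], ["O2", "O"]))   # True
```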


Author(s): John Ross, Igor Schreiber, Marcel O. Vlad

In this chapter we present an experimental test case of the deduction of a reaction pathway and mechanism by means of correlation metric construction (CMC) from time-series measurements of the concentrations of chemical species. We choose as the system an enzymatic reaction network, the initial steps of glycolysis. Glycolysis is central in intermediary metabolism and has a high degree of regulation. The reaction pathway has been well studied and thus it is a good test for the theory. Further, the reaction mechanism of this part of glycolysis has been modeled extensively. The quantity and precision of the measurements reported here are sufficient to determine the matrix of correlation functions and, from this, a reaction pathway that is qualitatively consistent with the reaction mechanism established previously. The existence of unmeasured species did not compromise the analysis. The quantity and precision of the data were not excessive, and thus we expect the method to be generally applicable. This CMC experiment was carried out in a continuous-flow stirred-tank reactor (CSTR). The reaction network considered consists of eight enzymes, which catalyze the conversion of glucose into dihydroxyacetone phosphate and glyceraldehyde phosphate. The enzymes were confined to the reactor by an ultrafiltration membrane at the top of the reactor. The membrane was permeable to all low molecular weight species. The inputs are (1) a reaction buffer, which provides starting material for the reaction network to process, maintains pH and pMg, and contains any other species that act as constant constraints on the system dynamics, and (2) a set of “control species” (at least one), whose input concentrations are changed randomly every sampling period over the course of the experiment. The sampling period is chosen such that the system almost, but not quite, relaxes to a chosen nonequilibrium steady state. The system is kept near enough to its steady state to minimize trending (caused by the relaxation) in the time series, but far enough from the steady state that the time-lagged autocorrelation functions for each species decay to zero over three to five sampling periods. This long decay is necessary if temporal ordering in the network is to be analyzed.
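
In practice the adequacy of the sampling period can be judged by computing the time-lagged autocorrelation function of each measured series; the sketch below does this for a synthetic series (an assumed first-order autoregressive process standing in for a measured concentration), not for the glycolysis data.

```python
import numpy as np

def lagged_autocorrelation(x, max_lag):
    """Normalized time-lagged autocorrelation of a time series (mean removed)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

# Synthetic series sampled once per sampling period: an AR(1) process whose
# autocorrelation decays over a few lags, standing in for a measured concentration.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, len(x)):
    x[t] = 0.6 * x[t - 1] + rng.normal()

print(np.round(lagged_autocorrelation(x, 8), 2))   # falls toward zero over ~3-5 lags
```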


Author(s): John Ross, Igor Schreiber, Marcel O. Vlad

It is useful to have a brief discussion of some kinetic processes that we shall treat in later chapters. Some, but not all, of the material in this chapter is presented in [1] in more detail. A macroscopic, deterministic chemical reacting system consists of a number of different species, each with a given concentration (molecules or moles per unit volume). The word “macroscopic” implies that the concentrations are of the order of Avogadro’s number (about 6.02 × 10^23) per liter. The concentrations are well defined at a given instant, that is, thermal fluctuations away from the average concentration are negligibly small (more in section 2.3). The kinetics in many cases, but far from all, obeys mass action rate expressions of the type

dA/dt = k(T) A^α B^β · · ·     (2.1)

where T is temperature, A is the concentration of species A, the same for B and possibly other species indicated by the dots in the equation, and α and β are empirically determined “orders” of reaction. The rate coefficient k is generally a function of temperature and frequently a function of T only. The dependence of k on T is given empirically by the Arrhenius equation

k(T) = C exp(−Ea/RT)     (2.2)

where C, the frequency factor, is either nearly constant or a weak function of temperature, and Ea is the activation energy. Rate coefficients are averages of reaction cross-sections, as measured for example by molecular beam experiments. The a priori calculation of cross-sections from quantum mechanical fundamentals is extraordinarily difficult and has been done to good accuracy only for the simplest triatomic systems (such as D + H2). A widely used alternative approach is based on activated complex theory. In its simplest form, two reactants collide and form an activated complex, said to be in equilibrium with the reactants. One degree of freedom of the complex, a vibration, is allowed to lead to the dissociation of the complex to products, and the rate of that dissociation is taken to be the rate of the reaction.
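
For concreteness, eq. (2.2) can be evaluated directly; the frequency factor and activation energy below are assumed illustrative values, not data for any particular reaction.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(T, C, Ea):
    """Arrhenius rate coefficient, eq. (2.2): k(T) = C * exp(-Ea / (R*T))."""
    return C * np.exp(-Ea / (R * T))

# Assumed illustrative values: C = 1e13 s^-1, Ea = 80 kJ/mol.
for T in (300.0, 350.0):
    print(T, arrhenius(T, C=1e13, Ea=80.0e3))
# A plot of ln k against 1/T would be a straight line with slope -Ea/R.
```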


Author(s): John Ross, Igor Schreiber, Marcel O. Vlad

We discussed some aspects of the responses of chemical systems, linear or nonlinear, to perturbations on several earlier occasions. The first was the response of the chemical species in a reaction mechanism (a network) in a nonequilibrium stable stationary state to a pulse in the concentration of one species. We referred to this approach as the “pulse method” (see chapter 5 for theory and chapter 6 for experiments). Second, we studied the time series of the responses of concentrations to repeated random perturbations, the formulation of correlation functions from such measurements, and the construction of the correlation metric (see chapter 7 for theory and chapter 8 for experiments). Third, in the investigation of oscillatory chemical reactions we showed that the responses of a chemical system in a stable stationary state close to a Hopf bifurcation are related to the category of the oscillatory reaction and to the role of the essential species in the system (see chapter 11 for theory and experiments). In each of these cases the responses yield important information about the reaction pathway and the reaction mechanism. In this chapter we focus on the design of simple types of response experiments that make it possible to extract mechanistic and kinetic information from complex nonlinear reaction systems. The main idea is to use “neutral” labeled compounds (tracers), which have the same kinetic and transport properties as the unlabeled compounds. In our previous work we have shown that by using neutral tracers a class of response experiments can be described by linear response laws, even though the underlying kinetic equations are highly nonlinear. The linear response is not the result of a linearization procedure but is due to the use of neutral tracers. As a result the response is linear even for large perturbations, making it possible to investigate global nonlinear kinetics by means of linear mathematical techniques. Moreover, the susceptibility functions from the response law are related to the probability densities of the lifetimes and transit times of the various chemical species, making it easy to establish a connection between the response data and the mechanism and kinetics of the process.
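
The origin of the linearity can be seen in a toy model (my own construction, not taken from this chapter): an open system with nonlinear second-order consumption in which a fraction of the constant inflow is isotopically labeled. Because labeled and unlabeled molecules react identically, the labeled concentration obeys a linear equation even though the total kinetics is nonlinear, so doubling a tracer pulse doubles the response.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy open system (assumed model): constant total inflow J, nonlinear consumption k*A^2.
# A fraction phi(t) of the inflow is labeled; labeled and unlabeled molecules are
# kinetically identical ("neutral" tracer).
J, k = 1.0, 1.0

def phi(t, eps):
    """Labeled fraction of the inflow: a baseline of 0.1 plus a square pulse of height eps."""
    return 0.1 + (eps if 10.0 <= t <= 12.0 else 0.0)

def rhs(t, y, eps):
    Au, Al = y                    # unlabeled and labeled concentrations, A = Au + Al
    A = Au + Al
    cons = k * A * A              # total (nonlinear) consumption rate
    return [J * (1.0 - phi(t, eps)) - cons * Au / A,
            J * phi(t, eps) - cons * Al / A]

def tracer_response(eps):
    A_ss = np.sqrt(J / k)                        # stationary total concentration
    y0 = [0.9 * A_ss, 0.1 * A_ss]                # stationary state with 10% labeled
    kwargs = dict(t_eval=np.linspace(0.0, 40.0, 401), max_step=0.05)
    pulsed = solve_ivp(rhs, (0.0, 40.0), y0, args=(eps,), **kwargs)
    base = solve_ivp(rhs, (0.0, 40.0), y0, args=(0.0,), **kwargs)
    return pulsed.y[1] - base.y[1]               # deviation of the labeled concentration

r1 = tracer_response(0.05)
r2 = tracer_response(0.10)
print(np.max(np.abs(r2 - 2.0 * r1)))   # ~0 (up to integration error): the response is linear
```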


Author(s): John Ross, Igor Schreiber, Marcel O. Vlad

Oscillating chemical reactions have the distinct property of a periodic or aperiodic oscillatory course of the concentrations of the reacting chemical species and, in some cases, of temperature. This behavior is due to an interplay of positive and negative feedback with alternating dominance of these two dynamic effects. For example, an exothermic reaction produces heat that increases temperature, which in turn increases the reaction rate and thus produces more heat. Such a thermokinetic effect is thus autocatalytic and represents a positive feedback. When the reaction is run in a flow-through reactor with a cooling jacket, the autocatalysis is eventually suppressed if the reactant is consumed faster than it is supplied. At the same time, the excess heat is being removed via the jacket, which tends to quench the system. The latter two processes are inhibitory and represent a negative feedback. If the heat removal is slow enough so as not to suppress the autocatalysis entirely, but fast enough for the temperature to drop before there is enough reactant available via the feed to restore autocatalysis, then there are oscillations in both temperature and the concentration of the reactant. Examples of these thermokinetic oscillations are combustion reactions, which typically take place either in a homogeneous gaseous or liquid phase or in the presence of a solid catalyst, the latter representing a heterogeneous reaction system. Of more interest in the present context are reactions where thermal effects are negligible, or where the system is maintained at constant temperature, as is the case with homogeneous chemical reactions taking place in a thermostated flow-through reactor, as well as with biochemical reactions in living cells and organisms. Autocatalysis can easily be realized in isothermal systems, where instead of a heat-producing reaction there will typically be a closed reaction pathway, such that the species involved are produced faster by reactions along the pathway than they are consumed by removal reactions. As an example, let us examine the well-known Belousov–Zhabotinsky (BZ) reaction of bromate with malonic acid catalyzed by cerium ions in acidic solution.
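
A generic isothermal autocatalytic oscillator can be integrated in a few lines; the sketch below uses the Brusselator, a standard two-variable textbook model chosen only to illustrate the interplay of autocatalysis and consumption, and is not a model of the BZ mechanism; the parameter values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Brusselator: a generic autocatalytic model (the x^2*y term is the positive feedback,
# the consumption of x and y the negative feedback).  For b > 1 + a^2 the steady state
# is unstable and sustained concentration oscillations (a limit cycle) appear.
a, b = 1.0, 3.0

def brusselator(t, z):
    x, y = z
    return [a - (b + 1.0) * x + x * x * y,
            b * x - x * x * y]

sol = solve_ivp(brusselator, (0.0, 50.0), [1.0, 1.0], max_step=0.01)
print(sol.y[0].min(), sol.y[0].max())   # x oscillates over a wide range of concentrations
```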


Author(s): John Ross, Igor Schreiber, Marcel O. Vlad

In this chapter we develop further the theory of the correlation method introduced in chapter 7. Consider the expression of the pair correlation function in which an ensemble average replaces the average over a time series of experiments on a single system. The pair correlation function defined in eq. (7.1) is the second moment of the pair distribution function and is obtained by integration. We choose a new measure of the correlation distance, one based on an information theoretical formulation. A natural measure of the correlation distance between two variables is the number of states jointly available to them (the size of the support set) compared to the number of states available to them individually. We therefore require that the measure of the statistical closeness between variables X and Y be the fraction of the number of states jointly available to them versus the total possible number of states available to X and Y individually. Further, we demand that the measure of the support sets weighs the states according to their probabilities. Thus, two variables are close and the support set is small if the knowledge of one predicts the most likely state of the other, even if there exists simultaneously a substantial number of other states. The information entropy gives the distance we demand in these requirements. The effective size of the support set of a continuous variable is

S(X) = e^{h(X)}     (9.2)

in which the entropy h(X) is defined by

h(X) = −∫ p(x) ln p(x) dx

where the integral is taken over the support set of X and p(x) is the probability density of X. Similarly, we denote the entropy of a pair of continuous variables X, Y as h(X,Y), which is related to the pair distribution function p(x,y) by an equation analogous to that for h(X).
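
As a numerical illustration (my construction, assuming jointly Gaussian variables, for which the differential entropies have closed forms), the sketch below computes the effective support sizes and the ratio of the joint support to the product of the individual supports; the ratio is 1 for independent variables and shrinks as the correlation grows.

```python
import numpy as np

def support_sizes(var_x, var_y, rho):
    """Effective support sizes S = e^h for jointly Gaussian X, Y with correlation rho."""
    hx = 0.5 * np.log(2.0 * np.pi * np.e * var_x)            # h(X) for a Gaussian
    hy = 0.5 * np.log(2.0 * np.pi * np.e * var_y)
    det = var_x * var_y * (1.0 - rho**2)                     # determinant of the covariance
    hxy = 0.5 * np.log((2.0 * np.pi * np.e) ** 2 * det)      # joint entropy h(X,Y)
    return np.exp(hx), np.exp(hy), np.exp(hxy)

for rho in (0.0, 0.9, 0.99):
    Sx, Sy, Sxy = support_sizes(1.0, 1.0, rho)
    # Ratio of the joint support to the product of the individual supports:
    # 1 for independent variables, smaller as knowledge of X predicts Y more sharply.
    print(rho, Sxy / (Sx * Sy))
```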

