Perspectives on Adaptation in Natural and Artificial Systems
Published By Oxford University Press

9780195162929, 9780197562116

Author(s):  
Kenneth J. Arrow

John Holland's work has combined what appears to be a complicated but technical contribution to giving approximate solutions for a class of difficult problems with a deepening of our understanding of the way we all do induction, of how the experience of the world modifies and improves our behavior and our decisions. Such a comprehension necessarily alters the viewpoint of the behavioral sciences. In this chapter, I want to concentrate on some of the already apparent ways in which he has altered our understanding of the complex dynamic system that constitutes the economic world. Of course, the genetic algorithm can be and has been used as a means of solving hard problems in economic analysis, as in any other field. That is, it is an aid to the analyst, and a powerful one. I want, however, to emphasize the second aspect of Holland's work, the implications of the genetic algorithm as a description of human problem-solving behavior in a complicated world. The economic world is complicated partly because it depends on the physical and biological world which governs the techniques of production. More interestingly, though, the economic world is complicated because the individuals in it are interacting through markets. It is an old observation among economists, going back to Adam Smith's observation of the invisible hand, that economic events are the results of human actions but are not necessarily an achievement of human intentions. In the murk of the economic world, individuals have to act. They have to make choices as to what they will consume, how much they will save, what goods they will produce and how they will produce them, and then what investments they will make. They make these choices with a view to their consequences, personal satisfaction today or in their future or the satisfaction of their heirs, the profits to be made now or in the future by producing goods, and the returns on their investments. 
These choices have an important time dimension; people last, as do many of the things they buy, make, or sell, and the outcomes of current decisions depend on events which will occur in the future. The life of the decisionmaker is uncertain; so is his or her health.


Author(s):  
Herbert A. Simon

In both the GA and GOFAI traditions, invention or design tasks are viewed as instances of problem solving. To invent or design is to describe an object that performs, in a range of environments, some desired function or serves some intended purpose; the process of arriving at the description is a problem-solving process. In problem solving, the desired object is characterized in two different ways. The problem statement or goal statement characterizes it as an object that satisfies certain criteria of structure and/or performance. The problem solution describes in concrete terms an object that satisfies these criteria. The problem statement specifies what needs to be done; the problem solution describes how to do it [9]. This distinction between the desired object and the achieved object, between problem statement and problem solution, is absolutely fundamental to the idea of solving a problem, for it resolves the paradox of Plato's Meno: How do we recognize the solution of a problem unless we already know it in advance? The simple answer to Plato is that, although the problem statement does not define a solution, it contains the criteria for recognizing a solution, if and when found. Knowing and being able to apply the recognition test is not equivalent to knowing the solution. Being able to determine, for any given electrical circuit, whether it would operate, to a sufficiently good approximation, as a low-pass filter does not imply that one knows a design for a circuit that meets this condition. In asserting that we do not know the solution in advance, we must be careful to state accurately what the problem is. In theorem proving, for example, we may know, to the last detail, the expression we are trying to prove; what we do not know is what proof (what sequence of expressions, each following inferentially from the set of its predecessors) will terminate in the specified one.
Wiles knew well the mathematical expression that is Fermat's last theorem; he spent seven years or more finding its proof. In the domain of theorem proving, the proof is the problem solution and the recognition criteria are the tests that determine whether each step in the proof follows from its predecessors and whether the proof terminates in the desired theorem.
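Simon's distinction between knowing the recognition test and knowing the solution can be made concrete in a few lines. The sketch below is my illustration, not the chapter's: the predicate encodes a problem statement's criteria and can verify any candidate, yet the solution itself must still be found by search.

```python
# Recognition test: encodes the problem statement's criteria.
# It can verify a candidate, but it does not name the solution.
def is_solution(n):
    return n > 0 and n * n == 144

# Problem-solving process: search the space until the test is passed.
def solve(candidates):
    for n in candidates:
        if is_solution(n):
            return n
    return None

print(solve(range(1, 1000)))  # prints 12
```

Knowing `is_solution` is not the same as knowing that the answer is 12; the search process bridges the gap between problem statement and problem solution.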


Author(s):  
Arthur W. Burks

This is the story of how, in 1957, John Holland, a graduate student in mathematics; Gordon Peterson, a professor of speech; the present writer, a professor of philosophy; and several other Michigan faculty started a graduate program in Computers and Communications—with John our first Ph.D. and, I believe, the world's first doctorate in this now-burgeoning field. This program was to become the Department of Computer and Communication Sciences in the College of Literature, Science, and the Arts about ten years later. It had arisen also from a research group at Michigan on logic and computers that I had established in 1949 at the request of the Burroughs Adding Machine Company. When I first met John in 1956, he was a graduate of MIT in electrical engineering, and one of the few people in the world who had worked with the relatively new electronic computers. He had used the Whirlwind I computer at MIT [33], which was a process-control variant of the Institute for Advanced Study (IAS) Computer [27]. He had also studied the 1946 Moore School Lectures on the design of electronic computers, edited by George Patterson [58]. He had then gone to IBM and helped program its first electronic computer, the IBM 701, the first commercial version of the IAS Computer. While a graduate student in mathematics at Michigan, John was also doing military work at the Willow Run Research Laboratories to support himself. And I had been invited to the Laboratories by a former student of mine, Dr. Jesse Wright, to consult with a small research group of which John was a member. It was this meeting that led to the University's graduate program and then the College's full-fledged department. The Logic of Computers Group, out of which this program arose, in part, then continued with John as co-director, though each of us did his own research.
This anomaly of a teacher of philosophy meeting an accomplished electrical engineer in the new and very small field of electronic computers needs some explanation, one to be found in the story of the invention of the programmable electronic computer. For the first three programmable electronic computers (the manually programmed ENIAC and the automatically programmed EDVAC and Institute for Advanced Study Computer) and their successors constituted both the instrumentation and the subject matter of our new Graduate Program in Computers and Communications.


Author(s):  
Kenneth De Jong

I continue to be surprised and pleased by the dramatic growth of interest in and applications of genetic algorithms (GAs) in recent years. This growth, in turn, has placed a certain amount of healthy "stress" on the field as current understanding and traditional approaches are stretched to the limit by challenging new problems and new areas of application. At the same time, other forms of evolutionary computation, such as evolution strategies [50] and evolutionary programming [22], continue to mature and provide alternative views on how the process of evolution might be captured in an efficient and useful computational framework. I don't think there can be much disagreement about the fact that Holland's initial ideas for adaptive system design have played a fundamental role in the progress we have made in the past thirty years [23, 46]. So, an occasion like this is an opportunity to reflect on where the field is now, how it got there, and where it is headed. In the following sections, I will attempt to summarize the progress that has been made, and to identify critical issues that need to be addressed for continued progress in the field. The widespread availability of inexpensive digital computers in the 1960s gave rise to their increased use as a modeling and simulation tool by the scientific community. Several groups around the world, including Rechenberg and Schwefel at the Technical University of Berlin [49], Fogel et al. at the University of California at Los Angeles [22], and Holland at the University of Michigan in Ann Arbor [35], were captivated by the potential of taking early simulation models of evolution a step further and harnessing these evolutionary processes in computational forms that could be used for complex computer-based problem solving. In Holland's case, the motivation was the design and implementation of robust adaptive systems, capable of dealing with an uncertain and changing environment.
His view emphasized the need for systems which self-adapt over time as a function of feedback obtained from interacting with the environment in which they operate. This led to an initial family of "reproductive plans" which formed the basis for what we call "simple genetic algorithms" today, as outlined in figure 1.
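A "simple genetic algorithm" of the kind mentioned above can be sketched in a few dozen lines. This is a minimal illustration in the spirit of Holland's reproductive plans, not a reproduction of the chapter's figure 1: the OneMax fitness function, binary tournament selection, and all parameter values are my own assumptions.

```python
import random

def one_max(genome):
    return sum(genome)  # fitness: number of 1 bits in the string

def tournament(pop, fitness, rng, k=2):
    # pick k individuals at random; the fittest wins
    return max(rng.sample(pop, k), key=fitness)

def crossover(a, b, rng):
    point = rng.randrange(1, len(a))   # one-point crossover
    return a[:point] + b[point:]

def mutate(genome, rng, rate=0.01):
    # flip each bit independently with small probability
    return [1 - g if rng.random() < rate else g for g in genome]

def simple_ga(length=32, pop_size=40, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop, one_max, rng),
                                tournament(pop, one_max, rng), rng), rng)
               for _ in range(pop_size)]
    return max(one_max(g) for g in pop)

print(simple_ga())   # best fitness found; 32 would be optimal
```

Even this bare-bones version shows the characteristic GA loop: fitness-based selection of parents, recombination of their genetic material, and occasional mutation.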


Author(s):  
W. Brian Arthur

John Holland's ideas have always been marked by a deep instinct for the real—for what works. Thus, in his design of computer algorithms, he avoids mathematical formalisms and goes instead to deeper sources—to mechanisms drawn from biology. In his investigation of human cognitive thinking, he avoids frameworks based on deduction, logic, and choices over closed sets; and goes instead to induction, generative creation, and choices over open-ended possibilities. Running through all Holland's work, in fact, is an instinct for the generative and for the open-ended. Holland's worlds are ones where new entities are constantly created to be tested in the environment, and where these are not drawn from any closed and labeled collection of predetermined possibilities. This makes his science algorithmic rather than analytical, evolutionary rather than equilibrium-based, and novelty-generating rather than static. It makes his science, in a word, realistic. Insofar as the standard sciences are analytical, equilibrium-based, and nongenerative in their possibilities, Holland's thinking offers them a different approach. Here I want to see what a John Holland approach has to offer economics. My involvement with Holland's ideas began in the late summer of 1987. He and I were the first Visiting Fellows of the newly formed Santa Fe Institute, and we shared a house. I had taken up Holland's fascination with evolutionary algorithms, and, by 1988, John and I were attempting to design what was to become the first artificial stock market. It took me some time to realize that John Holland had thought deeply about a great deal more than evolutionary algorithms, and that he had interesting ideas also in psychology. Over the next thirteen years, I found myself applying Holland's thinking about cognition to problems within economics.
At first it appeared that Holland's approach—based largely on induction—applied best to specific problems, and I tried to think of the simplest possible problem in economics that would illustrate the need for induction. The result was my El Farol bar problem. Later I began to realize that economics does not need inductive approaches to specific problems as much as it needs to reexamine the foundations of its assumptions about decision making.
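For readers unfamiliar with it, the El Farol setup is simple: 100 agents each decide weekly whether to go to a bar that is enjoyable only if fewer than 60 attend, so any forecasting rule shared by everyone defeats itself, and agents must instead induce workable predictors from experience. The simulation below is my own minimal sketch of that dynamic; the particular predictor pool and error-based scoring are illustrative assumptions, not Arthur's original model.

```python
import random

CAPACITY, N_AGENTS, N_PREDICTORS, WEEKS = 60, 100, 3, 200

def make_predictors(rng):
    # A tiny illustrative pool of forecasting rules; each agent draws
    # a random subset and inductively learns which one works best.
    pool = [
        lambda h: h[-1],                           # same as last week
        lambda h: sum(h[-4:]) / len(h[-4:]),       # recent average
        lambda h: 100 - h[-1],                     # mirror of last week
        lambda h: h[-2] if len(h) > 1 else h[-1],  # two weeks ago
        lambda h: 50.0,                            # constant guess
    ]
    return rng.sample(pool, N_PREDICTORS)

def el_farol(seed=0):
    rng = random.Random(seed)
    agents = [make_predictors(rng) for _ in range(N_AGENTS)]
    errors = [[0.0] * N_PREDICTORS for _ in range(N_AGENTS)]
    history = [rng.randrange(100)]
    for _ in range(WEEKS):
        forecasts = [[p(history) for p in preds] for preds in agents]
        best = [min(range(N_PREDICTORS), key=lambda i: errors[a][i])
                for a in range(N_AGENTS)]
        # an agent attends if its current best predictor forecasts
        # attendance below the bar's comfortable capacity
        attendance = sum(forecasts[a][best[a]] < CAPACITY
                         for a in range(N_AGENTS))
        for a in range(N_AGENTS):   # score every predictor's accuracy
            for i in range(N_PREDICTORS):
                errors[a][i] += abs(forecasts[a][i] - attendance)
        history.append(attendance)
    return history

weekly = el_farol()
```

No agent deduces an optimal strategy; each merely keeps whichever predictor has erred least so far, and the ecology of competing predictors keeps attendance fluctuating rather than settling into a shared equilibrium.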


Author(s):  
Douglas R. Hofstadter

More or less simultaneously in the closing year of the twentieth century, there appeared a curious coterie of books whose central, sensational-sounding claim was that humanity was on the verge of producing its own successors, thereby rendering itself both obsolete and superfluous. Chief among these books were The Age of Spiritual Machines by computer engineer and industrialist Ray Kurzweil, Robot by Carnegie-Mellon computer science professor Hans Moravec, and The Spike by technology writer Damien Broderick. There were several others that at least treated this theme seriously, such as Out of Control by Kevin Kelly, an editor at Wired magazine. The science-fiction tone of these books is clearly revealed by their subtitles: When Computers Exceed Human Intelligence (Kurzweil), Mere Machine to Transcendent Mind (Moravec), Accelerating into the Unimaginable Future (Broderick), and The Rise of Neobiological Civilization (Kelly). There would have been little reason for a serious reader to pay any attention to these books and their wild-sounding claims, had their authors not had the most respectable of credentials and had the books not been reviewed in the most serious of venues, often favorably. Thus, Kurzweil's and Moravec's books were reviewed together in the New York Times Sunday Book Review in January 1999, and although the reviewer, Rutgers University philosophy professor Colin McGinn, had some skeptical words to say about their views of consciousness, he essentially accepted all of their technical claims, which are extraordinary, at face value. Scientific American gave Moravec's book its glowing "Editors' Choice." 
On almost the same spring day of 2000 as Ray Kurzweil was receiving from the hands of President Clinton the National Medal of Technology for his pioneering efforts to help the handicapped through the use of computers, an apocalyptic reaction to the Kurzweil and Moravec books, written by the well-known computer scientist Bill Joy (a co-founder of Sun Microsystems), appeared as a cover article in Wired under the title "Why the Future Doesn't Need Us."


Author(s):  
Oliver G. Selfridge

This chapter will cover a general discussion of changes and improvement in software systems. Nearly all such systems are today programmed; that is, all the steps that the software should perform are specified ahead of time by the programmer. There are three areas of exception to that overwhelmingly usual practice: the first is an increasing (although still comparatively minute) effort still called machine learning; a second is a popular but ill-defined practice termed neural networks; and the third is evolutionary computation (or genetic algorithms), the kind that was invented by John Holland and which has been gathering momentum and success for some time. This chapter will focus on some special aspects of that evolutionary process, and we propose extensions to those techniques and approaches. The basic idea is to regard each evolutionary unit as a control structure; we then build complexity by controlling each unit with others, each subject to continuing adaptation and learning. The essence of the control unit is its triple soul in a kind of feedback loop: it has a power to act, that is, to exert some choice of action; it has a sensor to perceive and filter the response that is external to it; and it must evaluate that response to generate and influence its next control action. The general evolutionary or genetic system uses but a single evolutionary feedback—life on earth, for example, considers "survival" as its primary feedback. Here the generational improvements reside in the genotype, and are merely expressed in the individual organisms that are the successive programs. This chapter stresses the concept of control by evolving units; the essence of the control is the establishment of evaluation functions in other units. It is then useful to consider each evaluation function as a lower-level purpose. 
A piece of evolutionary software, in this way of looking at it, is then a complex expression of a purpose structure, and all the units evolve with separate and usually different purposes. The conceptual and linguistic vocabularies must then be established to deal with the many different kinds and levels of purposes. Higher-level purposes can be as general as moral values, and the lowest ones may be merely setpoints that control where muscles or motors are trying to go.
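The act/sense/evaluate triple described above can be sketched as code. This is my minimal illustration of the idea, not Selfridge's design: each unit is a simple proportional controller, and the higher-level unit's action consists precisely of establishing the lower unit's setpoint, that is, its lower-level purpose.

```python
class ControlUnit:
    """One unit of the act / sense / evaluate feedback triple."""
    def __init__(self, setpoint=0.0, gain=0.5):
        self.setpoint = setpoint     # this unit's current purpose
        self.gain = gain

    def evaluate(self, sensed):
        # compare the sensed external response with the purpose
        return self.setpoint - sensed

    def act(self, sensed):
        # choose a corrective action proportional to the error
        return self.gain * self.evaluate(sensed)

# The higher unit's action installs the lower unit's setpoint (its
# lower-level purpose); the lower unit then drives the "world" state.
lower = ControlUnit(setpoint=0.0, gain=0.5)
higher = ControlUnit(setpoint=10.0, gain=0.2)

state = 0.0
for _ in range(100):
    lower.setpoint += higher.act(lower.setpoint)   # control the controller
    state += lower.act(state)                      # control the world

print(round(state, 2))   # prints 10.0
```

The two-level stack shows the chapter's point in miniature: the lowest purpose is a mere setpoint, while the unit above it pursues a more general goal by shaping the evaluation function of the unit below.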


Author(s):  
David E. Goldberg

Noted complex adaptive system researcher John H. Holland now receives acclaim from many quarters, but it is important to understand that this man and his ideas have been controversial since the beginning of his career. Genetic algorithms (GAs) were ignored or disparaged throughout the 1960s and 1970s, and even now, as these and his other ideas receive worldwide recognition in broad outline, the specifics of his mode of thought and insight are rejected by many who claim to embrace his key insights. This is a mistake. I have known John Holland for 23 years, and I have learned many things from him, but a critical influence has been his style of thought, in particular, his style of modeling. John has an uncanny knack of getting to the heart of a matter through the construction of what I call little models. Sometimes these models are verbal, sometimes they are mathematical, but they almost always shed a great deal of light on some nagging question in the analysis and design of complex systems. In this chapter, I propose to briefly explore John Holland's style of little modeling, and better understand its nature, its essence, and why some of those who embrace the broad outlines of his teaching have been slow to embrace the details of his modeling and the style of his thought. The exploration begins by recalling my own first impressions of John Holland and his style of thought, impressions made 23 years ago in a classroom in Ann Arbor, Michigan. It continues with a case study in Holland-style facetwise model building in constructing a takeover time model. It continues by integrating the takeover time model with a model of innovation on dimensional grounds. Finally, the Hollandian mode of model building is placed on intellectual terra firma with an economic argument, suggesting that the costs of modeling or thought must be weighed in relation to the model's benefits in understanding or designing a complex system.
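As one concrete taste of what a facetwise "little model" looks like, consider takeover time: the number of generations selection alone needs before the best individual fills the population. The recursion below is a standard textbook-style model, offered as my illustration rather than the chapter's own derivation; it tracks the expected proportion of the best individual under tournament selection of size s and agrees closely with the familiar logarithmic estimate (ln n + ln ln n)/ln s.

```python
import math

def takeover_time(n=1000, s=2):
    # Expected proportion p of the best individual under tournament
    # selection of size s: a slot fails to pick the best only if all
    # s sampled competitors are non-best, so p' = 1 - (1 - p)**s.
    p, generations = 1.0 / n, 0
    while p < 1.0 - 0.5 / n:          # until the best has taken over
        p = 1.0 - (1.0 - p) ** s
        generations += 1
    return generations

observed = takeover_time()
estimate = (math.log(1000) + math.log(math.log(1000))) / math.log(2)
print(observed, round(estimate, 1))   # agree to within one generation
```

The model ignores crossover, mutation, and stochastic drift, and that is exactly its virtue in the Holland style: one facet of the system, isolated cleanly enough to yield a usable quantitative insight.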


Author(s):  
Lashon Booker
Stephanie Forrest

It has long been known that the repeated or collective application of very simple rules can produce surprisingly complex organized behavior. In recent years several compelling examples have caught the public's eye, including chaos, fractals, cellular automata, self-organizing systems, and swarm intelligence. These kinds of approaches and models have been applied to phenomena in fields as diverse as immunology, neuroscience, cardiology, social insect behavior, and economics. The interdisciplinary study of how such complex behavior arises has developed into a new scientific field called "complex systems." The complex systems that most challenge our understanding are those whose behavior involves learning or adaptation; these have been named "complex adaptive systems." Examples of complex adaptive behavior include the brain's ability, through the collective actions of large numbers of neurons, to alter the strength of its own connections in response to experiences in an environment; the immune system's continual and dynamic protection against an onslaught of ever-changing invaders; the ability of evolving species to produce, maintain, and reshape traits useful to their survival, even as environments change; and the power of economic systems to reflect, in the form of prices, supplies, and other market characteristics, the collective preferences and desires of millions of distributed, independent individuals engaged in buying and selling. What is similar in these diverse examples is that global behavior arises from the semi-independent actions of many players obeying relatively simple rules, with little or no central control. Moreover, this global behavior exhibits learning or adaptation in some form, which allows individual agents or the system as a whole to maintain or improve the ability to make predictions about the future and act in accordance with these predictions. 
Traditional methods of science and mathematics have had limited success explaining (and predicting) such phenomena, and an increasingly common view in the scientific community is that novel approaches are needed, particularly those involving computer simulation. Understanding complex adaptive systems is difficult for several reasons. One reason is that in such systems the lowest level components (often called agents) not only change their behavior in response to the environment, but, through learning, they can also change the underlying rules used to generate their behavior.
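A one-dimensional cellular automaton gives a self-contained taste of the "simple rules, complex behavior" theme mentioned above. The choice of rule 30 and this rendering are illustrative, not drawn from the chapter: each cell's update depends only on itself and its two neighbors, yet the global pattern is famously irregular.

```python
RULE = 30  # bit i of the rule number is the next state for neighborhood i

def step(cells):
    # Each cell's next state depends only on itself and its two
    # neighbors (wrapping at the edges): an extremely simple local rule.
    n = len(cells)
    return [(RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i]
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                          # a single live cell
for _ in range(12):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Run for a dozen steps from a single live cell, the eight-entry lookup table produces a growing, aperiodic triangle of activity with no central controller anywhere in sight.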


Author(s):  
Julian Adams

Complexity in the living world can be seen at many different levels of organization. Even the simplest of free-living organisms possesses an extremely complex structure and metabolism [7], still incompletely understood in spite of concerted efforts over the last several decades by armies of molecular and cellular biologists. Populations of organisms can also be considered to possess their own intrinsic complexity, being comprised of assemblages of genetically different organisms. Although there may be a few exceptions (e.g., O'Brien and Wildt [11] and Cohn [2]), the genome of each member of a population can be considered to be genetically unique. One notable application of this observation has occurred in forensic science. In the last ten years or so, DNA typing, with its overwhelming power to identify individual members of a population, has been used in numerous criminal court cases to establish the guilt—or innocence—of defendants. A central issue in population genetics and evolutionary biology continues to be the explanation of the large amounts of genetic variability observed in natural populations of virtually all species examined. The search for mechanisms has mainly focused on patterns of selective differences (or lack thereof), which can maintain pre-existing variability in populations, and has largely ignored the more basic, but related question of the evolution of the more complex polymorphic state (genetic variation in populations) from the simpler condition of monomorphism (genetic uniformity). Simple population genetic theory has been remarkably unsuccessful in proposing plausible and global mechanisms which would result in such widespread variation. Heterozygous advantage is frequently invoked as a mechanism for the maintenance of genetic variation in populations of diploid sexually reproducing eukaryotes. 
However, the paucity of well-authenticated cases of overdominance, as well as theoretical difficulties implicit in the assumption of heterozygote superiority for many loci, makes it unlikely as a general explanation for the maintenance of polymorphism. Furthermore, the luxury of explanations involving heterozygous advantage is not available for haploid and asexually reproducing species. Alternatively, "neutral" theory postulates that genetically different individuals in a population do not differ in their ability to survive and pass on their genes to future generations—that is, they possess identical "fitnesses." The abundance of examples of fitness differences between individuals makes such an explanation unlikely.
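The overdominance mechanism discussed above is easy to make quantitative with the standard one-locus selection recursion of textbook population genetics (the fitness values below are illustrative choices, not the chapter's). When the heterozygote is fitter than both homozygotes, allele frequency converges to the interior equilibrium p* = t/(s + t), so the polymorphism is maintained; this is exactly the mechanism whose generality the chapter questions.

```python
def next_freq(p, s=0.1, t=0.3):
    # One generation of selection at a diploid locus with fitnesses
    # AA: 1 - s,  Aa: 1,  aa: 1 - t   (heterozygote advantage)
    q = 1.0 - p
    w_AA, w_Aa, w_aa = 1.0 - s, 1.0, 1.0 - t
    mean_w = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa
    return (p * p * w_AA + p * q * w_Aa) / mean_w

p = 0.01                    # start with allele A rare
for _ in range(500):
    p = next_freq(p)
print(round(p, 3))          # converges near t / (s + t) = 0.75
```

Neither allele is eliminated: selection pushes the frequency to an interior balance point rather than to fixation, which is why overdominance is so often invoked, and why its empirical rarity is the problem the chapter emphasizes.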

