The Second Age of Computer Science
Latest Publications


TOTAL DOCUMENTS

7
(FIVE YEARS 0)

H-INDEX

0
(FIVE YEARS 0)

Published By Oxford University Press

9780190843861, 9780197559826

Author(s):  
Subrata Dasgupta

At first blush, computing and biology seem an odd couple, yet they formed a liaison of sorts from the very first years of the electronic digital computer. Following a seminal paper published in 1943 by neurophysiologist Warren McCulloch and mathematical logician Walter Pitts on a mathematical model of neuronal activity, John von Neumann of the Institute for Advanced Study, Princeton, presented, at a symposium in 1948, a paper that compared the behaviors of computer circuits and neuronal circuits in the brain. The resulting publication was the fountainhead of what came to be called cellular automata in the 1960s. Von Neumann’s insight was the parallel between the abstraction of biological neurons (nerve cells) as natural binary (on–off) switches and the abstraction of physical computer circuit elements (at the time, relays and vacuum tubes) as artificial binary switches. His ambition was to unify the two and construct a formal universal theory. One remarkable aspect of von Neumann’s program was inspired by biology: His universal automata must be able to self-reproduce. So his neuron-like automata must be both computational and constructive. In 1955, invited by Yale University to deliver the Silliman Lectures for 1956, von Neumann chose as his topic the relationship between the computer and the brain. He died before being able to deliver the lectures, but the unfinished manuscript was published by Yale University Press under the title The Computer and the Brain (1958). Von Neumann’s definitive writings on self-reproducing cellular automata, edited by his one-time collaborator Arthur Burks of the University of Michigan, were eventually published in 1966 as the book Theory of Self-Reproducing Automata. A possible structure of a von Neumann–style cellular automaton is depicted in Figure 7.1. It comprises a (finite or infinite) configuration of cells in which each cell can be in one of a finite set of states. The state of a cell at any time t is determined by its own state and those of its immediate neighbors at the preceding time t – 1, according to a state transition rule.
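To make the state-transition mechanism concrete, here is a minimal sketch in Python (not von Neumann’s own construction, which used a two-dimensional grid of 29-state cells): a small ring of two-state cells is updated synchronously, each cell’s next state computed from its own state and its immediate neighbors’ states at the previous step. The particular transition rule is hypothetical, chosen only to exhibit the mechanism.

```python
def step(cells, rule):
    """Synchronously update every cell of a one-dimensional ring:
    each new state depends only on the cell and its two immediate
    neighbours at the previous time step."""
    n = len(cells)
    return [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

def rule(left, centre, right):
    # Hypothetical two-state rule: switch on when exactly one
    # neighbour was on; otherwise keep the current state.
    return 1 if left + right == 1 else centre

cells = [0, 0, 0, 1, 0, 0, 0]
for t in range(4):
    print(t, cells)
    cells = step(cells, rule)
```

Running the fragment shows the single “on” cell seeding a spreading pattern; the essential point is simply that all cells change state together, as a pure function of the configuration at time t – 1.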


Author(s):  
Subrata Dasgupta

When Caxton Foster of the University of Massachusetts published his book Computer Architecture in 1970, this term was only just being recognized, reluctantly, by the computing community. This was so despite an influential paper published in 1964 by a group of IBM engineers on the “Architecture of the IBM System/360.” For instance, ACM’s “Curriculum 68” made no mention of the term in its elaborate description of the entire scope of computing as an academic discipline. Rather, in the late 1960s and well into the ’70s, terms such as computer organization, computer structures, logical organization, computer systems organization, or, most blandly, computer design were preferred to describe computers in an abstract sort of way, independent of the physical (hardware) details. Thus a widely referenced paper by Michael Flynn of Stanford University, published in 1974, was titled “Trends and Problems in Computer Organization.” And Maurice Wilkes, even in the third edition of his Time-Sharing Computer Systems (1975), declined to use the term computer architecture. Yet computer architecture, as both an abstract way of looking at, understanding, and designing computers and as a field of computer science, emerged in the first years of the ’70s. The Institute of Electrical and Electronics Engineers (IEEE) founded a Technical Committee on Computer Architecture (TCCA) in 1970 to join the ranks of other specialist IEEE TCs. The Association for Computing Machinery (ACM) followed suit in 1971 by establishing, alongside other special-interest groups, the Special Interest Group on Computer Architecture (SIGARCH). And in 1974, the first of what came to be the annual International Symposium on Computer Architecture (ISCA) was held in Gainesville, Florida. By the end of the decade a series of significant textbooks and articles bearing the term computer architecture(s) had appeared. The reason for naming an aspect of the computer its “architecture,” and the reason for naming an academic and research discipline “computer architecture,” can be traced back to the mid-1940s and the paradigm-shaping unpublished reports by John von Neumann of the Institute for Advanced Study, Princeton, and his collaborators Arthur Burks and Herman Goldstine.


Author(s):  
Subrata Dasgupta

Human Problem Solving (1972) by Allen Newell and Herbert Simon of Carnegie-Mellon University, a tome of over 900 pages, was the summa of some 17 years of research by Newell, Simon, and their numerous associates (most notably Cliff Shaw, a highly gifted programmer at Rand Corporation) into “how humans think.” “How humans think,” of course, belonged historically to the psychologists’ turf. But what Newell and Simon meant by their project of “understanding . . . how humans think” was very different from how psychologists envisioned the problem before these two men invaded their milieu in 1958 with a paper on human problem solving in the prestigious Psychological Review. Indeed, professional psychologists must have looked at them askance. Neither was formally trained in psychology. Newell was originally trained as a mathematician, Simon as a political scientist. They both disdained disciplinary boundaries. Their curricula vitae proclaimed loudly their intellectual heterodoxy. At the time Human Problem Solving was published, Newell’s research interests straddled artificial intelligence, computer architecture, and (as we will see) what came to be called cognitive science. Simon’s multidisciplinary creativity, which earned him a reputation as a “Renaissance man” and encompassed administrative theory, economics, sociology, cognitive psychology, computer science, and the philosophy of science, was of near-mythical status by the early 1970s. Yet, for one prominent historian of psychology, it would seem that what Newell and Simon did had nothing to do with the discipline: the third edition of Georgetown University psychologist Daniel N. Robinson’s An Intellectual History of Psychology (1995) makes no mention of Newell or Simon. Perhaps this was because, as Newell and Simon explained, their study of thinking adopted a pointedly information processing perspective. Information processing: thus did the computer enter the conversation. But, Newell and Simon hastened to clarify, they were not suggesting a metaphor of humans as computers. Rather, they would propose an information processing system (IPS) that would serve to describe and explain how humans “process task-oriented symbolic information.” In other words, human problem solving, in their view, is an instance of representing information as symbols and processing them.


Author(s):  
Subrata Dasgupta

Every morning the first thing that X does is make tea for herself. She first turns on the stove and then, while the stove ring is heating up, she pours water from the faucet into the kettle. She then places the kettle on the stove ring, now nicely hot, and while the water is being heated she puts teabags into the teapot; she then pours milk from the milk carton into a milk jug and then puts the milk jug into the microwave oven. After the water starts to boil she pours water into the teapot. And while the tea “gathers strength” in the teapot, she presses the time button on the microwave to start warming the milk. After the milk is warmed she first pours tea from the teapot into a teacup and then adds milk from the warmed milk jug to the tea in the cup. This tiny, humdrum, comforting, domestic scenario X enacts every morning has many (but not all) of the ingredients of a situation that involves the scope and limits of parallel processing. More precisely, the art of making tea as practiced by X entails a blend of both sequential and parallel events. We note that certain events can take place in parallel (or concurrently) because they do not interfere with one another; for example, the heating of the stove and the pouring of water into the kettle. But other events must be sequentially ordered either because they interfere with one another or because one event must complete before the other can begin. The kettle can be placed on the stove ring only after it has been filled with water; water can be poured into the teapot only after the water has boiled. But notice also that there is some flexibility in the ordering of X’s actions. She can defer turning on the stove until after the kettle is placed on the stove ring; she can alter the ordering of pouring water into the teapot and placing teabags into the pot; she could defer warming the milk in the microwave until the tea has brewed.
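A minimal sketch of this reasoning, in Python, recasts X’s routine as a task-dependency graph; the task names and prerequisites below are paraphrased (and somewhat simplified) from the scenario, and the grouping merely collects tasks whose prerequisites are already complete, i.e., tasks that could in principle proceed in parallel.

```python
# Tasks and their prerequisites, loosely paraphrased from X's routine.
deps = {
    "turn_on_stove":        [],
    "fill_kettle":          [],
    "place_kettle_on_ring": ["fill_kettle"],
    "boil_water":           ["place_kettle_on_ring", "turn_on_stove"],
    "put_teabags_in_pot":   [],
    "pour_water_into_pot":  ["boil_water"],
    "brew_tea":             ["pour_water_into_pot", "put_teabags_in_pot"],
    "pour_milk_into_jug":   [],
    "put_jug_in_microwave": ["pour_milk_into_jug"],
    "warm_milk":            ["put_jug_in_microwave"],
    "pour_tea_into_cup":    ["brew_tea"],
    "add_milk_to_cup":      ["pour_tea_into_cup", "warm_milk"],
}

def waves(deps):
    """Group the tasks into successive 'waves': every task in a wave has
    all of its prerequisites completed, so the tasks within one wave do
    not depend on one another and could run concurrently.
    Assumes the dependency graph is acyclic."""
    done, result = set(), []
    while len(done) < len(deps):
        ready = [t for t, pre in deps.items()
                 if t not in done and all(p in done for p in pre)]
        result.append(ready)
        done.update(ready)
    return result

for i, wave in enumerate(waves(deps)):
    print(f"step {i}: {', '.join(wave)}")
```

The first wave contains turning on the stove, filling the kettle, putting teabags in the pot, and pouring milk into the jug, which mirrors the flexibility in ordering noted above; everything downstream of boiling the water is forced into sequence by its dependencies.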


Author(s):  
Subrata Dasgupta

Creative people are driven by certain inner forces, inner needs that are part cognitive, part affective. One such force is intellectual curiosity: the need to know or understand. Another compelling drive is dissatisfaction with the status quo. We saw this as the force that impelled Niklaus Wirth to create Pascal (Chapter 1, Section 1.7). But few in the emerging computer science community of the first age of computer science epitomized this characteristic more fiercely than Edsger W. Dijkstra. In his case his discontent was with the direction programming had taken in the 1960s. And the strength of his dissatisfaction was never more evident than in a letter to the editor of the Communications of the ACM in 1968. The practice of communicating new scientific results by their discoverers in the form of compact letters to the editors of scientific journals was, of course, well established in the natural sciences. The British journal Nature (London) had established this tradition right from its inaugural issue in 1869. But in an upstart discipline, as computer science still was, this practice as a means of scientific communication was quite unusual. (In one of his celebrated handwritten “EWD notes,” Dijkstra, reflecting retrospectively, explained that his short paper was published as a letter to bypass the usual publication pipeline and that the editor who made this decision was Niklaus Wirth.) Dijkstra had long been concerned with the question of program quality and how one may acquire confidence in the reliability or correctness of a program. But, as the title of the letter, “Go To Statement Considered Harmful,” tells us, the object of his discontent lay in the use of the goto statement: the unconditional branch available in one notation or another in most programming languages, including Algol-like ones. Dijkstra claimed that the quality of programmers decreases as a function of the density of goto statements in the programs they produce. And so he proposed that the goto should be banished from all high-level programming languages.


Author(s):  
Subrata Dasgupta

In 1969 a “Report on the Algorithmic Language ALGOL 68” was published in the journal Numerische Mathematik. The authors of the report were also its designers, all academic computer scientists: Adriaan van Wijngaarden and C. H. A. Koster from the Netherlands, and Barry Mailloux and John Peck from Canada. The Algol 68 project was, by then, four years old. The International Federation for Information Processing (IFIP) had under its umbrella a number of technical committees devoted to various specialties; each technical committee in turn had, under its jurisdiction, several working groups given to subspecialties. One such committee was the technical committee TC2, on programming; and in 1965 one of its constituent working groups, WG2.1 (programming languages), mandated the development of a new international language as a successor to Algol 60. The latter, developed by an international committee of computer scientists between 1958 and 1963, had had considerable theoretical and practical impact in the first age of computer science. The Dutch mathematician-turned-computer scientist Adriaan van Wijngaarden, one of the codesigners of Algol 60, was entrusted with heading this task. The goal was that Algol 68 be a successor to Algol 60 and that it be accepted and approved by IFIP as the “official” international programming language. Prior to its publication in 1969, the language went through a thorough process of review, first within the ranks of WG2.1, then by its umbrella body TC2, and finally by the IFIP General Assembly before being officially recommended for publication. The words review and recommendation mask the fact that the Algol 68 project manifested some of the features of the legislative process with its attendant politics. Thus, at a meeting of WG2.1 in Munich in December 1968 (described by one of the Algol 68 codesigners, John Peck, as “dramatic”), where the Algol 68 report was to be approved by the working group, the designers presented their language proposal much as a lawmaker presents a bill to a legislative body; and just as the latter debates the bill, oftentimes acrimoniously, before putting it to a vote, so also the Algol 68 proposal was debated by members of WG2.1 and finally voted on.


Author(s):  
Subrata Dasgupta

If social and behavioral scientists have harbored “physics envy,” as some have wryly claimed (envy of its explanatory and predictive success), then computer scientists may be said to have suffered from “mathematics envy.” Interestingly, this envy was less a characteristic of the pioneers of digital computing of the 1940s and 1950s, the people who shed first light on the design of digital electronic computers, the first programming languages, the first operating systems, the first language translators, and so on, though most of them were trained as mathematicians. They were too busy learning the heuristic principles of computational artifacts. Rather, it was in the 1960s that we first find signs of a kind of mathematics envy, at least in some segments of the embryonic computer science community. It was as if, having discovered (or invented) the heuristic principles of practical computational artifacts, some felt the need to understand the underlying “science” of these artifacts, by which they meant their underlying mathematics and logic. Mathematics envy could be assuaged only by thinking mathematically about computational artifacts. Computer science would then be raised to the intellectual stature of, say, physics or indeed of mathematics itself if computer scientists could transform their discipline into a mathematical science. One cannot blame computer scientists who thought this way. The fact is, there is something about mathematics that situates it in a world of its own. “Mathematics is a unique aspect of human thought,” wrote hyperprolific science (fact and fiction) writer Isaac Asimov. And Asimov was by no means the first or only person to think so. But wherein lies the uniqueness of mathematical thinking? Perhaps the answer is that, for many people, mathematics offers the following promises: the unearthliness of mathematical objects; the perfectness and exactness of mathematical concepts; an inexorable rigor of mathematical reasoning; the certainty of mathematical knowledge; and the self-sufficiency of the mathematical universe. These promises are clearly enviable if they can be kept; usually, they are kept.

