It Began with Babbage

Published by Oxford University Press
ISBN: 9780199309412, 9780197562857

Author(s): Subrata Dasgupta

Let us rewind the historical tape to 1945, the year in which John von Neumann wrote his celebrated report on the EDVAC (see Chapter 9). That same year, George Polya (1887–1985), a professor of mathematics at Stanford University and, like von Neumann, a Hungarian-American, published a slender book bearing the title How to Solve It. Polya’s aim in writing this book was to demonstrate how mathematical problems are really solved. The book focused on the kinds of reasoning that go into making discoveries in mathematics—not just “great” discoveries by “great” mathematicians, but the kind a high school mathematics student might make in solving back-of-the-chapter problems. Polya pointed out that, although a mathematical subject such as Euclidean geometry might seem a rigorous, systematic, deductive science, it is also experimental or inductive. By this he meant that solving mathematical problems involves the same kinds of mental strategies—trial and error, informed guesswork, analogizing, divide and conquer—that attend the empirical or “inductive” sciences. Mathematical problem solving, Polya insisted, involves the use of heuristics—an Anglicization of the Greek heurisko, meaning “to find.” Heuristic, as an adjective, means “serving to discover.” We are often forced to deploy heuristic reasoning when we have no other options. Heuristic reasoning would not be necessary if we had algorithms to solve our problems; heuristics are summoned in the absence of algorithms. And so we seek analogies between the problem at hand and other, more familiar, situations and use the analogy as a guide to solving our problem; or we split a problem into simpler subproblems in the hope that this makes the overall task easier; or we bring experience to bear on the problem and apply actions we have taken before, with the reasonable expectation that they may help solve it; or we apply rules of thumb that have worked before. The point of heuristics, however, is that they offer promises of solutions to certain kinds of problems, but there are no guarantees of success. As Polya said, heuristic thinking is never considered final; rather, it is provisional or plausible.
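Polya's contrast between algorithmic and heuristic reasoning can be made concrete with a small sketch. The following Python fragment (an illustration of the general idea, not anything from the book; the city coordinates are made up) compares an exhaustive search, which is guaranteed to find the shortest closed tour through a handful of points, with a nearest-neighbor rule of thumb, which is fast and usually good but carries no guarantee of optimality.

```python
# Illustrative only: an exhaustive algorithm vs. a rule-of-thumb heuristic.
from itertools import permutations
from math import dist

cities = [(0, 0), (2, 1), (1, 3), (4, 2), (3, 4)]  # made-up coordinates

def tour_length(order):
    """Total length of a closed tour visiting the cities in the given order."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Algorithm: try every permutation; guaranteed optimal, but the cost grows as n!.
best = min(permutations(range(len(cities))), key=tour_length)

# Heuristic: always go to the nearest unvisited city; no guarantee of success.
unvisited, route = set(range(1, len(cities))), [0]
while unvisited:
    nxt = min(unvisited, key=lambda j: dist(cities[route[-1]], cities[j]))
    route.append(nxt)
    unvisited.remove(nxt)

print("exhaustive:", tour_length(best))   # the true optimum
print("heuristic: ", tour_length(route))  # a plausible, provisional answer
```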


Author(s): Subrata Dasgupta

In February 1951, the Ferranti Mark I was delivered to the University of Manchester. This was the commercial “edition” of the Manchester Mark I (see Chapter 8, Section XIII), the product of a collaboration between town and gown, the former being the Manchester firm of Ferranti Limited. It became (by a few months) the world’s first commercially available digital computer, followed in June 1951 by the “Universal Automatic Computer” (UNIVAC), developed by the Eckert-Mauchly Computer Corporation. The Ferranti Mark I was unveiled formally at an inaugural conference held in Manchester, June 9 to 12, 1951. At this conference, Maurice Wilkes delivered a lecture titled “The Best Way to Design an Automatic Calculating Machine.” The conference is probably (perhaps unfairly) better known for Wilkes’s lecture than for its primary focus, the Ferranti Mark I, for it was during this lecture that Wilkes announced a new approach to the design of a computer’s control unit, called microprogramming, which would prove massively consequential in the later evolution of computers. Wilkes’s lecture also marked something else: the search for order, structure, and simplicity in the design of computational artifacts, and an attendant concern with, indeed a preoccupation with, the design process itself in the realm of computational artifacts. We have already seen the first manifestations of this concern with the design process in the Goldstine–von Neumann invention of a flow diagram notation for beginning the act of computer programming (see Chapter 9, Section III), and in David Wheeler’s and Stanley Gill’s discussions of a method for program development (Chapter 10, Section IV). Wilkes’s lecture was notable for “migrating” this concern into the realm of the physical computer itself. We recall that, in May 1949, the Cambridge EDSAC became fully operational (see Chapter 8, Section XIII). The EDSAC was a serial machine in that reading from or writing into memory was done 1 bit at a time (bit serial); likewise, the arithmetic unit performed its operations in a bit-by-bit fashion. Soon after the EDSAC’s completion, while others in his laboratory were busy refining programming techniques and exploring the machine’s use in scientific applications (see Chapter 9, Sections V–VIII; and Chapter 10), Wilkes became preoccupied with issues of regularity and complexity in computer design and their relation to reliability.
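What "bit serial" means in practice can be suggested with a small sketch. The following Python fragment (a conceptual illustration only, not a description of the EDSAC's actual circuitry) adds two numbers one bit per step, least-significant bit first, holding a single carry between steps; that one-bit-at-a-time rhythm is the essence of serial arithmetic.

```python
# Illustrative sketch of bit-serial addition, not EDSAC's actual hardware.
def serial_add(a_bits, b_bits):
    """Add two equal-length bit lists (LSB first), one bit per 'clock tick'."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry
        out.append(s & 1)   # sum bit produced on this tick
        carry = s >> 1      # carry held over to the next tick
    out.append(carry)
    return out

def to_bits(n, width):
    """Integer -> list of bits, least-significant bit first."""
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

print(from_bits(serial_add(to_bits(13, 8), to_bits(27, 8))))  # prints 40
```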


Author(s): Subrata Dasgupta

The 1940s witnessed the appearance of a handful of scientists who, defying the specialism characteristic of most of 20th-century science, strode easily across borders erected to protect disciplinary territories. They were people who, had they been familiar with the poetry of the Nobel laureate Indian poet-philosopher Rabindranath Tagore (1861–1941), would have shared his vision of a “heaven of freedom”: “. . . Where the world has not been broken up into fragments by narrow domestic walls . . .” Norbert Wiener (1894–1964), logician, mathematician, and prodigy, who was awarded a PhD by Harvard at age 17, certainly yearned for this heaven of freedom in the realm of science as the war-weary first half of the 20th century came to an end. He would write that he and his fellow scientist and collaborator Arturo Rosenblueth (1900–1970) had long shared a belief that, although during the past two centuries scientific investigations had become increasingly specialized, the most “fruitful” arenas lay in the “no-man’s land” between the established fields of science. There were scientific fields, Wiener remarked, that had been studied from different sides, each side bestowing its own name on the field, each ignorant of what the others had discovered, thus creating work that was “triplicated or quadruplicated” because of mutual ignorance or incomprehension. Wiener, no respecter of “narrow domestic walls,” would inhabit such “boundary regions” between mathematics, engineering, biology, and sociology, and create cybernetics, a science devoted to the study of feedback systems common to living organisms, machines, and social systems. Here was a science that straddled the no-man’s land between the traditionally separate domains of the natural and the artificial. Wiener’s invention of cybernetics after the end of World War II was a marker of a certain spirit of the times when, in the manner in which Wiener expressed his yearning, scientists began to create serious links between nature and artifact. It is inevitable that this no-man’s land between the natural and the artificial should be part of this story.


Author(s): Subrata Dasgupta

The story so far has been a narrative about the development of two very contrasting types of computational artifacts. On the one hand, Alan Turing conceived the idea of a purely abstract and formal artifact—the Turing machine—having no physical reality whatsoever, an artifact that belongs to the same realm of symbols and symbol manipulation as mathematical objects do. On the other hand, the major part of this narrative has been concerned with a material artifact, the computer as a physical machine that, ultimately, must obey the laws of physics—in particular, the laws governing electromagnetism and mechanics. This was as true for Babbage’s machines (which were purely mechanical) as for Hollerith’s tabulator; as true for the electromechanical machines, such as the Harvard Mark I and the Bell Telephone computers, as for the ABC and the ENIAC; as true for the EDSAC as for the Manchester Mark I. Beginning with the EDVAC report, and especially manifest in the development of the first operational stored-program computers, was the dawning awareness of a totally new kind of artifact, the likes of which had never been encountered before. Philosophers speak of the ontology of something to mean the essential nature of that thing, what it means to be that thing. The ontology of this new kind of artifact belonged neither to the familiar realm of the physical world nor to the equally familiar realm of the abstract world. Rather, it had characteristics that looked toward both the physical and the abstract. Like Janus, the Roman god of gates, it looked in two opposite directions: a two-faced artifact—which, as we will see, served as the interface between the physical and the abstract, between the human and the automaton; a liminal artifact, hovering ontologically betwixt and between the material and the abstract (see Prologue, Section IV). So uncommon was this breed that even a name for it was slow to be coined. At the Cambridge conference in England in 1949, we find a session devoted to programming and coding.


Author(s): Subrata Dasgupta

By the end of World War II, independent of one another (and sometimes in mutual ignorance), a small assortment of highly creative minds—mathematicians, engineers, physicists, astronomers, and even an actuary; some working in solitary mode, some in twos or threes, others in small teams; some backed by corporations, others by governments; many driven by the imperative of war—had developed a shadowy shape of what the elusive Holy Grail of automatic computing might look like. They may not have been able to define a priori the nature of this entity, but they were beginning to grasp how they might recognize it when they saw it. This brings us to the nature of a computational paradigm. Ever since the historian and philosopher of science Thomas Kuhn (1922–1996) published The Structure of Scientific Revolutions (1962), we have all become ultraconscious of the concept and significance of the paradigm, not just in the scientific context (with which Kuhn was concerned), but in all intellectual and cultural discourse. A paradigm is a complex network of theories, models, procedures and practices, exemplars, and philosophical assumptions and values that establishes a framework within which scientists in a given field identify and solve problems. A paradigm, in effect, defines a community of scientists; it determines their shared working culture as scientists in a branch of science and a shared mentality. A hallmark of a mature science, according to Kuhn, is the emergence of a dominant paradigm to which a majority of scientists in that field adhere and on which they broadly, although not necessarily in detail, agree. In particular, they agree on the fundamental philosophical assumptions and values that oversee the science in question; on its methods of experimental and analytical inquiry; and on its major theories, laws, and principles. A scientist “grows up” inside a paradigm, beginning with his earliest formal training in a science in high school, through undergraduate and graduate schools, through doctoral work into postdoctoral days. Scientists nurtured within and by a paradigm more or less speak the same language, understand the same terms, and read the same texts (which codify the paradigm).


Author(s): Subrata Dasgupta

The German mathematician Gottfried Wilhelm Leibniz (1646–1716) is perhaps best remembered in science as the co-inventor (with Newton) of the differential calculus. In our story, however, he has a presence not so much because, like his great French contemporary the philosopher Blaise Pascal (1623–1662), he built a calculating machine—in Pascal’s case, the machine could add and subtract, whereas Leibniz’s machine also performed multiplication and division—but for something he wrote vis-à-vis calculating machines. He wished that astronomers could devote their time strictly to astronomical matters and leave the drudgery of computation to machines, if such machines were available. Let us call this Leibniz’s theme, and the story I will tell here is a history of human creativity built around this theme. The goal of computer science, long before it came to be called by this name, was to delegate the mental labor of computation to the machine. Leibniz died well before the beginning of the Industrial Revolution, circa the 1760s, when the cult and cultivation of the machine would transform societies, economies, and mentalities. The pivot of this remarkable historical event was steam power. The use of steam to move machines automatically began with the English ironmonger and artisan Thomas Newcomen (1663–1727) and his invention of the atmospheric steam engine in 1712, just 4 years before Leibniz’s passing. But the steam engine as an efficient source of mechanical power, as an efficient means of automating machinery, and as a substitute for human, animal, and water power properly came into being with the invention of the separate condenser in 1765 by the Scottish instrument maker, engineer, and entrepreneur James Watt (1736–1819)—a mechanism that greatly improved the efficiency of Newcomen’s engine. The steam engine became, so to speak, the alpha and omega of machine power. It was the prime mover of ancient Greek thought materialized. And Leibniz’s theme, conjoined with the steam engine, gave rise, in the minds of some 19th-century thinkers, to a desire to automate calculation or computation and to free humans of this mentally tedious labor.


Author(s): Subrata Dasgupta

In 1962, Purdue University in West Lafayette, Indiana, in the United States opened a department of computer science with the mandate to offer master’s and doctoral degrees in computer science. Two years later, the University of Manchester in England and the University of Toronto in Canada also established departments of computer science. These were the first universities in America, Britain, and Canada, respectively, to recognize formally a new academic reality—that there was a distinct discipline whose domain was the computer and the phenomenon of automatic computation. Thereafter, by the late 1960s—much as universities had sprung up all over Europe during the 12th and 13th centuries after the founding of the University of Bologna (circa 1150) and the University of Paris (circa 1200)—independent departments of computer science sprouted across the academic maps of North America, Britain, and Europe. Not all the departments used computer science in their names; some preferred computing, some computing science, some computation. In Europe, non-English terms such as informatique and Informatik were used. But what was recognized was that the time had come to wean the phenomenon of computing away from mathematics and electrical engineering, the two most common academic “parents” of the field, and also from computer centers, which were in the business of offering computing services to university communities. A scientific identity of its very own was thus established. Practitioners of the field could call themselves computer scientists. This identity was shaped around a paradigm. As we have seen, the epicenter of this paradigm was the concept of the stored-program computer, as theorized originally in von Neumann’s EDVAC report of 1945 and realized physically in 1949 by the EDSAC and the Manchester Mark I machines (see Chapter 8). We have also seen the directions in which this paradigm radiated out in the next decade. Most prominent among the refinements was the emergence of the historically unprecedented and utterly original, Janus-faced, liminal artifacts called computer programs, and of the languages—themselves abstract artifacts—invented to describe and communicate programs to both computers and other human beings.


Author(s): Subrata Dasgupta

On February 15, 1946, a giant of a machine called the ENIAC, an acronym for Electronic Numerical Integrator And Computer, was commissioned at a ceremony at the Moore School of Electrical Engineering at the University of Pennsylvania, Philadelphia. The name is noteworthy. We see that the word computer—to mean the machine and not the person—had cautiously entered the emerging vocabulary of computer culture. Bell Laboratories named one of its machines the Complex Computer; another, the Ballistic Computer (see Chapter 5, Section I). Still, the embryonic world of computing was hesitant; the terms “calculator,” “calculating machine,” “computing machine,” and “computing engine” still prevailed. The ENIAC’s full name (which, of course, would never be used after the acronym was established) seemed, at last, to flaunt the fact that this machine had a definite identity, that it was a computer. The tale of the ENIAC is a fascinating one in its own right, but it is also a very important one. Computer scientists and engineers of later times may be ignorant of the Bell Laboratories machines, they may be hazy about the Harvard Mark series, they may have only an inkling of Babbage’s dream machines, but they will more than likely have heard of the ENIAC. Why was this so? What was it about the ENIAC that admits its story into the larger story? It was not the first electronic computer; the Colossus preceded the ENIAC by 2 years. True, no one outside the Bletchley Park community knew about the Colossus, but from a historical perspective, for historians writing about the state of computing in the 1940s, the Colossus clearly took precedence over the ENIAC. In fact (as we will soon see), there was another electronic computer built in America that preceded the ENIAC. Nor was the ENIAC the first programmable computer. Zuse’s Z3 and Aiken’s Harvard Mark I, as well as the Colossus, well preceded the ENIAC in this realm. As for that other Holy Grail, general purposeness, this was, as we have noted, an elusive target (see Chapter 6, Section III).


Author(s): Subrata Dasgupta

In 1900, the celebrated German mathematician David Hilbert (1862–1943), professor of mathematics at the University of Göttingen, delivered a lecture at the International Congress of Mathematicians in Paris in which he listed 23 significant “open” (mathematicians’ jargon for “unsolved”) problems in mathematics. Hilbert’s second problem was: Can it be proved that the axioms of arithmetic are consistent? That is, that theorems in arithmetic, derived from these axioms, can never lead to contradictory results? To appreciate what Hilbert was asking, we must understand that in the fin de siècle world of mathematics, the “axiomatic approach” held sway over mathematical thinking. This is the idea that any branch of mathematics must begin with a small set of assumptions, propositions, or axioms that are accepted as true without proof. Armed with these axioms and using certain rules of deduction, all the propositions concerning that branch of mathematics can be derived as theorems. The sequence of logically derived steps leading from axioms to a theorem is, of course, a proof of that theorem. The axioms form the foundation of that mathematical system. The axiomatic development of plane geometry, going back to Euclid of Alexandria (fl. 300 BCE), is the oldest and most impressive instance of the axiomatic method, and it became a model not only of how mathematics should be done, but of science itself. Hilbert himself, in 1898 to 1899, wrote a small volume titled Grundlagen der Geometrie (Foundations of Geometry) that would exert a major influence on 20th-century mathematics. Euclid’s great work on plane geometry, the Elements, was axiomatic no doubt, but it was not axiomatic enough: there were hidden assumptions, logical problems, meaningless definitions, and so on. Hilbert’s treatment of geometry began with three undefined objects—point, line, and plane—and six undefined relations, such as being parallel and being between. In place of Euclid’s five axioms, Hilbert postulated a set of 21 axioms. In fact, by Hilbert’s time, mathematicians were applying the axiomatic approach to entire branches of mathematics.
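The mechanics of the axiomatic method can be shown in miniature. The following toy derivation (an illustration, not drawn from Hilbert or from the book) takes two axioms for addition over the natural numbers, with S(n) denoting the successor of n, and deduces a theorem by a short chain of steps, each justified by an axiom.

```latex
% Two axioms for addition on the natural numbers (S(n) is the successor of n).
\[
\text{Axiom 1: } a + 0 = a
\qquad
\text{Axiom 2: } a + S(b) = S(a + b)
\]
% Theorem: a + S(0) = S(a). Each step cites the axiom that licenses it.
\[
a + S(0) = S(a + 0) \quad [\text{Axiom 2}]
         = S(a)     \quad [\text{Axiom 1}]
\]
```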


Author(s): Subrata Dasgupta

It must have been entirely coincidental that two remarkable linguistic movements occurred during the mid-1950s—one in the realm of natural language, the other in the domain of the artificial; the one brought about largely by a young linguist named Noam Chomsky (1928–), the other initiated by a new breed of scientists whom we may call language designers; the one affecting linguistics so strongly that it would be deemed a scientific revolution, the other creating a class of abstract artifacts called programming languages and also enlarging quite dramatically the emerging paradigm that would later be called computer science. As we will see, these two linguistic movements intersected in a curious sort of way. In particular, we will see how an aspect of Chomskyan linguistics influenced computer scientists far more profoundly than it influenced linguists. But first things first: concerning the nature of the class of abstract artifacts called programming languages. There is no doubt that those who were embroiled in the design of the earliest programmable computers also meditated on a certain goal: to make the task of programming a computer as natural as possible from the human point of view. Stepping back a century, we recall that Ada, Countess of Lovelace, specified the computation of Bernoulli numbers in an abstract notation far removed from the gears, levers, ratchets, and cams of the Analytical Engine (see Chapter 2, Section VIII). We have seen in the works of Herman Goldstine and John von Neumann in the United States, and of David Wheeler in England, that, even as the first stored-program computers were coming into being, efforts were being made to achieve this goal. Indeed, a more precise statement of the goal was in evidence: to compose computer programs in a more abstract form than the machine’s “native” language. The challenge here was twofold: to describe the program (or algorithm) in a language that other humans could comprehend without knowing much about the computer for which the program was written—in other words, a language that allowed communication between the writer of the program and other (human) readers—and also to communicate the program to the machine in such a fashion that the latter could execute it with minimal human intervention.
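The twofold goal just described can be suggested with a small sketch. The following Python fragment (illustrative only; the computation and names are made up) expresses the same calculation twice: once in a step-by-step, machine-like style with an explicit accumulator, and once in a more abstract form whose intent, a sum of squares, a human reader can grasp at a glance.

```python
# Illustrative contrast between a machine-like style and an abstract style.
numbers = [3, 1, 4, 1, 5]

# Machine-oriented style: explicit accumulator, one small operation per step.
acc = 0
i = 0
while i < len(numbers):
    acc = acc + numbers[i] * numbers[i]
    i = i + 1

# Abstract style: the intent ("sum of squares") is stated directly.
total = sum(n * n for n in numbers)

assert acc == total == 52
```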

