Beyond Programming
Latest Publications


TOTAL DOCUMENTS: 12 (FIVE YEARS 0)

H-INDEX: 0 (FIVE YEARS 0)

Published By Oxford University Press

ISBN: 9780195091601, 9780197560662

Author(s):  
Bruce I. Blum

Finally, I am about to report on my own research. In the material that preceded this chapter I have tried to present the work of others. My role has been closer to that of a journalist than a scientist. Because I have covered so much ground, my presentations may be criticized as superficial; the chapters have left more questions unanswered than answered. Nevertheless, by the time the reader has reached this point, we should have a shared perception of the design process and its rational foundations. Perhaps I could have accomplished this with fewer pages or with greater focus. I did not choose that path because I wanted the reader to build a perspective of her own, a perspective in which my model of adaptive design (as well as many other alternative solutions) would seem reasonable. The environment for adaptive design that I describe in this chapter is quite old. Work began on the project in 1980, and the environment was frozen in 1982. My software engineering research career began in 1985. Prior to that time I was paid to develop useful software products (i.e., applications that satisfy the sponsor’s needs). Since 1985 I have been supported by research funds to deliver research products (i.e., new and relevant knowledge). Of course, there is no clear distinction between my practitioner and research activities, and my research—despite its change in paradigm—has always had a strong pragmatic bias. Many of my software engineering research papers were published when I was developing applications, and my work at the Johns Hopkins Medical Institutions was accepted as research in medical informatics (i.e., how computer technology can assist the practice of medicine and the delivery of care). The approach described in this chapter emerged from attempts to improve the application of computers in medicine, and this is how I finally came to understand software development—from the perspective of complex, life-critical, open interactive information systems. There is relatively little in this chapter that has not already been published. The chapter integrates what is available in a number of overlapping (and generally unreferenced) papers. I began reporting on my approach before it was fully operational (Blum 1981), but that is not uncommon in this profession.


Author(s):  
Bruce I. Blum

Now that the foundation has been laid, I can turn to the principal concern of this book: software design. I use the word design in its most expansive sense. That is, design is contrasted with discovery; it encompasses all deliberate modifications of the environment, in this case modifications that employ software components. Thus, software design should not be interpreted as a phase in the development of a product—an activity that begins after some prerequisite is complete and that terminates with the acceptance of a work product. The context of software design in Part III is extended to include all aspects of the software process, from the design of a response to a real-world need (which ultimately may be expressed as a requirements document) through the design of changes to the product (i.e., lifetime maintenance). This broader use of “design” can be confusing, and the reader may think of software design as the equivalent of the software process. In what follows, the goal is to discover the essential nature of software design, which I also shall refer to as the software process. What of the foundation constructed so laboriously during the first two parts of the book? It is not one of concrete and deep pilings. Rather, it is composed of crushed rock. It can support a broad-based model of software design, but it may be unstable when it comes to specifics. The foundation has been chipped from the monolith of Positivism, of Technical Rationality. Its constituents are solid and cohesive models, but they defy unification and resist integration. We interpret them as science, technology, culture, philosophy, cognition, emotion, art; they comprise the plural realities from which we compose human knowledge. Unfortunately, my description of the foundation holds little promise of broad, general answers. Indeed, it suggests that science may be of limited help to design and that we may never discover the essence of design. That is, we must accept design as a human activity; whatever answers we may find will be valid within narrow domains where knowledge is determined by its context. Thus, Parts I and II prepare us to accept that the study of software design may not be amenable to systematic analysis.


Author(s):  
Bruce I. Blum

The purpose of this chapter is to evaluate TEDIUM. Evaluation is similar to correctness in that both are always assessed with respect to some external criteria. What criteria should be used for evaluating an environment that develops and maintains software applications using a new paradigm? Clearly, the criteria of the old paradigm (e.g., lines of code, measures of complexity, effort distributed among phases) are irrelevant. In the early days of medical computing, Barnett playfully suggested the following three criteria for evaluating automated medical systems: . . . Will people use it? Will people pay for it? Will people steal it? . . . At the time, the answers to the first two questions frequently were negative, and Barnett’s pragmatic approach was intended to prod the field from theory to practice. TEDIUM is used and paid for, but its techniques have not been transported to other environments (i.e., it has not yet been stolen). I console myself by observing that a lack of recognition need not imply an absence of value. The transfer of ideas often is a product of the marketplace, where acceptance depends more on perception than on quantification. As we have seen throughout this book, there can be vast differences between what we care about and what is measurable. Real projects tend to be large, difficult to structure for comparative studies, and highly dependent on local conditions. In contrast, toy studies are easy to control and analyze, but they seldom scale up or have much credibility. How then should I evaluate TEDIUM? I have tried a number of strategies. I have analyzed small projects in detail, I have reported on standard problems comparing TEDIUM data with published results, I have presented and interpreted summary data taken from large projects, I have extracted evaluation criteria from other sources, and I have examined how TEDIUM alters the software process. All of this was summed up in TEDIUM and the Software Process (1990a).


Author(s):  
Bruce I. Blum

The previous chapter on the software process introduced two contrasting orientations: problem and product. Both orientations have the same objective: the efficient and effective creation of an automated response to a real-world problem. They differ only in the methods and tools used to achieve that end. In the product-oriented model, the difficulty of realizing a solution is accepted as the critical path. Thus, the approach applies the principle of separation of concerns and divides the process into two discrete activities: first establish the essential requirements of what is needed, and then build a product that will satisfy those requirements. As already noted, this model is appropriate when the requirements are stable or when the complexity of development is so great that a fixed specification is necessary to reduce risk. In contrast, the problem-oriented model is valuable for real-world problems with open requirements (open both in the sense of initial uncertainty and of operational change). Unfortunately, it can be implemented only for domains in which the technology is relatively mature. For example, military applications that push the limits of technology have open requirements (i.e., they begin with uncertainty and are subject to modification as potential adversaries develop responses to newly introduced capabilities). In this domain, however, the technology may be too complex for development without frozen requirements. In other domains, such as interactive information systems, the technological challenges are sufficiently well understood to permit a problem-oriented approach with its one-step process model. The adaptive design paradigm proposed in this book is problem oriented. The instantiation I describe in the next two chapters creates interactive information systems. In principle, there is no reason why the adaptive design model may not be used for complex, real-time military applications; the fact that it has not been so used is a function of our knowledge of that domain and not a limitation of the paradigm. There always will be a fuzzy boundary about the technology that separates what we can and cannot do. At that boundary, we must rely on experimentation and hacking to gain understanding.


Author(s):  
Bruce I. Blum

We have arrived at the last layer of the foundation. I now can begin a systematic analysis of design. As a brief reminder, this really is a book about the development of software applications. My thesis is that we can expect only limited improvement in software applications and productivity by working within the current design paradigm (i.e., technological design). I believe that we must shift paradigms to exploit the special characteristics of software. But paradigm shifts are revolutions, and one cannot comprehend any new paradigm by extrapolating from the concepts and methods of the present paradigm. Thus, we must destroy before we can rebuild. In the physical sciences, the justification for destruction comes from outside the paradigm; phenomena are observed that are at variance with the models of normal science, and new theories are needed to explain them. Computer science and software engineering, however, are formalizations for free phenomena. In a sense, they are self-defining; they establish their own criteria for relevance and evaluation. If we are to replace those criteria, therefore, we must begin outside normal computer science. And that is the justification for these first two parts. Part I examines the meaning and limitations of science and provides an interpretation of design: the modification of the environment (or “changing existing conditions into preferred ones”). Part II looks at design from the perspective of those who make and use the designs. I deliberately remain outside the domain of computer science in my search for the knowledge that will be relevant to the subsequent examination of software. Once this knowledge has been assembled, Part III can enter into a less biased consideration of software and its role in the next millennium. Thus, the first two parts construct the context within which a new computer science can be defined, and Part III offers adaptive design as an illustration of what this new computer science can accomplish. Where are we now in this odyssey? Chapter 1 begins with the traditional view in which the maturity of software engineering as a discipline is related to its utilization of computer science principles.


Author(s):  
Bruce I. Blum

We are almost halfway through the book and this part on design ecology, and I have yet to talk about design, let alone software engineering. Is this some kind of shaggy dog story? The kind in which the hero climbs mountains in search of the meaning of life only to have the wise man tell him it is, “The wet bird flies at night.” I hope not. Here is the basic thesis of the book. Computers offer unprecedented power in creating new tools (equipment), but to achieve their potential we must reconstruct how they are used (i.e., shift the paradigm). The first half of the book concerns the foundations upon which we may reconstruct a new software engineering. In the middle of this century, of course, there would be no question as to what that foundation should be: science. But, as I have been trying to show, science and our institutions are in a period of fundamental change. For example, consider what Prigogine, winner of the 1977 Nobel Prize for chemistry, has to say. . . . The classical ... view of science was to regard the world as an “object,” to try to describe the physical world as if it were being seen from the outside as an object of analysis to which we do not belong... The deterministic laws of physics, which were at one point the only acceptable laws, today seem like gross simplifications, nearly a caricature of evolution. . . . Even in physics, as in sociology, only various possible “scenarios” can be predicted. But it is for this very reason that we are participating in a fascinating adventure in which, in the words of Niels Bohr, we are “both spectators and actors.” (1980, pp. xv, xvii) . . . Thus, in only four decades we have moved from physicalism, which sought to impose a physics model on psychology, to a questioning of the very nature of physics itself. As Holland, a physicist, describes our present situation, “we are in a period of transition between two great world views—the universal machine of the classicists and the new holistic universe whose details we are only beginning to glimpse.


Author(s):  
Bruce I. Blum

Fifty years ago there were no stored-program binary electronic computers. Indeed, in the mid 1940s computer was a job description; the computer was a person. Much has happened in the ensuing half-century. Whereas the motto of the 1950s was “do not bend, spindle, or mutilate,” we now have become comfortable with GUI WIMP (i.e., Graphic User Interface; Windows, Icons, Mouse, and Pointers). Whereas computers once were maintained in isolation and viewed through large picture windows, they now are visible office accessories and invisible utilities. Whereas the single computer once was a highly prized resource, modern networks now hide even the machines’ geographic locations. Naturally, some of our perceptions have adapted to reflect these changes; however, much of our understanding remains bound to the concepts that flourished during computing’s formative years. For example, we have moved beyond thinking of computers as a giant brain (Martin 1993), but we still hold firmly to our faith in computing’s scientific foundations. The purpose of this book is to look forward and speculate about the place of computing in the next fifty years. There are many aspects of computing that make it very different from all other technologies. The development of the microchip has made digital computing ubiquitous; we are largely unaware of the computers in our wrist watches, automobiles, cameras, and household appliances. The field of artificial intelligence (AI) sees the brain as an organ with some functions that can be modeled in a computer, thereby enabling computers to exhibit “intelligent” behavior. Thus, AI research seeks to extend the role of computers through applications in which they perform autonomously or act as active assistants. (For some recent overviews of AI see Waldrop 1987; Crevier 1993.) In the domain of information systems, Zuboff (1988) finds that computers can both automate (routinize) and informate, that is, produce new information that serves as “a voice that symbolically renders events, objects, and processes so that they become visible, knowable, and sharable in a new way” (p. 9).


Author(s):  
Bruce I. Blum

I begin my exploration of ecological design by examining the nature of the individuals engaged in the design process. There are two principal concerns. First, design is a human activity that seeks to alter the human environment. Therefore, an understanding of human behavior in a problem-solving context is essential. In particular, we are interested in the extent to which the proposed solutions are rational (i.e., derivable through the use of clearly stated principles and formalisms) and accurate (i.e., descriptive of solutions that, when available, produce the desired results). Although there may be social and political biases that affect the decisions, the principal focus of this chapter is individual problem solving. How good are people at solving problems? How rational are they, and are there limits to rational analysis? Design depends on the interpretations of those who describe what is needed and those who create the desired products. It is essential that we know where we can be certain and where uncertainty is unavoidable. The second reason for studying human decision making complements the first; it has to do with the form of the final decision. Recall that the software process begins with an informal recognition of a need (in-the-world) and ends with the formal expression of a response to that need (in-the-computer). One of the products of science (and rational argument) is models of the world (i.e., instruments) that can be expressed in-the-computer. Progress is made as more instruments become available. Although the criterion of relevance may result in the disuse of some of the available instruments, clearly an increase in the number of potentially relevant instruments offers the possibility of improving productivity and/or enlarging the technology’s scope (i.e., the class of problems for which the models may be relevant). Therefore, we have a second set of questions to be answered. To what extent are there rational problem-solving mechanisms that can be modeled in the computer, and how may they be modeled? Are some decisions contextually determined, thereby requiring information never available in-the-computer? What are the limits in expressing design knowledge? This chapter does not provide answers to these questions; I doubt that any consensus answers can be formulated.


Author(s):  
Bruce I. Blum

This book is about a paradigm shift in the development of software; a move to a new era of design for software (and, for that matter, all manufactured artifacts). The goal of Part I is to lay out the scientific and technological foundations for this new era of design. In the chapter just concluded, we have seen how the Legend of science has become tarnished. During the stored-program computer’s brief history, we observe the general perceptions of science and scientific knowledge undergoing a fundamental change. I have focused on the philosophy of science because that tells us something about the theoretical limits of science; it suppresses the details of the day-to-day conduct of science that make it such a successful enterprise. This reassessment of science, of course, has been independent of the growth of computing; indeed, my examination has been free of any technological considerations. From the perspective of computer science, much of this revolution has gone unnoticed. Many still walk in the pathways first laid out in the era of the Legend; some even try to fit computer science into the framework of the Received View. If the conclusions of Chapter 2 are valid, however, such approaches cannot be sustained indefinitely. Therefore, any response to the evolving understanding of science ultimately must lead to a reexamination of computer science. If we are to shift the software design paradigm, we must expect modifications to the underlying principles embedded in computer science. How will these changes take place? Will there be new scientific findings that alter the technology, or will a shift in the technology modify what the computer scientists study? To gain insight into the answers to these questions, this chapter addresses the relationship between science and technology and, in particular, between computer science and software engineering. As in the previous chapter, I conduct a broadly based, general review. The traditional relationship between science and engineering normally is described as being causal. Science creates knowledge, and technology consumes knowledge. This has been depicted as an assembly line: “Put money into pure science at the front end of the process.


Author(s):  
Bruce I. Blum

This chapter presents an overview of the philosophy of science. Why study this philosophy? Here is my justification. We know that the software process is a transformation from the identification of a need in-the-world into a set of computer programs that operate in-the-computer. The process begins with an idea, a concept, something that may defy a complete description, and it ends with the delivery of a formal model that executes in the computer. As we have seen, there is a fundamental tension in this transformation, a tension between what we want and how we make it work, between the requirements in-the-world and their realization in-the-computer, between the subjective and the objective, the conceptual and the formal. This book seeks to resolve that tension. Science faces a similar problem, and so I start by examining its solutions. Science begins with something very complex and poorly represented—the real world—and its goal is to describe aspects of that reality with theories and models. We know that science is successful. It is reasonable to look, therefore, into its strengths and limitations for insight into resolving the software process’ central tension. To gain this insight, I turn to the philosophy of science because it constitutes a kind of meta-science. It examines the nature of science from a theoretical perspective; it helps us appreciate what is knowable and what can be represented formally. I note at the outset that this is not my area of expertise. Moreover, the philosophers of science have not reached a consensus. Philosophical inquiry is, by its very nature, controversial and argumentative, and the theme of this chapter is the underlying controversies regarding the nature of science and scientific knowledge. If we are to find “scientific foundations,” then we must first understand what science is (and is not)—the topic of what follows. I warn the reader that this chapter conforms to truth in labeling; as its title indicates, it literally is about the philosophy of science. There are a few explanatory comments that tie the material to the immediate needs of a software engineer, but this really is a chapter about philosophy.

