Acceptable Evidence

Published By Oxford University Press

9780195089295, 9780197560556

Author(s):  
Deborah G. Mayo

In this chapter I shall discuss what seems to me to be a systematic ambiguity running through the large and complex risk-assessment literature. The ambiguity concerns the question of separability: can (and ought) risk assessment be separated from the policy values of risk management? Roughly, risk assessment is the process of estimating the risks associated with a practice or substance, and risk management is the process of deciding what to do about such risks. The separability question asks whether the empirical, scientific, and technical questions in estimating the risks either can or should be separated (conceptually or institutionally) from the social, political, and ethical questions of how the risks should be managed. For example, is it possible (advisable) for risk-estimation methods to be separated from social or policy values? Can (should) risk analysts work independently of policymakers (or at least of policy pressures)? The preponderant answer to the variants of the separability question in recent risk-research literature is no. Such denials of either the possibility or desirability of separation may be termed nonseparatist positions. What needs to be recognized, however, is that advocating a nonseparatist position masks radically different views about the nature of risk-assessment controversies and of how best to improve risk assessment. These nonseparatist views, I suggest, may be divided into two broad camps (although individuals in each camp differ in degree), which I label the sociological view and the metascientific view. The difference between the two may be found in what each finds to be problematic about any attempt to separate assessment and management. 
Whereas the former (sociological) view argues against separatist attempts on the grounds that they give too small a role to societal (and other nonscientific) values, the latter (metascientific) view does so on the grounds that they give too small a role to scientific and methodological understanding. Examples of those I place under the sociological view are the cultural reductionists discussed in the preceding chapter by Shrader-Frechette. Examples of those I place under the metascientific view are the contributors to this volume themselves. A major theme running through this volume is that risk assessment cannot and should not be separated from societal and policy values (e.g., Silbergeld's uneasy divorce).



Author(s):  
Kenneth F. Schaffner

In this chapter I shall examine the relations between what appear to be two somewhat different concepts of causation that are widely employed in the biomedical sciences. The first type is what I term epidemiological causation. It is characteristically statistical and uses expressions like "increased risk" and "risk factor." The second concept is more like the form of causation we find in both the physical sciences and everyday life, as in expressions such as "the increase in temperature caused the mercury in the thermometer to expand" or "the sonic boom caused my window to break." In the physical and the biological sciences, such claims are typically further analyzed and explained in terms of underlying mechanisms. For example, accounts in the medical literature of cardiovascular diseases associated with the ischemic myocardium typically distinguish between the risk factors and the mechanisms for these disorders (Willerson 1982). Interestingly, both concepts of causation have found their way into the legal arena, the first or epidemiological concept only relatively recently in both case law and federal agency regulatory restrictions. The second, perhaps more typical, notion of causation has turned out to be not so simple on deeper analysis and led Hart and Honoré, among others, to subject the notion to extensive study in their classic book Causation in the Law. In another paper (Schaffner 1987), I examined some of these issues, in particular the epidemiological concept of causation as it might apply to recent DES cases such as Sindell and Collins. Reflections on the Sindell case and on one of its legal precedents, the Summers v. Tice case, led Judith Jarvis Thomson to introduce a distinction between two types of evidence that might be adduced to support a claim that an agent caused harm to a person. 
The two types of evidence parallel the distinction between these two concepts of causation, and I shall introduce them by means of a particularly striking example originally credited to David Kaye (Kaye 1982).
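The "increased risk" language of epidemiological causation that Schaffner describes is usually quantified as a relative risk computed from exposure and outcome counts. The following sketch is my illustration, not the chapter's; every number in it is hypothetical.

```python
# Illustrative sketch (not from the chapter): epidemiological causation is
# characteristically statistical, and its "increased risk" claims are
# typically expressed as relative risk from a 2x2 exposure/outcome table.
# All counts below are hypothetical.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk of the outcome among the exposed divided by risk among the unexposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 30 cases among 1,000 exposed vs. 10 among 1,000 unexposed.
rr = relative_risk(30, 1000, 10, 1000)
print(rr)  # 3.0 -- exposure is a "risk factor" that triples the risk
```

A relative risk of 1.0 would mean the exposure makes no statistical difference; values above 1.0 are what ground the epidemiological sense of "caused" that the chapter contrasts with mechanistic causation.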



Author(s):  
Ronald N. Giere

Before World War II, most decisions involving the introduction of new technologies were made primarily by individuals or corporations, with only minimal interference from government, usually in the form of regulations. Since the war, however, the increased complexity of modern technologies and their impact on society as a whole have tended to force the focus of decision making toward the federal government, although this power is still usually exercised in the form of regulation rather than outright control. Given the huge social consequences of many such decisions, it seems proper that the decision-making process be moved further into the public arena. Yet one may wonder whether the society has the resources and mechanisms for dealing with these issues. Thus, the nature of such controversies, and the possible means for their resolution, has itself become an object of intense interest. One may approach this subject from at least as many directions as there are academic specialties. Many approaches are primarily empirical in that they attempt to determine the social and political mechanisms that are currently operative in the generation and resolution of controversies over new technologies (Nelkin 1979). Such studies usually do not attempt to determine whether the social mechanisms actually operating are effective mechanisms in the sense that they tend to produce decisions that in fact result in the originally desired outcomes. The approach of this chapter is much more theoretical. It begins with a standard model of decision making and then analyzes the nature of technological decisions in terms of the postulated model. The advantage of such an approach is that it provides a clear and simple framework for both analyzing a controversy and judging its outcome. The disadvantage is that it tells us little about the actual social and political processes in the decision. Eventually we would like an account that incorporates both theoretical and empirical viewpoints. 
Regarding the proposed model, there are several ingredients in any decision. This chapter concentrates on one of these ingredients: scientific knowledge, particularly statistical knowledge of the type associated with studies of low-level environmental hazards. There is no presumption, however, that statistical knowledge, or scientific knowledge generally, is the most important ingredient in any decision.
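The "standard model of decision making" that Giere invokes is conventionally expected-utility maximization: weight each option's outcomes by the probability of the states that produce them and choose the option with the highest weighted sum. The sketch below is my hedged illustration of that textbook model, not Giere's own analysis; the states, probabilities, and utilities are all hypothetical.

```python
# Hedged sketch (my illustration, not the chapter's): the standard
# decision-theoretic model chooses the option with the highest
# probability-weighted utility over uncertain states of the world.
# All probabilities and utilities below are hypothetical.

def expected_utility(option, probs, utilities):
    """Sum of P(state) * U(outcome of option in that state) over all states."""
    return sum(p * u for p, u in zip(probs, utilities[option]))

# Two policy options facing an uncertain low-level hazard:
probs = [0.95, 0.05]                  # P(no harm), P(harm) -- hypothetical
utilities = {
    "regulate":   [80, 70],           # costly up front, but robust to harm
    "do_nothing": [100, -500],        # cheap unless the harm materializes
}
best = max(utilities, key=lambda opt: expected_utility(opt, probs, utilities))
print(best)  # "regulate": expected utility 79.5 vs. 70.0 for doing nothing
```

The point Giere's chapter then presses is that the statistical knowledge feeding the probabilities in such a model (especially for low-level environmental hazards) is only one ingredient in the decision, not necessarily the dominant one.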



Author(s):  
Rachelle D. Hollander

Concern for relationships among ethics, values, policy, and science and engineering is prominent in modern society. The existence of a program called Ethics and Values Studies in an agency of the U.S. government, the National Science Foundation, provides some evidence of this (Hollander 1987a, 1987b; Hollander and Steneck 1990). The bills introduced in the U.S. Congress to support bio(medical) ethics centers through the National Institutes of Health also provide evidence (U.S. Senate 1988). New initiatives support research and related activities in areas of biomedical ethics in the National Center for Nursing Research and the Office of Human Genome Research in the National Institutes of Health. In July 1988, the Board of Radioactive Waste Management of the National Research Council devoted one day of a four-day retreat to considering the ethical and value aspects of that issue (BRWM 1988). In this chapter I shall attempt to show why such issues occupy particular attention now. My thesis is that a new acknowledgment of our collective moral responsibility is needed because of the political and social context in which science now operates. This context requires more sophisticated scientific and ethical analysis, as well as scientists, engineers, policymakers, interested scholars, and others working together to determine not just acceptable risk but also acceptable evidence. To provide perspective on these matters, we should note that interactions of science, technology, and society have raised these kinds of problems for a long time. A play by Henrik Ibsen, An Enemy of the People, written in 1882, raises all these concerns. An Enemy of the People is a story about the possibility of contamination in the water supply that feeds a town's new mineral baths. The baths attract the summer visitors that have rejuvenated the community. A Dr. Thomas Stockmann has investigated and discovered the problem; he has documented it, and he is delighted to have made the discovery. 
He, after all, had warned the town fathers about the problem when they designed the water supply, and they did not listen. Now he presents the truth as he sees it—and he sees it in the worst possible light—to his brother Peter, the mayor, who had organized the efforts to construct the baths.



Author(s):  
E. William Colglazier

A sustained and definitive radioactive waste management policy has been an elusive goal for our nation since the beginning of the nuclear age. An atmosphere of contentiousness and mistrust among the interested parties, fed by a long history of policy reversals, delays, false starts, legal and jurisdictional wrangles, and scientific overconfidence and played out against the background of public concern with nuclear power and weapons issues generally, has dogged society's attempts to come to grips with the radioactive waste-management issue. The policy conflicts have become so intense and intractable that Congress has been forced to deal with the issue periodically. The year 1982 was one watershed year for congressional action on high-level nuclear waste, and 1987 proved to be another. This chapter will examine ethical and value issues in radioactive waste management (RWM), with a special emphasis on disputes about scientific evidence. Controversies over evidence have been particularly important because of the many scientific uncertainties and problems inherent in trying to ensure that nuclear waste in a geological repository will harm neither people nor the environment for the thousands of years that the waste will remain hazardous. This requirement of guaranteeing adequate safety over millennia is an unprecedented undertaking for our regulatory and scientific institutions. The first section of the chapter will provide a brief historical overview of the national policy disputes in radioactive waste management, and the second section will discuss some of the key value issues that have been at the heart of the controversies. Our approach is to delineate key policy issues and to separate the value components of each into three categories: procedural, distributional, and evidential. 
Key stakeholders—Congress, federal agencies, the nuclear industry, utilities, environmental groups, state governments, Native American tribes, local communities—take particular policy positions justified in part on the basis of procedural, distributional, and evidential values. Procedural values refer to who should make what decision for whom and by what process. Distributional values concern what is a fair allocation of costs, benefits, and risks to the affected parties and to society as a whole. Evidential values refer to what counts as evidence, for example, what type and degree of scientific evidence is sufficient and admissible in making a particular societal decision, especially in the face of large scientific unknowns and significant social and scientific debate. Categories of "value concerns" thus include fairness and appropriateness of process, outcomes, and evidence.



Author(s):  
Valerie Miké

The case of Linda Loerch and her son Peter presented to the Minnesota Supreme Court raises the question of whether legal liability can extend beyond the second generation. During the pregnancy leading to Linda's birth, her mother had taken the synthetic hormone diethylstilbestrol, commonly known as DES. Linda herself has a deformed uterus, and her son Peter, born twelve weeks prematurely, is a quadriplegic afflicted with cerebral palsy. The family is seeking damages for the child's condition from Abbott Laboratories, the manufacturer of the drug taken forty years earlier by his grandmother (MacNeil/Lehrer 1988). The claims of this lawsuit hinge on the evidence available when the drug was prescribed. The case illustrates, with some new ramifications, the interrelated issues of ethics and evidence surrounding the practice of medicine, a major theme of this chapter. The DES story first became national news at a time that marked the rise of the new field of bioethics. The Food and Drug Administration (FDA) issued a drug alert in 1971 to all physicians concerning the use of DES by pregnant women, as an association had been found between the occurrence of a rare form of cancer of the vagina in young women and their mothers' exposure to DES. The drug had been prescribed widely since the 1940s for a variety of medical conditions, including the prevention of miscarriages. It is estimated that during this period four to six million individuals, mothers and their offspring, were exposed to DES during the mothers' pregnancy. The full dimensions of the medical disaster, the subject of continued controversy, have yet to be firmly established. DES daughters are at risk of developing clear cell adenocarcinoma of the vagina, the risk estimated to be one per one thousand by age twenty-four. Ninety percent have a benign vaginal condition called adenosis, and many have other genital abnormalities. They are at higher risk of pregnancy loss and infertility. 
DES mothers also may be at a higher risk for breast and gynecological cancers, and DES sons may be at an increased risk of genitourinary abnormalities, infertility, and testicular cancer. DES may, as well, have affected fetal brain development, leading to behavioral problems and learning disabilities.



Author(s):  
Roger E. Kasperson
Jeanne X. Kasperson

In this last decade of the twentieth century, hazards have become a part of everyday life as they have never been before. It is not that life, at least in advanced industrial societies, is more dangerous. Indeed, by any measure, the average person is safer and is likely to live longer and with greater leisure and well-being than at earlier times. Nevertheless, the external world seems replete with toxic wastes, building collapses, industrial accidents, groundwater contamination, and airplane crashes and near collisions. The newspapers and television news daily depict specific hazard events, and a parade of newly discovered or newly assessed threats—the "hazard-of-the-week" syndrome—occupies the attention of a host of congressional committees, federal regulatory agencies, and state and local governments. Seemingly any potential threat, however esoteric or remote, has its day in the sun. How is it, then, that certain hazards pass unnoticed or unattended, growing in size until they have taken a serious toll? How is it that asbestos pervaded the American workplace and schools when its respiratory dangers had been known for decades? How is it that after years of worry about nuclear war, the threat of a "nuclear winter" did not become apparent until the 1980s? How is it that the Sahel famine of 1983 to 1984 passed unnoticed in the hazard-filled newspapers of the world press, until we could no longer ignore the specter of millions starving? How is it that America "rediscovered" poverty only with Michael Harrington's vivid account of the "other Americans" and acknowledged the accumulating hazards of chemical pesticides only with Rachel Carson's Silent Spring? How is it that during this century a society with a Delaney amendment and a $10 billion Superfund program has allowed smoking to become the killer of millions of Americans? 
And why is it that the potential long-term ecological catastrophes associated with burning coal command so much less concern than do the hazards of nuclear power? These oversights or neglects, it might be argued, are simply the random hazards or events that elude our alerting and monitoring systems. After all, each society has its "worry beads," particular hazards that we choose to rub and polish assiduously (Kates 1985).



Author(s):  
Kristin Shrader-Frechette

Many Americans, sensitized by the media to the dangers of cigarette smoking, have been appalled to discover on their visits to the Far East that most adult Chinese smoke. The Chinese, on the other hand, consume little alcohol and have expressed bewilderment about the hazardous and excessive drinking in the West. Differences in risk acceptance, however, are not merely cross-cultural. Within a given country, some persons are scuba divers, hang gliders, or motorcyclists, and some are not; there are obvious discrepancies in attitudes toward individual risk. At the level of societal risk—for example, from nuclear power, toxic dumps, and liquefied natural gas facilities—different persons also exhibit analogous disparities in their hazard evaluations. In this chapter I shall argue that two of the major accounts of societal risk acceptance are highly questionable. Both err because of fundamental flaws in their conception of knowledge. This means that to understand the contemporary controversy over societal risk, we need to accomplish a philosophical task, that is, to uncover the epistemologies assumed by various participants in the conflict. Proponents of both positions err, in part, because they are reductionistic and because they view as irrational the judgments of citizens who are risk averse. After showing why both views are built on highly doubtful philosophical presuppositions, I shall argue in favor of a middle position that I call scientific proceduralism. An outgrowth of Karl Popper's views, this account is based on the notion that objectivity in hazard assessment requires that risk judgments be able to withstand criticism by scientists and lay people affected by the risks. Hence the position is sympathetic to many populist attitudes toward involuntary and public hazards. Although scientific proceduralism is not the only reasonable view that one might take regarding risk, I argue that it is at least a rational one. 
And if so, then rational behavior should not be defined purely in terms of the assessments of either the cultural relativists or the naive positivists. Most importantly, risk experts should not "write off" the common person. Because hazard assessment is dominated by these two questionable positions, it is reasonable to ask whether criticizing them threatens the value of quantified risk analysis (QRA). Indeed, many of those allegedly speaking "for the people," as I claim to be doing, are opposed to scientific and analytic methods of assessing environmental dangers.



Author(s):  
Ellen K. Silbergeld

Over the past decade, the concept of risk has become central to environmental policy. Environmental decision making has been recast as reducing risk by assessing and managing it. Risk assessment is increasingly employed in environmental policymaking to set standards and initiate regulatory consideration and, even in epidemiology, to predict the health effects of environmental exposures. As such, it standardizes the methods of evaluation used in dealing with environmental hazards. Nonetheless, risk assessment remains controversial among scientists, and the policy results of risk assessment are generally not accepted by the public. It is not my purpose to examine the origin of these controversies, which I and others have considered elsewhere (see, e.g., EPA 1987), but rather to consider some of the consequences of the recent formulation of risk assessment as specific decisions and authorities distinguishable from other parts of environmental decision making. The focus of this chapter is the relatively new policy of separating certain aspects of risk assessment from risk management, a category that includes most decision-making actions. Proponents of this structural divorce contend that risk assessment is value neutral, a field of objective scientific analysis, while risk management is the arena where these objective data are processed into appropriate social policy. This raises relatively new problems to complicate the already contentious arena of environmental policy. This separation has created problems that interfere with the recognition and resolution of both scientific and transscientific issues in environmental policymaking. Indeed, both science and policy could be better served by recognizing the scientific limits of risk-assessment methods and allowing scientific and policy judgment to interact to resolve unavoidable uncertainties in the decision-making process. 
This chapter will discuss the forces that encouraged separating the performance of assessment and management at the EPA in the 1980s, which I characterize as an uneasy divorce. I shall examine some scientific and policy issues, especially regarding uncertainty, that have been aggravated by this policy of deliberate separation. Interpretations of uncertainty have become central, value-laden issues in decision making, and appeals to uncertainty have often been an excuse for inaction.



Author(s):  
Vincent T. Covello
Peter M. Sandman

The Emergency Planning and Community Right-to-Know Act of 1986 (Title III of the Superfund amendments) and many state and local laws are imposing much more openness on the chemical industry. During the next few years, industry officials, government officials, and representatives from public-interest groups will increasingly be called upon to provide and explain information about chemical risks to the general public and to people living near chemical plants. This chapter presents and discusses guidelines for communicating information about chemical risks effectively, responsibly, and ethically. A basic assumption of the chapter is that discussing risk, when done properly, is always better than withholding information. In the long run, more effective, responsible, and ethical risk communication will be better for communities, industry, government, and society as a whole. The chapter consists of four parts: (1) guidelines for communicating risk information, (2) guidelines for presenting and explaining risk-related numbers and statistics, (3) guidelines for presenting and explaining risk comparisons, and (4) problems frequently encountered in communicating risk information. Most of the material in this chapter deals with health risks, not the risks of accidents. In some cases, accidents raise similar communication issues, especially when most of the expected adverse health effects are long term rather than immediate. However, when considering the risks of accidents, it is generally best to focus on preventive measures, emergency response procedures, containment and remediation procedures, and the extent of the possible damage. There are no easy prescriptions for communicating risk information effectively, responsibly, and ethically (Table 4.1). But those who have studied and participated in debates about risk do generally agree on seven principles that underlie effective risk communication (Covello and Allen 1988). 
These principles apply equally well to both the public and the private sectors (Covello and Allen 1988; Covello, McCallum, and Pavlova 1989; Covello, Sandman, and Slovic 1987; Covello, von Winterfeldt, and Slovic 1989; Hance, Chess, and Sandman 1987; Krimsky and Plough 1988). Although many of these principles may seem obvious, they are continually and consistently violated. Thus a useful way to read them is to try to understand why they are frequently not followed.
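One recurring guideline for presenting risk-related numbers, of the kind the chapter's second part covers, is to translate small probabilities into a common "1 in N" footing so that different risks can be compared at all. The snippet below is my own illustration of that practice, not a formula from the chapter, and the example figure is hypothetical.

```python
# Illustration (hypothetical figure, not from the chapter): small annual
# risks are easier for lay audiences to compare when expressed on a common
# "1 in N" footing rather than as raw probabilities.

def one_in_n(annual_risk):
    """Express a small annual probability as a '1 in N' statement."""
    return f"1 in {round(1 / annual_risk):,}"

print(one_in_n(0.00025))  # "1 in 4,000"
```

Even so, the chapter's guidelines caution that such comparisons carry value judgments of their own: putting a voluntary risk and an imposed risk on the same "1 in N" scale does not make them equally acceptable to an audience.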


