Sacred Bovines
Latest Publications

Total Documents: 28 (five years: 0)
H-Index: 0 (five years: 0)

Published by Oxford University Press
ISBN: 9780190490362, 9780197559659

Author(s): Douglas Allchin

DNA fingerprints are not prints of fingers. So why the name? The “fingerprint” label, of course, conveys far more than some pattern of loops, whorls, and arches on the skin. As celebrated in detective lore, fingerprints are emblems of uniqueness. DNA can thus form a “fingerprint” by establishing personal identity. Genes are often characterized as “information.” Thus, DNA “codes for” an organism’s unique traits. In terms of uniqueness and developmental causality, then, genes seem to underlie human identity. Yet with deeper reflection, one might find this commonplace association spurious and misleading: another sacred bovine. Ironically, perhaps, DNA fingerprinting reveals very little about an individual’s DNA, or genome. The technique does not exhaustively profile every form of every gene, as many imagine. Nor does it even sequence the DNA. Instead, it focuses on a rather incidental feature of chromosome structure: differences in noncoding sections of DNA. There, short “nonsense” segments are repeated. The number of repeats, however, varies widely among individuals. Thus, they are convenient markers, or indicators, for identifying a particular organism—or a potential criminal suspect. Each person’s DNA may well be unique, but only a small and physiologically insignificant fragment of it is needed to identify a particular individual. Other biological features function as identifiers, as well. Forensic scientists have long relied on fingerprints and “mug shots,” both introduced into criminology by Charles Darwin’s cousin Francis Galton. They also use hair, skin tone, blood and tissue type, and voice sonograms. Some high-tech security systems—including ones adopted by US immigration—use eye scans. These record the unique pattern of the eye’s iris. (Blood vessel patterns on the retina work as well.) In all these cases, the aim is unambiguous identification. What matters are diagnostically unique properties. So these particular features are effective indicators. At the same time, their functional role is trivial. They are biologically insignificant. They hardly profile someone’s sense of self. Nor do they fully characterize who they are (personally, culturally, or even biologically). Identification and identity are distinct. A unique feature is not necessarily an important feature.
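The marker logic described here can be pictured with a minimal sketch (the locus names and repeat counts below are hypothetical, not taken from the essay): identification rests entirely on comparing repeat counts at a few noncoding loci, with no gene sequences read at all.

```python
# Illustrative sketch only: repeat counts at a few noncoding loci serve as an
# identifying "fingerprint" without revealing anything about an individual's genes.
# Locus names and counts are hypothetical.

# A profile maps a marker locus to the number of short tandem repeats observed.
suspect_profile = {"locus_A": 12, "locus_B": 7, "locus_C": 29, "locus_D": 15}
crime_scene_profile = {"locus_A": 12, "locus_B": 7, "locus_C": 29, "locus_D": 15}
unrelated_profile = {"locus_A": 9, "locus_B": 11, "locus_C": 30, "locus_D": 15}


def profiles_match(a: dict, b: dict) -> bool:
    """Two profiles 'match' when every shared locus shows the same repeat count."""
    shared = a.keys() & b.keys()
    return bool(shared) and all(a[locus] == b[locus] for locus in shared)


if __name__ == "__main__":
    print(profiles_match(suspect_profile, crime_scene_profile))  # True
    print(profiles_match(suspect_profile, unrelated_profile))    # False
```

The sketch only underscores the essay’s point: matching such counts can establish identification while saying nothing about an individual’s traits or identity.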


Author(s): Douglas Allchin

Consider the controversy, not long ago, over prostate cancer screening. A federal task force scaled back recommended testing. But many doctors, citing important cases where screening detected cancer early, disagreed. Whose judgment should we trust? In New England, fish populations are threatened, according to experts. They suggest discontinuing cod fishing. But the fishermen report no decrease in their catches and defend their livelihood. Whose expertise should prevail: the scientists with their sampling and its inherent uncertainties, or the fishermen with their intimate local knowledge? There is a lot of alarm about global warming. But maybe it’s all “hot air.” Many political leaders cite scientific experts who say that the problem is overblown, and just politicized by biased environmental activists. Whose pronouncements should we heed? As illustrated in these cases, interpreting science in policy and personal decision-making poses important challenges. But being able to gather all the relevant evidence, gauge whether it is complete, and evaluate its quality is well beyond the average consumer of science. Inevitably, we all rely on scientific experts. The primary problem is not assessing the evidence, but knowing whom to trust (essay 13). In standard lore, science educators are responsible for nurturing a sense of skepticism. We want to empower students to guard themselves against health scams, pseudoscientific nonsense, and unjustified reassurances about environmental or worker safety. But one may want to challenge this sacred bovine. Skepticism tends to erode belief. Blind doubt itself does not yield reliable knowledge. The aim, rather, as exemplified in the cases above, is to know where to place our trust. The problem of knowing whom to trust is not new. In the late 1600s, Robert Boyle reflected on how to structure a scientific community, the emerging Royal Society of London. Investigators would need to share their findings. But reporting added a new layer between observations and knowledge. While ideally everyone might reproduce everyone else’s experiments, such redundancy would waste time and resources. Scientific knowledge would grow only if you could trust what others said. But what warranted such trust? Reliable testimony became a new problem.


Author(s): Douglas Allchin

It’s altogether too easy to reduce all method in science to a simple algorithm. Hypothesize, deduce (or predict), test, evaluate, conclude. It seems like a handy formula for authority. “The” Scientific Method (expressed in this way) haunts the introductions of textbooks, lab report guidelines, and science fair standards. Yet it is a poor model for learning about method in science. One might instead endorse teaching about the scientist’s toolbox. Science draws on a suite of methods, not just one. The methods also include model building, analogy, pattern recognition, induction, blind search and selection, raw data harvesting, computer simulation, experimental tinkering, chance, and (yes) play, among others. The toolbox concept remedies two major problems in the conventional view. First, it credits the substantial work—scientific work—in developing concepts or hypotheses. Science is creative. Even to pursue the popular strategy of falsification, one must first have imaginative conjectures. We need to foster such creative thinking skills among students. Second, the toolbox view supports many means for finding evidence—some direct, some indirect, some experimental, some observational, some statistical, some based on controls, some on similarity relationships, some on elaborate thought experiments, and so on. Again, students should be encouraged to think about evidence and argument broadly. Consider just a few historical examples. First, note Watson and Crick’s landmark model of DNA. It was just that: a model. They drew on data already available. They also played with cardboard templates of nucleotide bases. Yes, their hypothesis of semiconservative replication was tested by Meselson and Stahl only later. But even that involved enormous experimental creativity (essay 4). Consider, too, Mendel’s discoveries in inheritance (essay 22). Mendel did not test just seven traits of pea plants, cleverly chosen in advance (as the story is often told). Rather, he seems to have followed twenty-two varieties exhibiting fifteen traits, hoping for patterns to emerge. He ultimately abandoned those varieties whose results he called confusing. Nobelist Thomas Hunt Morgan, in Mendel’s wake, did not discover sex linkage through any formal hypothesis about inheritance.


Author(s): Douglas Allchin

Amid the mantra-like rhetoric of the value of “hands-on” learning, the growth of computer “alternatives” to dissection in biology education is a striking anomaly. Instead of touching and experiencing real organisms, students now encounter life as virtual images. Hands-on, perhaps, but on a keyboard instead. Or on a computer mouse, not the living kind. This deep irony might prompt some to hastily redesign such alternatives. Or to find and adopt others. However, one could—far more deeply and profitably—view this as an occasion to reflect on the aims in teaching biology. What do computer programs and models teach? By not sacrificing any animal, one ostensibly expresses respect for life. Nothing seems more important—or moral—for a biology student to learn. Yet using this standard—respect for life—many alternatives to dissection seem deeply flawed. First, most alternatives share a fundamental destructive strategy of taking organisms apart. Each organ is removed and discarded in turn. That might seem to be the very nature of dissection. Yet some contend that “the best dissection is the one that makes the fewest cuts.” Here, the aim is discovery, not destruction. One tries to separate and clarify anatomical structures: trace pathways, find boundaries, encounter connections—quite impossible if things are precut and disappear as preformed units in a single mouse click. The “search and destroy” strategy, once common, is now justly condemned. Such dissections were never well justified. They reflect poor educational goals and fundamentally foster disrespect toward animals. Indeed, dissections may be opportunities to monitor and thus guide student attitudes. Search-and-destroy alternatives to dissection merely echo antiquated approaches. Better no dissections at all than such ill-conceived alternatives. Second, prepackaged images or take-apart models are not much better. They reduce the body to parts. No more than pieces in a mechanical clock. They neatly parcel the body into discrete units. However, a real body is messy. It is held together with all sorts of connective tissue.


Author(s): Douglas Allchin

In the sixteenth and seventeenth centuries, monsters were wonders (essay 1). Anomalous forms—like conjoined twins, hermaphrodites (essay 16), hydrocephalic babies, or the extraordinarily hairy Petrus Gonsalus and his equally hairy children—amazed people. They evoked a spirit of inquiry that helped fuel the emergence of modern science. Today, however, such bodies tend to strike us as freakish or grotesque—possibly even “against nature.” How did our cultural perspective, and with it, our values and emotional responses, change so radically? The shift in cultural views, ironically, paralleled deepening scientific understanding. Exceptions and anomalies can be powerful investigative tools. In this case, human monsters eventually prompted a new science, teratology, which compared normal and abnormal development. The scientific explanations and categories seemed to support value judgments. The history of monsters helps reveal the roots of a common belief (another sacred bovine): that the “normal” course of events reflects nature’s fundamental order. Well construed, monsters can help us rethink the meanings of normality and of the concept of laws of nature. Monsters are fascinating, of course, because they do not fit customary expectations. Such exceptions can be valuable opportunities for interpreting the unexceptional. One can begin to look for the relevant differences that reflect the underlying cause in both cases. It is a classic research strategy, especially in biology. Loss or modification of a structure can highlight its function. So, for example, vitamins were discovered through vitamin deficiency diseases, such as scurvy and beriberi. Likewise, the role of proteins in gene expression emerged from studying heritable enzyme deficiencies, such as alkaptonuria and phenylketonuria. Sickle cell anemia has become a classic example for learning in part because it was important historically in understanding hemoglobin and protein structure as well as the evolutionary consequences of the multiple effects of a single gene. Similarly, diabetes provides insight into the physiology of regulating blood glucose and the hormone insulin. Slips of the tongue are clues to how the brain processes language (missed notes in playing piano, too!).


Author(s): Douglas Allchin

Intersex individuals are coming out of the closet. Witness, for example, the 2003 Pulitzer Prize in Fiction for Jeffrey Eugenides’s Middlesex. The story follows someone with 5-alpha-reductase deficiency, or late-onset virilization. Imagine yourself raised as a girl, discovering at puberty (through cryptic, piecemeal clues) that you are male instead. Or male also? Or male only now? Or “just” newly virile? The condition confounds the conventionally strict dichotomy between male and female, masculine and feminine. It teases a culture preoccupied with gender. What are male and female, biologically? How does nature define the sexes, and sex itself? The questions seem simple enough. Seeking answers, however, may yield unexpected lessons—about the role of biological definitions; about assumptions concerning universals, rarities, and “normality”; and about the power of mistaken conceptions of nature to shape culture. Conceptualizing sex as male and female seems straightforward. In the standard version (familiar even to those unschooled in biology), females have two X chromosomes, while males have an X and a Y. They have different gametes: one motile, one stationary. These differences seem foundational. They seem to explain why male and female organisms have contrasting gonads, contrasting hormone-mediated physiologies, and contrasting secondary sex characteristics. Once-homologous organs follow divergent developmental trajectories. Perhaps even contrasting behaviors express the purported evolutionary imperative of each gamete: the “promiscuous,” uncaring male of cheap sperm, and the cunning, protective female of big-investment eggs. The apparent alignment of the two sexes through all levels of biological organization seems to validate this categorization as scientifically sound. Good biologists know better. First, sex may be determined in many ways. Birds use a “reversed” ZW system, where females have the distinctive chromosome. Many insects have a haplodiploid system, where sex is determined by having a single or double set of all the chromosomes. Crocodiles and turtles develop their sex in response to temperature cues, not genes alone. The spoonworm Bonellia responds instead to whether females are absent or already present in the area.


Author(s): Douglas Allchin

Christiaan Eijkman shared the 1929 Nobel Prize in Physiology or Medicine “for his discovery of the antineuritic vitamin.” His extensive studies on chickens and prison inmates on the island of Java in the 1890s helped establish a white rice diet as a cause of beriberi, and the rice coating as a remedy. Eijkman reported that he had traced a bacterial disease, its toxin, and its antitoxin. Beriberi, however, is a nutrient deficiency. Eijkman was wrong. Ironically, Eijkman even rejected the now-accepted explanation when it was first introduced in 1910. Although he earned a Nobel Prize for his important contribution on the role of diet, Eijkman’s original conclusion about the bacterium was just plain mistaken. Eijkman’s error may seem amusing, puzzling, or even downright disturbing—an exception to conventional expectations. Isn’t the scientific method, properly applied, supposed to protect science from error? And who can better exemplify science than Nobel Prize winners? If not, how can we trust science? And who else is to serve as role models for students and aspiring scientists? Eijkman’s case, however, is not unusual. Nobel Prize–winning scientists have frequently erred. Here I profile a handful of such cases (Figure 11.1). Among them is one striking pair, Peter Mitchell and Paul Boyer, who advocated alternative theories of energetics in the cell. Each used his perspective to understand and correct an error of the other! Ultimately, all these cases offer an occasion to reconsider another sacred bovine—that science is (or should be) free of error, and that the measure of a good scientist is how closely he or she meets that ideal. Consider first Linus Pauling, the master protein chemist. Applying his intimate knowledge of bond angles, he deciphered the alpha-helix structure of proteins in 1950, which earned him the Nobel Prize in Chemistry in 1954. He also reasoned fruitfully about sickle cell hemoglobin, leading to molecular understanding of its altered protein structure. Yet Pauling also believed that megadoses of vitamin C could cure the common cold. Evidence continues to indicate otherwise, although Pauling’s legacy still seems to shape popular beliefs.


Author(s): Douglas Allchin

It is time to rescue Darwinism from the dismal shadow of Social Darwinism. According to this now widely discredited doctrine, human society is governed by “the survival of the fittest.” Competition reigns unchecked. Individualism erodes any effort to cooperate. Ethics and morality become irrelevant. Some contend that social competition is the very engine of human “progress,” and hence any effort to regulate it cannot be justified. Others accept competition as inevitable, even though they do not like it or endorse it. They seem persuaded that we cannot escape its presumed reality. Natural selection, many reason, is … well, “natural.” Natural, hence inevitable: what recourse could humans possibly have against the laws of nature? Thus even people from divergent backgrounds seem to agree that this view of society unavoidably follows from evolution. Creationists, not surprisingly, parade it as reason to reject Darwinism outright. By contrast, as resolute an evolutionist as Thomas Henry Huxley, “Darwin’s bulldog,” invoked similar implications even while he urged his audience to transcend them morally. Yet the core assumptions of so-called Social Darwinism are unwarranted. Why does it continue to haunt us? The time has come to dislodge this entrenched belief, this sacred bovine: that nature somehow dictates a fundamentally individualistic and competitive society. Unraveling the flawed argument behind Social Darwinism also yields a more general and much more important lesson about the nature of science. The historical argument seems to enlist science to portray certain cultural perspectives as “facts” of nature. Naturalizing cultural ideas in this way is all too easy. Cultural contexts seem to remain invisible to those within the culture itself, sometimes even to scientists. The case of Social Darwinism—not Darwinism at all—illustrates vividly how appeals to science can go awry. We might thus learn how to notice, and to remedy or guard against, such errors in other cases. Ironically, the basic doctrine now labeled “Social Darwinism” did not originate with Darwin at all. Darwin was no Social Darwinist. Quite the contrary: Darwin opened the way for understanding how a moral society can evolve (essay 6).


Author(s): Douglas Allchin

Four-leaf clovers are traditional emblems of good luck. Two-headed sheep, five-legged frogs, or persons with six-fingered hands, by contrast, are more likely to be considered repugnant monsters, or “freaks of nature.” Such alienation was not always the case. In sixteenth-century Europe, such “monsters,” like the four-leaf clover today, mostly elicited wonder and respect. People were fascinated with natural phenomena just beyond the edge of the familiar. Indeed, that emotional response—at that juncture in history—helped foster the emergence of modern science. Wonder fostered investigation and, with it, deeper understanding of nature. One might thus well question a widespread but generally unchallenged belief about biology—what one might call a sacred bovine: that emotions can only contaminate science with subjective values. Indeed, delving into how “monsters” once evoked wonder might open a deeper appreciation of how science works today. Consider the case of Petrus Gonsalus, born in 1556 (Figure 1.1). As one might guess from his portrait, Gonsalus (also known as Gonzales or Gonsalvus) became renowned for his exceptional hairiness. He was a “monster”: someone—like dwarves, giants, or conjoined twins—with a body form conspicuously outside the ordinary. But, as his courtly robe might equally indicate, Gonsalus was also special. Gonsalus was born on Tenerife, a small island off the west coast of Africa. But he found a home in the court of King Henry II. Once there, he became educated. “Like a second mother France nourished me from boyhood to manhood,” he recollected, “and taught me to give up my wild manners, and the liberal arts, and to speak Latin.” Gonsalus’s journey from the periphery of civilization to a center of power occurred because he could evoke a sense of wonder. Eventually, he moved to other courts across Europe. Wonder was widely esteemed. For us, Gonsalus may be emblematic of an era when wonder flourished. In earlier centuries monsters were typically viewed as divine portents, or prodigies. Not that they were miracles. The course of nature seemed wide enough to include them.


Author(s): Douglas Allchin

GMOs. Genetically modified organisms. They conjure the specter of “Frankenfoods.” Monstrous creations reflecting human hubris. Violations of nature. And their very unnaturalness alone seems reason to reject the whole technology. But one may challenge this sacred bovine: the common image that GMOs cross some new threshold, dramatically changing how humans relate to nature. Or even that such a view can properly inform how we assess the value or risks of GMOs. Rather, biologically, GMOs are modest variants. As I will elaborate, “conventional” corn is probably more deeply shaped by human intervention than any addition of, say, a single Bt gene encoding an insecticidal protein. Many crops promoted as “natural” alternatives are themselves dramatically modified genetically, like the cats and dogs we enjoy as pets. And this perspective—the context of GMOs—should inform views on policy. Without resolving the question of ultimate risks, we should at least recognize and dismiss as irrelevant the claim that GMOs are “unnatural.” While criticisms of GMOs vary, one recurrent theme is the assertion—or the implicit assumption—that they are inherently unnatural. For example, one high school student commented in a class discussion on genetically modified salmon, “Even though it definitely has many economic benefits, I think that shaping the way in which other organisms grow and live is not something that we as humans should be taking into our own hands.” As rendered recently for young readers, a cartoon princess of the Guardian Princess Alliance scolds a grower of GMOs: “These fruits and vegetables are not natural.” Many seem to believe that for humans to alter something living is to thereby taint it. Organisms should remain “pure.” Nature seems to exhibit its own self-justified purpose, not to be disrupted. What does this mean for all the other ways that humans modify organisms from their “natural” state? For example, we adorn our skin with tattoos and pierce various body parts. In certain cultures, at certain times, we have bound feet and elongated skulls.

