Stepping in the Same River Twice: Replication in Biological Research

Published by Yale University Press (ISBN 9780300209549, 9780300228038)

Author(s):  
Ayelet Shavit

This epilogue provides a practical flowchart for interpreting best practices for replication. Taking the specific actions shown in the flowchart helps researchers bridge, though never completely or permanently close, the gaps inherent in replication. At each branch point, making the “wrong” decision—for example, ignoring (that is, not recording) or conflating (that is, not recording separately) the relevant details—closes the door to replication. Making the “right” decision, however, at best clarifies and quantifies how far we remain from exact replication. Either way, any attempt to replicate a project perfectly is an act of hubris fated to fail.


Author(s):  
Morgan W. Tingley

Documenting long-term changes in biological systems requires empirical studies that span decades to centuries. Such time spans generally preclude planned experiments, but revisiting historical research programs or sites and repeating past methods (resurveying) is increasingly used to infer long-term change. However, the unplanned nature of such resurveys, along with the uncontrolled environment, in which time becomes one of the treatments, results in imperfectly repeated samples. This chapter reviews the problems inherent in resurveys and summarizes methods that account for imprecision and bias, both in the design of resurveys and in the analysis of the resulting data. These methods can also be used to compare repeated measurements taken over short time spans (e.g., days, months, or years), although such replicates often minimize bias because they were planned when the first sample was collected. Without such careful planning, methodological bias increases with the time elapsed between samples.


Author(s):
Emery R. Boose and Barbara S. Lerner

The metadata that describe how scientific data are created and analyzed are typically limited to a general description of data sources, software used, and statistical tests applied, presented in narrative form in the methods section of a paper or a data set description. Recognizing that such narratives are usually inadequate to support reproduction of the original analysis, a growing number of journals now require authors to publish their data as well. However, finer-scale metadata that describe exactly how individual items of data were created and transformed, and by what processes, are rarely provided, even though such metadata have great potential to improve data set reliability. This chapter focuses on the detailed process metadata, called “data provenance,” required to ensure reproducibility of analyses and reliable re-use of data.
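To make the idea of item-level process metadata concrete, here is a minimal sketch, not the chapter's own tooling: each analysis step is wrapped so that the operation name, fingerprints of its inputs and output, and a timestamp are logged, yielding a traceable chain from raw data to derived result. All names here (`traced`, `PROVENANCE_LOG`, the example functions) are hypothetical.

```python
import hashlib
import json
import time

PROVENANCE_LOG = []  # in-memory provenance records; a real tool would persist these

def _fingerprint(obj):
    """Hash a value's repr so each data item can be identified later."""
    return hashlib.sha256(repr(obj).encode("utf-8")).hexdigest()[:12]

def traced(func):
    """Record one provenance entry per call: operation, inputs, output, time."""
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        PROVENANCE_LOG.append({
            "operation": func.__name__,
            "inputs": [_fingerprint(a) for a in args],
            "output": _fingerprint(result),
            "timestamp": time.time(),
        })
        return result
    return wrapper

@traced
def celsius_to_kelvin(temps):
    return [t + 273.15 for t in temps]

@traced
def daily_mean(temps):
    return sum(temps) / len(temps)

raw = [21.3, 19.8, 22.1]
mean_k = daily_mean(celsius_to_kelvin(raw))
print(json.dumps(PROVENANCE_LOG, indent=2))  # chain linking raw data to the result
```

A re-user who receives the log alongside the data can see exactly which operations produced each value, which is the kind of fine-scale documentation the narrative methods section omits.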


Author(s):  
Haim Goren

This chapter explores the importance of replication at a crucial historical turning point, when new scientific methods for measuring physical locations were being developed. Revisiting a location is critical when replicating research in the lab or the field, but identifying a precise location can be surprisingly problematic. Geography includes the study and identification of where objects are located and how they are arranged in space. Whether tracking the spread of emergent diseases or the distribution of genetically distinct populations, we rely on maps and topographic contours. The maps used today are the product of over a millennium of repeated fieldwork, analysis, and interpretation, a history that offers additional insight into the process of replication. The chapter illustrates this process of geographic replication, and its criteria for success, with two examples: the repeated mapping of the city of Jerusalem and the attempts to measure accurately the elevation of the Dead Sea relative to sea level. These examples also reveal the multiple motives behind repeated exploration and study.


Author(s):  
Yemima Ben-Menahem

This chapter examines three stories by Jorge Luis Borges: “Funes: His Memory,” “Averroës's Search,” and “Pierre Menard, Author of the Quixote.” Each of these highlights the intricate nature of concepts and replication in the broad sense. The common theme running through these three stories is the word–world relation and the problems this relation generates. In each story, Borges explores one aspect of the process of conceptualization, an endeavor that has engaged philosophers ever since ancient Greece and is still at the center of contemporary philosophy of language and philosophy of mind. Together, Borges's stories present a complex picture of concepts and processes of conceptualization.


Author(s):
Leonard Leibovici and Mical Paul

This chapter discusses systematic reviews (SRs) and meta-analysis (MA). SRs are reviews of the “best available,” reliable studies focused on a specific research question. Most often, the studies included in SRs are randomized controlled trials (RCTs) that have repeated the same treatments in (usually) different situations. MA is a statistical method applied to the results gleaned from an SR; it yields a single measure of the expected outcome of the repeated trials, along with a statement of the confidence we have in that measure. The chapter argues that RCTs are never similar enough to be considered identical replicates, but they are repeated studies, usually on different populations. Comparable RCTs examine one outcome or similar outcomes (based on a hypothesized cause-and-effect relationship), which is why they can be included in SRs and MAs. When SRs and MAs show convincing results, further repeated RCTs could be avoided, saving valuable resources. However, the evidence to date suggests that this rarely occurs.
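The single pooled measure that an MA produces is commonly computed by inverse-variance weighting of the individual trial results. A minimal fixed-effect sketch follows; the effect estimates and standard errors are illustrative numbers, not data from the chapter.

```python
import math

# Hypothetical per-trial effect estimates (e.g., log odds ratios) and standard errors
effects = [-0.35, -0.10, -0.28, -0.05]
ses     = [ 0.20,  0.15,  0.25,  0.30]

# Fixed-effect (inverse-variance) pooling: each trial is weighted by 1/SE^2
weights = [1.0 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# The 95% confidence interval expresses how sure we are about the pooled measure
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Precise trials (small standard errors) dominate the pooled estimate, which is how an MA combines repeated, non-identical studies into one answer with a quantified level of confidence.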


Author(s):
Tamar Dayan and Bella Galil

This chapter discusses the importance of museum specimens and samples. Natural history collections are archives of biodiversity: snapshots that make it possible to physically retrieve an individual specimen and, through it, track changes in populations and species across repeatable surveys in time and space. Growing international awareness of the consequences for humanity of biodiversity loss and the ensuing erosion of ecosystem services has reinforced the value of natural history collections, museums, and herbaria worldwide. The chapter summarizes the strengths and weaknesses of natural history collections for repeated surveys and other historical studies that require replication. Through a case study of historical surveys and resurveys of the marine biota of the eastern Mediterranean Sea, it highlights the relevance of collections for ecology and conservation. Finally, it discusses prospects for future uses of natural history collections in replicated research.


Author(s):  
Aaron M. Ellison

This chapter first discusses how scientists are skeptics, always trying to disprove their pet hypotheses both by making new observations and by running experiments. It explains the importance of distinguishing among hypotheses, theories, and models, especially when discussing experiments and reproducibility. It then considers the importance of distinguishing an experiment with replicates from an experiment that has been replicated—that is, repeated. Finally, it addresses how similar two repeated runs of a given experiment must be for us to conclude that the results of the initial study have been successfully confirmed. Answering this question requires some understanding of P values and statistical power.
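A small simulation can make the chapter's closing question concrete (all values here are illustrative, not taken from the chapter): when many independent "replications" of the same true-effect experiment are run, the fraction that reach p < 0.05 is the statistical power, so disagreement between repeated runs is expected whenever power is below 1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n, true_diff, sd = 0.05, 20, 0.5, 1.0  # illustrative design parameters

# Simulate many independent replications of the same two-group experiment
pvals = []
for _ in range(10_000):
    control   = rng.normal(0.0,       sd, n)
    treatment = rng.normal(true_diff, sd, n)
    pvals.append(stats.ttest_ind(treatment, control).pvalue)

power = np.mean(np.array(pvals) < alpha)
print(f"fraction of replicate runs with p < {alpha}: {power:.2f}")
# With modest power, a "failed" replication (p >= 0.05) occurs fairly often
# even though the underlying effect is real.
```

Under these settings the power is only about a third, so two honest repetitions of the same experiment will frequently land on opposite sides of the significance threshold.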


Author(s):  
Aaron M. Ellison

This chapter suggests a series of best practices, applicable across disciplinary boundaries, for creating replicable research. First, the observational or experimental design should account for variation within and among sites, whether in the field or in the laboratory. Second, acceptable levels of Type I and Type II statistical errors should be set before the project begins. Third, investigators should carefully and quantitatively review the literature and/or run a pilot study to determine the expected effect size (e.g., the expected relationship between two or more variables of interest, or the difference in outcome between experimental treatments and controls) and its variance. Finally, there should be a clear assessment of the resources available.
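One way to act on these practices before data collection starts is a prospective power analysis. The sketch below assumes a simple two-group design and the statsmodels library; the alpha, power, and effect-size values are illustrative choices, not the chapter's.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Choices made before the project begins (practices two and three):
alpha = 0.05        # acceptable Type I error rate
power = 0.80        # 1 minus the acceptable Type II error rate
effect_size = 0.4   # standardized effect size (Cohen's d) from literature or a pilot

# Solve for the per-group sample size a two-group comparison would need
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, alternative="two-sided"
)
print(f"required sample size per group: {math.ceil(n_per_group)}")
# The final practice, assessing available resources, determines whether a
# sample of this size is actually attainable.
```

Running the numbers this way, rather than after the fact, is what makes the resulting study replicable on its stated terms.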


Author(s):
Kristin Vanderbilt and David Blankman

Science has become a data-intensive enterprise. Data sets are commonly being stored in public data repositories and are thus available for others to use in new, often unexpected ways. Such re-use of data sets can take the form of reproducing the original analysis, analyzing the data in new ways, or combining multiple data sets into new data sets that are analyzed still further. A scientist who re-uses a data set collected by another must be able to assess its trustworthiness. This chapter reviews the types of errors that are found in metadata referring to data collected manually, data collected by instruments (sensors), and data recovered from specimens in museum collections. It also summarizes methods used to screen these types of data for errors. It stresses the importance of ensuring that metadata associated with a data set thoroughly document the error prevention, detection, and correction methods applied to the data set prior to publication.
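As one illustration of the kinds of screening the chapter surveys, here is a hedged sketch of three common checks on a sensor record: missing values, out-of-range values, and sudden spikes. The readings and thresholds are hypothetical.

```python
# Hypothetical hourly air-temperature readings (deg C) from a field sensor
readings = [12.1, 12.3, None, 12.6, 58.9, 12.8, 13.0]

flags = []
for i, value in enumerate(readings):
    if value is None:
        flags.append((i, "missing value"))            # gap in the record
    elif not (-40.0 <= value <= 50.0):
        flags.append((i, "out of plausible range"))   # domain/range check
    elif i > 0 and readings[i - 1] is not None and abs(value - readings[i - 1]) > 10.0:
        flags.append((i, "spike vs. previous reading"))  # rate-of-change check

for index, reason in flags:
    print(f"reading {index}: {reason}")
# Recording which checks ran, and what they flagged, in the data set's metadata
# is what lets a later re-user judge the data's trustworthiness.
```

The substance of the chapter's recommendation is the last comment: the screening itself is routine, but documenting it in the metadata is what makes the data set reliably re-usable.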

