The “mass” replications of multiple studies, some employing dozens of investigators distributed across myriad sites, are unique to the reproducibility movement. The most impressive of these initiatives was undertaken by the Open Science Collaboration, directed by Brian Nosek, which recruited 270 investigators to replicate 100 psychological experiments via a carefully structured, prespecified protocol that avoided questionable research practices. Just before this Herculean effort, two huge biotech firms (Amgen and Bayer HealthCare) conducted 53 and 67 preclinical replications, respectively, of promising published studies to ascertain which results were worth pursuing for commercial applications. Amazingly, in less than a 10-year period, a number of other diverse multistudy replications were also conducted, involving hundreds of effects. Among these were the three “Many Labs” multistudy replications based on the Open Science Collaboration model (but also designed to ascertain whether potential confounders of the approach itself existed, such as differences in participant types, settings, and timing), replications of social science studies published in Science and Nature, experimental economics studies, and even self-reported replications ascertained from a survey. Somewhat surprisingly, the overall successful replication rate for this diverse collection of 811 studies was 46%, mirroring the modeling results discussed in Chapter 3 and supporting John Ioannidis’s pejorative and often quoted conclusion that most scientific results are incorrect.