We examine the extent to which perceptual decision-making processes differ as a function of the point in the academic term at which participants enroll in an experiment, and of participant group: undergraduates who complete the experiment for course credit, paid participants who complete the experiment in the lab, or paid participants recruited via Amazon Mechanical Turk who complete the experiment online. In Study 1, we surveyed cognitive psychologists about their expectations regarding the quality of data obtained from these different participant groups. Cognitive psychologists expected performance and response caution to be lowest among undergraduate participants who enroll at the end of the academic term, and highest among paid in-lab participants. Studies 2 and 3 tested these expectations using two common perceptual decision-making paradigms. Overall, we found little evidence of systematic time-of-term effects among undergraduate participants, and the different participant groups responded in similar ways to standard manipulations of stimulus quality and speed/accuracy emphasis. The effect of speed/accuracy emphasis on response caution was strongest among participants recruited via Mechanical Turk; this group also showed poorer discrimination performance than the other groups in a motion discrimination task, but not in a brightness discrimination task. We conclude that online crowdsourcing platforms can provide high-quality perceptual decision-making data, and we offer recommendations for maximizing data quality when recruiting through these platforms.