The availability of large eye-movement corpora has become increasingly important in recent years. In scene viewing, scan-path analyses of time-ordered fixations allow, for example, for investigating individual differences in spatial correlations between fixation locations, or for predicting individual viewing behavior in the context of computational models. However, time-dependent analyses require many fixations per scene, and only a few large eye-movement corpora are publicly available. This manuscript presents a new corpus with eye-movement data from 200 participants. Viewers memorized or searched either color or grayscale scenes while high or low spatial frequencies were filtered in central or peripheral vision. Our database provides the scenes from the experiment with corresponding object annotations, preprocessed eye-movement data, and heatmaps and fixation clusters derived from empirical fixation locations. Beyond time-dependent analyses, the corpus data allow for investigating questions that have received little attention in scene-viewing research so far: (i) eye-movement behavior under different task instructions, (ii) the importance of color and spatial frequencies when performing these tasks, and (iii) the individual roles and the interaction of central and peripheral vision during scene viewing. Furthermore, the corpus allows for validating computational models of attention and eye-movement control, and, finally, for analyses at the object or cluster level.