In many complex, data-rich domains, safety depends heavily on the timely and reliable detection and identification of alarms. However, due to the coupling and complexity of systems in these environments, large numbers of alarms can occur within a short period of time, a problem known as an alarm flood (Perrow, 2011). Alarm floods have been defined as more than 10 alarms in a 10-minute period (EEMUA, 1999); however, this rate is often exceeded, which can lead to operators missing or misinterpreting critical alarms and, as a result, to system failures and accidents.

Various types of masking effects may account for observed failures to detect and identify alarms during an alarm flood. Masking occurs when one stimulus is obscured by the presence of another stimulus that appears either simultaneously or in close temporal proximity (Enns & Di Lollo, 2000). One example of masking is the attentional blink, in which the second of two stimuli is missed when it is presented in close temporal proximity to a preceding stimulus (Raymond, Shapiro, & Arnell, 1992). To date, attentional blinks have been studied almost exclusively with two target stimuli of very short duration (less than 100 ms) and in simple single-task conditions. These experiments suggest that the phenomenon occurs when the two stimuli are separated by 200-600 ms. However, there is limited empirical evidence (e.g., Ferris et al., 2006) that, in more complex and demanding task environments, detection performance suffers even at longer stimulus onset asynchronies (SOAs). To better predict and prevent attentional blinks in alarm floods, the current study aimed to establish the SOA range that results in missed signals in the context of multiple visual and auditory alarms in a multi-task environment.

The participants in this study were 26 students from the University of Michigan (aged 20-30 years). The experiment was conducted using a simulation of an automated package delivery system.
Participants were required to monitor the performance of eight delivery drones and perform two tasks: (1) search for and confirm that a delivery pad was present before agreeing to package delivery; and (2) detect and respond to visual and auditory alarms associated with the various drones. Visual alarms took the form of a number presented in the center of the screen that identified the affected drone; auditory alarms used synthesized speech to present the drone number. Participants had to acknowledge each alarm as quickly as possible by pressing a button adjacent to the affected drone's window. Both visual and auditory alarms lasted 200 ms. Crossmodal matching was performed to ensure that the perceived intensity of signals in the two modalities was the same for each individual (see Pitts, Riggs, & Sarter, 2016). Alarms appeared either by themselves (single alarms) or in close temporal proximity to another alarm (alarm pairs). Each experimental scenario was 30 minutes long and included 40 single alarms and 40 alarm pairs. In addition, each scenario included a 3-minute alarm flood consisting of 30 single alarms and 30 alarm pairs.

The experiment employed a 5 × 4 full factorial design. The two independent variables, both varied within subjects, were SOA (200, 600, 800, 1000, or 1200 ms) and modality pairing (all four combinations of visual and auditory alarms). The dependent measures were detection rate, identification accuracy, and response time.

The detection rate for visual alarms was lower when the alarm was the second in an alarm pair than for single visual alarms (89.9% vs. 93.9%; χ²(2, N = 22) = 6.874, p < .01). This effect was independent of the modality of the first alarm and was strongest at an SOA of 1000 ms. No difference was observed for the detection of single versus paired auditory alarms. Identification accuracy for visual alarms was also significantly lower when the alarm appeared second in a pair than for single visual alarms (86.0% vs.
94.0%; χ²(2, N = 22) = 6.007, p = .05). This effect was likewise independent of the modality of the first alarm but was found only at SOAs of 600, 1000, and 1200 ms. No significant difference in identification accuracy was found for single versus paired auditory alarms. Finally, response times were significantly faster during alarm floods than for single alarms or alarm pairs (2160 ms vs. 2318 ms; F(1, 21) = 6.284, p = .001). Response times to visual and auditory alarms did not differ significantly during alarm floods.

In summary, in this experiment, alarm detection and identification suffered when a visual (but not an auditory) alarm was preceded by another visual or auditory alarm. This performance decrement was observed at longer SOAs than reported in earlier single-task studies. This finding may be explained, in part, by the competing visual (but not auditory) demands imposed by the required response to the alarms. Performance during alarm floods was comparable to, and in terms of response times even better than, performance with single alarms and alarm pairs. This finding may be explained by the Yerkes-Dodson law (Yerkes & Dodson, 1908), which holds that performance improves with physiological or mental arousal up to a point and then declines as arousal increases further. Another possible explanation is that participants invested more effort during alarm floods.

The findings from this study add to the knowledge base in attention and alarm design. They highlight the importance of examining attentional phenomena in applied settings in order to predict and counter the performance breakdowns that may be experienced by operators engaged in multitasking in complex, data-rich environments.