High-reliability organizing and communication during naturalistic decision making: U.S. National Weather Service (NWS) forecasting teams’ use of ‘floating’

Author(s):
Arden C. Roeder
Ryan S. Bisel
William T. Howe


Author(s):
Azadeh Assadi
Peter C. Laussen
Patricia Trbovich

Background and aims: Children with congenital heart disease (CHD) are at risk of deterioration in the face of common childhood illnesses, and their resuscitation and acute management are often best achieved with the guidance of CHD experts. Access to such expertise may be limited outside specialty heart centers, and the fragility of these patients is cause for discomfort among many emergency medicine physicians. An understanding of the differences in the macrocognition of these clinicians could shed light on some of the causes of discomfort and facilitate the development of a sociotechnological solution to this problem. Methods: Cardiac intensivists (CHD experts) and pediatric emergency medicine physicians (non-CHD experts) in a major academic cardiac center were interviewed using the critical decision method. Interview transcripts were coded deductively based on Klein’s macrocognitive framework and inductively to allow for new or modified characterization of dimensions. Results: While both CHD experts and non-CHD experts relied on the macrocognitive functions of sensemaking, naturalistic decision making, and detecting problems, the specific data and mental models used to understand the patients and course of therapy differed between CHD experts and non-CHD experts. Conclusion: Characterization of differences between the macrocognitive processes of CHD experts and non-CHD experts can inform the development of sociotechnological solutions to augment decision making pertaining to the acute management of pediatric CHD patients.


2015, Vol 2 (2), pp. 152-168
Author(s):
Stephen Harvey
John William Baird Lyle
Bob Muir

A defining element of coaching expertise is the coach’s ability to make decisions. Recent literature has explored the potential of Naturalistic Decision Making (NDM) as a useful framework for research into coaches’ in situ decision-making behaviour. The purpose of this paper was to investigate whether the NDM paradigm offered a valid mechanism for exploring three high-performance coaches’ decision-making behaviour in competition and training settings. The approach comprised three phases: 1) existing literature was synthesised to develop a conceptual framework of decision-making cues to guide and shape the exploration of empirical data; 2) data were generated from stimulated recall procedures to populate the framework; 3) existing theory was combined with empirical evidence to generate a set of concepts that offer explanations for the coaches’ decision-making behaviour. Findings revealed that NDM offered a suitable framework to apply to coaches’ decision-making behaviour. This behaviour was guided by the emergence of a slow, interactive script that evolved through a process of pattern recognition and/or problem framing. This revealed ‘key attractors’ that formed the initial catalyst and, through the breaching of a ‘threshold’, the potential necessity for the coach to make a decision. These were the critical factors in coaches’ interventions.


Author(s):
Evan S. Bentley
Richard L. Thompson
Barry R. Bowers
Justin G. Gibbs
Steven E. Nelson

Abstract: Previous work has considered tornado occurrence with respect to radar data, both WSR-88D and mobile research radars, and a few studies have examined techniques to potentially improve tornado warning performance. To date, though, there has been little work focusing on systematic, large-sample evaluation of National Weather Service (NWS) tornado warnings with respect to radar-observable quantities and the near-storm environment. In this work, three full years (2016–2018) of NWS tornado warnings across the contiguous United States were examined, in conjunction with supporting data in the few minutes preceding warning issuance, or tornado formation in the case of missed events. The investigation herein examines WSR-88D and Storm Prediction Center (SPC) mesoanalysis data associated with these tornado warnings, with comparisons made to the current Warning Decision Training Division (WDTD) guidance.

Combining low-level rotational velocity and the significant tornado parameter (STP), as used in prior work, shows promise as a means to estimate tornado warning performance, as well as relative changes in performance as criteria thresholds vary. For example, low-level rotational velocity peaking in excess of 30 kt (15 m s−1), in a near-storm environment which is not prohibitive for tornadoes (STP > 0), results in an increased probability of detection and reduced false alarms compared to observed NWS tornado warning metrics. Tornado warning false alarms can also be reduced through limiting warnings with weak (<30 kt), broad (>1 nm) circulations in a poor (STP = 0) environment, careful elimination of velocity data artifacts like sidelobe contamination, and through greater scrutiny of human-based tornado reports in otherwise questionable scenarios.
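To make the threshold logic above concrete, the following minimal Python sketch (not the authors’ code) scores a set of hypothetical storm events against the criteria quoted in the abstract: peak low-level rotational velocity of at least 30 kt in a near-storm environment with STP > 0. The StormEvent record, its field names, and the sample values are illustrative assumptions; only the thresholds and the standard probability-of-detection and false-alarm-ratio definitions are taken from the text.

```python
# Minimal sketch: estimating warning skill from threshold criteria on
# low-level rotational velocity (Vrot) and the significant tornado
# parameter (STP). Event records and field names are hypothetical;
# the 30-kt / STP > 0 thresholds come from the abstract above.
from dataclasses import dataclass

@dataclass
class StormEvent:
    vrot_kt: float          # peak low-level rotational velocity (kt)
    stp: float              # SPC mesoanalysis significant tornado parameter
    tornado_observed: bool  # did a tornado actually occur?

def would_warn(e: StormEvent, vrot_min: float = 30.0, stp_min: float = 0.0) -> bool:
    """Hypothetical warning criterion: rotation at or above threshold in a
    near-storm environment not prohibitive for tornadoes (STP > 0)."""
    return e.vrot_kt >= vrot_min and e.stp > stp_min

def skill(events: list[StormEvent]) -> tuple[float, float]:
    """Return (probability of detection, false alarm ratio)."""
    hits = sum(1 for e in events if would_warn(e) and e.tornado_observed)
    misses = sum(1 for e in events if not would_warn(e) and e.tornado_observed)
    false_alarms = sum(1 for e in events if would_warn(e) and not e.tornado_observed)
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

# Example with made-up events: varying vrot_min / stp_min shows how the
# POD/FAR trade-off shifts as the criteria thresholds change.
sample = [StormEvent(45, 1.5, True), StormEvent(25, 0.5, True),
          StormEvent(35, 0.0, False), StormEvent(50, 2.0, False)]
print(skill(sample))
```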


2018, Vol 33 (6), pp. 1501-1511
Author(s):
Harold E. Brooks
James Correia

Abstract: Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance take place in 2012 when the default warning duration decreased, and there is an apparent increased emphasis on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decrease in 2012. Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
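As a rough illustration of the signal detection framing described above (and not the authors’ actual computation), the sketch below recovers a quality index (d′) and a decision criterion from a hit rate and a false-alarm rate under the standard Gaussian model. Note that the abstract reports a false alarm ratio; converting that to a false-alarm rate would require an assumed count of non-tornadic warning opportunities, which is glossed over here.

```python
# Minimal sketch of the signal detection framing: the quality of the
# warning system (d') is separated from the threshold for issuing
# warnings (criterion c). Inputs are a hit rate and a false-alarm RATE;
# the abstract's false alarm RATIO is a different quantity, so the
# conversion between them is an assumption left out of this sketch.
from statistics import NormalDist

_z = NormalDist().inv_cdf  # standard normal quantile function

def sdt_parameters(hit_rate: float, false_alarm_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion) for one period of warning performance."""
    d_prime = _z(hit_rate) - _z(false_alarm_rate)              # system quality
    criterion = -0.5 * (_z(hit_rate) + _z(false_alarm_rate))   # willingness to warn
    return d_prime, criterion

# Illustration with made-up numbers: lowering the threshold raises both
# the hit rate and the false-alarm rate, moving the criterion while the
# underlying quality (d') stays roughly fixed -- the trade-off between
# false alarms and missed detections that the abstract describes.
for h, f in [(0.60, 0.02), (0.75, 0.06)]:
    d, c = sdt_parameters(h, f)
    print(f"hit={h:.2f} fa={f:.2f}  d'={d:.2f}  c={c:.2f}")
```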

