Preliminary results of ultrasonic snow depth sensor testing for National Weather Service (NWS) snow measurements in the US

2008 ◽  
Vol 22 (15) ◽  
pp. 2748-2757 ◽  
Author(s):  
Wendy A. Ryan ◽  
Nolan J. Doesken ◽  
Steven R. Fassnacht

2016 ◽  
Vol 44 (1) ◽  
pp. 7
Author(s):  
Pamela Campbell

Every day, millions of people access government websites. Over the thirty days preceding the date this article was written, the National Weather Service and the National Library of Medicine each received more than 50 million visits. According to the US General Services Administration, there are over 1,300 dot-gov domains in use by federal agencies. Given the immense resources involved in building and maintaining these websites, sound decisions about allocating these resources are important. And that means decision makers need good data.


2004 ◽  
Vol 291 (3-4) ◽  
pp. 297-318 ◽  
Author(s):  
Victor Koren ◽  
Seann Reed ◽  
Michael Smith ◽  
Ziya Zhang ◽  
Dong-Jun Seo

2021 ◽  
Vol 9 ◽  
Author(s):  
Dina Abdel-Fattah ◽  
Sarah Trainor ◽  
Eran Hood ◽  
Regine Hock ◽  
Christian Kienholz

Glacial lake outburst floods (GLOFs) significantly affect downstream communities in Alaska. Notably, GLOFs originating from Suicide Basin, adjacent to Mendenhall Glacier, have impacted populated areas in Juneau, Alaska since 2011. On the Kenai Peninsula, records of GLOFs from Snow Glacier date back to 1949, affecting downstream communities and infrastructure along the Kenai and Snow river systems. The US National Weather Service, US Geological Survey, and University of Alaska Southeast (for Suicide Basin) provide informational products to help the public monitor both the glacier-dammed lakes and the ensuing GLOFs. This two-year study (2018–2019) analyzed how communities affected by these GLOFs use those products. Participants represented a variety of sectors and backgrounds, including homeowners, emergency responders, tour operators, and staff at federal and state agencies, capturing a diverse set of perspectives and insights. In addition, feedback and suggestions were collected from interviewees to help the relevant entities improve or modify the informational products and make them more usable. Findings from this study also informed changes to the US National Weather Service monitoring websites for both Suicide Basin and Snow Glacier. This paper's findings on GLOF information use are relevant for other GLOF-affected communities, from both an information-user and an information-developer perspective.


2006 ◽  
Vol 7 (1) ◽  
pp. 23-39
Author(s):  
Benjamin E. Goldsmith

Previous research (e.g., Horiuchi, Goldsmith, and Inoguchi, 2005) has shown some intriguing patterns of effects of several variables on international public opinion about US foreign policy. But results for the theoretically appealing effects of regime type and post-materialist values have been weak or inconsistent. This paper takes a closer look at the relationship between these two variables and international public opinion about US foreign policy. In particular, international reactions to the wars in Afghanistan (2001) and Iraq (2003) are examined using two major multinational surveys. The conclusions of previous research are largely reinforced: neither regime type nor post-materialist values appears to robustly influence global opinion on these events. Rather, central interests, including levels of trade with the US and NATO membership, and key socialized factors, including a Muslim population, experience with terrorism, and the exceptional experiences of two states (Israel and Albania), emerge as the most important factors in the models. There is also a consistent backlash effect of security cooperation with the US outside of NATO. A discussion of these preliminary results points to their theoretical implications and their significance for further investigation into the transnational dynamics of public opinion and foreign policy.


Author(s):  
Evan S. Bentley ◽  
Richard L. Thompson ◽  
Barry R. Bowers ◽  
Justin G. Gibbs ◽  
Steven E. Nelson

Abstract Previous work has considered tornado occurrence with respect to radar data, both WSR-88D and mobile research radars, and a few studies have examined techniques to potentially improve tornado warning performance. To date, though, there has been little work focusing on systematic, large-sample evaluation of National Weather Service (NWS) tornado warnings with respect to radar-observable quantities and the near-storm environment. In this work, three full years (2016–2018) of NWS tornado warnings across the contiguous United States were examined, in conjunction with supporting data in the few minutes preceding warning issuance, or tornado formation in the case of missed events. The investigation herein examines WSR-88D and Storm Prediction Center (SPC) mesoanalysis data associated with these tornado warnings, with comparisons made to the current Warning Decision Training Division (WDTD) guidance.

Combining low-level rotational velocity and the significant tornado parameter (STP), as used in prior work, shows promise as a means to estimate tornado warning performance, as well as relative changes in performance as criteria thresholds vary. For example, low-level rotational velocity peaking in excess of 30 kt (15 m s−1), in a near-storm environment that is not prohibitive for tornadoes (STP > 0), results in an increased probability of detection and reduced false alarms compared to observed NWS tornado warning metrics. Tornado warning false alarms can also be reduced by limiting warnings on weak (<30 kt), broad (>1 n mi) circulations in a poor (STP = 0) environment, by carefully eliminating velocity data artifacts such as sidelobe contamination, and by applying greater scrutiny to human-based tornado reports in otherwise questionable scenarios.
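As a rough illustration of the threshold-style criterion described in this abstract, the Python sketch below combines a peak low-level rotational velocity with STP. This is a hedged sketch only, not the study's operational method or code; the function name, signature, and example values are assumptions made for demonstration.

```python
# Illustrative sketch only (not the study's code): a simple threshold check
# combining peak low-level rotational velocity (Vrot, in knots) with the
# significant tornado parameter (STP), mirroring the "Vrot above ~30 kt and
# STP > 0" criterion discussed in the abstract.

def meets_illustrative_criteria(vrot_peak_kt: float, stp: float,
                                vrot_threshold_kt: float = 30.0) -> bool:
    """Return True when rotation is strong and the near-storm environment is
    not prohibitive for tornadoes under these assumed thresholds."""
    strong_rotation = vrot_peak_kt >= vrot_threshold_kt
    supportive_environment = stp > 0.0
    return strong_rotation and supportive_environment


if __name__ == "__main__":
    print(meets_illustrative_criteria(35.0, 1.2))  # True: 35 kt, STP = 1.2
    print(meets_illustrative_criteria(25.0, 0.0))  # False: weak rotation, STP = 0
```

In practice such a check would only flag candidate circulations; the abstract's point is that varying these thresholds lets one estimate how detection and false-alarm rates would change.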


2018 ◽  
Vol 33 (6) ◽  
pp. 1501-1511 ◽  
Author(s):  
Harold E. Brooks ◽  
James Correia

Abstract Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance take place in 2012 when the default warning duration decreased, and there is an apparent increased emphasis on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decrease in 2012. Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
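For context, here is a minimal Python sketch of two standard verification metrics tracked in this abstract, probability of detection (POD) and false alarm ratio (FAR), computed from a 2x2 warning contingency table. It is not the authors' analysis code, and the counts shown are hypothetical.

```python
# Minimal sketch (not the authors' analysis code) of standard warning
# verification metrics referenced in the abstract, computed from a 2x2
# contingency table of tornado events and warnings.

def probability_of_detection(hits: int, misses: int) -> float:
    """POD = hits / (hits + misses): fraction of tornadoes warned in advance."""
    total = hits + misses
    return hits / total if total else float("nan")


def false_alarm_ratio(hits: int, false_alarms: int) -> float:
    """FAR = false alarms / (hits + false alarms): fraction of warnings with
    no corresponding tornado."""
    total = hits + false_alarms
    return false_alarms / total if total else float("nan")


if __name__ == "__main__":
    # Hypothetical counts, for illustration only.
    hits, misses, false_alarms = 800, 300, 1200
    print(f"POD = {probability_of_detection(hits, misses):.2f}")   # POD = 0.73
    print(f"FAR = {false_alarm_ratio(hits, false_alarms):.2f}")    # FAR = 0.60
```

Tracking these metrics separately, as the abstract notes, is what allows a threshold change (which trades misses for false alarms) to be distinguished from a genuine change in the quality of the warning system.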

