APPLICATION OF STATISTICAL METHODS IN WEATHER PREDICTION

1955, Vol. 41 (11), pp. 806-815
Author(s): T. F. Malone
Author(s): Rafael Alberdi, Elvira Fernandez, Igor Albizu, Victor Valverde, Miren T. Bedialauneta, et al.

2009, Vol. 137 (12), pp. 4355-4368
Author(s): Andrew E. Mercer, Chad M. Shafer, Charles A. Doswell, Lance M. Leslie, Michael B. Richman

Abstract: Tornadoes often strike as isolated events, but many occur as part of a major outbreak of tornadoes. Nontornadic outbreaks of severe convective storms are more common across the United States but pose different threats than those associated with a tornado outbreak. The main goal of this work is to distinguish objectively between significant instances of these outbreak types by applying statistical modeling techniques to numerical weather prediction output initialized with synoptic-scale data. The synoptic-scale structure contains information that can be used to discriminate between the two types of severe weather outbreaks through statistical methods. The Weather Research and Forecasting (WRF) model is initialized with synoptic-scale input data (the NCEP–NCAR reanalysis dataset) on a set of 50 significant tornado outbreaks and 50 nontornadic severe weather outbreaks. Output from the WRF at 18-km grid spacing is used in the objective classification. Individual severe weather parameters forecast by the model near the time of the outbreak are analyzed from simulations initialized 24, 48, and 72 h prior to the outbreak. An initial candidate set of 15 variables expected to be related to severe storms is reduced, through permutation testing, to the 6 or 7 (depending on lead time) that possess the greatest classification capability. These variables serve as inputs to two statistical methods, support vector machines and logistic regression, that classify outbreak type. Each technique is assessed with bootstrap confidence limits on contingency statistics. An additional backward selection on the reduced variable set determines which variable combination provides the optimal contingency statistics. Discrimination capability, as verified by the contingency statistics, is best at 24 h, with modest degradation at 48 h; by 72 h, the contingency statistics decline by up to 15%. Overall, results are encouraging, with probability of detection values often exceeding 0.8 and Heidke skill scores in excess of 0.7 at a 24-h lead time.
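
The verification quoted above rests on 2x2 contingency statistics: with hits a, false alarms b, misses c, and correct negatives d, the probability of detection is POD = a/(a + c) and the Heidke skill score is HSS = 2(ad - bc)/[(a + c)(c + d) + (a + b)(b + d)]. A minimal sketch of this step, with the percentile-bootstrap confidence limits the abstract describes (illustrative names only, not the authors' code):

```python
import numpy as np

def contingency_stats(forecast, observed):
    """Probability of detection and Heidke skill score for binary labels."""
    a = np.sum((forecast == 1) & (observed == 1))  # hits
    b = np.sum((forecast == 1) & (observed == 0))  # false alarms
    c = np.sum((forecast == 0) & (observed == 1))  # misses
    d = np.sum((forecast == 0) & (observed == 0))  # correct negatives
    pod = a / (a + c)
    hss = 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, hss

def bootstrap_ci(forecast, observed, n_boot=10000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence limits for (POD, HSS)."""
    rng = np.random.default_rng(seed)
    n = len(forecast)
    # Resample forecast-observation pairs with replacement, recompute stats.
    stats = np.array([contingency_stats(forecast[idx], observed[idx])
                      for idx in rng.integers(0, n, size=(n_boot, n))])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return lo, hi  # each is a (POD, HSS) pair
```

Resampling the forecast-observation pairs and recomputing the statistics, as here, is one standard way to obtain bootstrap confidence limits on contingency statistics.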


2012, Vol. 14 (4), pp. 1006-1023
Author(s): Getnet Y. Muluye

There are several statistical downscaling methods available for generating local-scale meteorological variables from large-scale model outputs, but no single method or group of methods has proved clearly superior, particularly for downscaling daily precipitation. This paper compares different statistical methods for downscaling daily precipitation from numerical weather prediction model output. Three classes of methods are considered: (i) hybrid; (ii) neural network; and (iii) nearest-neighbor-based approaches. These methods are implemented in the Saguenay watershed in northeastern Canada. Suites of standard diagnostic measures are computed to evaluate and intercompare the performances of the downscaling models. Although results of the downscaling experiment show mixed performances, clear patterns emerge with respect to the reproduction of variability in daily precipitation and skill values. Artificial neural network-logistic regression (ANN-Logst), partial least squares (PLS) regression, and recurrent multilayer perceptron (RMLP) models yield greater skill values, while the Statistical DownScaling Model (SDSM, a conditional resampling method) and K-nearest neighbor (KNN)-based models show the potential to capture the variability in daily precipitation.
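
Of the approaches compared, the nearest-neighbor idea is the simplest to sketch: for each day to be downscaled, find the K historical days whose standardized large-scale predictors are closest and resample the observed precipitation from one of those analogs. The following is a hypothetical illustration of that scheme, not Muluye's implementation; the inverse-rank weights are one common kernel choice:

```python
import numpy as np

def knn_downscale(predictors_new, predictors_hist, precip_hist, k=5, seed=0):
    """Resample daily precipitation from the K closest analog days.

    predictors_new  : (n_days, n_vars) standardized large-scale predictors
    predictors_hist : (n_hist, n_vars) historical predictors, same scaling
    precip_hist     : (n_hist,) observed daily precipitation
    """
    rng = np.random.default_rng(seed)
    out = np.empty(len(predictors_new))
    for i, x in enumerate(predictors_new):
        dist = np.linalg.norm(predictors_hist - x, axis=1)  # Euclidean distance
        nearest = np.argsort(dist)[:k]                      # K closest analog days
        w = 1.0 / np.arange(1, k + 1)                       # closer analogs weighted more
        out[i] = precip_hist[rng.choice(nearest, p=w / w.sum())]
    return out
```

Because the method resamples observed amounts rather than predicting them with a fitted regression, it tends to preserve the marginal variability of daily precipitation, consistent with the behavior reported above for the KNN-based models.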


1978, Vol. 48, pp. 7-29
Author(s): T. E. Lutz

This review paper deals with the use of statistical methods to evaluate systematic and random errors associated with trigonometric parallaxes. First, the systematic errors that arise when trigonometric parallaxes are used to calibrate luminosity systems are discussed. Next, determination of the external errors of parallax measurement is reviewed, and observatory corrections are discussed. Schilt's point is emphasized: because the causes of the systematic differences between observatories are not known, the computed corrections cannot be applied appropriately. Nevertheless, modern parallax work is sufficiently accurate that observatory corrections must be determined if full use is to be made of the potential precision of the data. To this end, it is suggested that an experimental design specified in advance is required; past experience has shown that accidental overlap of observing programs does not suffice to determine meaningful observatory corrections.
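
Lutz's call for a designed overlap between programs can be made concrete with a toy estimator. Suppose each measurement is modeled as the star's true parallax plus a constant offset for the measuring observatory plus noise; relative observatory corrections then follow from least squares, provided the overlapping programs connect the observatories. This sketch is hypothetical and not from the paper:

```python
import numpy as np

def observatory_corrections(star_idx, obs_idx, parallax, n_stars, n_obs):
    """Least-squares observatory offsets from overlapping parallax programs.

    star_idx, obs_idx : integer arrays labeling each measurement's star/observatory
    parallax          : measured parallaxes, one entry per measurement
    """
    n = len(parallax)
    # Design matrix: one column per star's true parallax, one per observatory offset.
    A = np.zeros((n + 1, n_stars + n_obs))
    A[np.arange(n), star_idx] = 1.0
    A[np.arange(n), n_stars + obs_idx] = 1.0
    # Constraint row: offsets sum to zero, removing the degeneracy between
    # shifting all true parallaxes and shifting all offsets.
    A[n, n_stars:] = 1.0
    b = np.append(parallax, 0.0)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[n_stars:]  # estimated offset for each observatory
```

If the observing programs overlap only accidentally, the design matrix is poorly conditioned and the offsets are weakly determined, which is the statistical content of the point made above.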


1973, Vol. 18 (11), pp. 562-562
Author(s): B. J. WINER

1996, Vol. 41 (12), pp. 1224-1224
Author(s): Terri Gullickson

1979, Vol. 24 (6), pp. 536-536
Author(s): JOHN W. COTTON
