Improving Ensemble Weather Prediction System Initialization: Disentangling the Contributions from Model Systematic Errors and Initial Perturbation Size

2021, Vol. 149 (1), pp. 77–90
Author(s): Thomas M. Hamill, Michael Scheuerer

Abstract Characteristics of the European Centre for Medium-Range Weather Forecasts' (ECMWF's) 0000 UTC diagnosed 2-m temperatures (T2m) from 4D-Var and global ensemble forecast initial conditions were examined during 2018 over the contiguous United States at 1/2° grid spacing. These were compared against independently generated, upscaled high-resolution T2m analyses that were created with a somewhat novel data assimilation methodology, an extension of classical optimal interpolation (OI) to surface data analysis. The analysis used a high-resolution, spatially detailed climatological background and was statistically unbiased. Differences of the ECMWF 4D-Var T2m initial states from the upscaled OI reference were decomposed into a systematic component and a residual component. The systematic component was determined by applying a temporal smoothing to the time series of differences between the ECMWF T2m analyses and the OI analyses. Systematic errors at 0000 UTC were commonly 1 K or more, and larger still in the mountainous western United States, with the ECMWF analyses cooler than the reference. The residual error is regarded as random in character and should be statistically consistent with the spread of the ensemble of initial conditions after inclusion of OI analysis uncertainty. This analysis uncertainty was large in the western United States, complicating interpretation. Some areas were suggestive of an overspread initial ensemble, while others appeared underspread. Assimilation of more observations in the reference OI analysis would reduce analysis uncertainty, facilitating a more conclusive determination of initial-condition ensemble spread characteristics.
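
As a rough illustration of the decomposition described above, the sketch below splits a time series of analysis differences into a smoothed systematic part and a residual, and forms a spread-consistency ratio that accounts for analysis uncertainty. The 31-day running-mean window, the function names, and the variance bookkeeping are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def decompose_errors(diff, window=31):
    """diff: (n_days, ny, nx) array of ECMWF-minus-reference T2m differences.
    Returns (systematic, residual), where the systematic part is a centered
    running mean over `window` days and the residual is what remains.
    (Illustrative smoother; the paper's actual smoothing may differ.)"""
    n = diff.shape[0]
    systematic = np.empty_like(diff)
    half = window // 2
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        systematic[t] = diff[lo:hi].mean(axis=0)
    return systematic, diff - systematic

def spread_consistency_ratio(ens_variance, residual, analysis_error_var):
    """Ratio of (mean initial-ensemble variance + analysis-error variance) to
    the variance of the residual differences, per grid point; values near 1
    indicate spread consistent with the residual error."""
    resid_var = residual.var(axis=0)
    return (ens_variance.mean(axis=0) + analysis_error_var) / np.maximum(resid_var, 1e-9)
```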

2007, Vol. 22 (6), pp. 1304–1318
Author(s): William Y. Y. Cheng, W. James Steenburgh

Abstract Despite improvements in numerical weather prediction, model errors, particularly near the surface, are unavoidable due to imperfect model physics, initial conditions, and boundary conditions. Here, three techniques for improving the accuracy of 2-m temperature, 2-m dewpoint, and 10-m wind forecasts by the Eta/North American Meso (NAM) Model are evaluated: (i) traditional model output statistics (ETAMOS), requiring a relatively long training period; (ii) the Kalman filter (ETAKF), requiring a relatively short initial training period (∼4–5 days); and (iii) 7-day running mean bias removal (ETA7DBR), requiring a 7-day training period. Forecasts based on the ETAKF and ETA7DBR methods were produced for more than 2000 MesoWest observing sites in the western United States. However, the evaluation presented in this study was based on subjective forecaster assessments and objective verification at 145 ETAMOS stations during summer 2004 and winter 2004/05. For the 145-site sample, ETAMOS produces the most accurate cumulative temperature, dewpoint, and wind speed and direction forecasts, followed by ETAKF and ETA7DBR, which have similar accuracy. Selected case studies illustrate that ETAMOS produces superior forecasts when model biases change dramatically, such as during large-scale pattern changes, but that ETAKF and ETA7DBR produce superior forecasts during quiescent cool season patterns when persistent valley and basin cold pools exist. During quiescent warm season patterns, the accuracy of all three methods is similar. Although the improved ETAKF cold pool forecasts are noteworthy, particularly since the Kalman filter can help better define cold pool structure by producing forecasts for locations without long-term records, alternative approaches are needed to improve forecasts during periods when model biases change dramatically.
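
A minimal sketch of the two simpler bias-correction schemes compared above: a 7-day running-mean bias estimate (in the spirit of ETA7DBR) and a scalar Kalman-filter bias estimator (in the spirit of ETAKF). The noise variances r and q are placeholder tuning constants, not values from the study.

```python
import numpy as np

def running_mean_bias(errors, window=7):
    """7-day running-mean bias estimate: average of the last `window`
    forecast-minus-observation errors at a station."""
    errors = np.asarray(errors, dtype=float)
    if errors.size == 0:
        return 0.0
    return errors[-window:].mean()

def kalman_bias_update(bias, p, new_error, r=1.0, q=0.05):
    """One step of a scalar Kalman-filter bias estimator.
    bias: current bias estimate; p: its error variance;
    new_error: latest forecast-minus-observation error;
    r, q: observation-error and process-noise variances (placeholder values)."""
    p = p + q                      # predict: bias persists, uncertainty grows
    k = p / (p + r)                # Kalman gain
    bias = bias + k * (new_error - bias)
    p = (1.0 - k) * p
    return bias, p

# Usage: subtract the estimated bias from the next raw forecast,
# e.g. corrected = raw_forecast - bias
```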


2020
Author(s): Sam Allen, Christopher Ferro, Frank Kwasniok

An ensemble forecast comprises a number of realizations of one or more numerical weather prediction (NWP) models, initialised from a variety of initial conditions. These forecasts exhibit systematic errors and biases that can be corrected by statistical post-processing. Post-processing yields calibrated forecasts by analysing the statistical relationship between historical forecasts and their corresponding observations. This article aims to extend post-processing methodology to incorporate atmospheric circulation. The circulation, or flow, is largely responsible for the weather that we experience, and it is hypothesized here that relationships between the NWP model and the atmosphere depend upon the prevailing flow. Numerous studies have focussed on the tendency of this flow to reduce to a set of recognisable arrangements, known as regimes, which recur and persist at fixed geographical locations. This dynamical phenomenon allows the circulation to be categorized into a small number of regime states. In a highly idealized model of the atmosphere, the Lorenz '96 system, ensemble forecasts are subjected to well-known post-processing techniques conditional on the system's underlying regime. Two different variables, one of the state variables and one related to the energy of the system, are forecast, and considerable improvements in forecast skill upon standard post-processing are seen when the distribution of the predictand varies depending on the regime. Advantages of this approach and its inherent challenges are discussed, along with potential extensions for operational forecasters.
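
A minimal sketch of regime-conditional post-processing in the spirit described above: a nonhomogeneous-Gaussian-regression-style correction is fit separately for each regime. The least-squares/moment-matching fit and the function names are illustrative assumptions; the article's actual post-processing techniques and fitting criteria may differ.

```python
import numpy as np

def fit_regime_ngr(ens_mean, ens_var, obs, regimes, n_regimes):
    """Fit a simple Gaussian regression N(a + b*mean, c + d*var) separately
    for each regime. ens_mean, ens_var, obs, regimes are 1D training arrays
    of equal length; regimes holds integer regime labels 0..n_regimes-1."""
    coefs = []
    for k in range(n_regimes):
        idx = regimes == k
        # Mean model: obs ~ a + b * ensemble mean (ordinary least squares).
        A = np.column_stack([np.ones(idx.sum()), ens_mean[idx]])
        a, b = np.linalg.lstsq(A, obs[idx], rcond=None)[0]
        # Variance model: squared residual ~ c + d * ensemble variance.
        resid2 = (obs[idx] - (a + b * ens_mean[idx])) ** 2
        B = np.column_stack([np.ones(idx.sum()), ens_var[idx]])
        c, d = np.linalg.lstsq(B, resid2, rcond=None)[0]
        coefs.append((a, b, max(c, 1e-6), max(d, 0.0)))
    return coefs

def predict(coefs, regime, ens_mean, ens_var):
    """Predictive mean and variance for a new forecast, given its regime."""
    a, b, c, d = coefs[regime]
    return a + b * ens_mean, c + d * ens_var
```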


2011, Vol. 26 (6), pp. 785–807
Author(s): Jonathan L. Case, Sujay V. Kumar, Jayanthi Srikishen, Gary J. Jedlovec

Abstract It is hypothesized that high-resolution, accurate representations of surface properties such as soil moisture and sea surface temperature are necessary to improve simulations of summertime pulse-type convective precipitation in high-resolution models. This paper presents model verification results of a case study period from June to August 2008 over the southeastern United States using the Weather Research and Forecasting numerical weather prediction model. Experimental simulations initialized with high-resolution land surface fields from the National Aeronautics and Space Administration’s (NASA) Land Information System (LIS) and sea surface temperatures (SSTs) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) are compared to a set of control simulations initialized with interpolated fields from the National Centers for Environmental Prediction’s (NCEP) 12-km North American Mesoscale model. The LIS land surface and MODIS SSTs provide a more detailed surface initialization at a resolution comparable to the 4-km model grid spacing. Soil moisture from the LIS spinup run is shown to respond better to the extreme rainfall of Tropical Storm Fay in August 2008 over the Florida peninsula. The LIS has slightly lower errors and higher anomaly correlations in the top soil layer but exhibits a stronger dry bias in the root zone. The model sensitivity to the alternative surface initial conditions is examined for a sample case, showing that the LIS–MODIS data substantially impact surface and boundary layer properties. The Developmental Testbed Center’s Meteorological Evaluation Tools package is employed to produce verification statistics, including traditional gridded precipitation verification and output statistics from the Method for Object-Based Diagnostic Evaluation (MODE) tool. The LIS–MODIS initialization is found to produce small improvements in the skill scores of 1-h accumulated precipitation during the forecast hours of the peak diurnal convective cycle. Because there is very little union in time and space between the forecast and observed precipitation systems, results from the MODE object verification are examined to relax the stringency of traditional gridpoint precipitation verification. The MODE results indicate that the LIS–MODIS-initialized model runs increase the 10 mm h−1 matched object areas (“hits”) while simultaneously decreasing the unmatched object areas (“misses” plus “false alarms”) during most of the peak convective forecast hours, with statistically significant improvements of up to 5%. Simulated 1-h precipitation objects in the LIS–MODIS runs more closely resemble the observed objects, particularly at higher accumulation thresholds. Despite the small improvements, however, the overall low verification scores indicate that much uncertainty still exists in simulating the processes responsible for airmass-type convective precipitation systems in convection-allowing models.
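
A heavily simplified stand-in for the object-based verification idea: threshold the precipitation fields, label contiguous objects, and count matched versus unmatched object areas by overlap. Real MODE applies convolution, fuzzy attribute matching, and interest scores, so this sketch only conveys the bookkeeping of "hits", "misses", and "false alarms" by area.

```python
import numpy as np
from scipy import ndimage

def object_hit_miss_areas(fcst, obs, thresh=10.0):
    """Simplified object verification for 1-h precipitation fields (mm).
    An observed object counts as a 'hit' (by area) if it overlaps any forecast
    object, otherwise as a 'miss'; forecast objects with no overlap count as
    'false alarms'. This is far cruder than MODE's matching."""
    f_lab, nf = ndimage.label(fcst >= thresh)
    o_lab, no = ndimage.label(obs >= thresh)
    hits = misses = false_alarms = 0
    for i in range(1, no + 1):                    # observed objects
        area = int((o_lab == i).sum())
        if (f_lab[o_lab == i] > 0).any():
            hits += area
        else:
            misses += area
    for j in range(1, nf + 1):                    # forecast objects
        if not (o_lab[f_lab == j] > 0).any():
            false_alarms += int((f_lab == j).sum())
    return hits, misses, false_alarms
```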


2010, Vol. 138 (7), pp. 2930–2952
Author(s): Andrea Alessandri, Andrea Borrelli, Simona Masina, Annalisa Cherchi, Silvio Gualdi, et al.

Abstract The development of the Istituto Nazionale di Geofisica e Vulcanologia (INGV)–Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC) Seasonal Prediction System (SPS) is documented. In this SPS the ocean initial-conditions estimation includes a reduced-order optimal interpolation procedure for the assimilation of temperature and salinity profiles at the global scale. Nine-member ensemble forecasts have been produced for the period 1991–2003 for two starting dates per year in order to assess the impact of the subsurface assimilation in the ocean for initialization. Comparing the results with control simulations (i.e., without assimilation of subsurface profiles during ocean initialization), it is shown that the improved ocean initialization increases the skill in the prediction of tropical Pacific sea surface temperatures of the system for boreal winter forecasts. Considering the forecast of the 1997/98 El Niño, the data assimilation in the ocean initial conditions leads to a considerable improvement in the representation of its onset and development. The results presented in this paper indicate a better prediction of global-scale surface climate anomalies for the forecasts started in November, probably because of the improvement in the tropical Pacific. For boreal winter, significant increases in the capability of the system to discriminate above-normal and below-normal temperature anomalies are shown in both the tropics and extratropics.
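
For reference, the classical optimal-interpolation (BLUE) update that underlies the ocean initialization reads as below; the paper's reduced-order scheme projects this update onto a small set of leading modes, a reduction omitted in this sketch.

```python
import numpy as np

def oi_update(xb, B, H, R, y):
    """Textbook optimal-interpolation / BLUE analysis update:
    xa = xb + B H^T (H B H^T + R)^{-1} (y - H xb).
    xb: background state (n,); B: background-error covariance (n, n);
    H: observation operator (m, n); R: observation-error covariance (m, m);
    y: observations (m,)."""
    innovation = y - H @ xb
    S = H @ B @ H.T + R
    w = np.linalg.solve(S, innovation)   # avoids forming S^{-1} explicitly
    return xb + B @ H.T @ w
```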


2016, Vol. 144 (5), pp. 1909–1921
Author(s): Roman Schefzik

Contemporary weather forecasts are typically based on ensemble prediction systems, which consist of multiple runs of numerical weather prediction models that vary with respect to the initial conditions and/or the parameterization of the atmosphere. Ensemble forecasts are frequently biased and show dispersion errors and thus need to be statistically postprocessed. However, current postprocessing approaches are often univariate and apply to a single weather quantity at a single location and for a single prediction horizon only, thereby failing to account for potentially crucial dependence structures. Nonparametric multivariate postprocessing methods based on empirical copulas, such as ensemble copula coupling or the Schaake shuffle, can address this shortcoming. A specific implementation of the Schaake shuffle, called the SimSchaake approach, is introduced. The SimSchaake method aggregates univariately postprocessed ensemble forecasts using dependence patterns from past observations. Specifically, the observations are taken from historical dates at which the ensemble forecasts resembled the current ensemble prediction with respect to a specific similarity criterion. The SimSchaake ensemble outperforms all reference ensembles in an application to ensemble forecasts for 2-m temperature from the European Centre for Medium-Range Weather Forecasts.
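
A minimal sketch of the standard Schaake shuffle reordering step: univariately postprocessed samples inherit, margin by margin, the rank order of a set of historical observation trajectories. The SimSchaake similarity-based selection of those historical dates is not shown here.

```python
import numpy as np

def schaake_shuffle(post_samples, hist_obs):
    """post_samples, hist_obs: arrays of shape (m, d), with m samples/dates and
    d margins (stations, lead times, variables). In each margin, the sorted
    postprocessed samples are rearranged to follow the rank order of the
    historical observations, restoring a plausible dependence structure."""
    m, d = post_samples.shape
    shuffled = np.empty_like(post_samples)
    for j in range(d):
        sorted_samples = np.sort(post_samples[:, j])
        ranks = np.argsort(np.argsort(hist_obs[:, j]))   # ranks of historical obs
        shuffled[:, j] = sorted_samples[ranks]
    return shuffled
```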


2015, Vol. 143 (10), pp. 4012–4037
Author(s): Colin M. Zarzycki, Christiane Jablonowski

Abstract Tropical cyclone (TC) forecasts at 14-km horizontal resolution (0.125°) are completed using variable-resolution (V-R) grids within the Community Atmosphere Model (CAM). Forecasts are integrated twice daily from 1 August to 31 October for both 2012 and 2013, with a high-resolution nest centered over the North Atlantic and eastern Pacific Ocean basins. Using the CAM version 5 (CAM5) physical parameterization package, regional refinement is shown to significantly increase TC track forecast skill relative to unrefined grids (55 km, 0.5°). For typical TC forecast integration periods (approximately 1 week), V-R forecasts are able to nearly identically reproduce the flow field of a globally uniform high-resolution forecast. Simulated intensity is generally too strong for forecasts beyond 72 h. This intensity bias is robust regardless of whether the forecast is forced with observed or climatological sea surface temperatures and is not significantly mitigated in a suite of sensitivity simulations aimed at investigating the impact of model time step and CAM’s deep convection parameterization. Replacing components of the default physics with Cloud Layers Unified by Binormals (CLUBB) produces a statistically significant improvement in forecast intensity at longer lead times, although significant structural differences in forecasted TCs exist. CAM forecasts the recurvature of Hurricane Sandy into the northeastern United States 60 h earlier than the Global Forecast System (GFS) model using identical initial conditions, demonstrating the sensitivity of TC forecasts to model configuration. Computational costs associated with V-R simulations are dramatically decreased relative to globally uniform high-resolution simulations, demonstrating that variable-resolution techniques are a promising tool for future numerical weather prediction applications.
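
Track skill of the kind reported above is built from great-circle distances between forecast and observed storm positions; a generic haversine implementation is sketched below (illustrative, not the authors' verification code).

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def track_error_km(lat_f, lon_f, lat_o, lon_o):
    """Great-circle (haversine) distance between forecast and observed TC
    positions, in km. Inputs are in degrees; arrays are handled elementwise,
    e.g., one value per lead time."""
    phi1, phi2 = np.radians(lat_f), np.radians(lat_o)
    dphi = phi2 - phi1
    dlam = np.radians(lon_o) - np.radians(lon_f)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))
```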


2016, Vol. 144 (10), pp. 3799–3823
Author(s): Glen S. Romine, Craig S. Schwartz, Ryan D. Torn, Morris L. Weisman

Over the central Great Plains, mid- to upper-tropospheric weather disturbances often modulate severe storm development. These disturbances frequently pass over the Intermountain West region of the United States during the early morning hours preceding severe weather events. This region has fewer in situ observations of the atmospheric state compared with most other areas of the United States, contributing toward greater uncertainty in forecast initial conditions. Assimilation of supplemental observations is hypothesized to reduce initial-condition uncertainty and improve forecasts of high-impact weather. During the spring of 2013, the Mesoscale Predictability Experiment (MPEX) leveraged ensemble-based targeting methods to identify regions where enhanced observations might reduce mesoscale forecast uncertainty. Observations were obtained with dropsondes released from the NSF/NCAR Gulfstream-V aircraft during the early morning hours preceding 15 severe weather events, over areas upstream from anticipated convection. Retrospective data-denial experiments are conducted to evaluate the value of dropsonde observations in improving convection-permitting ensemble forecasts. Results show considerable variation in forecast performance from assimilating dropsonde observations, with a modest but statistically significant improvement overall, akin to prior targeted observation studies that focused on synoptic-scale prediction. The change in forecast skill with dropsonde information was not sensitive to the skill of the control forecast. Events with large positive impact sampled both the disturbance and the adjacent flow, consistent with past synoptic-scale targeting studies, suggesting that such broad sampling is necessary regardless of the horizontal scale of the feature of interest.
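
Ensemble-based targeting is often built on ensemble sensitivity analysis, which regresses a scalar forecast metric onto the initial-condition ensemble perturbations; a generic sketch is given below. The MPEX targeting methodology is more involved, so this is only an illustration of the underlying idea.

```python
import numpy as np

def ensemble_sensitivity(J, X):
    """Ensemble sensitivity analysis: estimate dJ/dx_i as cov(J, x_i)/var(x_i)
    at each initial-condition grid point.
    J: (n_members,) scalar forecast metric (e.g., precipitation over a
       verification region) for each ensemble member.
    X: (n_members, n_points) initial-state values for each member."""
    Ja = J - J.mean()
    Xa = X - X.mean(axis=0)
    cov = Ja @ Xa / (len(J) - 1)
    var = Xa.var(axis=0, ddof=1)
    return cov / np.maximum(var, 1e-12)
```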


2018, Vol. 146 (5), pp. 1601–1617
Author(s): Shan Sun, Rainer Bleck, Stanley G. Benjamin, Benjamin W. Green, Georg A. Grell

Abstract The atmospheric hydrostatic Flow-Following Icosahedral Model (FIM), developed for medium-range weather prediction, provides a unique three-dimensional grid structure: a quasi-uniform icosahedral horizontal grid and an adaptive quasi-Lagrangian vertical coordinate. To extend the FIM framework to subseasonal time scales, an icosahedral-grid rendition of the Hybrid Coordinate Ocean Model (iHYCOM) was developed and coupled to FIM. By sharing a common horizontal mesh, air–sea fluxes between the two models are conserved locally and globally. Both models use similar adaptive hybrid vertical coordinates. Another unique aspect of the coupled model (referred to as FIM–iHYCOM) is the use of the Grell–Freitas scale-aware convective scheme in the atmosphere. A multiyear retrospective study is necessary to demonstrate the potential usefulness of a subseasonal prediction model and to allow for immediate bias correction. In this two-part study, results are shown based on a 16-yr period of hindcasts from FIM–iHYCOM, which has been providing real-time forecasts out to a lead time of 4 weeks for NOAA's Subseasonal Experiment (SubX) since July 2017. Part I provides an overview of FIM–iHYCOM and compares its systematic errors at subseasonal time scales to those of NOAA's operational Climate Forecast System version 2 (CFSv2). Part II uses bias-corrected hindcasts to assess both deterministic and probabilistic subseasonal skill of FIM–iHYCOM. FIM–iHYCOM has smaller biases than CFSv2 for some fields (including precipitation) and comparable biases for other fields (including sea surface temperature). FIM–iHYCOM also has less drift in bias between weeks 1 and 4 than CFSv2. The unique grid structure and physics suite of FIM–iHYCOM are expected to add diversity to multimodel ensemble forecasts at subseasonal time scales in SubX.
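
Bias correction of the kind applied to the hindcasts in Part II typically amounts to removing a lead-time-dependent mean error estimated from the multiyear hindcast set; a minimal sketch, with illustrative function names, is below.

```python
import numpy as np

def lead_dependent_bias(hindcasts, verif):
    """Estimate the lead-time-dependent bias from a multiyear hindcast set.
    hindcasts, verif: arrays of shape (n_starts, n_leads, ...) holding the
    hindcast and the verifying analysis for each start date and lead time.
    Returns the mean hindcast-minus-analysis error as a function of lead."""
    return (hindcasts - verif).mean(axis=0)

def bias_correct(forecast, bias):
    """Subtract the lead-dependent bias from a real-time forecast of the
    same shape as `bias` (n_leads, ...)."""
    return forecast - bias
```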

