Sensitivity of Forecast Uncertainty to Different Microphysics Schemes within a Convection-Allowing Ensemble during SoWMEX-IOP8

Author(s):  
Chin-Hung Chen ◽  
Kao-Shen Chung ◽  
Shu-Chih Yang ◽  
Li-Hsin Chen ◽  
Pay-Liam Lin ◽  
...  

Abstract A mesoscale convective system that occurred over southwestern Taiwan on 15 June 2008 is simulated using convection-allowing ensemble forecasts to investigate the forecast uncertainty associated with four microphysics schemes—the Goddard Cumulus Ensemble (GCE), Morrison (MOR), WRF single-moment 6-class (WSM6), and WRF double-moment 6-class (WDM6) schemes. First, the essential features of the convective structure, hydrometeor distribution, and microphysical tendencies under the different microphysics schemes are presented through deterministic forecasts. Second, ensemble forecasts sharing the same initial conditions are used to estimate the forecast uncertainty produced by ensembles in which the microphysics scheme is fixed. GCE produces the largest spread in most state variables because of its most efficient phase conversion between water species; by contrast, MOR yields the least spread. WSM6 and WDM6 have similar vertical spread structures owing to their similar ice-phase formulations. However, WDM6 produces more ensemble spread than WSM6 below the melting layer, a result of its double-moment treatment of warm-rain processes. Spectral analysis of the root-mean difference total energy (RMDTE) demonstrates upscale error growth in the simulations with all four schemes. The RMDTE results reveal that the GCE and WDM6 schemes are more sensitive to initial-condition uncertainty for this event, whereas the MOR and WSM6 schemes are comparatively insensitive. Overall, the diabatic heating and cooling processes connect convective-scale cloud microphysics to the larger-scale dynamical and thermodynamical fields, and they strongly shape the forecast error signatures in this multiscale weather system.
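A minimal sketch of the RMDTE diagnostic used above, under common assumptions from the predictability literature: difference total energy between two members is DTE = ½(u′² + v′² + (cp/Tr)T′²), RMDTE is the root of its mean over all member pairs, and upscale error growth is examined via a power spectrum. The reference constants and the 1D layout are illustrative choices, not taken from the paper.

```python
import numpy as np

CP = 1004.9   # specific heat of dry air at constant pressure (J kg^-1 K^-1)
TR = 270.0    # reference temperature (K); a conventional choice

def rmdte(u, v, t):
    """Root-mean difference total energy over all ensemble-member pairs.

    u, v, t: arrays of shape (n_members, nx) holding winds (m/s) and
    temperature (K) along one model level for each member.
    """
    n = u.shape[0]
    dte_sum = np.zeros(u.shape[1])
    npairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            du, dv, dt = u[i] - u[j], v[i] - v[j], t[i] - t[j]
            dte_sum += 0.5 * (du**2 + dv**2 + (CP / TR) * dt**2)
            npairs += 1
    return np.sqrt(dte_sum / npairs)

def dte_spectrum(field):
    """1D power spectrum of a difference-energy field; growth of power at
    small wavenumbers over forecast time indicates upscale error growth."""
    coeffs = np.fft.rfft(field - field.mean())
    return np.abs(coeffs) ** 2
```

Comparing spectra of RMDTE fields at successive forecast times then shows at which scales the inter-member differences grow.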

2019 ◽  
Vol 76 (9) ◽  
pp. 2653-2672 ◽  
Author(s):  
John R. Lawson

Abstract Thunderstorms are difficult to predict because of their small length scales and rapid destruction of predictability. A cell’s predictability is constrained by properties of the flow in which it is embedded (e.g., vertical wind shear) and the associated instabilities (e.g., convective available potential energy). To assess how the predictability of thunderstorms changes with environment, two groups of 780 idealized simulations (each using a different microphysics scheme) were performed over a range of buoyancy and shear profiles. Results were not sensitive to the scheme chosen. The gradient in diagnostics (updraft speed, storm speed, etc.) across shear–buoyancy phase space represents sensitivity to small changes in initial conditions: a proxy for inherent predictability. Storm evolution splits into two groups, separated by a U-shaped bifurcation in phase space, comprising 1) cells that continue strengthening after 1 h and 2) those that weaken. Ensemble forecasts in regimes near this bifurcation are hence expected to have larger uncertainty, making adequate dispersion and reliability essential. Predictability loss takes two forms: (i) chaotic error growth from the largest and most powerful storms, and (ii) tipping points at the U-shaped perimeter of the stronger storms. The former is associated with traditional forecast error between corresponding grid points and here behaves counterintuitively; the latter is associated with object-based error and matches the mental filtering that human forecasters perform at the convective scale.
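The phase-space gradient used as a predictability proxy above can be sketched numerically: given a diagnostic evaluated on a grid of (shear, CAPE) environments, the local gradient magnitude flags regimes where a small environmental change flips the storm's evolution. The function and its inputs are illustrative, not the paper's code.

```python
import numpy as np

def sensitivity_map(diag, shear, cape):
    """Gradient magnitude of a storm diagnostic across shear-buoyancy
    phase space.

    diag:  2D array of a diagnostic (e.g. max updraft speed, m/s) on a
           grid with shear varying along axis 0 and CAPE along axis 1.
    shear: 1D coordinate array (m/s); cape: 1D coordinate array (J/kg).
    Large values mark low-predictability (bifurcation) regions.
    """
    ddiag_dshear, ddiag_dcape = np.gradient(diag, shear, cape)
    return np.hypot(ddiag_dshear, ddiag_dcape)
```

Near the U-shaped bifurcation described above, this map would show a sharp ridge separating strengthening from weakening storms.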


2017 ◽  
Vol 32 (1) ◽  
pp. 149-164 ◽  
Author(s):  
Carlee F. Loeser ◽  
Michael A. Herrera ◽  
Istvan Szunyogh

Abstract This study investigates the efficiency of the world’s major operational global ensemble forecast systems in capturing the spatiotemporal evolution of forecast uncertainty. Using data from 2015, it updates the results of an earlier study based on data from 2012. It also tests, for the first time on operational ensemble data, two quantitative relationships to aid the interpretation of raw ensemble forecasts. One relationship provides a flow-dependent prediction of the reliability of the ensemble in capturing the uncertain forecast features, while the other predicts the 95th-percentile magnitude of the forecast error. It is found that, except for the Met Office (UKMO) system, the main characteristics of the ensemble forecast systems changed little between 2012 and 2015. The UKMO ensemble improved in predicting the overall magnitude of the uncertainty, but its ability to predict the dominant uncertain forecast features was degraded. A serious limitation common to all of the ensemble systems remains: they have major difficulties predicting the large-scale atmospheric flow in the long forecast range (beyond 10 days). These difficulties stem from the inability of the ensemble members to maintain large-scale waves in the forecasts, a stumbling block to extending the skill of numerical weather forecasts into the subseasonal range. The two tested relationships were found to provide highly accurate predictions of the flow-dependent reliability of the ensemble predictions and of the 95th-percentile magnitude of the forecast error for the operational ensemble forecast systems.


2010 ◽  
Vol 67 (6) ◽  
pp. 1759-1778 ◽  
Author(s):  
Jason A. Sippel ◽  
Fuqing Zhang

Abstract This study uses ensemble Kalman filter analyses and short-range ensemble forecasts to study factors affecting the predictability of Hurricane Humberto, which made landfall along the Texas coast in 2007. Humberto is known for both its rapid intensification and extreme forecast uncertainty, which makes it an ideal case for examining the origins of tropical cyclone strength forecast error. Statistical correlation is used to determine why some ensemble members strengthen the incipient low into a hurricane and others do not. During the analysis period, it is found that variations in midlevel moisture, low-level convective instability, and the strength of a front to the north of the cyclone likely lead to differences in net precipitation, which ultimately lead to storm strength spread. Stronger storms are favored when the atmosphere is more moist and unstable and when the front is weaker, possibly because some storms in the ensemble begin entraining cooler and drier postfrontal air during this period. Later, during the free forecast, variable entrainment of postfrontal air becomes a leading cause of strength spread. Surface moisture differences are the primary contributor to intensity forecast differences, and convective instability differences play a secondary role. Eventually, mature tropical cyclone dynamics and differences in landfall time result in very rapid growth of ensemble spread. These results closely match those of a previous study that investigated a 2004 Gulf of Mexico low with a different model and analysis technique, which lends confidence that they are relevant to tropical cyclone formation and intensification in general. Finally, the rapid increase in forecast uncertainty despite relatively modest differences in initial conditions highlights the need for ensembles and advanced data assimilation techniques.
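The correlation analysis described above can be sketched as follows: correlate a gridded initial-condition field (e.g. midlevel moisture per member) with a scalar intensity metric per member to map where initial differences explain later strength spread. The function names and shapes are hypothetical, not the study's code.

```python
import numpy as np

def correlation_map(field, intensity):
    """Pearson correlation, across ensemble members, between a gridded
    initial field and a scalar intensity metric.

    field:     (n_members, ny, nx) initial-condition field per member
    intensity: (n_members,) later intensity metric (e.g. minimum SLP)
    Returns an (ny, nx) correlation map; strong positive/negative values
    mark locations whose initial state covaries with final strength.
    """
    intensity = np.asarray(intensity, dtype=float)
    fp = field - field.mean(axis=0)                 # member perturbations
    jp = intensity - intensity.mean()
    cov = np.tensordot(jp, fp, axes=(0, 0)) / (len(jp) - 1)
    return cov / (field.std(axis=0, ddof=1) * intensity.std(ddof=1))
```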


2017 ◽  
Vol 145 (6) ◽  
pp. 2257-2279 ◽  
Author(s):  
Bryan J. Putnam ◽  
Ming Xue ◽  
Youngsun Jung ◽  
Nathan A. Snook ◽  
Guifu Zhang

Abstract Ensemble-based probabilistic forecasts are performed for a mesoscale convective system (MCS) that occurred over Oklahoma on 8–9 May 2007, initialized from ensemble Kalman filter analyses using multinetwork radar data and different microphysics schemes. Two experiments are conducted, using either a single-moment or double-moment microphysics scheme during the 1-h-long assimilation period and in subsequent 3-h ensemble forecasts. Qualitative and quantitative verifications are performed on the ensemble forecasts, including probabilistic skill scores. The predicted dual-polarization (dual-pol) radar variables and their probabilistic forecasts are also evaluated against available dual-pol radar observations, and discussed in relation to predicted microphysical states and structures. Evaluation of predicted reflectivity (Z) fields shows that the double-moment ensemble predicts the precipitation coverage of the leading convective line and stratiform precipitation regions of the MCS with higher probabilities throughout the forecast period compared to the single-moment ensemble. In terms of the simulated differential reflectivity (ZDR) and specific differential phase (KDP) fields, the double-moment ensemble compares more realistically to the observations and better distinguishes the stratiform and convective precipitation regions. The ZDR from individual ensemble members indicates better raindrop size sorting along the leading convective line in the double-moment ensemble. Various commonly used ensemble forecast verification methods are examined for the prediction of dual-pol variables. The results demonstrate the challenges associated with verifying predicted dual-pol fields that can vary significantly in value over small distances. Several microphysics biases are noted with the help of simulated dual-pol variables, such as substantial overprediction of KDP values in the single-moment ensemble.


2014 ◽  
Vol 142 (1) ◽  
pp. 141-162 ◽  
Author(s):  
Bryan J. Putnam ◽  
Ming Xue ◽  
Youngsun Jung ◽  
Nathan Snook ◽  
Guifu Zhang

Abstract Doppler radar data are assimilated with an ensemble Kalman filter (EnKF) in combination with a double-moment (DM) microphysics scheme to improve the analysis and forecast of microphysical states and precipitation structures within a mesoscale convective system (MCS) that passed over western Oklahoma on 8–9 May 2007. Reflectivity and radial velocity data from five operational Weather Surveillance Radar-1988 Doppler (WSR-88D) S-band radars as well as four experimental Collaborative Adaptive Sensing of the Atmosphere (CASA) X-band radars are assimilated over a 1-h period using either single-moment (SM) or DM microphysics schemes within the forecast ensemble. Three-hour deterministic forecasts are initialized from the final ensemble mean analyses using an SM or DM scheme, respectively. Polarimetric radar variables are simulated from the analyses and compared with polarimetric WSR-88D observations for verification. EnKF assimilation of radar data using a multimoment microphysics scheme for an MCS case has not previously been documented in the literature. The use of DM microphysics during data assimilation improves the simulated polarimetric variables through differentiation of particle size distributions (PSDs) within the stratiform and convective regions. The DM forecast initiated from the DM analysis shows significant qualitative improvement over the assimilation and forecast using SM microphysics in terms of the location and structure of the MCS precipitation. Quantitative precipitation forecasting skill is also improved in the DM forecast. Better handling of the PSDs by the DM scheme is believed to be responsible for the improved prediction of the surface cold pool, a stronger leading convective line, and improved areal extent of stratiform precipitation.


2017 ◽  
Vol 145 (9) ◽  
pp. 3625-3646 ◽  
Author(s):  
Madalina Surcel ◽  
Isztar Zawadzki ◽  
M. K. Yau ◽  
Ming Xue ◽  
Fanyou Kong

This paper analyzes the scale and case dependence of the predictability of precipitation in the Storm-Scale Ensemble Forecast (SSEF) system run by the Center for Analysis and Prediction of Storms (CAPS) during the NOAA Hazardous Weather Testbed Spring Experiments of 2008–13. The effect of different types of ensemble perturbation methodologies is quantified as a function of spatial scale. It is found that uncertainties in the large-scale initial and boundary conditions and in the model microphysical parameterization scheme can result in the loss of predictability at scales smaller than 200 km after 24 h. Also, these uncertainties account for most of the forecast error. Other types of ensemble perturbation methodologies were not found to be as important for the quantitative precipitation forecasts (QPFs). The case dependences of predictability and of the sensitivity to the ensemble perturbation methodology were also analyzed. Events were characterized in terms of the extent of the precipitation coverage and of the convective-adjustment time scale τc, an indicator of whether convection is in equilibrium with the large-scale forcing. It was found that events characterized by widespread precipitation and small τc values (representative of quasi-equilibrium convection) were usually more predictable than nonequilibrium cases. No significant statistical relationship was found between the relative role of different perturbation methodologies and precipitation coverage or τc.


2005 ◽  
Vol 62 (6) ◽  
pp. 1665-1677 ◽  
Author(s):  
H. Morrison ◽  
J. A. Curry ◽  
V. I. Khvorostyanov

Abstract A new double-moment bulk microphysics scheme predicting the number concentrations and mixing ratios of four hydrometeor species (droplets, cloud ice, rain, snow) is described. New physically based parameterizations are developed for simulating homogeneous and heterogeneous ice nucleation, droplet activation, and the spectral index (width) of the droplet size spectra. Two versions of the scheme are described: one for application in high-resolution cloud models and the other for simulating grid-scale cloudiness in larger-scale models. The versions differ in their treatment of the supersaturation field and droplet nucleation. For the high-resolution approach, droplet nucleation is calculated from Köhler theory applied to a distribution of aerosol that activates at a given supersaturation. The resolved supersaturation field and condensation/deposition rates are predicted using a semianalytic approximation to the three-phase (vapor, ice, liquid) supersaturation equation. For the large-scale version of the scheme, it is assumed that the supersaturation field is not resolved, and thus droplet activation is parameterized as a function of the vertical velocity and diabatic cooling rate. The vertical velocity includes a subgrid component that is parameterized in terms of the eddy diffusivity and mixing length. Droplet condensation is calculated using a quasi-steady saturation adjustment approach. Evaporation/deposition onto the other water species is given by nonsteady vapor diffusion allowing excess vapor density relative to ice saturation.
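The defining feature of a double-moment scheme is that the two predicted moments (mass mixing ratio q and number concentration Nt) fix the size-distribution parameters, rather than assuming a constant intercept as single-moment schemes do. A minimal sketch of that inversion for a gamma PSD N(D) = N0·D^μ·exp(−λD), assuming spherical liquid drops; the helper and its defaults are illustrative, not this scheme's code.

```python
import math

RHO_W = 1000.0  # liquid water density (kg m^-3)

def gamma_psd_params(q, nt, rho_air=1.0, mu=0.0):
    """Slope lam (1/m) and intercept n0 of a gamma PSD from the two
    predicted moments.

    Uses the standard moment identities for spherical liquid drops:
      nt           = n0 * Gamma(mu+1) / lam**(mu+1)
      q * rho_air  = (pi*RHO_W/6) * n0 * Gamma(mu+4) / lam**(mu+4)

    q:  mass mixing ratio (kg/kg); nt: number concentration (1/m^3).
    """
    lam = (math.pi * RHO_W * nt * math.gamma(mu + 4.0)
           / (6.0 * rho_air * q * math.gamma(mu + 1.0))) ** (1.0 / 3.0)
    n0 = nt * lam ** (mu + 1.0) / math.gamma(mu + 1.0)
    return lam, n0
```

A single-moment scheme, by contrast, holds n0 fixed and diagnoses lam from q alone, which is why it cannot represent size sorting of the kind discussed in the radar-assimilation abstracts above.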


2014 ◽  
Vol 142 (6) ◽  
pp. 2198-2219 ◽  
Author(s):  
Jeffrey D. Duda ◽  
Xuguang Wang ◽  
Fanyou Kong ◽  
Ming Xue

Abstract Two approaches for accounting for errors in quantitative precipitation forecasts (QPFs) due to uncertainty in the microphysics (MP) parameterization in a convection-allowing ensemble are examined: mixed MP (MMP), composed mostly of double-moment schemes, and perturbing parameters within the WRF single-moment 6-class (WSM6) MP scheme (PPMP). Thirty-five cases of real-time storm-scale ensemble forecasts produced by the Center for Analysis and Prediction of Storms during the NOAA Hazardous Weather Testbed 2011 Spring Experiment were examined. The MMP ensemble had better fractions Brier scores (FBSs) for most lead times and thresholds, but the PPMP ensemble had better relative operating characteristic (ROC) scores for higher precipitation thresholds. The pooled ensemble formed by randomly drawing five members from the MMP and PPMP ensembles was no more skillful than the more accurate of the two; however, significant positive impact was found when the two were combined to form a larger ensemble. The QPFs and the systematic behaviors of derived microphysical variables were also examined. The QPF skill of different members depended on the thresholds, verification metrics, and forecast lead times. The profiles of microphysical variables from the double-moment schemes contained more variation in the vertical than those from the single-moment members. Among the double-moment schemes, WDM6 produced the smallest raindrops and very large number concentrations. Among the PPMP members, the behaviors were found to be consistent with the prescribed intercept parameters, and the perturbed intercept parameters fell within the range of values retrieved from the double-moment schemes.
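The fractions Brier score used above compares the ensemble exceedance probability with the observed occurrence after both are smoothed over a spatial neighborhood, so that near-misses are not penalized as badly as at grid scale. A minimal sketch, assuming a square n×n neighborhood with zero padding at the edges; the function names are illustrative.

```python
import numpy as np

def neighborhood_fraction(binary, n):
    """Fraction of points where `binary` is true within an n x n
    neighborhood of each grid point (edges zero-padded)."""
    pad = n // 2
    padded = np.pad(binary.astype(float), pad)
    out = np.zeros(binary.shape, dtype=float)
    for di in range(n):
        for dj in range(n):
            out += padded[di:di + binary.shape[0], dj:dj + binary.shape[1]]
    return out / (n * n)

def fractions_brier_score(ens, obs, thresh, n):
    """FBS: mean squared difference between the neighborhood ensemble
    probability of exceeding `thresh` and the observed neighborhood
    fraction. Lower is better; 0 is a perfect probabilistic forecast.

    ens: (n_members, ny, nx) forecast fields; obs: (ny, nx) observations.
    """
    np_fcst = np.mean([neighborhood_fraction(m > thresh, n) for m in ens],
                      axis=0)
    np_obs = neighborhood_fraction(obs > thresh, n)
    return np.mean((np_fcst - np_obs) ** 2)
```

Scanning the neighborhood size n then reveals the smallest scale at which the ensemble's precipitation placement is skillful.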


2008 ◽  
Vol 136 (3) ◽  
pp. 1054-1074 ◽  
Author(s):  
Tomislava Vukicevic ◽  
Isidora Jankov ◽  
John McGinley

Abstract In the current study, a technique that offers a way to evaluate ensemble forecast uncertainties produced either by initial conditions or different model versions, or both, is presented. The technique consists of first diagnosing the performance of the forecast ensemble and then optimizing the ensemble forecast using results of the diagnosis. The technique is based on the explicit evaluation of probabilities that are associated with the Gaussian stochastic representation of the weather analysis and forecast. It combines an ensemble technique for evaluating the analysis error covariance and the standard Monte Carlo approach for computing samples from a known Gaussian distribution. The technique was demonstrated in a tutorial manner on two relatively simple examples to illustrate the impact of ensemble characteristics including ensemble size, various observation strategies, and configurations including different model versions and varying initial conditions. In addition, the authors assessed improvements in the consensus forecasts gained by optimal weighting of the ensemble members based on time-varying, prior-probabilistic skill measures. The results with different observation configurations indicate that, as observations become denser, there is a need for larger-sized ensembles and/or more accuracy among individual members for the ensemble forecast to exhibit prediction skill. The main conclusions relative to ensembles built up with different physics configurations were, first, that almost all members typically exhibited some skill at some point in the model run, suggesting that all should be retained to acquire the best consensus forecast; and, second, that the normalized probability metric can be used to determine what sets of weights or physics configurations are performing best. A comparison of forecasts derived from a simple ensemble mean to forecasts from a mean developed from variably weighting the ensemble members based on prior performance by the probabilistic measure showed that the latter had substantially reduced mean absolute error. The study also indicates that a weighting scheme that utilized more prior cycles showed additional reduction in forecast error.
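The skill-weighted consensus idea above can be sketched simply. As a stand-in for the paper's probabilistic skill measure, this example weights members by the inverse of their mean absolute error over prior verification cycles; the function and weighting rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def weighted_consensus(members, prior_mae, eps=1e-6):
    """Consensus forecast as a weighted mean of ensemble members.

    members:   (n_members, ...) forecast fields or time series
    prior_mae: (n_members,) mean absolute error of each member from
               earlier cycles; smaller error -> larger weight.
    eps guards against division by zero for a (hypothetically) perfect
    member. Equal prior errors recover the simple ensemble mean.
    """
    w = 1.0 / (np.asarray(prior_mae, dtype=float) + eps)
    w /= w.sum()
    return np.tensordot(w, members, axes=(0, 0))
```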


2009 ◽  
Vol 137 (10) ◽  
pp. 3388-3406 ◽  
Author(s):  
Ryan D. Torn ◽  
Gregory J. Hakim

Abstract An ensemble Kalman filter based on the Weather Research and Forecasting (WRF) model is used to generate ensemble analyses and forecasts for the extratropical transition (ET) events associated with Typhoons Tokage (2004) and Nabi (2005). Ensemble sensitivity analysis is then used to evaluate the relationship between forecast errors and initial condition errors at the onset of transition, and to objectively determine the observations having the largest impact on forecasts of these storms. Observations from rawinsondes, surface stations, aircraft, cloud winds, and cyclone best-track position are assimilated every 6 h for a period before, during, and after transition. Ensemble forecasts initialized at the onset of transition exhibit skill similar to the operational Global Forecast System (GFS) forecast and to a WRF forecast initialized from the GFS analysis. WRF ensemble forecasts of Tokage (Nabi) are characterized by relatively large (small) ensemble variance and greater (smaller) sensitivity to the initial conditions. In both cases, the 48-h forecast of cyclone minimum SLP and the RMS forecast error in SLP are most sensitive to the tropical cyclone position and to midlatitude troughs that interact with the tropical cyclone during ET. Diagnostic perturbations added to the initial conditions based on ensemble sensitivity reduce the error in the storm minimum SLP forecast by 50%. Observation impact calculations indicate that assimilating approximately 40 observations in regions of greatest initial condition sensitivity produces a large, statistically significant impact on the 48-h cyclone minimum SLP forecast. For the Tokage forecast, assimilating the single highest impact observation, an upper-tropospheric zonal wind observation from a Mongolian rawinsonde, yields 48-h forecast perturbations in excess of 10 hPa and 60 m in SLP and 500-hPa height, respectively.
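At the core of the ensemble sensitivity analysis described above is a linear regression estimated from the ensemble: the sensitivity of a scalar forecast metric J (e.g. 48-h cyclone minimum SLP) to each initial-state element is approximated as cov(J, xᵢ)/var(xᵢ). A minimal sketch of that regression; the array layout is an assumption for illustration.

```python
import numpy as np

def ensemble_sensitivity(x_init, j_fcst):
    """Ensemble sensitivity dJ/dx_i ~ cov(J, x_i) / var(x_i).

    x_init: (n_members, n_state) initial-condition state per member
    j_fcst: (n_members,) scalar forecast metric per member
    Returns (n_state,) sensitivities; large values mark initial-condition
    elements (e.g. trough positions) whose perturbations most change J.
    """
    xp = x_init - x_init.mean(axis=0)   # initial-state perturbations
    jp = j_fcst - j_fcst.mean()         # forecast-metric perturbations
    cov = xp.T @ jp / (len(jp) - 1)
    var = xp.var(axis=0, ddof=1)
    return cov / var
```

Perturbing the analysis in the direction of the largest sensitivities, as the abstract describes, then gives a targeted way to reduce the forecast-metric error.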

