Evaluation of Mei-yu Heavy-Rainfall Quantitative Precipitation Forecasts in Taiwan by a Cloud-Resolving Model for Three Seasons of 2012–2014

2021 ◽  
Author(s):  
Chung-Chieh Wang ◽  
Pi-Yu Chuang ◽  
Chih-Sheng Chang ◽  
Kazuhisa Tsuboki ◽  
Shin-Yi Huang ◽  
...  

Abstract. In this study, the performance of quantitative precipitation forecasts (QPFs) by the Cloud-Resolving Storm Simulator (CReSS) in real time in Taiwan, at a horizontal grid spacing of 2.5 km and a domain size of 1500 × 1200 km², within a range of 72 h during three mei-yu seasons of 2012–2014, is evaluated using categorical statistics, with an emphasis on heavy events (≥ 100 mm per 24 h). The overall threat scores (TSs) of QPFs for all events on day 1 (0–24 h) are 0.18, 0.15, and 0.09 at thresholds of 100, 250, and 500 mm, respectively, and indicate considerable improvements compared to past results and 5-km models. Moreover, the TSs are shown to be higher and the model more skillful in predicting larger events, in agreement with earlier findings for typhoons. After classification based on observed rainfall, the TSs of day-1 QPFs for the largest 4 % of events by CReSS at 100, 250, and 500 mm (per 24 h) are 0.34, 0.24, and 0.16, respectively, and can reach 0.15 at 250 mm on day 2 (24–48 h) and 130 mm on day 3 (48–72 h). The larger events also exhibit higher probability of detection and lower false alarm ratio than weaker events almost without exception across all thresholds. The strength of the model lies mainly in the topographic rainfall in Taiwan rather than in migratory events that are less predictable. Our results highlight the crucial importance of cloud-resolving capability and the size of the fine mesh for heavy-rainfall QPFs in Taiwan.

2022 ◽  
Vol 22 (1) ◽  
pp. 23-40
Author(s):  
Chung-Chieh Wang ◽  
Pi-Yu Chuang ◽  
Chih-Sheng Chang ◽  
Kazuhisa Tsuboki ◽  
Shin-Yi Huang ◽  
...  

Abstract. In this study, the performance of quantitative precipitation forecasts (QPFs) by the Cloud-Resolving Storm Simulator (CReSS) in Taiwan, at a horizontal grid spacing of 2.5 km and a domain size of 1500 × 1200 km², in the range of 1–3 d during three Mei-yu seasons (May–June) of 2012–2014 is evaluated using categorical statistics, with an emphasis on heavy-rainfall events (≥100 mm per 24 h). The categorical statistics are chosen because the main hazards are landslides and floods in Taiwan, so predicting heavy rainfall at the correct location is important. The overall threat scores (TSs) of QPFs for all events on day 1 (0–24 h) are 0.18, 0.15, and 0.09 at thresholds of 100, 250, and 500 mm, respectively, and indicate considerable improvements at increased resolution compared to past results and 5 km models (TS < 0.1 at 100 mm and TS ≤ 0.02 at 250 mm). Moreover, the TSs are shown to be higher and the model more skillful in predicting larger events, in agreement with earlier findings for typhoons. After classification based on observed rainfall, the TSs of day-1 QPFs for the largest 4 % of events by CReSS at 100, 250, and 500 mm (per 24 h) are 0.34, 0.24, and 0.16, respectively, and can reach 0.15 at 250 mm on day 2 (24–48 h) and 130 mm on day 3 (48–72 h). The larger events also exhibit higher probability of detection and lower false alarm ratio than smaller ones almost without exception across all thresholds. With the convection and terrain better resolved, the strength of the model is found to lie mainly in the topographic rainfall in Taiwan rather than migratory events that are more difficult to predict. Our results highlight the crucial importance of cloud-resolving capability and the size of fine mesh for heavy-rainfall QPFs in Taiwan.
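As a minimal sketch of the categorical statistics named above (threat score, probability of detection, and false alarm ratio) at a single rainfall threshold, the following Python snippet compares observed and forecast 24 h rainfall at verification points. The array and function names and the synthetic data are illustrative assumptions, not the authors' verification code.

```python
# Sketch of threshold-based categorical scores (TS, POD, FAR); "obs" and
# "fcst" are hypothetical 24 h rainfall amounts (mm) at verification points.
import numpy as np

def categorical_scores(obs, fcst, threshold):
    """Return threat score, probability of detection, and false alarm ratio."""
    obs_event = obs >= threshold
    fcst_event = fcst >= threshold
    hits = np.sum(obs_event & fcst_event)           # observed and forecast
    misses = np.sum(obs_event & ~fcst_event)        # observed, not forecast
    false_alarms = np.sum(~obs_event & fcst_event)  # forecast, not observed
    ts = hits / (hits + misses + false_alarms) if (hits + misses + false_alarms) else np.nan
    pod = hits / (hits + misses) if (hits + misses) else np.nan
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else np.nan
    return ts, pod, far

# Example with synthetic data at the 100 mm (per 24 h) threshold.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=40.0, size=500)
fcst = obs * rng.normal(1.0, 0.4, size=500)
print(categorical_scores(obs, fcst, threshold=100.0))
```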


2014 ◽  
Vol 71 (7) ◽  
pp. 2604-2624 ◽  
Author(s):  
Leah D. Grant ◽  
Susan C. van den Heever

Abstract The sensitivity of supercell morphology to the vertical distribution of moisture is investigated in this study using a cloud-resolving model with 300-m horizontal grid spacing. Simulated storms are found to transition from classic (CL) to low-precipitation (LP) supercells when the strength of elevated dry layers in the environmental moisture profile is increased. Resulting differences in the microphysical and dynamical characteristics of the CL and LP supercells are analyzed. The LPs produce approximately half as much accumulated surface precipitation as the CL supercell. The precipitating area in the LPs is spatially smaller and overall less intense, especially in the rear-flank downdraft region. The LPs have smaller deviant rightward storm motion compared to the CL supercell, and updrafts are narrower and more tilted, in agreement with observations. Lower relative humidities within the dry layers enhance evaporation and erode the upshear cloud edge in the LPs. This combination favors a downshear distribution of hydrometeors. As a result, hail grows preferentially along the northeastern side of the updraft in the LPs as hail embryos are advected cyclonically around the mesocyclone, whereas the primary midlevel hail growth mechanism in the CL supercell follows the classic Browning and Foote model. The differing dominant hail growth mechanisms can explain the variations in surface precipitation distribution between CLs and LPs. While large changes in the microphysical structure are seen, similarities in the structure and strength of the updraft and vorticity indicate that LP and CL supercells are not dynamically distinct storm types.


2020 ◽  
Vol 148 (8) ◽  
pp. 3379-3396
Author(s):  
Xiaoshi Qiao ◽  
Shizhang Wang ◽  
Craig S. Schwartz ◽  
Zhiquan Liu ◽  
Jinzhong Min

Abstract A probability matching (PM) product using the ensemble maximum (EnMax) as the basis for spatial reassignment was developed. This PM product was called the PM max and its localized version was called the local PM (LPM) max. Both products were generated from a 10-member ensemble with 3-km horizontal grid spacing and evaluated over 364 36-h forecasts in terms of the fractions skill score. Performances of the PM max and LPM max were compared to those of the traditional PM mean and LPM mean, which both used the ensemble mean (EnMean) as the basis for spatial reassignment. Compared to observations, the PM max typically outperformed the PM mean for precipitation rates ≥5 mm h−1; this improvement was related to the EnMax, which had better spatial placement than the EnMean for heavy precipitation. However, the PM mean produced better forecasts than the PM max for lighter precipitation. It appears that the global reassignment used to produce the PM max was responsible for its poorer performance relative to the PM mean at light precipitation rates, as the LPM max was more skillful than the LPM mean at all thresholds. These results suggest promise for PM products based on the EnMax, especially for rare events and ensembles with insufficient spread.
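The following Python sketch illustrates the probability-matching reassignment idea described above: pooled precipitation values from all ensemble members are placed spatially according to the rank order of a basis field (the ensemble mean for the traditional PM mean, the ensemble maximum for the PM max). The function name, array shapes, and synthetic data are assumptions; localization for the LPM variants is not shown.

```python
# Hedged sketch of a probability-matched (PM) product with a selectable basis.
import numpy as np

def pm_product(members, basis):
    """members: (n_members, ny, nx) precipitation; basis: (ny, nx) field whose
    rank order sets the spatial placement of the pooled value distribution."""
    n_members, ny, nx = members.shape
    npts = ny * nx
    # Pool all member values, sort largest first, and keep every
    # n_members-th value so the sample size matches the number of grid points.
    pooled = np.sort(members.reshape(-1))[::-1][::n_members][:npts]
    order = np.argsort(basis.reshape(-1))[::-1]  # grid points ranked by basis
    pm = np.empty(npts)
    pm[order] = pooled                           # largest value -> highest-ranked point
    return pm.reshape(ny, nx)

rng = np.random.default_rng(1)
members = rng.gamma(2.0, 2.0, size=(10, 60, 80))      # hypothetical 10-member ensemble
pm_mean = pm_product(members, members.mean(axis=0))   # traditional PM mean basis
pm_max = pm_product(members, members.max(axis=0))     # PM max (EnMax) basis
```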


2010 ◽  
Vol 138 (3) ◽  
pp. 688-704 ◽  
Author(s):  
Megan S. Gentry ◽  
Gary M. Lackmann

Abstract The Weather Research and Forecasting (WRF) model is used to test the sensitivity of simulations of Hurricane Ivan (2004) to changes in horizontal grid spacing for grid lengths from 8 to 1 km. As resolution is increased, minimum central pressure decreases significantly (by 30 hPa from 8- to 1-km grid spacing), although this intensification is not uniform across similar reductions in grid spacing, even when pressure fields are interpolated to a common grid. This implies that the additional strengthening of the simulated tropical cyclone (TC) at higher resolution is not attributable to sampling, but is due to changes in the representation of physical processes important to TC intensity. The most apparent changes in simulated TC structure with resolution occur near a grid length of 4 km. At 4-km grid spacing and below, polygonal eyewall segments appear, suggestive of breaking vortex Rossby waves. With sub-4-km grid lengths, localized, intense updraft cores within the eyewall are numerous and both polygonal and circular eyewall shapes appear regularly. Higher-resolution simulations produce a greater variety of shapes, transitioning more frequently between polygonal and circular eyewalls relative to lower-resolution simulations. It is hypothesized that this is because of the ability to resolve a greater range of wavenumbers in high-resolution simulations. Also, as resolution is increased, a broader range of updraft and downdraft velocities is present in the eyewall. These results suggest that grid spacing of 2 km or less is needed for representation of important physical processes in the TC eyewall. Grid-length and domain size suggestions for operational prediction are provided; a grid length of 3 km or less is recommended.


2020 ◽  
Author(s):  
Eren Duzenli ◽  
Heves Pilatin ◽  
Ismail Yucel ◽  
Berina M. Kilicarslan ◽  
M. Tugrul Yilmaz

Global numerical weather prediction (NWP) models such as the European Centre for Medium-Range Weather Forecasts (ECMWF) model and the Global Forecast System (GFS) generate atmospheric data for the entire world. However, these models provide the data at coarse spatiotemporal resolutions because of computational limitations. The Weather Research and Forecasting (WRF) Model is one of the models capable of dynamically downscaling NWP model output. In this study, all combinations of 4 microphysics and 3 cumulus parameterization schemes, 2 planetary boundary layer (PBL) schemes, 2 initial and lateral boundary conditions, and 2 horizontal grid spacings (i.e., an ensemble consisting of 96 different scenarios) are simulated to measure the sensitivity of WRF-derived precipitation to different model configurations. The sensitivity analyses are performed for 4 separate events, selected among the extreme precipitation events in the Mediterranean (MED) and eastern Black Sea (EBLS) regions; for each region, a summer and an autumn event are chosen. The fundamental aim is to determine the spatiotemporal differences in WRF input parameters that yield better outcomes. The 72-hour simulations are started 24 hours before the event day to avoid spin-up errors, and the model is adjusted to produce hourly precipitation outputs. The relative performance of the scenarios is measured using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method, considering 5 categorical validation indices and 4 pairwise statistics calculated between the model estimates and ground-based precipitation observations. According to the TOPSIS results, the microphysics scheme, the initial and lateral boundary conditions, and the horizontal grid spacing are substantially influential on WRF precipitation estimates, while cumulus parameterization has a comparatively low effect. The choice of PBL scheme is essential for the summer events, but the results of the autumn events are independent of PBL selection. WRF products are better for the events in the EBLS basin when ERA5 is used as the initial and lateral boundary condition; on the contrary, GFS is superior in the MED region. In terms of spatial resolution, 9 km horizontal grid spacing is commonly preferable for all the events rather than 3 km. Besides, the model underestimates the area-averaged precipitation amounts except for the MED-autumn event. Still, the model is successful at catching the peak hours of all events. Moreover, the precipitation detection ability of WRF is better for the autumn months: the probability of detection index is higher than 0.5 at 35% of MED stations and 68% of EBLS stations for the autumn events. The local and convective summer events are investigated considering the event centers. Although relatively weak relationships are found for the MED-summer event, a statistically significant correlation is obtained between the central station of the EBLS-summer event and the closest grid point for the predictions of 52 scenarios (i.e., 54% of the ensemble).
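A short Python sketch of the TOPSIS ranking used above may help: scenarios are rows, verification indices are columns, and each scenario is scored by its closeness to an ideal solution. Which indices are treated as "benefit" (larger is better, e.g. POD or correlation) versus "cost" (smaller is better, e.g. RMSE or false alarm ratio), and the equal weights, are assumptions here, not the study's configuration.

```python
# Standard TOPSIS ranking of candidate scenarios against multiple criteria.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: (n_scenarios, n_criteria); benefit: boolean per criterion."""
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))   # vector normalization
    v = norm * weights                                   # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))      # distance to ideal
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))       # distance to anti-ideal
    closeness = d_neg / (d_pos + d_neg)                  # 1 = best, 0 = worst
    return np.argsort(closeness)[::-1], closeness

rng = np.random.default_rng(2)
scores = rng.random((96, 9))              # 96 scenarios x 9 indices (hypothetical)
weights = np.full(9, 1.0 / 9)             # assumed equal weights
benefit = np.array([True] * 5 + [False] * 4)
ranking, cc = topsis(scores, weights, benefit)
```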


Atmosphere ◽  
2021 ◽  
Vol 12 (11) ◽  
pp. 1501
Author(s):  
Chung-Chieh Wang ◽  
Chih-Sheng Chang ◽  
Yi-Wen Wang ◽  
Chien-Chang Huang ◽  
Shih-Chieh Wang ◽  
...  

In this study, 24 h quantitative precipitation forecasts (QPFs) by a cloud-resolving model (with a grid spacing of 2.5 km) on days 1–3 for 29 typhoons in six seasons of 2010–2015 in Taiwan were examined using categorical scores and rain gauge data. The study represents an update from a previous study for 2010–2012, in order to produce more stable and robust statistics toward the high thresholds (typically with fewer sample points), which are our main focus of interest. This is important to better understand the model’s ability to predict such high-impact typhoon rainfall events. The overall threat scores (TS, defined as the ratio of the verification points that are correctly predicted to reach a given threshold to all points that are either observed or predicted to reach that threshold, or both) were 0.28 and 0.18 on day 1 (0–24 h) QPFs, 0.25 and 0.16 on day 2 (24–48 h) QPFs, and 0.15 and 0.08 on day 3 (48–72 h) QPFs at 350 mm and 500 mm, respectively, showing improvements over 5 km models. Moreover, as found previously, a strong dependence of higher TSs on larger rainfall events also existed, and the corresponding TSs at 350 and 500 mm for the top 5% of events were 0.39 and 0.25 on day 1, 0.38 and 0.21 on day 2, and 0.25 and 0.12 on day 3. Thus, for the top typhoon rainfall events that have the highest potential for hazards, the model exhibits an even higher ability for QPFs based on categorical scores. Furthermore, it is shown that the model has little tendency to overpredict or underpredict rainfall for all groups of events with different rainfall magnitude across all thresholds, except for some tendency to under-forecast for the largest event group on day 3. Some issues associated with categorical statistics to be aware of are also demonstrated and discussed.
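The over- or under-prediction tendency mentioned above is commonly diagnosed with the frequency bias, B = (hits + false alarms) / (hits + misses), where B > 1 suggests over-forecasting at a threshold and B < 1 under-forecasting. The sketch below computes it under that standard definition; the arrays and synthetic values are hypothetical, not the study's data.

```python
# Frequency bias at a rainfall threshold from paired observed/forecast amounts.
import numpy as np

def frequency_bias(obs, fcst, threshold):
    o = obs >= threshold
    f = fcst >= threshold
    hits = np.sum(o & f)
    misses = np.sum(o & ~f)
    false_alarms = np.sum(~o & f)
    denom = hits + misses
    return (hits + false_alarms) / denom if denom else np.nan

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 60.0, size=400)               # hypothetical 24 h totals (mm)
fcst = obs * rng.normal(0.95, 0.3, size=400)
for thr in (350.0, 500.0):
    print(thr, frequency_bias(obs, fcst, thr))
```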


MAUSAM ◽  
2021 ◽  
Vol 60 (2) ◽  
pp. 175-184
Author(s):  
M. MOHAPATRA ◽  
H. R. HATWAR ◽  
S. R. KALSI

India Meteorological Department (IMD) issues a heavy rainfall warning for a meteorological sub-division when the expected 24-hour rainfall over any rain gauge station in that sub-division is likely to be 64.5 mm or more. Though these warnings have been provided for a long time and are now also being issued for smaller spatial scales, very few attempts have been made to evaluate them quantitatively. Hence, a study is undertaken to verify the heavy rainfall warnings over the representative meteorological sub-divisions of east Uttar Pradesh (UP), west UP and Bihar during the main monsoon months of July and August. For this purpose, data for the recent 5 years (2001-2005) and for another 5-year epoch in the early 1970s have been taken into consideration. In this connection, a day on which heavy rainfall is recorded over at least two stations in a sub-division has been considered as a heavy rainfall day for that sub-division. This verification study shows that the probability of detection of heavy rainfall is 64% over Bihar, 52% over east UP and 53% over west UP for the recent 5 years. Compared to the early 1970s, there has been a slight improvement in forecast skill during 2001-2005, with the probability of detection increasing by about 10-20% and with a decrease in the missing rate and false alarm rate. However, the false alarm rates are still large, indicating a bias towards over-prediction. The synoptic conditions associated with the heavy rainfall events have been collected for the period 2001-05 and analysed. The analysis of the unanticipated heavy rainfall events suggests that, though proper interpretation of synoptic charts and NWP outputs could improve the warnings, the forecast system available even today is still not capable of capturing every heavy rainfall event in advance.
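The event-based verification described above reduces to a 2 × 2 contingency count per sub-division, built from daily records of whether a warning was issued and whether a heavy-rainfall day (at least two stations reaching 64.5 mm) was observed. The sketch below illustrates that bookkeeping; the record structure is an assumption.

```python
# Warning verification from daily (warning_issued, heavy_rain_observed) pairs.
def warning_scores(days):
    hits = misses = false_alarms = 0
    for warned, observed in days:
        if observed and warned:
            hits += 1
        elif observed and not warned:
            misses += 1
        elif warned and not observed:
            false_alarms += 1
    pod = hits / (hits + misses) if (hits + misses) else float("nan")
    missing_rate = misses / (hits + misses) if (hits + misses) else float("nan")
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else float("nan")
    return pod, missing_rate, far

# e.g. (True, True) = warned and observed; (True, False) = false alarm.
print(warning_scores([(True, True), (True, False), (False, True), (True, True)]))
```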


2010 ◽  
Vol 27 (3) ◽  
pp. 409-427 ◽  
Author(s):  
Kun Tao ◽  
Ana P. Barros

Abstract The objective of spatial downscaling strategies is to increase the information content of coarse datasets at smaller scales. In the case of quantitative precipitation estimation (QPE) for hydrological applications, the goal is to close the scale gap between the spatial resolution of coarse datasets (e.g., gridded satellite precipitation products at resolution L × L) and the high resolution (l × l; L ≫ l) necessary to capture the spatial features that determine spatial variability of water flows and water stores in the landscape. In essence, the downscaling process consists of weaving subgrid-scale heterogeneity over a desired range of wavelengths in the original field. The defining question is, which properties, statistical and otherwise, of the target field (the known observable at the desired spatial resolution) should be matched, with the caveat that downscaling methods be as general as possible and therefore ideally without case-specific constraints and/or calibration requirements? Here, the attention is focused on two simple fractal downscaling methods using iterated function systems (IFS) and fractal Brownian surfaces (FBS) that meet this requirement. The two methods were applied to disaggregate spatially 27 summertime convective storms in the central United States during 2007 at three consecutive times (1800, 2100, and 0000 UTC, thus 81 fields overall) from the Tropical Rainfall Measuring Mission (TRMM) version 6 (V6) 3B42 precipitation product (∼25-km grid spacing) to the same resolution as the NCEP stage IV products (∼4-km grid spacing). Results from bilinear interpolation are used as the control. A fundamental distinction between IFS and FBS is that the latter implies a distribution of downscaled fields and thus an ensemble solution, whereas the former provides a single solution. The downscaling effectiveness is assessed using fractal measures (the spectral exponent β, fractal dimension D, Hurst coefficient H, and roughness amplitude R) and traditional operational skill scores [false alarm rate (FR), probability of detection (PD), threat score (TS), and Heidke skill score (HSS)], as well as bias and the root-mean-square error (RMSE). The results show that both IFS and FBS fractal interpolation perform well with regard to operational skill scores, and they meet the additional requirement of generating structurally consistent fields. Furthermore, confidence intervals can be directly generated from the FBS ensemble. The results were used to diagnose errors relevant for hydrometeorological applications, in particular a spatial displacement with characteristic length of at least 50 km (2500 km2) in the location of peak rainfall intensities for the cases studied.
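As a hedged sketch of one ingredient of the FBS approach discussed above, the snippet below synthesizes a fractional Brownian surface by spectral filtering, using a power spectrum proportional to k^(-β) with β = 2H + 2 in two dimensions. How such a surface is then blended with the coarse TRMM field (so that coarse-cell means are preserved) is not shown, and the grid size and Hurst coefficient are illustrative assumptions.

```python
# Spectral synthesis of a fractional Brownian surface with Hurst coefficient H.
import numpy as np

def fbm_surface(n, hurst, seed=0):
    rng = np.random.default_rng(seed)
    beta = 2.0 * hurst + 2.0                              # 2D spectral exponent
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)      # radial wavenumber
    amplitude = np.zeros_like(k)
    nonzero = k > 0
    amplitude[nonzero] = k[nonzero] ** (-beta / 2.0)      # amplitude ~ k^(-beta/2)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    field = np.fft.ifft2(noise * amplitude).real
    return (field - field.mean()) / field.std()           # standardized surface

surface = fbm_surface(n=128, hurst=0.7)   # subgrid texture on the fine (l x l) grid
```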


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 875
Author(s):  
Li Zhou ◽  
Lin Xu ◽  
Mingcai Lan ◽  
Jingjing Chen

Heavy rainfall events often cause great societal and economic impacts. The prediction ability of traditional extrapolation techniques decreases rapidly with increasing lead time. Moreover, deficiencies of high-resolution numerical models and high-frequency data assimilation will increase the prediction uncertainty. To address these shortcomings, based on the hourly precipitation predictions of the Global/Regional Assimilation and Prediction System-Cycle of Hourly Assimilation and Forecast (GRAPES-CHAF) and the Shanghai Meteorological Service-WRF ADAS Rapid Refresh System (SMS-WARR), we present an improved weighting method of time-lag-ensemble averaging for hourly precipitation forecasts, which gives more weight to heavy rainfall and can quickly select the optimal ensemble members for forecasting. In addition, using the cross-magnitude weight (CMW) method, mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (CC), verification of the hourly precipitation forecasts for the next six hours in Hunan Province during the 2019 Typhoon Bailu case and heavy rainfall events from April to September 2020 shows that the revised forecast method can more accurately capture the characteristics of hourly short-range precipitation and improve the forecast accuracy and the probability of detection of heavy rainfall.
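The sketch below illustrates a time-lag-ensemble weighted average in the spirit described above: forecasts valid at the same hour but launched at successively earlier times are blended, with member weights boosted where a member predicts heavier rain. The weighting function, threshold, and boost factor are assumptions for illustration, not the authors' exact formulation.

```python
# Weighted time-lag-ensemble average favoring members that predict heavy rain.
import numpy as np

def time_lag_ensemble(members, heavy_threshold=10.0, boost=2.0):
    """members: (n_lags, ny, nx) hourly rain forecasts valid at the same time."""
    base = np.ones(members.shape[0])[:, None, None]
    weights = np.where(members >= heavy_threshold, base * boost, base)
    return (weights * members).sum(axis=0) / weights.sum(axis=0)

rng = np.random.default_rng(4)
lagged = rng.gamma(1.5, 3.0, size=(6, 50, 50))   # six lagged runs (hypothetical)
blend = time_lag_ensemble(lagged)
```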


2018 ◽  
Vol 33 (6) ◽  
pp. 1501-1511 ◽  
Author(s):  
Harold E. Brooks ◽  
James Correia

Abstract Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance take place in 2012 when the default warning duration decreased, and there is an apparent increased emphasis on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decrease in 2012. Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
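The two headline metrics named above (mean lead time for tornadoes warned in advance and fraction of tornadoes warned in advance) can be computed from paired event records, as in the sketch below; the record structure and field names are assumptions for illustration.

```python
# Mean lead time for tornadoes warned in advance and fraction warned in advance.
from datetime import datetime, timedelta

def warning_metrics(events):
    """events: list of (tornado_time, warning_time or None); lead time in minutes."""
    lead_times = []
    warned_in_advance = 0
    for tornado_time, warning_time in events:
        if warning_time is not None and warning_time <= tornado_time:
            lead_times.append((tornado_time - warning_time).total_seconds() / 60.0)
            warned_in_advance += 1
    mean_lead = sum(lead_times) / len(lead_times) if lead_times else float("nan")
    fraction_warned = warned_in_advance / len(events) if events else float("nan")
    return mean_lead, fraction_warned

t0 = datetime(2016, 5, 9, 20, 15)
events = [(t0, t0 - timedelta(minutes=13)), (t0, None), (t0, t0 - timedelta(minutes=5))]
print(warning_metrics(events))
```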

