Using Fractal Downscaling of Satellite Precipitation Products for Hydrometeorological Applications

2010 ◽  
Vol 27 (3) ◽  
pp. 409-427 ◽  
Author(s):  
Kun Tao ◽  
Ana P. Barros

Abstract The objective of spatial downscaling strategies is to increase the information content of coarse datasets at smaller scales. In the case of quantitative precipitation estimation (QPE) for hydrological applications, the goal is to close the scale gap between the spatial resolution of coarse datasets (e.g., gridded satellite precipitation products at resolution L × L) and the high resolution (l × l; L ≫ l) necessary to capture the spatial features that determine spatial variability of water flows and water stores in the landscape. In essence, the downscaling process consists of weaving subgrid-scale heterogeneity over a desired range of wavelengths in the original field. The defining question is, which properties, statistical and otherwise, of the target field (the known observable at the desired spatial resolution) should be matched, with the caveat that downscaling methods be as general as possible and therefore ideally without case-specific constraints and/or calibration requirements? Here, the attention is focused on two simple fractal downscaling methods using iterated function systems (IFS) and fractal Brownian surfaces (FBS) that meet this requirement. The two methods were applied to spatially disaggregate 27 summertime convective storms in the central United States during 2007 at three consecutive times (1800, 2100, and 0000 UTC, thus 81 fields overall) from the Tropical Rainfall Measuring Mission (TRMM) version 6 (V6) 3B42 precipitation product (∼25-km grid spacing) to the same resolution as the NCEP stage IV products (∼4-km grid spacing). Results from bilinear interpolation are used as the control. A fundamental distinction between IFS and FBS is that the latter implies a distribution of downscaled fields and thus an ensemble solution, whereas the former provides a single solution.
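A fractal Brownian surface of the kind used for FBS downscaling can be generated by spectral synthesis: Gaussian white noise is filtered so that its power spectrum falls off as a power law k^(−β). The sketch below uses a common shortcut (taking the real part of an inverse FFT of a non-Hermitian spectrum) and is purely illustrative; the function name and parameters are assumptions, not the authors' implementation.

```python
import numpy as np

def fbs(n, beta, seed=0):
    """Fractal Brownian surface by spectral synthesis: white noise whose
    Fourier amplitudes are scaled so the power spectrum ~ k**(-beta)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                      # avoid division by zero at the DC bin
    amplitude = k ** (-beta / 2.0)     # power ~ amplitude**2 ~ k**(-beta)
    amplitude[0, 0] = 0.0              # zero out the mean for a zero-mean field
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(amplitude * noise).real
```

Because each realization draws new random phases, repeated calls with different seeds yield the ensemble of downscaled fields that distinguishes FBS from the single-solution IFS.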
The downscaling effectiveness is assessed using fractal measures (the spectral exponent β, fractal dimension D, Hurst coefficient H, and roughness amplitude R) and traditional operational skill scores [false alarm rate (FR), probability of detection (PD), threat score (TS), and Heidke skill score (HSS)], as well as bias and the root-mean-square error (RMSE). The results show that both IFS and FBS fractal interpolation perform well with regard to operational skill scores, and they meet the additional requirement of generating structurally consistent fields. Furthermore, confidence intervals can be directly generated from the FBS ensemble. The results were used to diagnose errors relevant for hydrometeorological applications, in particular a spatial displacement with characteristic length of at least 50 km (2500 km2) in the location of peak rainfall intensities for the cases studied.
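The categorical scores listed above all derive from a 2×2 rain/no-rain contingency table. A minimal sketch (counts and the function name are illustrative, not the paper's code):

```python
def categorical_scores(hits, misses, false_alarms, correct_negatives):
    """Standard 2x2 contingency-table scores for dichotomous rain forecasts."""
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    ts = hits / (hits + misses + false_alarms)   # threat score (CSI)
    # Heidke skill score: proportion correct relative to random chance
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses)
                * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return pod, far, ts, hss
```

For a perfect forecast HSS reaches 1, while a forecast no better than chance scores 0, which is why HSS complements the hit-based POD and TS.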

2010 ◽  
Vol 11 (4) ◽  
pp. 966-978 ◽  
Author(s):  
Kenneth J. Tobin ◽  
Marvin E. Bennett

Abstract Significant concern has been expressed regarding the ability of satellite-based precipitation products such as the National Aeronautics and Space Administration (NASA) Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 products (version 6) and the U.S. National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center’s (CPC) morphing technique (CMORPH) to accurately capture rainfall values over land. Problems exist in terms of bias, false-alarm rate (FAR), and probability of detection (POD), which vary greatly worldwide and over the conterminous United States (CONUS). This paper directly addresses these concerns by developing a methodology that adjusts existing TMPA products utilizing ground-based precipitation data. The approach is not a simple bias adjustment but a three-step process that transforms a satellite precipitation product. Ground-based precipitation is used to develop a filter eliminating FAR in the authors’ adjusted product. The probability distribution function (PDF) of the satellite-based product is adjusted to the PDF of the ground-based product, minimizing bias. Failure of precipitation detection (POD) is addressed by utilizing a ground-based product during these periods in their adjusted product. This methodology has been successfully applied in the hydrological modeling of the San Pedro basin in Arizona for a 3-yr time series, yielding excellent streamflow simulations at a daily time scale. The approach can be applied to any satellite precipitation product (i.e., TRMM 3B42 version 7) and will provide a useful approach to quantifying precipitation in regions with limited ground-based precipitation monitoring.
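The PDF-adjustment step described above is, in essence, quantile mapping: each satellite value is replaced by the gauge value at the same empirical quantile. A minimal sketch under the assumption of all-positive samples (real implementations treat zero-rain values and ties separately); names are illustrative:

```python
import numpy as np

def quantile_map(satellite, gauge):
    """Map satellite values onto the gauge distribution by matching
    empirical non-exceedance probabilities (CDF matching)."""
    sat_sorted = np.sort(satellite)
    gauge_sorted = np.sort(gauge)
    # Empirical non-exceedance probability of each satellite value
    probs = np.searchsorted(sat_sorted, satellite, side="right") / len(sat_sorted)
    # Gauge value at the same quantile (linear interpolation between ranks)
    return np.quantile(gauge_sorted, np.clip(probs, 0.0, 1.0))
```

Because the mapping is monotone, the relative ordering of satellite estimates is preserved while the systematic bias against the gauge distribution is removed.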


2016 ◽  
Vol 17 (4) ◽  
pp. 1101-1117 ◽  
Author(s):  
Viviana Maggioni ◽  
Patrick C. Meyers ◽  
Monique D. Robinson

Abstract A great deal of expertise in satellite precipitation estimation has been developed during the Tropical Rainfall Measuring Mission (TRMM) era (1998–2015). The quantification of errors associated with satellite precipitation products (SPPs) is crucial for a correct use of these datasets in hydrological applications, climate studies, and water resources management. This study presents a review of previous work that focused on validating SPPs for liquid precipitation during the TRMM era through comparisons with surface observations, both in terms of mean errors and detection capabilities across different regions of the world. Several SPPs have been considered: TMPA 3B42 (research and real-time products), CPC morphing technique (CMORPH), Global Satellite Mapping of Precipitation (GSMaP; both the near-real-time and the Motion Vector Kalman filter products), PERSIANN, and PERSIANN–Cloud Classification System (PERSIANN-CCS). Topography, seasonality, and climatology were shown to play a role in the SPP’s performance, especially in terms of detection probability and bias. Regions with complex terrain exhibited poor rain detection and magnitude-dependent mean errors; low probability of detection was reported in semiarid areas. Winter seasons, usually associated with lighter rain events, snow, and mixed-phase precipitation, showed larger biases.


2010 ◽  
Vol 4 (1) ◽  
pp. 12-23 ◽  
Author(s):  
Md. Nazrul Islam ◽  
Someshwar Das ◽  
Hiroshi Uyeda

In this study, rainfall is calculated from Tropical Rainfall Measuring Mission (TRMM) Version 6 (V6) 3B42 datasets and calibrated against observed daily rain-gauge rainfall collected at 15 locations over Nepal during 1998-2007. At monthly, seasonal, and annual scales, the TRMM-estimated rainfall follows a distribution similar to the historical patterns obtained from the rain-gauge data. Rainfall is large in the southern parts of the country, especially in central Nepal. Day-to-day comparison shows that the TRMM-derived trend is very similar to the observed data, but TRMM usually underestimates rainfall, with some exceptions of overestimation on some days. The correlation coefficient between TRMM and rain-gauge rainfall is about 0.71. TRMM captures about 65.39% of surface rainfall in Nepal; after applying calibration factors obtained through a regression expression, the TRMM-estimated rainfall over Nepal becomes about 99.91% of the observed amount. TRMM detection of rainy days is poor over Nepal: it correctly detects, under-detects, and over-detects rainy days at approximately 19%, 72%, and 9% of stations, respectively. False alarm rate, probability of detection, threat score, and skill score are calculated as 0.30, 0.68, 0.53, and 0.55, respectively. Finally, TRMM data can be utilized to measure mountainous rainfall over Nepal, but the exact amount of rainfall has to be calculated using adjustment factors obtained through a calibration procedure. This preliminary work prepares for the utilization of Global Precipitation Measurement (GPM) data, scheduled to commence in 2013.
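A regression-based calibration of this kind can be illustrated as an ordinary least-squares fit of gauge rainfall on TRMM rainfall, applied back to the satellite series. The paper's exact regression expression is not reproduced here; this is only a sketch with illustrative names:

```python
import numpy as np

def calibrate(trmm, gauge):
    """Fit gauge = a * trmm + b by least squares and return the
    calibrated satellite series. Illustrative sketch, not the paper's model."""
    a, b = np.polyfit(trmm, gauge, deg=1)
    return a * trmm + b
```

Applying the fitted coefficients removes the mean bias by construction, which is consistent with the calibrated TRMM total approaching the observed total.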


2006 ◽  
Vol 7 ◽  
pp. 85-90
Author(s):  
M. Casaioli ◽  
S. Mariani ◽  
C. Accadia ◽  
M. Gabella ◽  
S. Michaelides ◽  
...  

Abstract. In the framework of the European VOLTAIRE project (Fifth Framework Programme), simulations of relatively heavy precipitation events, which occurred over the island of Cyprus, were performed by means of numerical atmospheric models. One of the aims of the project was indeed the comparison of modelled rainfall fields with multi-sensor observations. Thus, for the 5 March 2003 event, the 24-h accumulated precipitation forecast of the BOlogna Limited Area Model (BOLAM) was compared with the available observations reconstructed from ground-based radar data and estimated from rain gauge data. Since radar data may be affected by errors depending on the distance from the radar, these data could be range-adjusted by using other sensors. In this case, the Precipitation Radar aboard the Tropical Rainfall Measuring Mission (TRMM) satellite was used to adjust the ground-based radar data with a two-parameter scheme. Thus, in this work, two observational fields were employed: the rain gauge gridded analysis and the observational analysis obtained by merging the range-adjusted radar and rain gauge fields. In order to verify the modelled precipitation, both non-parametric skill scores and the contiguous rain area (CRA) analysis were applied. Skill score results show some differences when using the two observational fields. CRA results are instead in good agreement, showing that in general a 0.27° eastward shift optimizes the forecast with respect to the two observational analyses. This result is also supported by a subjective inspection of the shifted forecast field, whose gross features agree with the analysis pattern better than those of the non-shifted forecast. However, some questions, especially regarding the effect of other range-adjustment techniques, remain open and need to be addressed in future work.
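The core of a CRA-style displacement analysis is a grid search for the shift that best matches forecast and observation. A minimal sketch using periodic shifts via np.roll (a simplification: real CRA analysis restricts the search to a contiguous rain area and evaluates only the overlapping domain); names and the error measure are illustrative:

```python
import numpy as np

def best_shift(forecast, observed, max_shift=5):
    """Grid-search the (dx, dy) displacement that minimizes RMSE between
    a shifted forecast field and the observed field."""
    best = (0, 0, np.inf)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(forecast, dy, axis=0), dx, axis=1)
            rmse = np.sqrt(np.mean((shifted - observed) ** 2))
            if rmse < best[2]:
                best = (dx, dy, rmse)
    return best                 # optimal displacement and residual error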


2005 ◽  
Vol 20 (6) ◽  
pp. 918-930 ◽  
Author(s):  
Agostino Manzato

Abstract The relative operating characteristic (ROC) diagram is often used to assess the performance of a classification system, like the categorical forecast of an event occurrence. Categorical forecasting can be obtained by imposing a threshold on a continuous variable in order to make it dichotomous. In practice this threshold could be varied to create different contingency tables. From each table, it is then possible to derive many statistical indices and skill scores, which are functions of the chosen threshold. The ROC curve is obtained by plotting two of these indices: probability of detection (POD) versus probability of false detection (POFD). In this work a simple approximation for another of these indices, the odds ratio (O), is proposed. Thus, O is parameterized as a function of POFD and that leads to a parameterization of all the theoretical ROC curves. Using this approximation, it is also possible to derive the theoretical maximum Hanssen and Kuipers skill score (KSS) and the theoretical maximum Heidke skill score (HSS), for each ROC. It is found that the maximum HSS depends explicitly on the database event frequency (α), while the KSS seems independent of it. Out of the approximation framework, some general properties of ROC points corresponding to the maximum KSS, to the maximum HSS, and to the BIAS = 1 condition have also been found. It is also suggested that many of these performance measures are influenced by the event frequency, which must be taken into account when comparing classifiers made for different databases. Another interesting outcome of this study is that it is shown how the KSS is also equitable (in the sense introduced by Gandin and Murphy) for a generic “cost ratio” (λ) between miss and false alarm cases, not only for the original case λ = 1.
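A ROC curve is traced by sweeping a threshold over the continuous variable and plotting POD against POFD; the Hanssen and Kuipers skill score KSS = POD − POFD is then maximized somewhere along that curve. A minimal sketch (names are illustrative):

```python
import numpy as np

def roc_points(score, event, thresholds):
    """(POD, POFD, KSS) for each threshold applied to a continuous score.
    `event` is a boolean array marking observed occurrences."""
    points = []
    for t in thresholds:
        forecast = score >= t
        hits = np.sum(forecast & event)
        misses = np.sum(~forecast & event)
        false_alarms = np.sum(forecast & ~event)
        correct_negatives = np.sum(~forecast & ~event)
        pod = hits / (hits + misses)                      # probability of detection
        pofd = false_alarms / (false_alarms + correct_negatives)
        points.append((pod, pofd, pod - pofd))            # KSS = POD - POFD
    return points
```

Unlike HSS, the KSS computed this way does not involve the event frequency directly, which is consistent with the paper's finding that the maximum HSS depends explicitly on α while the maximum KSS appears independent of it.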


2008 ◽  
Vol 23 (5) ◽  
pp. 931-952 ◽  
Author(s):  
John S. Kain ◽  
Steven J. Weiss ◽  
David R. Bright ◽  
Michael E. Baldwin ◽  
Jason J. Levit ◽  
...  

Abstract During the 2005 NOAA Hazardous Weather Testbed Spring Experiment two different high-resolution configurations of the Weather Research and Forecasting-Advanced Research WRF (WRF-ARW) model were used to produce 30-h forecasts 5 days a week for a total of 7 weeks. These configurations used the same physical parameterizations and the same input dataset for the initial and boundary conditions, differing primarily in their spatial resolution. The first set of runs used 4-km horizontal grid spacing with 35 vertical levels while the second used 2-km grid spacing and 51 vertical levels. Output from these daily forecasts is analyzed to assess the numerical forecast sensitivity to spatial resolution in the upper end of the convection-allowing range of grid spacing. The focus is on the central United States and the time period 18–30 h after model initialization. The analysis is based on a combination of visual comparison, systematic subjective verification conducted during the Spring Experiment, and objective metrics based largely on the mean diurnal cycle of the simulated reflectivity and precipitation fields. Additional insight is gained by examining the size distributions of the individual reflectivity and precipitation entities, and by comparing forecasts of mesocyclone occurrence in the two sets of forecasts. In general, the 2-km forecasts provide more detailed presentations of convective activity, but there appears to be little, if any, forecast skill on the scales where the added details emerge. On the scales where both model configurations show higher levels of skill—the scale of mesoscale convective features—the numerical forecasts appear to provide comparable utility as guidance for severe weather forecasters. 
These results suggest that, for the geographical, phenomenological, and temporal parameters of this study, any added value provided by decreasing the grid increment from 4 to 2 km (with commensurate adjustments to the vertical resolution) may not be worth the considerable increases in computational expense.


2008 ◽  
Vol 25 (11) ◽  
pp. 1901-1920 ◽  
Author(s):  
Ana P. Barros ◽  
Kun Tao

Abstract A space-filling algorithm (SFA) based on 2D spectral estimation techniques was developed to extrapolate the spatial domain of the narrow-swath near-instantaneous rain-rate estimates from Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) and TRMM Microwave Imager (TMI) using thermal infrared imagery (Meteosat-5) without making use of calibration or statistical fitting. A comparison against rain gauge observations and the original PR 2A25 and TMI 2A12 estimates in the central Himalayas during the monsoon season (June–September) over a 3-yr period of 1999–2001 was conducted to assess the algorithm’s performance. Evaluation over the continental United States was conducted against the NCEP stage IV combined radar and gauge analysis for selected events. Overall, the extrapolated PR and TMI rainfall fields derived using SFA exhibit skill comparable to the original TRMM estimates. The results indicate that probability of detection and threat scores of the reconstructed products are significantly better than the original PR data at high-elevation stations (>2000 m) on mountain ridges, and specifically for rainfall rates exceeding 2–5 mm h−1 and for afternoon convection. For low-elevation stations located in steep narrow valleys, the performance varies from year to year and deteriorates strongly for light rainfall (false alarm rates significantly increase). A preliminary comparison with other satellite products (e.g., 3B42, a TRMM-adjusted merged infrared-based rainfall product) suggests that integrating this algorithm in currently existing operational multisensor algorithms has the potential to improve significantly spatial resolution, texture, and detection of rainfall, especially in mountainous regions, which present some of the greatest challenges in precipitation retrieval from satellites over land, and for hydrological operations during extreme events.


2013 ◽  
Vol 6 (1) ◽  
pp. 1269-1310 ◽  
Author(s):  
T. Zinner ◽  
C. Forster ◽  
E. de Coning ◽  
H.-D. Betz

Abstract. In this manuscript, recent changes to the DLR METEOSAT thunderstorm TRacking And Monitoring algorithm (Cb-TRAM) are presented, as well as a validation of Cb-TRAM against the European ground-based LIghtning NETwork (LINET) data of Nowcast GmbH and Lightning Detection Network (LDN) data of the South African Weather Service (SAWS). The validation is conducted using the well-known skill scores probability of detection (POD) and false alarm ratio (FAR), on the basis of METEOSAT/SEVIRI pixels as well as of thunderstorm objects. The values obtained demonstrate the limits of Cb-TRAM in particular, as well as the limits in general of satellite methods based on thermal emission and solar reflectivity information from thunderstorm tops. Although the climatic conditions and the occurrence of thunderstorms are quite different for Europe and South Africa, the quality score values are similar. Our conclusion is that Cb-TRAM provides robust results of well-defined quality for very different climatic regimes. The POD for a thunderstorm with intense lightning is about 80% during the day. The FAR for a Cb-TRAM-detected thunderstorm which is not at least close to intense lightning activity is about 50%; if the proximity to any lightning activity is evaluated, the FAR is much lower, at about 15%. Pixel-based analysis shows that the detected thunderstorm object size is not indiscriminately large, but well within the physical limitations of the method. Nighttime POD and FAR are somewhat worse, as the detection scheme cannot use high-resolution visible information. Nowcasting scores show useful values up to approximately 30 min.


2014 ◽  
Vol 18 (7) ◽  
pp. 2645-2656 ◽  
Author(s):  
T. C. Pagano

Abstract. This study created a 13-year historical archive of operational flood forecasts issued by the Regional Flood Management and Mitigation Center (RFMMC) of the Mekong River Commission. The RFMMC issues 1- to 5-day daily deterministic river height forecasts for 22 locations throughout the wet season (June–October). When these forecasts reach near flood level, government agencies and the public are encouraged to take protective action against damages. When measured by standard skill scores, the forecasts perform exceptionally well (e.g., 1 day-ahead Nash–Sutcliffe > 0.99) although much of this apparent skill is due to the strong seasonal cycle and the narrow natural range of variability at certain locations. Five-day forecasts upstream of Phnom Penh typically have 0.8 m error standard deviation, whereas below Phnom Penh the error is typically 0.3 m. The coefficients of persistence for 1-day forecasts are typically 0.4–0.8 and 5-day forecasts are typically 0.1–0.7. RFMMC uses a series of benchmarks to define a metric of percentage satisfactory forecasts. As the benchmarks were derived based on the average error, certain locations and lead times consistently appear less satisfactory than others. Instead, different benchmarks were proposed and derived based on the 70th percentile of absolute error over the 13-year period. There are no obvious trends in the percentage of satisfactory forecasts from 2002 to 2012, regardless of the benchmark chosen. Finally, when evaluated from a categorical "crossing above/not-crossing above flood level" perspective, the forecasts have a moderate probability of detection (48% at 1 day ahead, 31% at 5 days ahead) and false alarm rate (13% at 1 day ahead, 74% at 5 days ahead).
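The two headline verification metrics can be sketched directly from their definitions: Nash–Sutcliffe efficiency measures skill relative to the observed mean, while the coefficient of persistence measures skill relative to a persistence forecast (today's level repeated for the lead time). This is an illustrative sketch, not the RFMMC verification code:

```python
import numpy as np

def nash_sutcliffe(obs, fcst):
    """NSE: 1 = perfect; 0 = no better than forecasting the observed mean."""
    return 1.0 - np.sum((obs - fcst) ** 2) / np.sum((obs - obs.mean()) ** 2)

def persistence_coefficient(obs, fcst, lead):
    """CP: 1 = perfect; 0 = no better than persisting the last observation."""
    persisted = obs[:-lead]                      # persistence forecast for obs[lead:]
    return 1.0 - (np.sum((obs[lead:] - fcst[lead:]) ** 2)
                  / np.sum((obs[lead:] - persisted) ** 2))
```

On a strongly seasonal river the observed mean is a poor baseline, so NSE can exceed 0.99 even when CP is modest; this is why the study treats the persistence coefficient as the more discriminating score.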


MAUSAM ◽  
2022 ◽  
Vol 73 (1) ◽  
pp. 83-90
Author(s):  
PIYUSH JOSHI ◽  
M.S. SHEKHAR ◽  
ASHAVANI KUMAR ◽  
J.K. QUAMARA

Kalpana satellite images, available in real time from the India Meteorological Department (IMD), contain relevant information about clouds in the infra-red (IR), water vapor (WV), and visible (VIS) bands. In the present study, an attempt has been made to forecast precipitation at six stations in the western Himalaya using the extracted grey-scale values of IR and WV images. The extracted pixel values at a location are trained against the corresponding precipitation at that location. The precipitation state at 0300 UTC is used to train the model for precipitation forecasts with a 24-hour lead time. The satellite images acquired in the IR (10.5-12.5 µm) and WV (5.7-7.1 µm) bands have been used to develop an Artificial Neural Network (ANN) model for qualitative as well as quantitative precipitation forecasts. The model results are validated against ground observations, and skill scores are computed to assess the potential of the model for operational purposes. The probability of detection at the six stations varies from 0.78 for Gulmarg in the Pir-Panjal range to 0.95 for Dras in the Greater Himalayan range. The overall performance for the qualitative forecast ranges from 61% to 84%. The root-mean-square error for the different locations under study is in the range 5.81 to 8.7.

