Ensemble Tropical Rainfall Potential (eTRaP) Forecasts

2011 ◽  
Vol 26 (2) ◽  
pp. 213-224 ◽  
Author(s):  
Elizabeth E. Ebert ◽  
Michael Turk ◽  
Sheldon J. Kusselson ◽  
Jianbin Yang ◽  
Matthew Seybold ◽  
...  

Abstract Ensemble tropical rainfall potential (eTRaP) has been developed to improve short-range forecasts of heavy rainfall in tropical cyclones. Evolving from the tropical rainfall potential (TRaP), a 24-h rain forecast based on estimated rain rates from microwave sensors aboard polar-orbiting satellites, eTRaP combines all single-pass TRaPs generated within ±3 h of 0000, 0600, 1200, and 1800 UTC to form a simple ensemble. This approach addresses uncertainties in satellite-derived rain rates and spatial rain structures by using estimates from different sensors observing the cyclone at different times. Quantitative precipitation forecasts (QPFs) are produced from the ensemble mean field using a probability matching approach to recalibrate the rain-rate distribution against the ensemble members (i.e., the input TRaP forecasts) themselves. ETRaPs also provide probabilistic forecasts of heavy rain, which are potentially of enormous benefit to decision makers. Verification of eTRaP forecasts for 16 Atlantic hurricanes making landfall in the United States between 2004 and 2008 shows that the eTRaP rain amounts are more accurate than single-sensor TRaPs. The probabilistic forecasts have useful skill, but the probabilities should be interpreted within a spatial context. A novel concept of a “radius of uncertainty” compensates for the influence of location error in the probability forecasts. The eTRaPs are produced in near–real time for all named tropical storms and cyclones around the globe. They can be viewed online (http://www.ssd.noaa.gov/PS/TROP/etrap.html) and are available in digital form to users.
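The probability matching step described above can be sketched in a few lines: the ensemble mean fixes where it rains, while the pooled member values restore how hard. A minimal illustration (the function name and gridded inputs are hypothetical; this is a generic probability-matched mean, not the operational eTRaP code):

```python
import numpy as np

def probability_matched_mean(members):
    """Probability-matched ensemble mean (generic sketch).

    The plain ensemble mean preserves spatial placement but smooths
    peak rain rates; probability matching reassigns rain amounts so
    the output field has the rain-rate distribution of the pooled
    ensemble members."""
    members = np.asarray(members, dtype=float)  # shape (n_members, ny, nx)
    mean = members.mean(axis=0)
    flat_mean = mean.ravel()
    npix = flat_mean.size
    # Pool all member values, sort descending, and thin to one value
    # per grid point so the pooled quantiles are retained.
    pooled = np.sort(members.ravel())[::-1]
    idx = np.linspace(0, pooled.size - 1, npix).round().astype(int)
    target = pooled[idx]
    # Assign the largest pooled value to the wettest mean pixel, etc.
    order = np.argsort(flat_mean)[::-1]
    out = np.empty_like(flat_mean)
    out[order] = target
    return out.reshape(mean.shape)
```

The rank reassignment is what lets the ensemble mean keep realistic peak intensities while retaining the mean's better placement.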

2014 ◽  
Vol 142 (11) ◽  
pp. 4108-4138 ◽  
Author(s):  
Russ S. Schumacher ◽  
Adam J. Clark

Abstract This study investigates probabilistic forecasts made using different convection-allowing ensemble configurations for a three-day period in June 2010 when numerous heavy-rain-producing mesoscale convective systems (MCSs) occurred in the United States. These MCSs developed both along a baroclinic zone in the Great Plains, and in association with a long-lived mesoscale convective vortex (MCV) in Texas and Arkansas. Four different ensemble configurations were developed using an ensemble-based data assimilation system. Two configurations used continuously cycled data assimilation, and two started the assimilation 24 h prior to the initialization of each forecast. Each configuration was run with both a single set of physical parameterizations and a mixture of physical parameterizations. These four ensemble forecasts were also compared with an ensemble run in real time by the Center for the Analysis and Prediction of Storms (CAPS). All five of these ensemble systems produced skillful probabilistic forecasts of the heavy-rain-producing MCSs, with the ensembles using mixed physics providing forecasts with greater skill and less overall bias compared to the single-physics ensembles. The forecasts using ensemble-based assimilation systems generally outperformed the real-time CAPS ensemble at lead times of 6–18 h, whereas the CAPS ensemble was the most skillful at forecast hours 24–30, though it also exhibited a wet bias. The differences between the ensemble precipitation forecasts were found to be related in part to differences in the analysis of the MCV and its environment, which in turn affected the evolution of errors in the forecasts of the MCSs. These results underscore the importance of representing model error in convection-allowing ensemble analysis and prediction systems.
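The kind of probabilistic QPF verified in such studies can be built directly from ensemble members: the exceedance probability at each grid point is the fraction of members above a rain threshold, and the Brier score measures its quality against a binary observation. A minimal sketch (hypothetical function names; operational verification is typically more elaborate, e.g. neighborhood-based):

```python
import numpy as np

def exceedance_probability(members, threshold):
    """Fraction of ensemble members exceeding a rain threshold at
    each grid point: the simplest probabilistic forecast an
    ensemble can provide."""
    members = np.asarray(members, dtype=float)
    return (members > threshold).mean(axis=0)

def brier_score(prob, observed_event):
    """Brier score: mean squared error of the probability forecast
    against the observed binary event (0/1); lower is better."""
    prob = np.asarray(prob, dtype=float)
    obs = np.asarray(observed_event, dtype=float)
    return float(((prob - obs) ** 2).mean())
```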


MAUSAM ◽  
2022 ◽  
Vol 64 (1) ◽  
pp. 77-82
Author(s):  
HABIBURRAHAMAN BISWAS ◽  
P.K. KUNDU ◽  
D. PRADHAN

The major threat to the life and property of people on the east coast of India, including the West Bengal coast, is very heavy rainfall from landfalling tropical cyclones originating over the Bay of Bengal. Forecasting the magnitude of rainfall from landfalling tropical cyclones is very difficult. Satellite-derived rain rates over the raining areas of tropical cyclones can be used to forecast potential tropical cyclone rainfall accumulations. In the present study, an attempt has been made to estimate 24-hour rainfall over coastal stations before the landfall of tropical cyclone ‘Aila’ using Tropical Rainfall Measuring Mission (TRMM) satellite rain-rate data and the observed storm track of Aila.
The Tropical Rainfall Potential (TRaP) forecast, issued 24 hours prior to landfall of tropical cyclone ‘Aila’ and based on the well-known technique developed in the USA, provides a good rainfall forecast, especially for the coastal areas lying at the head of the storm's direction of motion.
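The TRaP technique referred to above amounts to translating a snapshot satellite rain-rate field along the forecast track and accumulating over 24 hours. A toy sketch on a periodic grid (the integer per-hour storm motion and the function name are illustrative assumptions, not the operational implementation):

```python
import numpy as np

def trap_accumulation(rain_rate, track_dx, track_dy, hours=24):
    """Toy TRaP: shift a snapshot rain-rate field (mm/h) along the
    forecast storm track in 1-h steps and sum, so each hour the
    storm-relative rain pattern sits where the storm is forecast
    to be.  np.roll wraps at the grid edges, which is unphysical
    but keeps the sketch short."""
    accum = np.zeros_like(rain_rate, dtype=float)
    for h in range(hours):
        # After h hours the storm has moved h*(dx, dy) grid cells.
        shifted = np.roll(rain_rate, (h * track_dy, h * track_dx), axis=(0, 1))
        accum += shifted  # mm/h accumulated over 1 h
    return accum
```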


2017 ◽  
Vol 18 (1) ◽  
pp. 28-34 ◽  
Author(s):  
Chandrasekar (Shaker) S. Kousik ◽  
Pingsheng Ji ◽  
Daniel S. Egel ◽  
Lina M. Quesada-Ocampo

About 50% of the watermelons in the United States are produced in the southeastern states, where optimal conditions for development of Phytophthora fruit rot prevail. Phytophthora fruit rot significantly limits watermelon production by causing serious yield losses before and after fruit harvest. Efficacy of fungicide rotation programs and Melcast-scheduled sprays for managing Phytophthora fruit rot was determined by conducting experiments in Phytophthora capsici-infested fields at three locations in the southeastern United States (North Carolina, South Carolina, and Georgia). The mini seedless cultivar Wonder and seeded cultivar Mickey Lee (pollenizer) were used. Five weekly applications of fungicides were made at all locations. Significant fruit rot (53 to 91%, mean 68%) was observed in the nontreated control plots in all three years (2013 to 2015) and across locations. All fungicide rotation programs significantly reduced Phytophthora fruit rot compared with nontreated controls. Overall, the rotation of Zampro alternated with Orondis was highly effective across three locations and two years. Rotations of Actigard followed by Ranman+Ridomil Gold, Presidio, V-10208, and Orondis, or rotation of Revus alternated with Presidio, were similarly effective. Use of Melcast, a melon disease-forecasting tool, may occasionally enable savings of one spray application without significantly impacting control. Although many fungicides are available for use in rotations, under very heavy rain and pathogen pressure the fungicides alone may not offer adequate protection; therefore, an integrated approach should be used with other management options, including well-drained fields.


Author(s):  
Martin Campbell-Kelly

In October 1945 Alan Turing was recruited by the National Physical Laboratory to lead computer development. His design for a computer, the Automatic Computing Engine (ACE), was idiosyncratic but highly effective. The small-scale Pilot ACE, completed in 1950, was the fastest medium-sized computer of its era. By the time that the full-sized ACE was operational in 1958, however, technological advance had rendered it obsolescent. Although the wartime Bletchley Park operation saw the development of the electromechanical codebreaking bombe (specified by Turing) and the electronic Colossus (to which Turing was a bystander), these inventions had no direct impact on the invention of the electronic stored-program computer, which originated in the United States. The stored-program computer was described in the classic ‘First draft of a report on the EDVAC’, written by John von Neumann on behalf of the computer group at the Moore School of Electrical Engineering, University of Pennsylvania, in June 1945. The report was the outcome of a series of discussions commencing in the summer of 1944 between von Neumann and the inventors of the ENIAC computer—John Presper Eckert, John W. Mauchly, and others. ENIAC was an electronic computer designed primarily for ballistics calculations: in practice, the machine was limited to the integration of ordinary differential equations and it had several other design shortcomings, including a vast number of electronic tubes (18,000) and a tiny memory of just twenty numbers. It was also very time-consuming to program. The EDVAC design grew out of an attempt to remedy these shortcomings. The most novel concept in the EDVAC, which gave it the description ‘stored program’, was the decision to store both instructions and numbers in the same memory. It is worth noting that during 1936 Turing became a research student of Alonzo Church at Princeton University.
Turing came to know von Neumann, who was a founding professor of the Institute for Advanced Study (IAS) in Princeton and was fully aware of Turing’s 1936 paper ‘On computable numbers’. Indeed, von Neumann was sufficiently impressed with it that he invited Turing to become his research assistant at the IAS, but Turing decided to return to England and subsequently spent the war years at Bletchley Park.


2014 ◽  
Vol 15 (3) ◽  
pp. 1135-1151 ◽  
Author(s):  
Youcun Qi ◽  
Jian Zhang ◽  
Brian Kaney ◽  
Carrie Langston ◽  
Kenneth Howard

Abstract Quantitative precipitation estimation (QPE) in the West Coast region of the United States has been a big challenge for the Weather Surveillance Radar-1988 Doppler (WSR-88D) because of severe blockages caused by the complex terrain. The majority of the heavy precipitation in the West Coast region is associated with strong moisture flux from the Pacific that interacts with the coastal mountains. Such orographic enhancement of precipitation occurs at low levels and cannot be observed well by the WSR-88D because of severe blockages: the radar beam either samples too high above the ground and misses the orographic enhancement at lower levels, or broadens with range and cannot adequately resolve vertical variations of the reflectivity structure. The current study developed an algorithm that uses S-band Precipitation Profiler (S-PROF) radar observations in northern California to improve WSR-88D QPEs in the area. The profiler data are used to calculate two sets of reference vertical profiles of reflectivity (RVPRs), one for the coastal mountains and another for the Sierra Nevada. The RVPRs are then used to correct the WSR-88D QPEs in the corresponding areas. The S-PROF-based VPR correction methodology (S-PROF-VPR) takes into account orographic processes and radar beam broadening with range. It is tested using three heavy rain events and is found to provide significant improvements over the operational radar QPE.
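A simple ratio-based VPR correction conveys the idea: scale the reflectivity observed aloft by the reference profile's ratio between the surface and the beam height. This is a hedged sketch of the general technique, not the S-PROF-VPR algorithm itself:

```python
import numpy as np

def vpr_correct(z_dbz_aloft, beam_height_km, ref_profile):
    """Ratio-based VPR correction (illustrative).

    ref_profile is a pair (heights_km, z_linear) giving the mean
    linear reflectivity as a function of height.  The observed
    value aloft is scaled by the profile ratio between the lowest
    level and the beam height, then returned in dBZ."""
    heights, z_lin = ref_profile
    z_at_beam = np.interp(beam_height_km, heights, z_lin)
    z_at_sfc = z_lin[0]                        # profile value near ground
    z_obs_lin = 10.0 ** (z_dbz_aloft / 10.0)   # dBZ -> linear Z
    z_corr_lin = z_obs_lin * (z_at_sfc / z_at_beam)
    return 10.0 * np.log10(z_corr_lin)         # back to dBZ
```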


2011 ◽  
Vol 51 (1) ◽  
pp. 411
Author(s):  
Noll Moriarty

Accurate forecasts for medium-term commodity prices are essential for resource companies committing to large capital expenditures. The inaccuracy of conventional forecasting methods is well known because they tend to be extrapolations of the current price trend; the inevitable reversal catches many by surprise. This paper demonstrates that medium-term (2–5 years) commodity prices are not strongly linked to economic health and commodity demand-supply, but are instead inversely controlled by supply-demand for the United States dollar (USD) and its consequent valuation. P90, P50 and P10 projection bounds for future valuation of the USD are presented based on the successful probabilistic techniques of the petroleum exploration industry. This allows probabilistic projections for the oil price, which is inversely related to the USD valuation. I show that the USD is significantly undervalued at present. Probabilistic projection of the USD valuation indicates that likely appreciation will put downward pressure on commodity prices for the next 2–5 years. If the USD premise is correct, likely appreciation of the dollar during the next 2–5 years will hold the oil price stable, or even decrease it, to around USD 50 per barrel. This is a contrary expectation to most forecasts—one which, if it eventuates, should give cause for reflection before committing to large capital expenditures. Further investigation could examine the extent to which the USD valuation can be modelled as a fractal phenomenon. If so, it would mean the USD valuation is not driven by conventional economic fundamentals; instead, it is a semi-random number series with serial correlation. If true, probabilistic forecasts of the USD can be significantly improved, and hence those of medium-term commodity prices.
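One common way to produce P90/P50/P10 bounds is Monte Carlo simulation of the index. The AR(1)-with-drift model below is purely illustrative (the abstract does not specify the author's model), but it captures the "semi-random series with serial correlation" idea, using the petroleum-industry convention that P90 is the value exceeded with 90% probability:

```python
import numpy as np

def projection_bounds(x0, drift, sigma, rho, years, n_sims=10000, seed=42):
    """Monte Carlo P90/P50/P10 bounds for a serially correlated index.

    Illustrative AR(1)-with-drift model: each year the index moves by
    a fixed drift plus a shock that partially persists (rho).  P90 is
    the low bound (exceeded 90% of the time), P10 the high bound."""
    rng = np.random.default_rng(seed)
    x = np.full(n_sims, float(x0))
    shock = np.zeros(n_sims)
    for _ in range(years):
        shock = rho * shock + rng.normal(0.0, sigma, n_sims)
        x = x + drift + shock
    p90, p50, p10 = np.percentile(x, [10, 50, 90])
    return p90, p50, p10
```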


Author(s):  
Junyi Lu ◽  
Sebastian Meyer

Accurate prediction of flu activity enables health officials to plan disease prevention and allocate treatment resources. A promising forecasting approach is to adapt the well-established endemic-epidemic modeling framework to time series of infectious disease proportions. Using U.S. influenza-like illness surveillance data over 18 seasons, we assessed probabilistic forecasts of this new beta autoregressive model with proper scoring rules. Other readily available forecasting tools were used for comparison, including Prophet, (S)ARIMA and kernel conditional density estimation (KCDE). Short-term flu activity was equally well predicted up to four weeks ahead by the beta model with four autoregressive lags and by KCDE; however, the beta model runs much faster. Non-dynamic Prophet scored worst. Relative performance differed for seasonal peak prediction. Prophet produced the best peak intensity forecasts in seasons with standard epidemic curves; otherwise, KCDE outperformed all other methods. Peak timing was best predicted by SARIMA, KCDE or the beta model, depending on the season. The best overall performance when predicting peak timing and intensity was achieved by KCDE. Only KCDE and naive historical forecasts consistently outperformed the equal-bin reference approach for all test seasons. We conclude that the endemic-epidemic beta model is a performant and easy-to-implement tool to forecast flu activity a few weeks ahead. Real-time forecasting of the seasonal peak, however, should consider outputs of multiple models simultaneously, weighing their usefulness as the season progresses.
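Proper scoring rules such as the logarithmic score are what make comparisons like the one above fair. A minimal sketch of scoring a beta predictive distribution, using the mean/precision parameterization common in beta regression (the function names are illustrative, not the paper's code):

```python
import math

def beta_logpdf(y, mu, phi):
    """Log density of a beta distribution in mean/precision form:
    shape parameters a = mu*phi, b = (1-mu)*phi, with mu in (0,1)
    and precision phi > 0, as used for disease proportions."""
    a, b = mu * phi, (1.0 - mu) * phi
    return (math.lgamma(phi) - math.lgamma(a) - math.lgamma(b)
            + (a - 1.0) * math.log(y) + (b - 1.0) * math.log(1.0 - y))

def log_score(y_obs, mu, phi):
    """Logarithmic score (a proper scoring rule): negative predictive
    log density at the observed proportion; lower is better."""
    return -beta_logpdf(y_obs, mu, phi)
```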


2005 ◽  
Vol 20 (4) ◽  
pp. 465-475 ◽  
Author(s):  
Ralph Ferraro ◽  
Paul Pellegrino ◽  
Michael Turk ◽  
Wanchun Chen ◽  
Shuang Qiu ◽  
...  

Abstract Satellite analysts at the Satellite Services Division (SSD) of the National Environmental Satellite, Data, and Information Service (NESDIS) routinely generate 24-h rainfall potential for all tropical systems that are expected to make landfall within 24 to at most 36 h and are of tropical storm or greater strength (>65 km h⁻¹). These estimates, known as the tropical rainfall potential (TRaP), are generated in an objective manner by taking instantaneous rainfall estimates from passive microwave sensors, advecting this rainfall pattern along the predicted storm track, and accumulating rainfall over the next 24 h. In this study, the TRaPs generated by SSD during the 2002 Atlantic hurricane season have been validated using National Centers for Environmental Prediction (NCEP) stage IV hourly rainfall estimates. An objective validation package was used to generate common statistics such as correlation, bias, and root-mean-square error. It was found that changing the minimum rain-rate threshold could make the results drastically different, and a minimum threshold of 25.4 mm day⁻¹ was determined to be appropriate for use with TRaP. By stratifying the data by different criteria, it was discovered that the TRaPs generated using Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) rain rates, with its optimal set of measurement frequencies, improved spatial resolution, and advanced retrieval algorithm, produced the best results. In addition, the best results were found for TRaPs generated for storms that were between 12 and 18 h from landfall. Since the TRaP is highly dependent on the forecast track of the storm, selected TRaPs were rerun using the observed track contained in the NOAA/Tropical Prediction Center (TPC) “best track.” Although some TRaPs were not significantly improved by using this best track, significant improvements were realized in some instances.
Finally, as a benchmark for the usefulness of TRaP, comparisons were made to Eta Model 24-h precipitation forecasts as well as three climatological maximum rainfall methods. It was apparent that the satellite-based TRaP outperforms the Eta Model in virtually every statistical category, while the climatological methods produced maximum rainfall totals closer to the stage IV maximum amounts when compared with TRaP, although these methods are for storm totals while TRaP is for a 24-h period.
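The kind of objective validation described above (correlation, bias, and RMSE computed above a minimum rain-rate threshold) can be sketched as follows. The function and the ratio definition of bias are illustrative assumptions, not the SSD validation package:

```python
import numpy as np

def verify_qpf(forecast, observed, min_rate=25.4):
    """Threshold-conditional QPF verification statistics (sketch).

    Statistics are computed only where either the forecast or the
    observation exceeds a minimum 24-h rain threshold (mm), since
    the choice of threshold can change results drastically."""
    f = np.asarray(forecast, dtype=float).ravel()
    o = np.asarray(observed, dtype=float).ravel()
    mask = (f >= min_rate) | (o >= min_rate)
    f, o = f[mask], o[mask]
    corr = float(np.corrcoef(f, o)[0, 1])
    bias = float(f.mean() / o.mean())              # ratio bias
    rmse = float(np.sqrt(((f - o) ** 2).mean()))
    return {"n": int(mask.sum()), "corr": corr, "bias": bias, "rmse": rmse}
```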


2005 ◽  
Vol 22 (4) ◽  
pp. 365-380 ◽  
Author(s):  
David B. Wolff ◽  
D. A. Marks ◽  
E. Amitai ◽  
D. S. Silberstein ◽  
B. L. Fisher ◽  
...  

Abstract An overview of the Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) Program is presented. This ground validation (GV) program is based at NASA Goddard Space Flight Center in Greenbelt, Maryland, and is responsible for processing several TRMM science products for validating space-based rain estimates from the TRMM satellite. These products include gauge rain rates, and radar-estimated rain intensities, type, and accumulations, from four primary validation sites (Kwajalein Atoll, Republic of the Marshall Islands; Melbourne, Florida; Houston, Texas; and Darwin, Australia). Site descriptions of rain gauge networks and operational weather radar configurations are presented together with the unique processing methodologies employed within the Ground Validation System (GVS) software packages. Rainfall intensity estimates are derived using the Window Probability Matching Method (WPMM) and then integrated over specified time scales. Error statistics from both dependent and independent validation techniques show good agreement between gauge-measured and radar-estimated rainfall. A comparison of the NASA GV products and those developed independently by the University of Washington for a subset of data from the Kwajalein Atoll site also shows good agreement. A comparison of NASA GV rain intensities to satellite retrievals from the TRMM Microwave Imager (TMI), precipitation radar (PR), and Combined (COM) algorithms is presented, and it is shown that the GV and satellite estimates agree quite well over the open ocean.


2004 ◽  
Vol 43 (11) ◽  
pp. 1586-1597 ◽  
Author(s):  
Hye-Kyung Cho ◽  
Kenneth P. Bowman ◽  
Gerald R. North

Abstract This study investigates the spatial characteristics of nonzero rain rates to develop a probability density function (PDF) model of precipitation using rainfall data from the Tropical Rainfall Measuring Mission (TRMM) satellite. The minimum χ² method is used to find a good estimator for the rain-rate distribution between the gamma and lognormal distributions, which are popularly used in the simulation of the rain-rate PDF. Results are sensitive to the choice of dynamic range, but both the gamma and lognormal distributions match well with the PDF of rainfall data. Comparison with sample means shows that the parametric mean from the lognormal distribution overestimates the sample mean, whereas the gamma distribution underestimates it. These differences are caused by the inflated tail in the lognormal distribution and the small shape parameter in the gamma distribution. If a shape constraint is given, the difference between the sample mean and the parametric mean from the fitted gamma distribution decreases significantly, although the resulting χ² values increase slightly. Of interest is that a consistent regional preference between the two test functions is found. The gamma fits outperform the lognormal fits in wet regions, whereas the lognormal fits are better than the gamma fits in dry regions. Results can be improved with a specific model assumption depending on mean rain rates, but the results presented in this study can be easily applied to develop rainfall retrieval algorithms and to find the proper statistics in the rainfall data.
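As an illustration of fitting the two candidate distributions, here is a method-of-moments sketch standing in for the paper's minimum χ² estimator. It is simpler than the paper's approach, but enough to show how the gamma shape parameter and the lognormal parametric mean exp(μ + σ²/2) arise from the data:

```python
import math

def fit_gamma_moments(rates):
    """Method-of-moments gamma fit to positive rain rates:
    shape k = mean^2/var, scale theta = var/mean, so the fitted
    parametric mean k*theta matches the sample mean exactly."""
    n = len(rates)
    mean = sum(rates) / n
    var = sum((r - mean) ** 2 for r in rates) / n
    k = mean * mean / var
    theta = var / mean
    return k, theta

def fit_lognormal_moments(rates):
    """Lognormal fit by matching the mean and variance of log rates.
    The parametric mean exp(mu + sigma^2/2) can exceed the sample
    mean when a heavy tail inflates the log-variance."""
    logs = [math.log(r) for r in rates]
    n = len(logs)
    mu = sum(logs) / n
    sigma2 = sum((x - mu) ** 2 for x in logs) / n
    return mu, sigma2, math.exp(mu + sigma2 / 2.0)  # mu, sigma^2, param. mean
```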

