Predicting Spring Tornado Activity in the Central Great Plains by 1 March

2014 ◽  
Vol 142 (1) ◽  
pp. 259-267 ◽  
Author(s):  
James B. Elsner ◽  
Holly M. Widen

Abstract The authors illustrate a statistical model for predicting tornado activity in the central Great Plains by 1 March. The model predicts the number of tornado reports during April–June using February sea surface temperature (SST) data from the Gulf of Alaska (GAK) and the western Caribbean Sea (WCA). The model uses a Bayesian formulation in which the likelihood on the counts is a negative binomial distribution and the nonstationarity in tornado reporting is included as a trend term plus first-order autocorrelation. Posterior densities for the model parameters are generated using the method of integrated nested Laplace approximation (INLA). The model yields a 51% increase in the number of tornado reports per degree Celsius increase in SST over the WCA and a 15% decrease in the number of reports per degree Celsius increase in SST over the GAK. These significant relationships are broadly consistent with a physical understanding of large-scale atmospheric patterns conducive to severe convective storms across the Great Plains. The SST covariates explain 11% of the out-of-sample variability in observed F1–F5 tornado reports. The paper demonstrates the utility of INLA for fitting Bayesian models to tornado climate data.
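
As a reading aid, the count model described above can be sketched as follows; the exact trend, autocorrelation and prior specification used in the paper may differ, so this is only a schematic of the likelihood and link.

```latex
% Sketch of the negative binomial count model with log link
% y_t: April--June F1--F5 tornado reports in year t
\begin{align*}
  y_t &\sim \mathrm{NegBin}(\mu_t, \phi) \\
  \log \mu_t &= \beta_0 + \beta_1\,\mathrm{SST}^{\mathrm{WCA}}_t
              + \beta_2\,\mathrm{SST}^{\mathrm{GAK}}_t + \gamma\,t + \rho_t,
  \qquad \rho_t = \alpha\,\rho_{t-1} + \varepsilon_t
\end{align*}
% With the log link, the reported effects are multiplicative per degree Celsius:
% exp(beta_1) ~ 1.51 (+51% over the WCA) and exp(beta_2) ~ 0.85 (-15% over the GAK).
```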

2020 ◽  
Vol 66 (256) ◽  
pp. 175-187 ◽  
Author(s):  
David R. Rounce ◽  
Tushar Khurana ◽  
Margaret B. Short ◽  
Regine Hock ◽  
David E. Shean ◽  
...  

Abstract The response of glaciers to climate change has major implications for sea-level change and water resources around the globe. Large-scale glacier evolution models are used to project glacier runoff and mass loss but are constrained by limited observations, which leaves the models over-parameterized. Recent systematic geodetic mass-balance observations provide an opportunity to improve the calibration of glacier evolution models. In this study, we develop a calibration scheme for a glacier evolution model using a Bayesian inverse model and geodetic mass-balance observations, which enables us to quantify model parameter uncertainty. The Bayesian model is applied to each glacier in High Mountain Asia using Markov chain Monte Carlo methods. After 10,000 steps, the chains generate a sufficient number of independent samples to estimate the properties of the model parameters from the joint posterior distribution. The spatial distribution of the calibrated parameters shows a clear orographic effect, indicating that the resolution of the climate data is too coarse to resolve temperature and precipitation at high altitudes. Given that the glacier evolution model is over-parameterized, particular attention is given to identifiability and to the need for future work to integrate additional observations in order to better constrain the plausible sets of model parameters.
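
For readers unfamiliar with this kind of calibration, the sketch below runs a minimal Metropolis sampler that calibrates a hypothetical precipitation factor and temperature bias against a single geodetic mass-balance observation; the forward model `glacier_mass_balance`, the priors and all numerical values are illustrative stand-ins, not the model, data or sampler settings used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model: annual specific mass balance (m w.e. yr^-1)
# as a function of a precipitation factor k_p and a temperature bias dT.
def glacier_mass_balance(k_p, dT, precip=1.2, melt_factor=0.8, temp=2.0):
    accumulation = k_p * precip                      # scaled accumulation
    ablation = melt_factor * max(temp + dT, 0.0)     # degree-day-style melt
    return accumulation - ablation

obs_mb, obs_sigma = -0.35, 0.15   # geodetic observation and its uncertainty (illustrative)

def log_posterior(theta):
    k_p, dT = theta
    if k_p <= 0:                                     # precipitation factor must be positive
        return -np.inf
    # Weakly informative priors (illustrative choices)
    log_prior = -0.5 * ((k_p - 1.0) / 0.5) ** 2 - 0.5 * (dT / 2.0) ** 2
    resid = glacier_mass_balance(k_p, dT) - obs_mb
    log_like = -0.5 * (resid / obs_sigma) ** 2
    return log_prior + log_like

# Metropolis random-walk sampler
theta = np.array([1.0, 0.0])
samples = []
for step in range(10_000):
    proposal = theta + rng.normal(scale=[0.05, 0.2])
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta.copy())

samples = np.array(samples[2_000:])                  # discard burn-in
print("posterior means:", samples.mean(axis=0))
```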


2019 ◽  
Vol 5 (1) ◽  
pp. eaat7854 ◽  
Author(s):  
Peng Wang ◽  
Ru Kong ◽  
Xiaolu Kong ◽  
Raphaël Liégeois ◽  
Csaba Orban ◽  
...  

We considered a large-scale dynamical circuit model of human cerebral cortex with region-specific microscale properties. The model was inverted using a stochastic optimization approach, yielding a markedly better fit to new, out-of-sample resting functional magnetic resonance imaging (fMRI) data. Without assuming the existence of a hierarchy, the estimated model parameters revealed a large-scale cortical gradient. At one end, sensorimotor regions had strong recurrent connections and excitatory subcortical inputs, consistent with localized processing of external stimuli. At the opposing end, default network regions had weak recurrent connections and excitatory subcortical inputs, consistent with their role in internal thought. Furthermore, recurrent connection strength and subcortical inputs provided complementary information for differentiating the levels of the hierarchy, with only the former showing strong associations with other macroscale and microscale proxies of cortical hierarchies (meta-analysis of cognitive functions, principal resting fMRI gradient, myelin, and laminar-specific neuronal density). Overall, this study provides microscale insights into a macroscale cortical hierarchy in the dynamic resting brain.
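
The inversion strategy can be illustrated with a toy stochastic search that perturbs region-specific parameters, simulates functional connectivity (FC), and keeps perturbations that improve the match to empirical FC; the forward model `toy_simulated_fc`, the dimensions and the search settings below are hypothetical placeholders, not the circuit model or optimizer used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions = 8

# Toy "empirical" FC matrix (stand-in for resting-state fMRI FC)
empirical_fc = np.corrcoef(rng.normal(size=(n_regions, 200)))

def toy_simulated_fc(recurrent_w):
    """Hypothetical forward model: FC implied by region-specific recurrent weights."""
    coupling = np.outer(recurrent_w, recurrent_w)
    np.fill_diagonal(coupling, 1.0)
    return np.clip(coupling, -1.0, 1.0)

def fit_cost(recurrent_w):
    # 1 - Pearson correlation between off-diagonal entries of simulated and empirical FC
    iu = np.triu_indices(n_regions, k=1)
    sim = toy_simulated_fc(recurrent_w)[iu]
    emp = empirical_fc[iu]
    return 1.0 - np.corrcoef(sim, emp)[0, 1]

# Simple stochastic (random-perturbation) search over the region-specific parameters
params = np.full(n_regions, 0.5)
best_cost = fit_cost(params)
for _ in range(5_000):
    candidate = params + rng.normal(scale=0.05, size=n_regions)
    cost = fit_cost(candidate)
    if cost < best_cost:
        params, best_cost = candidate, cost

print("best cost:", round(best_cost, 3))
```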


Climate ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 20
Author(s):  
Kleoniki Demertzi ◽  
Vassilios Pisinaras ◽  
Emanuel Lekakis ◽  
Evangelos Tziritis ◽  
Konstantinos Babakos ◽  
...  

Simple formulas for estimating annual actual evapotranspiration (AET) from annual climate data are widely used in large-scale applications. Such formulas have no distinct compartments for topography, soil and irrigation, and for this reason they may be limited in basins with steep slopes, where runoff is the dominant water balance component, and in basins where irrigated agriculture is dominant. Thus, a simple method for assessing AET in both natural ecosystems and agricultural systems that considers these elements is proposed in this study. The method solves for AET through the water balance, based on a set of formulas that estimate runoff and percolation. These formulas are calibrated against the results of the deterministic hydrological model GLEAMS (Groundwater Loading Effects of Agricultural Management Systems) for a reference surface. The proposed methodology is applied to the country of Greece and compared with the widely used climate-based methods of Oldekop, Coutagne and Turc. The results show that the proposed methodology agrees very well with the Turc method for lowland regions but presents significant differences where runoff is expected to be very high (steep areas and areas of high rainfall, especially during December–February), suggesting that the proposed method performs better because of its runoff compartment. The method can also be applied in a single run that considers irrigation only over the irrigated lands, to more accurately estimate AET in basins with a high percentage of irrigated agriculture.
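
For orientation, the classical Turc estimate used in the comparison above needs only annual precipitation and mean temperature, whereas the proposed approach closes a water balance with explicit runoff and percolation terms. The sketch below contrasts the two; the runoff and percolation fractions are hypothetical placeholders for the GLEAMS-calibrated formulas, not the actual calibration.

```python
import math

def aet_turc(precip_mm, temp_c):
    """Classical Turc (1954) annual AET estimate from precipitation and mean temperature."""
    L = 300 + 25 * temp_c + 0.05 * temp_c ** 3
    return precip_mm / math.sqrt(0.9 + (precip_mm / L) ** 2)

def aet_water_balance(precip_mm, irrigation_mm, runoff_frac, percolation_frac):
    """Annual AET closed from a simple water balance (hypothetical stand-in for the
    GLEAMS-calibrated runoff/percolation formulas of the proposed method)."""
    supply = precip_mm + irrigation_mm
    runoff = runoff_frac * precip_mm
    percolation = percolation_frac * supply
    return supply - runoff - percolation

# Lowland, low-slope example: the two estimates are similar
print(aet_turc(600, 16.0))                         # mm/yr
print(aet_water_balance(600, 0, 0.05, 0.10))       # mm/yr

# Steep, high-rainfall example: large runoff pushes the water-balance estimate lower
print(aet_turc(1400, 12.0))
print(aet_water_balance(1400, 0, 0.45, 0.15))
```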


Author(s):  
Clemens M. Lechner ◽  
Nivedita Bhaktha ◽  
Katharina Groskurth ◽  
Matthias Bluemke

Abstract Measures of cognitive or socio-emotional skills from large-scale assessment surveys (LSAS) are often based on advanced statistical models and scoring techniques unfamiliar to applied researchers. Consequently, applied researchers working with data from LSAS may be uncertain about the assumptions and computational details of these statistical models and scoring techniques and about how best to incorporate the resulting skill measures in secondary analyses. The present paper is intended as a primer for applied researchers. After a brief introduction to the key properties of skill assessments, we give an overview of the three principal methods with which secondary analysts can incorporate skill measures from LSAS in their analyses: (1) as test scores (i.e., point estimates of individual ability), (2) through structural equation modeling (SEM), and (3) in the form of plausible values (PVs). We discuss the advantages and disadvantages of each method based on three criteria: fallibility (i.e., control for measurement error and unbiasedness), usability (i.e., ease of use in secondary analyses), and immutability (i.e., consistency of test scores, PVs, or measurement model parameters across different analyses and analysts). We show that although none of the methods is optimal under all criteria, methods that result in a single point estimate of each respondent’s ability (i.e., all types of “test scores”) are rarely optimal for research purposes. Instead, approaches that avoid or correct for measurement error—especially PV methodology—stand out as the methods of choice. We conclude with practical recommendations for secondary analysts and data-producing organizations.
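
To make the PV recommendation concrete, the sketch below runs the same secondary analysis once per plausible value and pools the results with Rubin's rules; the toy data and regression are illustrative and not tied to any particular LSAS.

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_pv = 500, 5

# Toy data: a covariate and n_pv plausible values of the latent skill per respondent
x = rng.normal(size=n)
true_skill = 0.4 * x + rng.normal(scale=0.8, size=n)
pvs = true_skill[:, None] + rng.normal(scale=0.3, size=(n, n_pv))   # measurement uncertainty

def regress(y, x):
    """Slope of y on x plus its squared standard error (simple OLS)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    var_beta = resid.var(ddof=2) * np.linalg.inv(X.T @ X)[1, 1]
    return beta[1], var_beta

# Run the analysis separately on each plausible value ...
estimates, variances = zip(*(regress(pvs[:, m], x) for m in range(n_pv)))
estimates, variances = np.array(estimates), np.array(variances)

# ... then pool with Rubin's rules
pooled = estimates.mean()
within = variances.mean()
between = estimates.var(ddof=1)
total_var = within + (1 + 1 / n_pv) * between

print(f"pooled slope = {pooled:.3f} +/- {np.sqrt(total_var):.3f}")
```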


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4638
Author(s):  
Simon Pratschner ◽  
Pavel Skopec ◽  
Jan Hrdlicka ◽  
Franz Winter

A revolution of the global energy industry is unavoidable if the climate crisis is to be solved. However, renewable energy sources typically show significant seasonal and daily fluctuations. This paper provides a system concept model of a decentralized power-to-green methanol plant consisting of a biomass heating plant with a thermal input of 20 MWth (oxyfuel or air mode), a CO2 processing unit (DeOxo reactor or MEA absorption), an alkaline electrolyzer, a methanol synthesis unit, an air separation unit and a wind park. Applying oxyfuel combustion has the potential to directly utilize the O2 generated by the electrolyzer, which was analyzed by varying critical model parameters. A major objective was to determine whether applying oxyfuel combustion has a positive impact on the plant's power-to-liquid (PtL) efficiency. For cases utilizing more than 70% of the CO2 generated by the combustion, the oxyfuel process's O2 demand is fully covered by the electrolyzer, making oxyfuel a viable option for large-scale applications. Conventional air combustion is recommended for small wind parks and scenarios using surplus electricity. Maximum PtL efficiencies of ηPtL,Oxy = 51.91% and ηPtL,Air = 54.21% can be realized. Additionally, a case study for one year of operation has been conducted, yielding an annual output of about 17,000 t/a of methanol and 100 GWhth/a of thermal energy for an input of 50,500 t/a of woodchips and a wind park size of 36 MWp.
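
For orientation on the PtL figure of merit, one plausible definition is the chemical energy of the methanol product divided by the electrical energy input to the conversion chain; the boundary chosen below and all numbers are assumptions for illustration, not the balance used in the paper.

```python
# Back-of-the-envelope PtL efficiency (illustrative boundary and numbers, not the paper's balance)
LHV_MEOH_MJ_PER_KG = 19.9        # lower heating value of methanol, approx.

def ptl_efficiency(methanol_kg_per_h, electrolyzer_mw, aux_power_mw):
    """Chemical energy in methanol divided by electrical energy input (assumed boundary)."""
    chem_out_mw = methanol_kg_per_h * LHV_MEOH_MJ_PER_KG / 3600.0   # MJ/h -> MW
    return chem_out_mw / (electrolyzer_mw + aux_power_mw)

# Hypothetical operating point
print(f"PtL efficiency ~ {ptl_efficiency(4000, 40.0, 2.0):.1%}")
```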


2014 ◽  
Vol 09 (02) ◽  
pp. 1440001 ◽  
Author(s):  
MARC S. PAOLELLA

Simple, fast methods for modeling the portfolio distribution corresponding to a non-elliptical, leptokurtic, asymmetric, and conditionally heteroskedastic set of asset returns are entertained. Portfolio optimization via simulation is demonstrated, and its benefits are discussed. An augmented mixture-of-normals model is shown to be superior to both the standard (no short selling) Markowitz portfolio and the equally weighted portfolio in terms of out-of-sample returns and Sharpe ratio performance.
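
The simulation-based optimization idea can be illustrated as follows: draw return scenarios from a mixture-of-normals model and choose long-only weights that maximize the simulated Sharpe ratio. The two-regime parameters and the crude random search below are placeholders, not the augmented mixture model or optimizer of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n_assets, n_sims = 4, 50_000

# Two-component mixture of normals: a "calm" and a "stressed" regime (illustrative parameters)
mu_calm = np.array([0.05, 0.04, 0.06, 0.03]) / 252
mu_stress = -np.array([0.30, 0.25, 0.35, 0.10]) / 252
cov_calm = 0.0001 * (0.3 * np.ones((n_assets, n_assets)) + 0.7 * np.eye(n_assets))
cov_stress = 4.0 * cov_calm
p_stress = 0.1

regime = rng.uniform(size=n_sims) < p_stress
returns = np.where(
    regime[:, None],
    rng.multivariate_normal(mu_stress, cov_stress, size=n_sims),
    rng.multivariate_normal(mu_calm, cov_calm, size=n_sims),
)

def simulated_sharpe(weights):
    port = returns @ weights
    return port.mean() / port.std()

# Crude random search over long-only, fully invested weights
equal_w = np.full(n_assets, 1 / n_assets)
best_w, best_s = equal_w, simulated_sharpe(equal_w)
for _ in range(2_000):
    w = rng.dirichlet(np.ones(n_assets))      # long-only weights summing to one
    s = simulated_sharpe(w)
    if s > best_s:
        best_w, best_s = w, s

print("equal-weight Sharpe:", round(simulated_sharpe(equal_w), 3))
print("simulation-optimized weights:", np.round(best_w, 2), "Sharpe:", round(best_s, 3))
```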


2004 ◽  
Vol 5 (6) ◽  
pp. 1247-1258 ◽  
Author(s):  
Christopher P. Weaver

Abstract This is Part II of a two-part study of mesoscale land–atmosphere interactions in the summertime U.S. Southern Great Plains. Part I focused on case studies drawn from monthlong (July 1995–97), high-resolution Regional Atmospheric Modeling System (RAMS) simulations carried out to investigate these interactions. These case studies were chosen to highlight key features of the lower-tropospheric mesoscale circulations that frequently arise in this region and season due to mesoscale heterogeneity in the surface fluxes. In this paper, Part II, the RAMS-simulated mesoscale dynamical processes described in the Part I case studies are examined from a domain-averaged perspective to assess their importance in the overall regional hydrometeorology. The spatial statistics of key simulated mesoscale variables—for example, vertical velocity and the vertical flux of water vapor—are quantified here. Composite averages of the mesoscale and large-scale-mean variables over different meteorological or dynamical regimes are also calculated. The main finding is that, during dry periods, or similarly, during periods characterized by large-scale-mean subsidence, the characteristic signature of surface-heterogeneity-forced mesoscale circulations, including enhanced vertical motion variability and enhanced mesoscale fluxes in the lowest few kilometers of the atmosphere, consistently emerges. Furthermore, the impact of these mesoscale circulations is nonnegligible compared to the large-scale dynamics at domain-averaged (200 km × 200 km) spatial scales and weekly to monthly time scales. These findings support the hypothesis that the land–atmosphere interactions associated with mesoscale surface heterogeneity can provide pathways whereby diurnal, mesoscale atmospheric processes can scale up to have more general impacts at larger spatial scales and over longer time scales.
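
For readers unfamiliar with the domain-averaged perspective, the mesoscale statistics discussed above follow the standard decomposition of each field into a domain mean and a mesoscale perturbation; the notation below is a generic sketch rather than the exact diagnostics of the paper.

```latex
% Decomposition of a field into domain mean and mesoscale perturbation over the 200 km x 200 km box
% w: vertical velocity, q: water vapor mixing ratio
\begin{align*}
  w &= \overline{w} + w', \qquad q = \overline{q} + q', \\
  \overline{wq} &= \overline{w}\,\overline{q} + \overline{w'q'},
\end{align*}
% where the overbar denotes the domain average and \overline{w'q'} is the mesoscale vertical
% flux of water vapor whose enhancement during dry, subsiding periods is quantified above.
```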


2000 ◽  
Vol 663 ◽  
Author(s):  
J. Samper ◽  
R. Juncosa ◽  
V. Navarro ◽  
J. Delgado ◽  
L. Montenegro ◽  
...  

ABSTRACT FEBEX (Full-scale Engineered Barrier EXperiment) is a demonstration and research project dealing with the bentonite engineered barrier designed for sealing and containment of waste in a high-level radioactive waste repository (HLWR). It includes two main experiments: an in situ full-scale test performed at Grimsel (GTS) and a mock-up test operating since February 1997 at CIEMAT facilities in Madrid (Spain) [1,2,3]. One of the objectives of FEBEX is the development and testing of conceptual and numerical models for the thermal, hydrodynamic, and geochemical (THG) processes expected to take place in engineered clay barriers. A significant improvement in coupled THG modeling of the clay barrier has been achieved, both in terms of a better understanding of THG processes and of more sophisticated THG computer codes. The ability of these models to reproduce the observed THG patterns in a wide range of THG conditions enhances confidence in their prediction capabilities. Numerical THG models of heating and hydration experiments performed on small-scale lab cells provide excellent results for temperatures, water inflow and final water content in the cells [3]. Calculated concentrations at the end of the experiments reproduce most of the patterns of the measured data. In general, the fit of concentrations of dissolved species is better than that of exchanged cations. These models were later used to simulate the evolution of the large-scale experiments (in situ and mock-up). Some thermo-hydrodynamic hypotheses and bentonite parameters were slightly revised during TH calibration of the mock-up test. The results of the reference model reproduce simultaneously the observed water inflows and bentonite temperatures and relative humidities. Although the model is highly sensitive to one-at-a-time variations in model parameters, the possibility of parameter combinations leading to similar fits cannot be precluded. The TH model of the “in situ” test is based on the same bentonite TH parameters and assumptions as the “mock-up” test. Granite parameters were slightly modified during the calibration process in order to reproduce the observed thermal and hydrodynamic evolution. The reference model properly captures relative humidities and temperatures in the bentonite [3]. It also reproduces the observed spatial distribution of water pressures and temperatures in the granite. Once the TH aspects of the model had been calibrated, predictions of the THG evolution of both tests were performed. Data from the dismantling of the in situ test, which is planned for the summer of 2001, will provide a unique opportunity to test and validate current THG models of the EBS.


Author(s):  
Ari Kettunen ◽  
Timo Hyppänen ◽  
Ari-Pekka Kirkinen ◽  
Esa Maikkola

The main objective of this study was to investigate the load change capability and the effect of individual control variables, such as fuel, primary air and secondary air flow rates, on the dynamics of large-scale CFB boilers. The dynamics of the CFB process were examined by dynamic process tests and by simulation studies. A multi-faceted set of transient process tests was performed at a commercial 235 MWe CFB unit. Fuel reactivity and the interaction between gas flow rates, solids concentration profiles and heat transfer were studied by step changes of the following controllable variables: fuel feed rate, primary air flow rate, secondary air flow rate and primary-to-secondary air flow ratio. Load change performance was tested using two different types of tests: open- and closed-loop load changes. A tailored dynamic simulator for the CFB boiler was built and fine-tuned by determining the model parameters and by validating the models of each process component against measured process data from the transient test program. The know-how about boiler dynamics obtained from the model analysis, together with the developed CFB simulator, was utilized in designing the control systems of three new 262 MWe CFB units, which are now under construction. Further, the simulator was applied to control system development and transient analysis of the supercritical OTU CFB boiler.
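
As a minimal illustration of how transient step tests inform the parameters of such a simulator, the sketch below fits a first-order-plus-dead-time response to a measured step change; the signal and the gain, time constant and dead time are synthetic placeholders, not data from the 235 MWe unit.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order-plus-dead-time (FOPDT) response to a unit step in, e.g., fuel feed rate
def fopdt_step(t, gain, tau, delay):
    response = gain * (1.0 - np.exp(-(t - delay) / tau))
    return np.where(t >= delay, response, 0.0)

# Synthetic "measured" load response to a step change (placeholder for plant data)
t = np.linspace(0, 600, 301)                      # seconds
true = fopdt_step(t, gain=1.8, tau=120.0, delay=30.0)
measured = true + np.random.default_rng(3).normal(scale=0.05, size=t.size)

# Identify model parameters from the transient test
(gain, tau, delay), _ = curve_fit(
    fopdt_step, t, measured, p0=[1.0, 60.0, 10.0],
    bounds=([0.0, 1.0, 0.0], [10.0, 1000.0, 300.0]),
)
print(f"gain={gain:.2f}, tau={tau:.0f} s, dead time={delay:.0f} s")
```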


2021 ◽  
Author(s):  
Jessica Fayne ◽  
Huilin Huang ◽  
Mike Fischella ◽  
Yufei Liu ◽  
Zhaoxin Ban ◽  
...  

Extreme precipitation, a critical factor in flooding, has selectively increased with warmer temperatures in the Western U.S. Despite this, streamflow measurements have captured no noticeable increase in large-scale flood frequency or intensity. As flood studies have mostly focused on specific flood events in particular areas, analyses of large-scale floods and their changes have been scarce. For floods during 1960–2013, we identify six flood generating mechanisms (FGMs) that are prominent across the Western U.S., including atmospheric rivers and non-atmospheric rivers, monsoons, convective storms, radiation-driven snowmelt, and rain-on-snow, in order to determine to what extent different types of floods are changing based on the dominant FGM. The inconsistency between increased extreme precipitation and the lack of a flood increase suggests that the impact of climate change on flood risk has been modulated by hydro-meteorological and physiographic processes, such as sharp increases in temperature that drive increased evapotranspiration and decreased soil moisture. Our results emphasize the importance of FGMs in understanding the complex interactions between flooding and climatic changes and explain the broad spatiotemporal changes that have occurred across the vast Western U.S. over the past 50 years.

