Modulation of homogeneous space-time rainfall cascades to account for orographic influences

2006 ◽  
Vol 6 (3) ◽  
pp. 427-437 ◽  
Author(s):  
M. G. Badas ◽  
R. Deidda ◽  
E. Piga

Abstract. The development of efficient space-time rainfall downscaling procedures is highly important for the implementation of a meteo-hydrological forecasting chain operating over small watersheds. Multifractal models based on homogeneous cascades have been successfully applied in the literature to reproduce space-time rainfall events retrieved over the ocean, where the hypothesis of spatial homogeneity can reasonably be accepted. The feasibility of applying this kind of model to rainfall fields occurring over a mountainous region, where spatial homogeneity may not hold, is investigated here. This issue is examined through the analysis of rainfall data retrieved by the high-temporal-resolution rain gage network of the Sardinian Hydrological Survey. The proposed procedure involves the introduction of a modulating function, superimposed on homogeneous and isotropic synthetic fields, to take into account the spatial heterogeneity detected in observed precipitation events. Specifically, the modulating function, which reproduces the differences in local mean values of the precipitation intensity probability distribution, has been linearly related to the terrain elevation of the analysed spatial domain. Comparisons between observed and synthetic data show that the proposed procedure preserves the features of the observed rainfall fields and that the introduction of the modulating function improves the reproduction of spatial heterogeneity in rainfall probability distributions.
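The modulation step described above can be sketched numerically. Everything below (domain size, coefficients, the lognormal field standing in for one realisation of a homogeneous cascade) is illustrative, not the authors' calibrated model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 32x32 domain: terrain elevation (m) and a homogeneous,
# isotropic synthetic rainfall field (a lognormal field stands in for
# one slice of a multifractal cascade here).
elevation = rng.uniform(0.0, 1500.0, size=(32, 32))
homogeneous_rain = rng.lognormal(mean=0.0, sigma=1.0, size=(32, 32))

# Modulating function: linear in elevation, normalised to unit spatial
# mean so the domain-average rainfall intensity is preserved.
a, b = 1.0, 4e-4                  # illustrative coefficients (calibrated in practice)
modulation = a + b * elevation
modulation /= modulation.mean()

# Superimpose the modulation to introduce orographic heterogeneity.
heterogeneous_rain = homogeneous_rain * modulation
```

The normalisation keeps the bulk statistics of the cascade intact while shifting local mean intensities with elevation, which is the role the abstract assigns to the modulating function.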

2005 ◽  
Vol 2 ◽  
pp. 285-292 ◽  
Author(s):  
M. G. Badas ◽  
R. Deidda ◽  
E. Piga

Abstract. The problem of rainfall downscaling in a mountainous region is discussed, and a simple methodology aimed at introducing spatial heterogeneity induced by orography in downscaling models is proposed. This procedure was calibrated and applied to rainfall data retrieved by the high temporal resolution rain gage network of the Sardinian Hydrological Survey.


2003 ◽  
Vol 66 (10) ◽  
pp. 1900-1910 ◽  
Author(s):  
VALERIE J. DAVIDSON ◽  
JOANNE RYKS

The objective of food safety risk assessment is to quantify levels of risk for consumers as well as to design improved processing, distribution, and preparation systems that reduce exposure to acceptable limits. Monte Carlo simulation tools have been used to deal with the inherent variability in food systems, but these tools require substantial data for estimates of probability distributions. The objective of this study was to evaluate the use of fuzzy values to represent uncertainty. Fuzzy mathematics and Monte Carlo simulations were compared to analyze the propagation of uncertainty through a number of sequential calculations in two different applications: estimation of biological impacts and economic cost in a general framework and survival of Campylobacter jejuni in a sequence of five poultry processing operations. Estimates of the proportion of a population requiring hospitalization were comparable, but using fuzzy values and interval arithmetic resulted in more conservative estimates of mortality and cost, in terms of the intervals of possible values and mean values, compared to Monte Carlo calculations. In the second application, the two approaches predicted the same reduction in mean concentration (−4 log CFU/ml of rinse), but the limits of the final concentration distribution were wider for the fuzzy estimate (−3.3 to 5.6 log CFU/ml of rinse) compared to the probability estimate (−2.2 to 4.3 log CFU/ml of rinse). Interval arithmetic with fuzzy values considered all possible combinations in calculations and maximum membership grade for each possible result. Consequently, fuzzy results fully included distributions estimated by Monte Carlo simulations but extended to broader limits. When limited data defines probability distributions for all inputs, fuzzy mathematics is a more conservative approach for risk assessment than Monte Carlo simulations.


2014 ◽  
Author(s):  
Andreas Tuerk ◽  
Gregor Wiktorin ◽  
Serhat Güler

Quantification of RNA transcripts with RNA-Seq is inaccurate due to positional fragment bias, which is not represented appropriately by current statistical models of RNA-Seq data. This article introduces the Mix2 (read "mix-square") model, which uses a mixture of probability distributions to model the transcript-specific positional fragment bias. The parameters of the Mix2 model can be efficiently trained with the Expectation Maximization (EM) algorithm, resulting in simultaneous estimates of the transcript abundances and transcript-specific positional biases. Experiments are conducted on synthetic data and the Universal Human Reference (UHR) and Human Brain Reference (HBR) samples from the MicroArray Quality Control (MAQC) data set. Comparing the correlation between qPCR and FPKM values to the state-of-the-art methods Cufflinks and PennSeq, we obtain an increase in R2 value from 0.44 to 0.6 and from 0.34 to 0.54. In the detection of differential expression between UHR and HBR, the true positive rate increases from 0.44 to 0.71 at a false positive rate of 0.1. Finally, the Mix2 model is used to investigate biases present in the MAQC data. This reveals 5 dominant biases which deviate from the common assumption of a uniform fragment distribution. The Mix2 software is available at http://www.lexogen.com/fileadmin/uploads/bioinfo/mix2model.tgz.
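The EM training of a positional mixture can be sketched with a generic two-component Gaussian mixture fitted to fragment positions; this is the textbook EM recursion, not the Mix2 implementation, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for fragment start positions along a transcript of unit
# length: a non-uniform positional bias with two modes.
pos = np.concatenate([rng.normal(0.2, 0.05, 600),
                      rng.normal(0.7, 0.10, 400)])

# Initial guesses for mixture weights, means, standard deviations.
w = np.array([0.5, 0.5])
mu = np.array([0.3, 0.6])
sd = np.array([0.2, 0.2])

for _ in range(50):
    # E-step: responsibility of each component for each position.
    dens = (w * np.exp(-0.5 * ((pos[:, None] - mu) / sd) ** 2)
            / (sd * np.sqrt(2 * np.pi)))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the soft assignments.
    n_k = resp.sum(axis=0)
    w = n_k / len(pos)
    mu = (resp * pos[:, None]).sum(axis=0) / n_k
    sd = np.sqrt((resp * (pos[:, None] - mu) ** 2).sum(axis=0) / n_k)
```

In the Mix2 setting the analogous E/M updates additionally couple the positional components to per-transcript abundances, so both are estimated simultaneously.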


2019 ◽  
Vol 623 ◽  
pp. A156 ◽  
Author(s):  
H. E. Delgado ◽  
L. M. Sarro ◽  
G. Clementini ◽  
T. Muraveva ◽  
A. Garofalo

In a recent study we analysed period–luminosity–metallicity (PLZ) relations for RR Lyrae stars using the Gaia Data Release 2 (DR2) parallaxes. It built on a previous work that was based on the first Gaia Data Release (DR1), and also included period–luminosity (PL) relations for Cepheids and RR Lyrae stars. The method used to infer the relations from Gaia DR2 data, and one of the methods used for Gaia DR1 data, was based on a Bayesian model, the full description of which was deferred to a subsequent publication. This paper presents the Bayesian method for the inference of the parameters of PL(Z) relations used in those studies, the main feature of which is to manage the uncertainties on observables in a rigorous and well-founded way. The method encodes the probability relationships between the variables of the problem in a hierarchical Bayesian model and infers the posterior probability distributions of the PL(Z) relationship coefficients using Markov chain Monte Carlo simulation techniques. We evaluate the method with several semi-synthetic data sets and apply it to a sample of 200 fundamental and first-overtone RR Lyrae stars for which Gaia DR1 parallaxes and literature Ks-band mean magnitudes are available. We define and test several hyperprior probabilities to verify their adequacy and check the sensitivity of the solution with respect to the prior choice. The main conclusion of this work, based on the test with semi-synthetic Gaia DR1 parallaxes, is the absolute necessity of incorporating the existing correlations between the period, metallicity, and parallax measurements in the form of model priors in order to avoid systematically biased results, especially in the case of non-negligible uncertainties in the parallaxes. The relation coefficients obtained here have been superseded by those presented in our recent paper that incorporates the findings of this work and the more recent Gaia DR2 measurements.
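The inference machinery can be illustrated with a deliberately reduced version of the problem: a minimal Metropolis sampler for the coefficients of a linear PL relation with Gaussian magnitude errors. The full hierarchical model also propagates parallax and metallicity uncertainties and their correlations; the data and priors below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic PL relation M = a + b*logP with a = -1.0, b = -2.5,
# plus Gaussian measurement noise on the absolute magnitudes.
logP = rng.uniform(-0.5, 0.0, 50)
sigma = 0.1
M_obs = -1.0 - 2.5 * logP + rng.normal(0.0, sigma, 50)

def log_post(a, b):
    # Flat priors on (a, b); Gaussian likelihood for the magnitudes.
    return -0.5 * np.sum((M_obs - (a + b * logP)) ** 2) / sigma ** 2

samples, (a, b) = [], (0.0, 0.0)
lp = log_post(a, b)
for _ in range(20_000):
    a_new = a + rng.normal(0, 0.05)      # random-walk proposal
    b_new = b + rng.normal(0, 0.05)
    lp_new = log_post(a_new, b_new)
    if np.log(rng.uniform()) < lp_new - lp:   # Metropolis acceptance
        a, b, lp = a_new, b_new, lp_new
    samples.append((a, b))

post = np.array(samples[5000:])          # discard burn-in
```

The posterior means of the retained samples recover the generating coefficients; in the real model the same MCMC loop runs over many more parameters, including one latent true parallax per star.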


2020 ◽  
pp. 0271678X2091928
Author(s):  
Alessandra Caporale ◽  
Hyunyeol Lee ◽  
Hui Lei ◽  
Hengyi Rao ◽  
Michael C Langham ◽  
...  

During slow-wave sleep, synaptic transmission is reduced, with a concomitant reduction in brain energy consumption. We used 3 Tesla MRI to noninvasively quantify changes in the cerebral metabolic rate of O2 (CMRO2) during wakefulness and sleep, leveraging the ‘OxFlow’ method, which provides venous O2 saturation (SvO2) along with cerebral blood flow (CBF). Twelve healthy subjects (31.3 ± 5.6 years, eight males) underwent 45–60 min of continuous scanning during wakefulness and sleep, yielding one image set every 3.4 s. Concurrent electroencephalography (EEG) data were available in eight subjects. Mean values of the metabolic parameters measured during wakefulness were stable, with coefficients of variation below 7% (average values: CMRO2 = 118 ± 12 µmol O2/min/100 g, SvO2 = 67.0 ± 3.7% HbO2, CBF = 50.6 ± 4.3 ml/min/100 g). During sleep, on average, CMRO2 decreased 21% (range: 14%–32%; average nadir = 98 ± 16 µmol O2/min/100 g), while EEG slow-wave activity, expressed in terms of δ-power, increased commensurately. Following sleep onset, CMRO2 was found to correlate negatively with relative δ-power (r = −0.6 to −0.8, P < 0.005), and positively with heart rate (r = 0.5 to 0.8, P < 0.0005). The data demonstrate that OxFlow MRI can noninvasively measure dynamic changes in cerebral metabolism associated with sleep, which should open new opportunities to study sleep physiology in health and disease.
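The reported quantities are linked through the Fick principle, CMRO2 = CBF · Ca · (SaO2 − SvO2). A back-of-the-envelope check against the wakefulness averages, where the arterial saturation SaO2 and the blood oxygen-carrying capacity Ca are assumed values not stated in the abstract:

```python
# Fick-principle estimate of CMRO2 from the quantities OxFlow measures.
CBF = 50.6     # ml blood / min / 100 g   (measured average)
SvO2 = 0.670   # venous O2 saturation     (measured average)
SaO2 = 0.98    # arterial saturation      (assumed, near 1 in health)
Ca = 7.6       # umol O2 / ml blood       (assumed; depends on haematocrit)

CMRO2 = CBF * Ca * (SaO2 - SvO2)   # umol O2 / min / 100 g
```

With these assumed constants the estimate lands close to the reported 118 µmol O2/min/100 g, which is why measuring SvO2 and CBF together suffices to track CMRO2 dynamically.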


Computation ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 51
Author(s):  
Alireza Sahebgharani ◽  
Mahmoud Mohammadi ◽  
Hossein Haghshenas

Space-time prism (STP) is a comprehensive and powerful model for computing accessibility to urban opportunities. Unlike other types of accessibility measures, STP models capture spatial and temporal dimensions in a unified framework. Classical STPs assume that travel time in street networks is a deterministic and fixed variable. However, this assumption contradicts the uncertain nature of travel time, which arises from flow fluctuations and traffic congestion. In addition, travel time in street networks mostly follows non-normal probability distributions, which are not modeled in the structure of classical STPs. Neglecting travel time uncertainty and disregarding different types of probability distributions cause unrealistic accessibility values in STP-based metrics. Accordingly, this paper proposes a spatiotemporal accessibility model by extending classical STPs to non-normal stochastic urban networks and blending this modified STP with the attractiveness of urban opportunities. The elaborated model was applied to the city of Isfahan to assess the accessibility of its traffic analysis zones (TAZs) to Kowsar discount retail markets. A significant difference was found between the accessibility values in normally and non-normally distributed networks. In addition, the results show that the northern TAZs had a higher accessibility level than the southern ones.
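The core extension, reachability turning into a probability under non-normal travel times, can be sketched for a single zone-opportunity pair; the lognormal distributions, budget, and activity duration below are illustrative, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(3)

# In a classical STP an opportunity is reachable if outbound travel plus a
# minimum activity duration plus return travel fits in the time budget.
# With stochastic, non-normal travel times this becomes a probability,
# estimated here by Monte Carlo with lognormal travel times.
budget_min = 60.0      # total time budget (minutes)
activity_min = 20.0    # minimum useful stay at the opportunity (minutes)
n = 100_000

t_out = rng.lognormal(mean=np.log(15.0), sigma=0.4, size=n)   # minutes
t_back = rng.lognormal(mean=np.log(15.0), sigma=0.4, size=n)

p_reach = np.mean(t_out + activity_min + t_back <= budget_min)
```

Summing such reachability probabilities over opportunities, weighted by their attractiveness, yields a zone-level accessibility score in the spirit of the proposed model.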


Geophysics ◽  
1986 ◽  
Vol 51 (2) ◽  
pp. 332-346 ◽  
Author(s):  
Daniel H. Rothman

Conventional approaches to residual statics estimation obtain solutions by performing linear inversion of observed traveltime deviations. A crucial component of these procedures is picking time delays; gross errors in these picks are known as “cycle skips” or “leg jumps” and are the bane of linear traveltime inversion schemes. This paper augments Rothman (1985), which demonstrated that the estimation of large statics in noise‐contaminated data is posed better as a nonlinear, rather than as a linear, inverse problem. Cycle skips then appear as local (secondary) minima of the resulting nonlinear optimization problem. In the earlier paper, a Monte Carlo technique from statistical mechanics was adapted to perform global optimization, and the technique was applied to synthetic data. Here I present an application of a similar Monte Carlo method to field data from the Wyoming Overthrust belt. Key changes, however, have led to a more efficient and practical algorithm. The new technique performs explicit crosscorrelation of traces. Instead of picking the peaks of these crosscorrelation functions, the method transforms the crosscorrelation functions to probability distributions and then draws random numbers from the distributions. Estimates of statics are now iteratively updated by this procedure until convergence to the optimal stack is achieved. Here I also derive several theoretical properties of the algorithm. The method is expressed as a Markov chain, in which the equilibrium (steady‐state) distribution is the Gibbs distribution of statistical mechanics.
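The sampling step at the heart of the method can be sketched as follows; the trace length, noise level, and temperature-style scaling are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(4)

# Key idea: turn a trace/reference crosscorrelation into a probability
# distribution over candidate static shifts and *sample* from it, rather
# than picking the peak (which is what cycle skips punish).
n, true_shift = 128, 5
ref = rng.normal(size=n)
trace = np.roll(ref, true_shift) + 0.3 * rng.normal(size=n)

max_lag = 10
lags = np.arange(-max_lag, max_lag + 1)
xcorr = np.array([np.dot(np.roll(trace, -lag), ref) for lag in lags])

# Gibbs-style conversion: exponentiate scaled correlations so every lag
# retains nonzero probability (T plays the role of a temperature).
T = xcorr.std()
p = np.exp(xcorr / T)
p /= p.sum()

sampled_shift = rng.choice(lags, p=p)   # one stochastic statics update
```

Iterating such draws across traces, with updates accepted into the stack, is what lets the Markov chain escape the secondary minima that defeat peak-picking.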


2019 ◽  
Vol 12 (11) ◽  
pp. 6091-6111 ◽  
Author(s):  
Laura M. Judd ◽  
Jassim A. Al-Saadi ◽  
Scott J. Janz ◽  
Matthew G. Kowalewski ◽  
R. Bradley Pierce ◽  
...  

Abstract. NASA deployed the GeoTASO airborne UV–visible spectrometer in May–June 2017 to produce high-resolution (approximately 250 m × 250 m) gapless NO2 datasets over the western shore of Lake Michigan and over the Los Angeles Basin. The results show that the airborne tropospheric vertical column retrievals compare well with ground-based Pandora spectrometer column NO2 observations (r² = 0.91, slope of 1.03). Apparent disagreements between the two measurements can be sensitive to the coincidence criteria and are often associated with large local variability, including rapid temporal changes and spatial heterogeneity that may be observed differently by the sunward-viewing Pandora observations. The gapless mapping strategy executed during the 2017 GeoTASO flights provides data suitable for averaging to coarser areal resolutions to simulate satellite retrievals. As simulated satellite pixel area increases to values typical of TEMPO (Tropospheric Emissions: Monitoring Pollution), TROPOMI (TROPOspheric Monitoring Instrument), and OMI (Ozone Monitoring Instrument), the agreement with Pandora measurements degrades, particularly for the most polluted columns, as localized large pollution enhancements observed by Pandora and GeoTASO are spatially averaged with nearby less-polluted locations within the larger area representative of the satellite spatial resolutions (aircraft-to-Pandora slope: TEMPO scale = 0.88; TROPOMI scale = 0.77; OMI scale = 0.57). In these two regions, Pandora and TEMPO or TROPOMI have the potential to compare well at least up to pollution scales of 30×10¹⁵ molecules cm⁻². Two publicly available OMI tropospheric NO2 retrievals are found to be biased low with respect to these Pandora observations. However, the agreement improves when higher-resolution a priori inputs are used for the tropospheric air mass factor calculation (NASA V3 standard product slope = 0.18 and Berkeley High Resolution product slope = 0.30).
Overall, this work explores best practices for satellite validation strategies with Pandora direct-sun observations by showing the sensitivity to product spatial resolution and demonstrating how the high-spatial-resolution NO2 data retrieved from airborne spectrometers, such as GeoTASO, can be used with high-temporal-resolution ground-based column observations to evaluate the influence of spatial heterogeneity on validation results.


2013 ◽  
Vol 2013 ◽  
pp. 1-12
Author(s):  
Lev V. Utkin ◽  
Yulia A. Zhuk

A method is proposed for solving a classification problem when only partial information about some features is available. This partial information comprises the mean values of the features for every class and the bounds of the features. In order to exploit the available information maximally, a set of probability distributions is constructed, from which two distributions are selected that define the minimax and minimin strategies. Random values of the features are generated in accordance with the selected distributions by using the Monte Carlo technique. As a result, the classification problem is reduced to the standard model, which is solved by means of the support vector machine. Numerical examples illustrate the proposed method.
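One concrete member of "the set of distributions with a given mean and bounds" is the extreme two-point distribution concentrated on the bounds, the kind of member minimax/minimin strategies tend to select. A minimal sketch of the Monte Carlo generation step (the resulting sample would then feed a standard SVM); all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

def two_point_samples(mean, lo, hi, n, rng):
    """Sample from the two-point distribution on {lo, hi} whose mean
    equals `mean`, one extreme element of the set of all distributions
    with that mean and that bounded support."""
    p_hi = (mean - lo) / (hi - lo)   # weight on the upper bound
    return np.where(rng.uniform(size=n) < p_hi, hi, lo)

# Hypothetical partial information for one feature of one class:
# class mean 2.0, feature bounded in [0, 5].
x = two_point_samples(mean=2.0, lo=0.0, hi=5.0, n=10_000, rng=rng)
```

By construction the generated sample respects both pieces of partial information: every value lies on the bounds and the empirical mean matches the specified class mean.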


Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3283
Author(s):  
Mustafa Demircioglu ◽  
Herwig Bruneel ◽  
Sabine Wittevrongel

Queueing models with disasters can be used to evaluate the impact of a breakdown or a system reset in a service facility. In this paper, we consider a discrete-time single-server queueing system with general independent arrivals and general independent service times, and we study the effect of the occurrence of disasters on the queueing behavior. Disasters occur independently from time slot to time slot according to a Bernoulli process and result in the simultaneous removal of all customers from the queueing system. General probability distributions are allowed for both the number of customer arrivals during a slot and the length of the service time of a customer (expressed in slots). Using a two-dimensional Markovian state description of the system, we obtain expressions for the probability generating functions, the mean values, variances and tail probabilities of both the system content and the sojourn time of an arbitrary customer under a first-come-first-served policy. The customer loss probability due to a disaster occurrence is derived as well. Some numerical illustrations are given.
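The model's behaviour is easy to probe by slot-by-slot simulation; the Poisson arrivals and geometric service times below are illustrative instances of the paper's general distributions, and all parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(6)

# Discrete-time single-server queue with Bernoulli disasters: in each slot
# a disaster occurs with probability p_disaster and flushes all customers.
p_disaster, lam, p_service = 0.01, 0.6, 0.7
slots = 200_000

content, remaining, history = 0, 0, []
for _ in range(slots):
    if rng.uniform() < p_disaster:        # disaster: remove every customer
        content, remaining = 0, 0
    content += rng.poisson(lam)           # arrivals during this slot
    if content and remaining == 0:
        remaining = rng.geometric(p_service)  # start next service (in slots)
    if remaining:
        remaining -= 1                    # one slot of service elapses
        if remaining == 0:
            content -= 1                  # service completed, departure
    history.append(content)

mean_content = np.mean(history)           # empirical mean system content
```

Such a simulation gives a quick sanity check on the analytical mean system content obtained from the generating-function approach, and disasters keep the system stable even when the offered load exceeds one.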

