Random Sampling Many-Dimensional Sets Arising in Control

Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 580
Author(s):  
Pavel Shcherbakov ◽  
Mingyue Ding ◽  
Ming Yuchi

Various Monte Carlo techniques for random point generation over sets of interest are widely used in many areas of computational mathematics, optimization, data processing, etc. Whereas such sampling is straightforward to arrange for regularly shaped sets, for nontrivial, implicitly specified domains these techniques are not easy to implement. We consider the so-called Hit-and-Run algorithm, a representative of the class of Markov chain Monte Carlo methods, which has become popular in recent years. To perform random sampling over a set, this method requires only knowledge of the intersection of a line through a point inside the set with the boundary of the set. This component of the Hit-and-Run procedure, known as the boundary oracle, has to be computed quickly when applied to economical point representations of many-dimensional sets within the randomized approach to data mining, image reconstruction, control, optimization, etc. In this paper, we consider several vector and matrix sets typically encountered in control and specified by linear matrix inequalities. Closed-form solutions are proposed for finding the respective points of intersection, leading to efficient boundary oracles; these are generalized to robust formulations in which the system matrices contain norm-bounded uncertainty.
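The mechanics of Hit-and-Run can be sketched for a set cut out by ordinary linear (scalar, not matrix) inequalities, where the boundary oracle has a simple closed form; the polytope, step count, and tolerances below are illustrative assumptions, not the LMI sets treated in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Polytope {x : A x <= b} (here, the unit box in R^3 as an illustration).
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.ones(6)

def boundary_oracle(x, d):
    """Intersect the line {x + t*d} with the polytope boundary.
    Returns (t_lo, t_hi): the interval of t for which x + t*d stays feasible."""
    t_lo, t_hi = -np.inf, np.inf
    for ai, bi in zip(A, b):
        ad = ai @ d
        slack = bi - ai @ x
        if ad > 1e-12:          # this constraint caps t from above
            t_hi = min(t_hi, slack / ad)
        elif ad < -1e-12:       # this constraint caps t from below
            t_lo = max(t_lo, slack / ad)
    return t_lo, t_hi

def hit_and_run(x0, n_steps):
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)                # random direction on the sphere
        t_lo, t_hi = boundary_oracle(x, d)
        x = x + rng.uniform(t_lo, t_hi) * d   # uniform step along the chord
        samples.append(x.copy())
    return np.array(samples)

samples = hit_and_run(np.zeros(3), 5000)
print(samples.mean(axis=0))  # should be near the origin for the symmetric box
```

For the matrix sets considered in the paper, only `boundary_oracle` changes: the chord endpoints come from the closed-form intersection of the line with the LMI boundary.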

Author(s):  
Edward P. Herbst ◽  
Frank Schorfheide

Dynamic stochastic general equilibrium (DSGE) models have become one of the workhorses of modern macroeconomics and are extensively used for academic research as well as forecasting and policy analysis at central banks. This book introduces readers to state-of-the-art computational techniques used in the Bayesian analysis of DSGE models. The book covers Markov chain Monte Carlo techniques for linearized DSGE models, novel sequential Monte Carlo methods that can be used for parameter inference, and the estimation of nonlinear DSGE models based on particle filter approximations of the likelihood function. The theoretical foundations of the algorithms are discussed in depth, and detailed empirical applications and numerical illustrations are provided. The book also gives invaluable advice on how to tailor these algorithms to specific applications and assess the accuracy and reliability of the computations. The book is essential reading for graduate students, academic researchers, and practitioners at policy institutions.
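The MCMC machinery the book builds on can be illustrated, in a deliberately minimal form, by random-walk Metropolis sampling of a toy scalar posterior; the target density, step size, and chain length below are assumptions standing in for a DSGE likelihood times prior, not an example from the book:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: unnormalized log-posterior of a scalar parameter theta
# (a standard normal, standing in for likelihood x prior).
def log_post(theta):
    return -0.5 * theta**2

def rw_metropolis(theta0, n_iter, step=1.0):
    """Random-walk Metropolis: the basic MCMC building block."""
    theta = theta0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        # Accept with probability min(1, post(prop) / post(theta)).
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        chain[i] = theta
    return chain

chain = rw_metropolis(0.0, 20000)
print(chain.mean(), chain.std())  # roughly 0 and 1 for this target
```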


2018 ◽  
Vol 10 (10) ◽  
pp. 4-19
Author(s):  
Magomed G. GADZHIYEV ◽  
Misrikhan Sh. MISRIKHANOV ◽  
Vladimir N. RYABCHENKO ◽  
...  

2014 ◽  
Vol 6 (1) ◽  
pp. 1006-1015
Author(s):  
Negin Shagholi ◽  
Hassan Ali ◽  
Mahdi Sadeghi ◽  
Arjang Shahvar ◽  
Hoda Darestani ◽  
...  

Medical linear accelerators produce, besides the clinically used high-energy electron and photon beams, secondary particles such as neutrons, which increase the delivered dose. In this study, the neutron dose at a 10 and 18 MV Elekta linac was obtained by using TLD600 and TLD700 dosimeters as well as Monte Carlo simulation. For neutron dose assessment in a 20 × 20 cm² field, the TLDs were first calibrated: gamma calibration was performed with the 10 and 18 MV linac, and neutron calibration with a 241Am–Be neutron source. For the simulation, the MCNPX code was used, and the calculated neutron dose equivalent was compared with the measurement data. The neutron dose equivalent at 18 MV was measured with TLDs on the phantom surface and at depths of 1, 2, 3.3, 4, 5 and 6 cm. The neutron dose at depths of less than 3.3 cm was zero and reached its maximum at a depth of 4 cm (44.39 mSv Gy⁻¹), whereas the calculation yielded a maximum of 2.32 mSv Gy⁻¹ at the same depth. The neutron dose at 10 MV was measured with TLDs on the phantom surface and at depths of 1, 2, 2.5, 3.3, 4 and 5 cm. No photoneutron dose was observed at depths of less than 3.3 cm, and the maximum, at 4 cm, was 5.44 mSv Gy⁻¹, whereas the calculated data showed a maximum of 0.077 mSv Gy⁻¹ at the same depth. The comparison of the measured photoneutron dose with the calculated data along the beam axis at different depths shows that the measured values are much larger than the calculated ones, so TLD600 and TLD700 pairs appear unsuitable for neutron dosimetry on the linac central axis due to the high photon flux, whereas MCNPX Monte Carlo techniques remain a valuable tool for photonuclear dose studies.


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 662
Author(s):  
Mateu Sbert ◽  
Jordi Poch ◽  
Shuning Chen ◽  
Víctor Elvira

In this paper, we present order invariance theoretical results for weighted quasi-arithmetic means of a monotonic series of numbers. The quasi-arithmetic mean, or Kolmogorov–Nagumo mean, generalizes the classical mean and appears in many disciplines, from information theory to physics, from economics to traffic flow. Stochastic orders are defined on weights (or equivalently, discrete probability distributions). They were introduced to study risk in economics and decision theory, and have recently found utility in Monte Carlo techniques and in image processing. We show in this paper that, if two distributions of weights are ordered under first stochastic order, then for any monotonic series of numbers their weighted quasi-arithmetic means share the same order. This means, for instance, that the arithmetic and harmonic means for two different distributions of weights always have to be aligned if the weights are stochastically ordered; that is, either both means increase or both decrease. We explore the invariance properties when convex (concave) functions define both the quasi-arithmetic mean and the series of numbers, we show their relationship with increasing concave order and increasing convex order, and we observe the important role played by a newly defined mirror property of stochastic orders. We also give some applications to entropy and cross-entropy and present an example of the multiple importance sampling Monte Carlo technique that illustrates the usefulness and transversality of our approach. Invariance theorems are useful when a system is represented by a set of quasi-arithmetic means and we want to change the distribution of weights so that all means evolve in the same direction.
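The central claim can be checked numerically: the sketch below (the particular series, weight vectors, and generator functions are illustrative assumptions) takes two weight distributions ordered under first stochastic order and verifies that the arithmetic, harmonic, and geometric means all move in the same direction:

```python
import numpy as np

# A monotonically increasing series of numbers.
x = np.array([1.0, 2.0, 4.0, 8.0])

# Two weight vectors; w2 dominates w1 in first stochastic order:
# its cumulative sums are pointwise <= those of w1 (mass shifted toward larger x).
w1 = np.array([0.4, 0.3, 0.2, 0.1])
w2 = np.array([0.1, 0.2, 0.3, 0.4])
assert np.all(np.cumsum(w2) <= np.cumsum(w1) + 1e-12)

def quasi_arithmetic_mean(w, x, f, f_inv):
    """Weighted quasi-arithmetic (Kolmogorov-Nagumo) mean f^{-1}(sum w_i f(x_i))."""
    return f_inv(np.sum(w * f(x)))

arith = lambda w: quasi_arithmetic_mean(w, x, lambda t: t, lambda t: t)
harm  = lambda w: quasi_arithmetic_mean(w, x, lambda t: 1 / t, lambda t: 1 / t)
geom  = lambda w: quasi_arithmetic_mean(w, x, np.log, np.exp)

# All three means increase together when moving from w1 to the dominating w2.
for m in (arith, harm, geom):
    print(m(w1), "<", m(w2))
```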


Author(s):  
Abbas Zabihi Zonouz ◽  
Mohammad Ali Badamchizadeh ◽  
Amir Rikhtehgar Ghiasi

In this paper, a new method for designing a controller for linear switching systems with time-varying delay is presented, based on the Hurwitz convex combination. A Lyapunov–Krasovskii functional is used for the stability analysis. The stability results are given in terms of linear matrix inequalities (LMIs), and an upper delay bound that guarantees the stability of the system can be obtained by solving these LMIs. Compared with other methods, the proposed controller yields a less conservative criterion and ensures the stability of linear switching systems with time-varying delay whose upper delay bound is considerably larger than the bounds considered in other methods. Numerical examples are given to demonstrate the effectiveness of the proposed method.
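The Hurwitz convex combination property underlying the design can be probed numerically: the NumPy sketch below (the two mode matrices and the grid resolution are illustrative assumptions, and a sampled grid check is of course not a proof in the way the paper's LMI conditions are) verifies that every convex combination of two stable mode matrices remains Hurwitz:

```python
import numpy as np

# Two Hurwitz (stable) system matrices, e.g. modes of a switching system.
A1 = np.array([[-2.0, 1.0], [0.0, -3.0]])
A2 = np.array([[-1.0, 0.5], [0.2, -2.0]])

def is_hurwitz(A, tol=1e-9):
    """A matrix is Hurwitz if every eigenvalue has negative real part."""
    return np.all(np.linalg.eigvals(A).real < -tol)

# Grid check of the Hurwitz-convex-combination property:
# lam*A1 + (1-lam)*A2 stable for all lam in [0, 1].
stable = all(is_hurwitz(lam * A1 + (1 - lam) * A2)
             for lam in np.linspace(0.0, 1.0, 101))
print("convex combinations Hurwitz:", stable)
```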


Author(s):  
Grienggrai Rajchakit ◽  
Ramalingam Sriraman ◽  
Rajendran Samidurai

This article discusses the dissipativity analysis of stochastic generalized neural network (NN) models with Markovian jump parameters and time-varying delays. In practical applications, most systems are subject to stochastic perturbations, so this study takes a class of stochastic NN models into account. To address this problem, we first construct an appropriate Lyapunov–Krasovskii functional that incorporates more information about the system. Then, by employing effective integral inequalities, we derive several dissipativity and stability criteria in the form of linear matrix inequalities that can be checked by the MATLAB LMI toolbox. Finally, we present numerical examples to validate the usefulness of the results.
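The LMI feasibility problems behind such criteria can be illustrated in a much reduced form by the Lyapunov equation AᵀP + PA = −Q, whose solvability with P ≻ 0 certifies stability; the NumPy sketch below (the system matrix A and weight Q are illustrative assumptions, and this is not the paper's dissipativity criterion) solves it via Kronecker vectorization:

```python
import numpy as np

# Stable system matrix; solving A^T P + P A = -Q for P > 0 is the
# simplest instance of the LMI feasibility conditions used in such papers.
A = np.array([[-1.0, 0.5], [-0.5, -2.0]])
Q = np.eye(2)

n = A.shape[0]
# Vectorized form: (I kron A^T + A^T kron I) vec(P) = -vec(Q).
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)
P = 0.5 * (P + P.T)  # symmetrize against round-off

print("P positive definite:", np.all(np.linalg.eigvals(P).real > 0))
print("residual:", np.linalg.norm(A.T @ P + P @ A + Q))
```

Full dissipativity LMIs add delay and jump-parameter terms, which is why dedicated solvers such as the MATLAB LMI toolbox are used in practice.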


2020 ◽  
Vol 26 (1) ◽  
pp. 1-16
Author(s):  
Kevin Vanslette ◽  
Abdullatif Al Alsheikh ◽  
Kamal Youcef-Toumi

We motivate and calculate Newton–Cotes quadrature integration variance and compare it directly with Monte Carlo (MC) integration variance. We find an equivalence between deterministic quadrature sampling and random MC sampling by noting that MC random sampling is statistically indistinguishable from a method that uses deterministic sampling on a randomly shuffled (permuted) function. We use this statistical equivalence to regularize the form of permissible Bayesian quadrature integration priors such that they are guaranteed to be objectively comparable with MC. This leads to the proof that simple quadrature methods have expected variances that are less than or equal to their corresponding theoretical MC integration variances. Separately, using Bayesian probability theory, we find that the theoretical standard deviations of the unbiased errors of simple Newton–Cotes composite quadrature integrations improve over their worst-case errors by an extra dimension-independent factor ∝ N^{-1/2}. This dimension-independent factor is validated in our simulations.
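The variance comparison is easy to reproduce in one dimension: the sketch below (the integrand, the composite midpoint rule, and the sample counts are illustrative assumptions, not the paper's setup) pits a simple Newton–Cotes composite rule against MC at an equal number of function evaluations:

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda t: np.exp(t)          # test integrand on [0, 1]
exact = np.e - 1.0               # its exact integral
N = 256                          # function evaluations per estimate

# Monte Carlo: error scales like sigma_f / sqrt(N).
mc_errors = [abs(f(rng.uniform(size=N)).mean() - exact) for _ in range(2000)]
mc_rms = float(np.sqrt(np.mean(np.square(mc_errors))))

# Composite midpoint rule (a simple Newton-Cotes rule), same N evaluations.
xs = (np.arange(N) + 0.5) / N
quad_error = abs(f(xs).mean() - exact)

print("MC RMS error:    ", mc_rms)
print("quadrature error:", quad_error)
```

For a smooth integrand the deterministic rule wins decisively in 1-D; the paper's point is the sharper statistical comparison of the two estimators' variances under equal information.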

