Statistical Expectation
Recently Published Documents


TOTAL DOCUMENTS: 13 (five years: 7)

H-INDEX: 4 (five years: 2)

2021
Author(s): Yiming Bian, Yan Li, Erhu Chen, Wei Li, Xiaobin Hong, ...

Abstract: Benefiting from the rapid development of fiber-optic devices, high-speed free-space optical communication systems have recently adopted fiber-optic components. The received laser beam in such a system is coupled into single-mode fiber (SMF) at the input of the receiver module. This work addresses common problems in practical free-space optical coupling systems, such as atmospheric turbulence, optical system aberration, and fiber positioning error. We derive statistical expectation models of SMF coupling efficiency in the presence of atmospheric turbulence, accounting for optical system aberration in one model and for fiber positioning error in the other. The influences of optical system aberration and fiber positioning error on the coupling efficiency under different turbulence strengths are also analyzed.
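The abstract does not reproduce the derived models, but the underlying idea of taking a statistical expectation of coupling efficiency over a random error can be sketched. The following minimal Python example assumes a simplified Gaussian mode-overlap model with hypothetical parameters (w, sigma, and eta0 are illustrative, not values from the paper) and compares a Monte Carlo estimate of the expected coupling efficiency under Gaussian fiber positioning error with the toy model's closed form:

```python
import numpy as np

# Illustrative sketch only: a simplified Gaussian mode-overlap model of SMF
# coupling with random lateral fiber-positioning error. All parameter values
# are hypothetical, not taken from the paper.
rng = np.random.default_rng(0)

w = 5.2e-6      # SMF mode-field radius (m), hypothetical
sigma = 1.0e-6  # std. dev. of lateral positioning error (m), hypothetical
eta0 = 0.81     # aberration-free peak coupling efficiency, hypothetical

def eta(dx, dy):
    # Coupling efficiency under a lateral offset (dx, dy) in the Gaussian
    # mode-overlap approximation.
    return eta0 * np.exp(-(dx**2 + dy**2) / w**2)

# Monte Carlo statistical expectation over Gaussian positioning errors.
dx, dy = rng.normal(0.0, sigma, (2, 200_000))
mc_expectation = eta(dx, dy).mean()

# Closed form for this toy model: E[eta] = eta0 * w^2 / (w^2 + 2*sigma^2).
analytic = eta0 * w**2 / (w**2 + 2 * sigma**2)
print(f"Monte Carlo: {mc_expectation:.4f}, analytic: {analytic:.4f}")
```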


2021, pp. 1-5
Author(s): Xianliang Gong, Yulin Pan

Abstract: The authors of the discussed paper simplified the information-based acquisition function for estimating a statistical expectation and developed an analytical computation of each involved quantity under a uniform input distribution. In this discussion, we show that (1) the last three terms of the acquisition always add up to zero, leaving a concise form with a much more intuitive interpretation of the acquisition; and (2) the analytical computation of the acquisition can be generalized to an arbitrary input distribution, greatly broadening the applicability of the developed framework.


Author(s): Ali Muhammad Ali Rushdi, Motaz Hussain Amashah

This paper deals with the reliability of a multi-state delivery network (MSDN) with multiple suppliers, transfer stations, and markets (depicted as vertices), connected by branches of multi-state capacities, delivering a certain commodity or service between their end vertices. We utilize a symbolic logic expression of the network success to satisfy the market demand within budget and production-capacity limitations, even when subject to deterioration. This system success is a two-valued function expressed in terms of multi-valued component successes, and it has been obtained in the literature in minimal form as the disjunction of prime implicants or minimal paths of the pertinent network. The main contribution of this paper is a systematic procedure for converting this minimal expression into a probability-ready expression (PRE). We successfully extrapolate the PRE concept from the two-valued logical domain to the multi-valued logical domain. This concept is of paramount importance since it allows a direct transformation of a random logical expression, on a one-to-one basis, into its statistical expectation form, simply by replacing all logic variables by their statistical expectations and substituting arithmetic multiplication and addition for their logical counterparts (ANDing and ORing). The statistical expectation of the expression is its probability of being equal to 1, simply called the network reliability. The proposed method is illustrated with a detailed symbolic example of a real case study, and it produces a more precise version of the same numerical value that was obtained earlier by alternative means. This paper is part of an ongoing activity to develop pedagogical material for various candidate techniques for assessing multi-state reliability.
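As a hedged illustration of the PRE transformation the abstract describes (replace each logic variable by its expectation, and ANDing/ORing by multiplication/addition), consider a hypothetical two-path network, not the paper's case study:

```python
import random

# Illustrative sketch on a hypothetical network. Success is
# S = X1*X2 OR (NOT X1)*X3, written so the two products are disjoint,
# i.e. a probability-ready expression. Because the products are disjoint
# and their variables independent, reliability follows by replacing each
# logic variable with its expectation and each AND/OR with *, +.
p = {"X1": 0.9, "X2": 0.8, "X3": 0.7}  # hypothetical component successes

reliability = p["X1"] * p["X2"] + (1 - p["X1"]) * p["X3"]
print(f"network reliability: {reliability:.3f}")  # 0.9*0.8 + 0.1*0.7 = 0.790

# Monte Carlo check that the expectation of the logical expression
# matches the arithmetic transformation.
random.seed(1)
trials = 200_000
hits = 0
for _ in range(trials):
    x1 = random.random() < p["X1"]
    x2 = random.random() < p["X2"]
    x3 = random.random() < p["X3"]
    hits += (x1 and x2) or ((not x1) and x3)
print(f"Monte Carlo check: {hits / trials:.3f}")
```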


Author(s): Daniel Hulse, Christopher Hoyle, Irem Y. Tumer, Kai Goebel, Chetan Kulkarni

Abstract: Resilience models assess a system's ability to withstand disruption by quantifying the value of metrics (e.g., expected cost or loss) over time. When such a metric is the result of injecting faults into a dynamic model over an interval of time, it is important that it represent the statistical expectation of fault responses rather than a single response. Since fault responses vary over fault injection times, representing the statistical expectation of responses requires sampling a number of points. However, fault models are often built around computationally expensive dynamic simulations, and it is desirable to iterate over designs as quickly as possible to improve system resilience. With this in mind, this paper explores approaches to sampling fault injection times that minimize computational cost while accurately representing the expectation of fault resilience metrics over the set of possible occurrence times. Two general approaches are presented: an a priori approach that attempts to minimize error without knowing the underlying cost function, and an a posteriori approach that minimizes error when the cost function is known. Among a priori methods, numerical integration minimizes error and computational time compared with Monte Carlo sampling; however, both are prone to error when the metric's fault response curve is discontinuous. While a posteriori approaches can locate and correct for these discontinuities, the resulting error reduction is not robust to design changes that shift the underlying locations of the discontinuities. The ultimate decision to use an a priori or a posteriori approach to quantify resilience is thus dependent on a number of considerations, including computational cost, the robustness of the approximation to design changes, and the underlying form of the resilience function.
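A minimal sketch of the a priori comparison, using a made-up discontinuous fault-cost curve rather than the paper's fault model: both estimators target the expectation of the cost over a uniform distribution of injection times, and both carry error near the jump.

```python
import numpy as np

# Hypothetical fault-cost curve over injection time t in [0, 1]; the cost
# jumps when the fault lands after a made-up protective mode disengages
# at t = 0.6. Not the paper's model.
def cost(t):
    return np.where(t < 0.6, 1.0 + 0.5 * t, 4.0 - t)

# Exact expectation for uniform t, integrating the two branches piecewise:
# int_0^0.6 (1 + 0.5 t) dt + int_0.6^1 (4 - t) dt = 0.69 + 1.28 = 1.97.
exact = 1.97

rng = np.random.default_rng(0)
n = 20  # small sample budget, as if each point were an expensive simulation

# A priori, Monte Carlo: average the cost at random injection times.
mc = cost(rng.uniform(0.0, 1.0, n)).mean()

# A priori, numerical integration: trapezoid rule on an evenly spaced grid.
t = np.linspace(0.0, 1.0, n)
y = cost(t)
quad = float(np.sum((y[1:] + y[:-1]) * np.diff(t) / 2))

print(f"exact {exact:.3f}  Monte Carlo {mc:.3f}  trapezoid {quad:.3f}")
# Both estimates are biased by the discontinuity at t = 0.6; an a posteriori
# scheme would locate the jump and integrate each smooth piece separately.
```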


2020
Author(s): Philip D. Gingerich

ABSTRACT: The zero-force evolutionary law (ZFEL) of McShea et al. states that independently evolving entities, with no forces or constraints acting on them, will tend to accumulate differences and therefore diverge from each other. McShea et al. quantified the law by assuming normality on an additive arithmetic scale and reflecting negative differences as absolute values, systematically augmenting perceived divergence. The appropriate analytical framework is not additive but proportional, where logarithmic transformation is required to achieve normality. Logarithms and logarithmic differences can be negative, but the proportions they represent cannot be. Reformulation of the ZFEL in a proportional or geometric reference frame indicates that when entities evolve randomly and independently, differences smaller than any initial difference are balanced by differences larger than the initial difference. Total variance increases with each step of a random walk, but there is no statistical expectation of divergence between random-walk lineages.
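A minimal simulation sketch of the reformulated claim, with arbitrary parameters: the signed log-scale difference between two independently evolving lineages has expectation zero, even though its variance, and hence the typical absolute difference, grows with each step.

```python
import numpy as np

# Illustrative sketch with made-up parameters, not data from the paper.
# Each replicate pairs two independent geometric random walks tracked in
# log space (so steps are proportional changes).
rng = np.random.default_rng(0)
replicates, steps, step_sd = 10_000, 100, 0.05

a = rng.normal(0.0, step_sd, (replicates, steps)).cumsum(axis=1)
b = rng.normal(0.0, step_sd, (replicates, steps)).cumsum(axis=1)
diff = a - b  # signed log-difference between paired lineages

# Expectation of the signed difference stays near zero...
print("mean signed log-difference at last step:", diff[:, -1].mean())
# ...while reflecting differences as absolute values shows growth,
# the augmentation of perceived divergence the abstract describes.
print("mean |log-difference| at steps 10 and 100:",
      np.abs(diff[:, 9]).mean(), np.abs(diff[:, -1]).mean())
```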


2019, Vol 104 (10), pp. 1421-1435
Author(s): Murat T. Tamer, Ling Chung, Richard A. Ketcham, Andrew J.W. Gleadow

Abstract: Previous inter-laboratory experiments on confined fission-track length measurements in apatite have consistently reported variation substantially in excess of statistical expectation. There are two primary causes for this variation: (1) differences in laboratory procedures and instrumentation, and (2) personal differences in perception and assessment between analysts. In this study, we narrow these elements down to two categories: etching procedure and analyst bias. We assembled a set of eight samples with induced tracks from four apatite varieties, initially irradiated between 2 and 43 years prior to etching. Two mounts were made containing aliquots of each sample to ensure identical etching conditions for all apatites on a mount. We employed two widely used etching protocols: 5.0 M HNO3 at 20 °C for 20 s and 5.5 M HNO3 at 21 °C for 20 s. Sets of track images were then captured by an automated system and exchanged between two analysts, so that measurements could be carried out on the same tracks and etch figures, in the same image data, allowing us to isolate and examine the effects of analyst bias. An additional 5 s of etching was then used to evaluate etching behavior at track tips. In total, 8391 confined fission-track length measurements were performed, along with 1480 etch-figure length measurements. When the analysts evaluated each other's track selections within the same images for suitability for measurement, the average rejection rate was ~14%. For tracks judged as suitable by both analysts, measurements of 2D and 3D length, dip, and c-axis angle were in excellent agreement, with slightly less dispersion when using the 5.5 M etch. Lengths were shorter in the 5.0 M etched mount than in the 5.5 M etched one, which we interpret to be caused by more prevalent under-etching in the former, at least for some apatite compositions. After an additional 5 s of etching, 5.0 M tracks saw greater lengthening and more reduction in dispersion than 5.5 M tracks, additional evidence that they were more likely to be under-etched after the initial etching step. Systematic differences between analysts were minimal, the main exception being the likelihood of observing tracks near perpendicular to the crystallographic c axis, which may reflect different use of transmitted vs. reflected light when scanning for tracks. Etch-figure measurements were more consistent between analysts for the 5.5 M etch, though one apatite variety showed high dispersion for both. Within a given etching protocol, each sample showed a decrease of mean track length with time since irradiation, giving evidence of 0.2–0.3 μm of annealing over year-to-decade timescales.


2019, Vol 141 (10)
Author(s): Piyush Pandita, Ilias Bilionis, Jitesh Panchal

Abstract: Bayesian optimal design of experiments (BODE) has been successful in acquiring information about a quantity of interest (QoI) that depends on a black-box function. BODE is characterized by sequentially querying the function at specific designs selected by an infill-sampling criterion. However, most current BODE methods operate in specific contexts, such as optimization or learning a universal representation of the black-box function. The objective of this paper is to design a BODE for estimating the statistical expectation of a physical response surface. This QoI is omnipresent in uncertainty propagation and design-under-uncertainty problems. Our hypothesis is that an optimal BODE should maximize the expected information gain in the QoI. We represent the information gain from a hypothetical experiment as the Kullback–Leibler (KL) divergence between the prior and the posterior probability distributions of the QoI. The prior distribution of the QoI is conditioned on the observed data, and the posterior distribution of the QoI is conditioned on the observed data and a hypothetical experiment. The main contribution of this paper is the derivation of a semi-analytic mathematical formula for the expected information gain about the statistical expectation of a physical response. The developed BODE is validated on synthetic functions with varying numbers of input dimensions. We demonstrate the performance of the methodology on a steel wire manufacturing problem.
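One plausible formalization of the acquisition described in the abstract; the notation (observed data D, hypothetical design x with outcome y, and QoI Q) is ours, not taken from the paper:

```latex
% Expected information gain at a candidate design x: the KL divergence
% between the QoI distribution updated with the hypothetical experiment
% (x, y) and the current QoI distribution, averaged over the unseen y.
\[
  \mathrm{EIG}(x) \;=\;
  \mathbb{E}_{y \sim p(y \mid x, \mathcal{D})}
  \Big[ D_{\mathrm{KL}}\big( p(Q \mid \mathcal{D} \cup \{(x, y)\})
  \,\big\|\, p(Q \mid \mathcal{D}) \big) \Big].
\]
```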


2015, Vol 37 (3), pp. 316-326
Author(s): Dennis Riedl, Andreas Heuer, Bernd Strauss

Incentives guide human behavior by altering the level of external motivation. We apply the idea of loss aversion from prospect theory (Kahneman & Tversky, 1979) to the point-reward systems in soccer and investigate the controversial impact of the three-point rule on reducing the fraction of draws in this sport. Making use of the Poisson nature of goal scoring, we compared empirical results with theoretically deduced draw ratios from 24 countries encompassing 20 seasons each (N = 118,148 matches). The rule change yielded a slight reduction in the ratio of draws, but despite the adverse incentives, 18% more matches still ended drawn than expected, t(23) = 11.04, p < .001, d = 2.25, consistent with prospect theory assertions. Alternative point systems that manipulated incentives for losses yielded reductions at or below statistical expectation. This supports the deduced concept that arbitrary aims, such as the reduction of draws in the world's soccer leagues, could be accomplished more effectively than is currently attempted.
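The theoretically deduced draw ratio can be sketched under the Poisson assumption the abstract invokes: with independent Poisson goal counts, a draw occurs when both teams score the same number of goals. The scoring rates below are hypothetical, not the paper's estimates.

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    # Probability that a Poisson(lam) goal count equals k.
    return exp(-lam) * lam**k / factorial(k)

def draw_probability(lam_home: float, lam_away: float,
                     max_goals: int = 25) -> float:
    # P(draw) = sum over k of P(home scores k) * P(away scores k),
    # truncated at a goal count beyond which the tail is negligible.
    return sum(poisson_pmf(k, lam_home) * poisson_pmf(k, lam_away)
               for k in range(max_goals + 1))

# Hypothetical home/away scoring rates, for illustration only.
print(f"expected draw ratio: {draw_probability(1.5, 1.1):.3f}")
```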


2014, Vol 651-653, pp. 1282-1286
Author(s): Zhi Li, Qi Zhang

Intercity railway passengers often have to transfer when there is no through train between their origin and destination, and choosing the optimal transfer scheme is not easy for them. In this article, the transfer waiting time and the total journey time are considered as influencing factors. Based on the statistical expectations of these factors, a transfer weight function is defined and used to find the optimal scheme. This function can be used in developing railway information systems.
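The abstract does not give the weight function's form; the sketch below only illustrates the stated idea with a hypothetical linear combination of the two expectations, with made-up schemes, times, and weights.

```python
# Hypothetical transfer-weight sketch: score each candidate transfer scheme
# by a weighted sum of its expected waiting time and expected total journey
# time (in minutes), then pick the minimum. All values are illustrative.
schemes = {
    "A": {"expected_wait": 35.0, "expected_journey": 410.0},
    "B": {"expected_wait": 70.0, "expected_journey": 380.0},
}
ALPHA, BETA = 1.0, 0.5  # hypothetical weights on waiting vs. journey time

def weight(scheme: dict) -> float:
    return ALPHA * scheme["expected_wait"] + BETA * scheme["expected_journey"]

best = min(schemes, key=lambda name: weight(schemes[name]))
print(f"optimal transfer scheme: {best}")
```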

