Truncating the exponential with a uniform distribution

Author(s):  
Rafael Weißbach ◽  
Dominik Wied

Abstract: For a sample of Exponentially distributed durations we aim at point estimation and a confidence interval for its parameter. A duration is only observed if it has ended within a certain time interval, determined by a Uniform distribution. Hence, the data form a truncated empirical process that we can approximate by a Poisson process when only a small portion of the sample is observed, as is the case for our applications. We derive the likelihood from standard arguments for point processes, acknowledging the size of the latent sample as a second parameter, and obtain the maximum likelihood estimator for both. Consistency and asymptotic normality of the estimator for the Exponential parameter follow from standard results on M-estimation. We compare the design with a simple-random-sample assumption for the observed durations. Theoretically, the derivative of the log-likelihood is less steep in the truncation design for small parameter values, indicating a larger computational effort for root finding and a larger standard error. In applications from the social and economic sciences and in simulations, we indeed find a moderately increased standard error when acknowledging truncation.
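As a quick numerical illustration of such a truncation design, here is a minimal sketch under our own assumptions (not the authors' point-process likelihood): latent durations are Exponential, each starts at a Uniform(0, T) time, and a duration is kept only if it ends before T; the rate is then estimated by maximizing the conditional likelihood of the observed durations. The window T, the sample sizes, and all names are ours.

```python
# Minimal sketch, assuming a Uniform(0, T) start time per unit; not the
# authors' estimator.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
T, true_rate, n_latent = 10.0, 0.8, 5000

start = rng.uniform(0.0, T, n_latent)            # uniform entry times
dur = rng.exponential(1.0 / true_rate, n_latent)
observed = dur[start + dur <= T]                 # only durations ending before T are seen

def neg_loglik(rate):
    # conditional density of an observed duration x (0 <= x <= T):
    #   f(x | observed)  proportional to  rate * exp(-rate * x) * (T - x) / T
    norm, _ = quad(lambda x: rate * np.exp(-rate * x) * (T - x) / T, 0.0, T)
    return -np.sum(np.log(rate) - rate * observed
                   + np.log((T - observed) / T) - np.log(norm))

fit = minimize_scalar(neg_loglik, bounds=(1e-3, 10.0), method="bounded")
print("naive exponential MLE (1/mean):", 1.0 / observed.mean())
print("truncation-aware MLE:          ", fit.x)
```

The naive estimator overstates the rate because long durations are less likely to complete within the window; the conditional MLE corrects for this.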

2016 ◽  
Vol 35 (3) ◽  
pp. 261-267 ◽  
Author(s):  
Lei Gan ◽  
Chaobin Lai ◽  
Huihui Xiong

Abstract: The accuracies of molten slag viscosity fitting and low-temperature extrapolation were compared between four two-variable models with a constant pre-exponential parameter: the Arrhenius, Weymann–Frenkel (WF), Vogel–Fulcher–Tammann (VFT), and Mauro, Yue, Ellison, Gupta and Allan (MYEGA) models, based on a molten slag viscosity database consisting of over 800 compositions and 5,000 measurements. It is found that over wide ranges of the pre-exponential parameter, the VFT and MYEGA models have lower viscosity fitting errors and much higher low-temperature viscosity extrapolation accuracies than the Arrhenius and WF models. Pre-exponential parameter values of –2.8 for VFT and –2.3 for MYEGA are recommended.
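For concreteness, a small fitting sketch (our own illustration, on synthetic data) of the two-parameter VFT and MYEGA forms with the recommended fixed pre-exponential values; the functional forms follow the usual literature parameterizations and may differ in detail from the paper's conventions.

```python
# Sketch of the fixed-pre-exponential VFT and MYEGA fits on synthetic data;
# not the paper's database or fitting procedure.
import numpy as np
from scipy.optimize import curve_fit

A_VFT, A_MYEGA = -2.8, -2.3          # recommended fixed pre-exponentials (log10 Pa.s)

def vft(T, B, T0):                   # Vogel-Fulcher-Tammann
    return A_VFT + B / (T - T0)

def myega(T, K, C):                  # Mauro-Yue-Ellison-Gupta-Allan
    return A_MYEGA + (K / T) * np.exp(C / T)

# synthetic "measurements": a VFT-like melt with a little noise
T = np.linspace(1500.0, 1900.0, 30)                      # K
log_eta = -2.8 + 4500.0 / (T - 900.0) \
          + np.random.default_rng(0).normal(0, 0.02, T.size)

p_vft, _ = curve_fit(vft, T, log_eta, p0=(4000.0, 800.0))
p_myega, _ = curve_fit(myega, T, log_eta, p0=(3000.0, 1000.0))

# low-temperature extrapolation, where the models differ most
T_low = 1300.0
print("VFT   log10(eta) at 1300 K:", vft(T_low, *p_vft))
print("MYEGA log10(eta) at 1300 K:", myega(T_low, *p_myega))
```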


2020 ◽  
Vol 1 (4) ◽  
pp. 229-238
Author(s):  
Devi Munandar ◽  
Sudradjat Supian ◽  
Subiyanto Subiyanto

The influence of social media in disseminating information, especially during the COVID-19 pandemic, can be observed over time intervals, so the probability of the number of tweets posted by netizens in each interval can be modeled. The nonhomogeneous Poisson process (NHPP) is a Poisson process whose rate depends on time; the associated exponentially distributed inter-arrival times have unequal parameter values and are independent of each other. The probability of no event occurring in the initial state is one, and the probability of an event occurring in the initial state is zero. The non-homogeneous Poisson process is used in this paper to predict and count the number of tweets containing the keywords coronavirus and COVID-19 within fixed daily time intervals. Tweets posted in one interval do not affect those posted in the next, and the number of tweets differs between intervals. The dataset used in this study was obtained by crawling COVID-19 tweets three times a day, for 20 minutes each, over 13 days, giving 39 time intervals. The study yields predictions and calculated probabilities of the number of tweets, reflecting the tendency of netizens to post about the COVID-19 pandemic.
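A minimal sketch of the NHPP count probability that underlies such predictions (our own illustration; the intensity function, its parameters, and the interval indexing are hypothetical, not the ones fitted in the paper):

```python
# With intensity lambda(t), the NHPP count on [t1, t2] is Poisson with mean
# Lambda = integral of lambda(t) over [t1, t2].
import numpy as np
from scipy.integrate import quad
from scipy.stats import poisson

def intensity(t, a=50.0, b=0.05):
    # hypothetical rising tweet rate (tweets per interval index)
    return a * np.exp(b * t)

def count_pmf(k, t1, t2):
    lam, _ = quad(intensity, t1, t2)
    return poisson.pmf(k, lam)

# probability of seeing 60 to 80 tweets in the 10th of 39 intervals
t1, t2 = 9.0, 10.0
print(sum(count_pmf(k, t1, t2) for k in range(60, 81)))
```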


2008 ◽  
Vol 20 (5) ◽  
pp. 1325-1343 ◽  
Author(s):  
Zbyněk Pawlas ◽  
Lev B. Klebanov ◽  
Martin Prokop ◽  
Petr Lansky

We study the estimation of statistical moments of interspike intervals based on the observation of spike counts in many independent short time windows. This scenario corresponds to the situation of a target neuron that receives information from many neurons and has to respond within a short time interval. The precision of the estimation procedures is examined. As models for neuronal activity, two examples of stationary point processes are considered: a renewal process and a doubly stochastic Poisson process. Both moment and maximum likelihood estimators are investigated. Not only the mean but also the coefficient of variation is estimated. In accordance with our expectations, numerical studies confirm that the estimation of the mean interspike interval is more reliable than the estimation of the coefficient of variation. The error of estimation increases with increasing mean interspike interval, which is equivalent to decreasing the window size (fewer events are observed per window), and with a decreasing number of neurons (fewer windows).
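A crude simulation sketch of the count-based setting (our own illustration, not the estimators analyzed in the paper): gamma-renewal spike trains are observed only through counts in many short windows, the mean interspike interval is recovered from the mean count, and a rough coefficient of variation from the count Fano factor, which equals CV² only in the long-window limit.

```python
# Crude moment sketch; window length, gamma parameters, and the stationary
# start approximation are our own choices.
import numpy as np

rng = np.random.default_rng(2)
n_windows, window = 10_000, 0.1          # many neurons / short windows (s)
shape, scale = 4.0, 0.005                # gamma ISIs: mean 20 ms, CV = 0.5

def count_in_window(w):
    t, n = rng.uniform(0, scale * shape), 0      # rough stationary start
    while t < w:
        n += 1
        t += rng.gamma(shape, scale)
    return n

counts = np.array([count_in_window(window) for _ in range(n_windows)])

mean_isi_hat = window / counts.mean()
cv2_hat = counts.var() / counts.mean()           # Fano factor ~ CV^2 (long-window limit)
print("mean ISI estimate:", mean_isi_hat, "true:", shape * scale)
print("CV estimate:", np.sqrt(cv2_hat), "true:", 1 / np.sqrt(shape))
```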


1978 ◽  
Vol 10 (3) ◽  
pp. 613-632 ◽  
Author(s):  
Harry M. Pierson

Starting with a stationary point process on the line with points one unit apart, simultaneously replace each point by a point located uniformly between the original point and its right-hand neighbor. Iterating this transformation, we obtain convergence to a limiting point process, which we are able to identify. The example of the uniform distribution is for purposes of illustration only; in fact, convergence is obtained for almost any distribution on [0, 1]. In the more general setting, we prove the limiting distribution is invariant under the above transformation, and that for each such transformation, a large class of initial processes leads to the same invariant distribution. We also examine the covariance of the limiting sequence of interval lengths. Finally, we identify those invariant distributions with independent interval lengths, and the transformations from which they arise.
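A quick simulation of the transformation (our own illustration, on a circle of circumference n to avoid boundary effects): start from the unit-spaced lattice, move every point to a uniform position between itself and its right-hand neighbour, iterate, and inspect the limiting interval lengths and their correlation.

```python
# Simulation sketch of the iterated uniform-displacement transformation.
import numpy as np

rng = np.random.default_rng(3)
n, iterations = 100_000, 200
x = np.arange(n, dtype=float)                    # circumference n, unit spacing

for _ in range(iterations):
    right = np.roll(x, -1)
    gap = (right - x) % n                        # interval to the right-hand neighbour
    x = (x + rng.uniform(0.0, 1.0, n) * gap) % n # uniform displacement into that interval

x = np.sort(x)
gaps = np.diff(np.append(x, x[0] + n))           # interval lengths after iterating
print("mean gap:", gaps.mean())                  # stays 1 by construction
print("variance of gap:", gaps.var())
print("correlation of adjacent gaps:", np.corrcoef(gaps[:-1], gaps[1:])[0, 1])
```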


2014 ◽  
Vol 30 (3) ◽  
pp. 521-532 ◽  
Author(s):  
Phillip S. Kott ◽  
C. Daniel Day

Abstract: This article describes a two-step calibration-weighting scheme for a stratified simple random sample of hospital emergency departments. The first step adjusts for unit nonresponse. The second increases the statistical efficiency of most estimators of interest. Both use a measure of emergency-department size and other useful auxiliary variables contained in the sampling frame. Although many survey variables are roughly a linear function of the measure of size, response is better modeled as a function of the log of that measure. Consequently, the log of size is a calibration variable in the nonresponse-adjustment step, while the measure of size itself is a calibration variable in the second calibration step. Nonlinear calibration procedures are employed in both steps. We show with 2010 DAWN data that estimating variances as if a one-step calibration-weighting routine had been used when there were in fact two steps can, after appropriately adjusting the finite-population correction in some sense, produce standard-error estimates that tend to be slightly conservative.
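To fix ideas, here is a toy sketch of a single nonlinear (exponential, raking-type) calibration step of the kind referred to above; it is our own illustration, not the DAWN weighting system, and all variable names and control totals are hypothetical.

```python
# Toy nonlinear calibration step: adjust base weights w0 by exp(X @ lambda)
# so that calibrated totals of the auxiliaries X match known frame totals.
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(4)
n = 500
size = rng.lognormal(3.0, 1.0, n)                 # ED measure of size
X = np.column_stack([np.ones(n), np.log(size)])   # auxiliaries: intercept, log(size)
w0 = np.full(n, 20.0)                             # base (design) weights
frame_totals = np.array([n * 20.0 * 1.05,
                         (w0 * np.log(size)).sum() * 1.02])   # hypothetical targets

def calib_eq(lam):
    w = w0 * np.exp(X @ lam)                      # exponential calibration adjustment
    return X.T @ w - frame_totals

lam = fsolve(calib_eq, np.zeros(2))
w = w0 * np.exp(X @ lam)
print("calibrated totals:", X.T @ w)
print("target totals:    ", frame_totals)
```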


2001 ◽  
Vol 38 (02) ◽  
pp. 554-569 ◽  
Author(s):  
John L. Spouge

Consider a renewal process. The renewal events partition the process into i.i.d. renewal cycles. Assume that on each cycle, a rare event called 'success' can occur. Such successes lend themselves naturally to approximation by Poisson point processes. If each success occurs after a random delay, however, Poisson convergence may be relatively slow, because each success corresponds to a time interval, not a point. In 1996, Altschul and Gish proposed a finite-size correction to a particular approximation by a Poisson point process. Their correction is now used routinely (about once a second) when computers compare biological sequences, although it lacks a mathematical foundation. This paper generalizes their correction. For a single renewal process or several renewal processes operating in parallel, this paper gives an asymptotic expansion that contains in successive terms a Poisson point approximation, a generalization of the Altschul-Gish correction, and a correction term beyond that.
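A toy simulation of why such a correction is needed (our own illustration of the effect, not the Altschul-Gish formula): when each success becomes visible only after a random delay, counts in [0, t] fall below the naive Poisson mean p·t, and shrinking the window by the mean delay is a crude finite-size adjustment.

```python
# Rare successes on unit-mean renewal cycles, each visible only after a
# random delay; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
t, p, mean_delay, reps = 1000.0, 0.01, 30.0, 1000

def successes_seen():
    time, seen = 0.0, 0
    while time < t:
        time += rng.exponential(1.0)                      # unit-mean renewal cycles
        if rng.random() < p:                              # rare 'success' on this cycle
            if time + rng.exponential(mean_delay) <= t:   # visible only after a delay
                seen += 1
    return seen

counts = np.array([successes_seen() for _ in range(reps)])
print("observed mean count:   ", counts.mean())
print("naive Poisson mean p*t:", p * t)
print("corrected p*(t - E[D]):", p * (t - mean_delay))
```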


Author(s):  
Isakjan M. Khamdamov ◽  
Zoya S. Chay

A convex hull generated by a sample uniformly distributed on the plane is considered in the case when the support of the distribution is a convex polygon. A central limit theorem is proved for the joint distribution of the number of vertices and the area of the convex hull, using the Poisson approximation of binomial point processes near the boundary of the support of the distribution. Here we apply the results on the joint distribution of the number of vertices and the area of convex hulls generated by a Poisson distribution given in [6]. In particular, the results given in [3, 7] follow from the result obtained in the present paper when the support is a convex polygon and the convex hull is generated by a homogeneous Poisson point process.
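An empirical check in the spirit of the result (our own illustration, with the unit square as the polygonal support): sample uniform points, and record the number of hull vertices and the area deficit across replications.

```python
# Monte Carlo look at the vertex count and area deficit of the convex hull
# of uniform points in the unit square; sample sizes are our own choices.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(6)

def hull_stats(n):
    pts = rng.uniform(0.0, 1.0, size=(n, 2))      # uniform in the unit square
    hull = ConvexHull(pts)
    return len(hull.vertices), 1.0 - hull.volume  # in 2-D, .volume is the area

n = 20_000
samples = np.array([hull_stats(n) for _ in range(300)])
print("mean / var of vertex count:", samples[:, 0].mean(), samples[:, 0].var())
print("mean / var of area deficit:", samples[:, 1].mean(), samples[:, 1].var())
# For a polygonal support with r vertices, the expected vertex count grows
# like (2r/3) * log(n) (Renyi-Sulanke), and the pair is asymptotically
# jointly Gaussian, as in the theorem described above.
```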


2020 ◽  
Author(s):  
B Shayak ◽  
Mohit M Sharma ◽  
Anoop Misra

Abstract: In this work we use mathematical modeling to describe the potential phenomena which may occur if immunity to COVID-19 lasts for a finite time instead of being permanent, i.e. if a recovered COVID-19 patient may again become susceptible to the virus after a given time interval following his/her recovery. Whether this really happens or not is unknown at the current time. If it does happen, then we find that for certain combinations of parameter values (social mobility, contact tracing, immunity threshold duration etc.), the disease can keep recurring in wave after wave of outbreaks, with a periodicity approximately equal to twice the immunity threshold. Such cyclical attacks can be prevented trivially if public health interventions are strong enough to contain the disease outright. Of greater interest is the finding that, should such effective interventions not prove possible, the second and subsequent waves can still be forestalled by a consciously relaxed intervention level which finishes off the first wave before the immunity threshold is breached. Such an approach leads to higher case counts in the immediate term but significantly lower counts in the long term, as well as a drastically shortened overall course of the epidemic.

As we write this, there are more than 10,000,000 cases (at least, detected cases) and more than 500,000 deaths due to COVID-19 all over the globe. The unknowns surrounding this disease outnumber the knowns by orders of magnitude. One of these unknowns is how long immunity lasts, i.e., once a person recovers from COVID-19 infection, how long does s/he remain insusceptible to a fresh infection. Most modeling studies assume lifetime immunity, or at least immunity sufficiently prolonged to last until the outbreak is completely over. Among the exceptions are Giordano et al. [1] and Bjornstad et al. [2], who account for the possibility of re-infection; while the former find no special behaviour on account of this, the latter find an oscillatory approach towards the eventual equilibrium. In an article which appeared today, Kosinski [3] has found multiple waves of COVID-19 if the immunity threshold is finite. The question of whether COVID-19 re-infection can occur is completely open as of now. A study [4] has found that for benign coronaviruses (NOT the COVID-19 pathogen!), antibodies become significantly weaker six months after the original infection, and re-infection is common from one year onwards. Although it is currently unknown whether COVID-19 re-infections can occur, the mere possibility is sufficiently frightening as to warrant a discussion of what might happen if it is true. In this Article, we use mathematical modeling to present such a discussion. Before starting off, let us declare in the clearest possible terms that this entire Article is a what-if analysis, predicated on an assumption whose veracity is not known at the current time. The contents of this Article are therefore hypothetical; as of now they are neither factual nor counter-factual.
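As a schematic of how temporary immunity can produce repeated waves, here is a minimal SIRS-type ODE sketch with waning immunity; it is our own stand-in, not the delay model used in this work, and all parameter values are illustrative only.

```python
# Schematic SIRS model: recovered individuals return to the susceptible pool
# after an average immunity period 1/xi, which can yield repeated waves when
# transmission is only partially suppressed.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, xi = 0.25, 1.0 / 7.0, 1.0 / 200.0   # per day; ~200-day average immunity

def sirs(t, y):
    s, i, r = y
    return [-beta * s * i + xi * r,
            beta * s * i - gamma * i,
            gamma * i - xi * r]

sol = solve_ivp(sirs, (0.0, 1500.0), [0.999, 0.001, 0.0],
                dense_output=True, max_step=1.0)
t = np.linspace(0.0, 1500.0, 1501)
i = sol.sol(t)[1]
# successive local maxima of i(t) mark the recurring waves
peaks = t[1:-1][(i[1:-1] > i[:-2]) & (i[1:-1] > i[2:])]
print("wave peaks (days):", np.round(peaks))
```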


1998 ◽  
Vol 14 (3) ◽  
pp. 276-291 ◽  
Author(s):  
James C. Martin ◽  
Douglas L. Milliken ◽  
John E. Cobb ◽  
Kevin L. McFadden ◽  
Andrew R. Coggan

This investigation sought to determine whether cycling power could be accurately modeled. A mathematical model of cycling power was derived, and values for each model parameter were determined. A bicycle-mounted power measurement system was validated by comparison with a laboratory ergometer. Power was measured during road cycling, and the measured values were compared with the values predicted by the model. The measured values for power were highly correlated (R² = .97) with, and were not different from, the modeled values. The standard error between the modeled and measured power (2.7 W) was very small. The model was also used to estimate the effects of changes in several model parameters on cycling velocity. Over the range of parameter values evaluated, velocity varied linearly (R² > .99). The results demonstrated that cycling power can be accurately predicted by a mathematical model.
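The abstract does not reproduce the model itself; the sketch below is a standard road-cycling power balance of the same general kind (aerodynamic drag, rolling resistance, potential and kinetic energy terms, drivetrain efficiency), with typical textbook coefficient values rather than the ones fitted in the paper.

```python
# Generic road-cycling power balance; coefficient values are illustrative.
import numpy as np

def cycling_power(v, grade=0.0, a=0.0, mass=85.0, CdA=0.27, Crr=0.004,
                  rho=1.2, wind=0.0, efficiency=0.976, g=9.81):
    v_air = v + wind                               # air speed (headwind positive)
    p_aero = 0.5 * rho * CdA * v_air**2 * v        # aerodynamic drag
    p_roll = Crr * mass * g * np.cos(np.arctan(grade)) * v
    p_grav = mass * g * np.sin(np.arctan(grade)) * v
    p_kin = mass * a * v                           # acceleration term
    return (p_aero + p_roll + p_grav + p_kin) / efficiency

# e.g. steady 10 m/s (36 km/h) on flat ground in calm air
print(round(cycling_power(10.0), 1), "W")
```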


Author(s):  
Jeaneth Machicao ◽  
Odemir M. Bruno ◽  
Murilo S. Baptista

Abstract: Motivated by today's huge volume of data that needs to be handled in secrecy, there is a wish to develop cryptosystems that are not only fast and light but also reliably secure. Chaos allows for the creation of pseudo-random numbers (PRNs) by low-dimensional transformations that need to be applied only a small number of times. These two properties may translate into a chaos-based cryptosystem that is both fast (short running time) and light (little computational effort). What we propose here is an approach to generate PRNs, and consequently digital secret keys, that can serve as a seed for an enhanced chaos-based cryptosystem. We use low-dimensional chaotic maps to quickly generate PRNs that have little correlation, and then we quickly (“fast”) enhance secrecy by several orders of magnitude (“reliability”) with very little computational cost (“light”) by simply looking at the less significant digits of the initial chaotic trajectory. This paper demonstrates this idea with rigor, by showing that a transformation applied a small number of times to chaotic trajectories significantly increases their entropy and Lyapunov exponents, as a consequence of the smoothing out of the probability density towards a uniform distribution.
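A minimal sketch of the digit-shifting idea (our own version, using the logistic map and an arbitrary shift depth k): iterate the map, discard the k most significant decimal digits of each iterate, and keep the remainder as the pseudo-random output.

```python
# Keep only the less significant digits of a chaotic trajectory; the map,
# shift depth, and seed are illustrative choices.
import numpy as np

def logistic_prns(n, k=6, x0=0.37, mu=3.99):
    """Return n values in [0, 1) built from the digits of the logistic map
    beyond the k-th decimal place of each iterate."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)                    # chaotic logistic map
        out[i] = (x * 10**k) % 1.0                # drop the k most significant digits
    return out

u = logistic_prns(100_000)
# the raw logistic invariant density is U-shaped; the shifted digits are
# close to uniform on [0, 1), which is the smoothing the abstract refers to
hist, _ = np.histogram(u, bins=10, range=(0.0, 1.0))
print(hist / len(u))
```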

