stochastic point process
Recently Published Documents


TOTAL DOCUMENTS

45
(FIVE YEARS 9)

H-INDEX

11
(FIVE YEARS 0)

2021
Author(s):
Martin Alan Wortman

Core Damage Frequency (CDF) is a risk metric often employed by nuclear regulatory bodies worldwide. Numerical values for this metric are required by U.S. regulators prior to reactor licensing, and reported values can trigger regulatory inspections. CDF is reported as a constant, sometimes accompanied by a confidence interval. It is well understood that CDF characterizes the arrival rate of a stochastic point process modeling core damage events. However, the consequences of the assumptions imposed on this stochastic process as a computational necessity are often overlooked. Herein, we revisit CDF in the context of modern point process theory. We argue that the assumptions required to yield a constant CDF are typically unrealistic. We further argue that treating CDF as an informative approximation is suspect because of the inherent difficulty of quantifying its quality as an approximation.
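The constant-CDF assumption the abstract questions corresponds to modeling core damage events as a homogeneous Poisson process, whose arrival rate is a single number. A minimal sketch, with purely illustrative parameter values not taken from the paper:

```python
import random

def simulate_hpp(rate, horizon, rng):
    """Simulate a homogeneous Poisson process on (0, horizon] via
    exponential inter-arrival times; returns the event times."""
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return events
        events.append(t)

rng = random.Random(42)
rate = 1e-4       # illustrative constant "CDF" (events per reactor-year)
horizon = 1e6     # simulated reactor-years
events = simulate_hpp(rate, horizon, rng)
estimated_rate = len(events) / horizon   # recovers the constant arrival rate
```

Under this model the arrival rate is constant by construction; aging, maintenance, or operational-history effects would make the true intensity time-varying, which is exactly the realism the authors argue a constant CDF gives up.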


2021
Author(s):
Saleh Fayyaz
MohammadAmin Fakharian
Ali Ghazizadeh

Stimulus presentation is believed to quench neural response variability as measured by the Fano factor (FF). However, the relative contributions of within-trial spike irregularity (nψ) and trial-to-trial rate variability (nRV) to FF reduction have remained elusive. Here, we introduce a principled approach for accurate estimation of the variability components of a doubly stochastic point process which, unlike previous methods, allows for a time-varying nψ (aka φ). Notably, analysis across multiple subcortical and cortical areas showed an across-the-board reduction in rate variability. However, contrary to what was previously thought, spiking irregularity was not constant in time and was even enhanced in some regions, abating the quench in the post-stimulus FF. Simulations confirmed the plausibility of a time-varying nψ arising from within- and between-pool correlations of excitatory and inhibitory neural inputs. By accurately parsing neural variability, our approach constrains the candidate mechanisms that give rise to the observed rate variability and spiking irregularity within brain regions.
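The decomposition the abstract describes can be illustrated with a toy doubly stochastic simulation: holding the rate fixed across trials leaves only spiking irregularity (FF near 1 for Poisson spiking), while drawing the rate anew on each trial adds rate variability that inflates the FF. All parameter values below are illustrative assumptions, not the paper's method:

```python
import math
import random

def poisson_count(mean, rng):
    """One Poisson draw with the given mean (Knuth's multiplication method)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def fano_factor(counts):
    """Fano factor: across-trial variance of spike counts over their mean."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

rng = random.Random(0)
# Fixed rate on every trial: only spiking irregularity remains (FF near 1).
fixed = [poisson_count(20.0, rng) for _ in range(5000)]
# Doubly stochastic: rate drawn per trial, so rate variability inflates FF.
doubly = [poisson_count(rng.uniform(10.0, 30.0), rng) for _ in range(5000)]
```

For the doubly stochastic case, the law of total variance gives FF = 1 + Var(rate)/E[rate], roughly 2.7 for these illustrative numbers.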


Author(s):  
Nicoletta D’Angelo
Mauro Ferrante
Antonino Abbruzzo
Giada Adelfio

This paper analyzes the spatial intensity of the distribution of stop locations of cruise passengers during their visit to a destination through a stochastic point process modelling approach on a linear network. Data collected through the integration of GPS tracking technology and a questionnaire-based survey of cruise passengers visiting the city of Palermo are used to identify the main determinants characterizing their stop-location patterns. The spatial intensity of stop locations is estimated through a Gibbs point process model, accounting for individual-related variables, contextual-level information, and spatial interaction among stop points. The Berman-Turner device for maximum pseudolikelihood is employed, using a quadrature scheme generated on the network. This approach takes into account the linear network determined by the street configuration of the destination under analysis. The results show an influence of both socio-demographic and trip-related characteristics on the stop-location patterns, as well as the relevance of distance from the main attractions and of potential interactions among cruise passengers in the stop configuration. The proposed approach offers improvements from both the methodological perspective, related to the modelling of spatial point processes on a linear network, and the applied perspective, given that better knowledge of the determinants of the spatial intensity of visitors’ stop locations in urban contexts may orient destination management policy.
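The Berman-Turner device replaces the intensity integral in the (pseudo)likelihood with a weighted sum over data and dummy quadrature points, so the model can be maximized like a weighted Poisson regression. A minimal one-dimensional sketch, with a single segment standing in for one network edge and a log-linear intensity as an illustrative assumption:

```python
import math

def berman_turner_loglik(data, dummy, seg_len, log_lam):
    """Berman-Turner quadrature approximation of the log-likelihood of an
    inhomogeneous Poisson process on a segment of length seg_len.  Data
    and dummy points share equal weights; data points carry indicator 1."""
    pts = [(u, 1) for u in data] + [(u, 0) for u in dummy]
    w = seg_len / len(pts)      # equal quadrature weights (simplest scheme)
    ll = 0.0
    for u, z in pts:
        loglam = log_lam(u)
        y = z / w               # 1/w at data points, 0 at dummy points
        ll += w * (y * loglam - math.exp(loglam))
    return ll

# Illustrative log-linear intensity lam(u) = exp(a + b*u) on [0, 1].
a, b = 0.5, 1.0
log_lam = lambda u: a + b * u
data = [0.2, 0.9]                               # hypothetical stop locations
dummy = [(i + 0.5) / 200 for i in range(200)]   # regular dummy grid
approx = berman_turner_loglik(data, dummy, 1.0, log_lam)
# Exact log-likelihood: sum of log lam(x_i) minus the closed-form integral.
exact = sum(log_lam(x) for x in data) - (math.exp(a) / b) * (math.exp(b) - 1.0)
```

Maximizing this weighted sum over the model coefficients is equivalent to fitting a weighted Poisson GLM, which is why the device remains convenient on linear networks.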


2020
Author(s):  
Gang Xie

Abstract The coronavirus disease 2019 (COVID-19) has now spread throughout most countries in the world, causing heavy loss of life and damaging socio-economic impacts. Following a stochastic point process modelling approach, a Monte Carlo simulation model was developed to represent the COVID-19 spread dynamics. First, the simulation study examined various expected properties of the simulation model's performance based on a number of arbitrarily defined scenarios. Then, simulation studies were performed to analyse the real COVID-19 data reported for Australia and the United Kingdom (UK). Given that the initial number of active cases before 1 March was around 10 for both countries, the model estimated that the number of active COVID-19 cases would peak around 30 March in Australia (≈ 1630 cases) and around 11 April in the UK (≈ 24600 cases); ultimately, the total confirmed cases could sum to 6610 for Australia in about 70 days and 136000 for the UK in about 90 days. The analysis results also confirmed the reproduction number ranges reported in the literature. This simulation model is considered an effective and adaptable decision-making/what-if analysis tool for battling COVID-19 in the immediate term, and for battling other infectious diseases in the future.
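The paper's model is not reproduced here, but the point-process flavour of such a simulation can be sketched as a toy branching epidemic in which each active case generates infections as a thinned daily point process. Parameter values are illustrative assumptions, and susceptible depletion is ignored, so this toy grows without a peak:

```python
import random

def simulate_outbreak(r0, recovery_days, horizon, n0, rng):
    """Toy branching epidemic: each day every active case sparks a new
    infection with probability r0 / recovery_days and recovers with
    probability 1 / recovery_days.  Returns daily active-case counts."""
    active, history = n0, []
    for _ in range(horizon):
        infections = sum(1 for _ in range(active)
                         if rng.random() < r0 / recovery_days)
        recoveries = sum(1 for _ in range(active)
                         if rng.random() < 1.0 / recovery_days)
        active = max(active + infections - recoveries, 0)
        history.append(active)
    return history

rng = random.Random(1)
growing = simulate_outbreak(r0=2.5, recovery_days=10, horizon=60, n0=10, rng=rng)
fading = simulate_outbreak(r0=0.5, recovery_days=10, horizon=60, n0=1000, rng=rng)
```

With a reproduction number above 1 the active-case count grows; below 1 it decays, which is the qualitative behaviour any calibrated version of such a model must reproduce.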


2020
Vol 34 (01)
pp. 173-180
Author(s):
Zhen Pan
Zhenya Huang
Defu Lian
Enhong Chen

Many events occur in the real world and in social networks. Events are related to the past, and there are patterns in the evolution of event sequences. Understanding these patterns can help us better predict the type and arrival time of the next event. In the literature, both feature-based approaches and generative approaches are used to model event sequences. Feature-based approaches extract a variety of features and train a regression or classification model to make a prediction; yet their performance depends on experience-based feature extraction. Generative approaches usually assume that the evolution of events follows a stochastic point process (e.g., a Poisson process or one of its more complex variants). However, the true distribution of events is never known, and performance in practice depends on the design of the stochastic process. To address these challenges, in this paper we present a novel probabilistic generative model for event sequences, termed the Variational Event Point Process (VEPP). Our model introduces a variational auto-encoder to event sequence modeling, which better exploits latent information and captures the distribution over inter-arrival times and types of event sequences. Experiments on real-world datasets demonstrate the effectiveness of our proposed model.
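As a contrast to learned models such as VEPP, the simplest generative baseline the abstract alludes to, a homogeneous Poisson process, can be fitted in closed form from inter-arrival times. A minimal sketch with simulated data (the rate value is illustrative):

```python
import random

def mle_rate(inter_arrivals):
    """Closed-form MLE of a homogeneous Poisson process rate from
    observed inter-arrival times: n / total elapsed time."""
    return len(inter_arrivals) / sum(inter_arrivals)

rng = random.Random(7)
true_rate = 2.0
gaps = [rng.expovariate(true_rate) for _ in range(10000)]
rate_hat = mle_rate(gaps)   # close to true_rate for large samples
```

The abstract's point is that real sequences rarely match such a hand-picked process, which motivates learning the distribution over inter-arrival times and event types instead.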


2019
Vol 1 (12)
Author(s):
Cássia Vanessa Albuquerque de Melo
Paulo César Correia Gomes
Catarina Nogueira de Araújo Fernandes
William Wagner Matos Lira

2019
Vol 124 (11)
pp. 2475-2490
Author(s):
A. Sochan
Z. A. Łagodowski
E. Nieznaj
M. Beczek
M. Ryzak
...  

2013
Vol 17 (12)
pp. 5167-5183
Author(s):
M. T. Pham
W. J. Vanhaute
S. Vandenberghe
B. De Baets
N. E. C. Verhoest

Abstract. Of all natural disasters, the economic and environmental consequences of droughts are among the highest because of their longevity and widespread spatial extent. Because of their extreme behaviour, studying droughts generally requires long time series of historical climate data. Rainfall is a very important variable for calculating drought statistics, for quantifying historical droughts, and for assessing the impact on other hydrological (e.g. water stage in rivers) or agricultural (e.g. irrigation requirements) variables. Unfortunately, time series of historical observations are often too short for such assessments. To circumvent this, one may rely on synthetic rainfall time series from stochastic point process rainfall models, such as Bartlett–Lewis models. The present study investigates whether drought statistics are preserved when simulating rainfall with Bartlett–Lewis models. To this end, a 105 yr 10 min rainfall time series obtained at Uccle, Belgium, is used as a test case. First, drought events were identified on the basis of the Effective Drought Index (EDI), and each event was characterized by two variables, i.e. drought duration (D) and drought severity (S). As both variables are interdependent, a multivariate distribution function making use of a copula was fitted. Based on the copula, four types of drought return periods are calculated for observed as well as simulated droughts and are used to evaluate the ability of the rainfall models to simulate drought events with the appropriate characteristics. Overall, all Bartlett–Lewis model types studied fail to preserve extreme drought statistics, which is attributed to the model structure and to the model stationarity caused by maintaining the same parameter set during the whole simulation period.
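For readers unfamiliar with the model family, a minimal sketch of the original Bartlett-Lewis rectangular-pulse mechanism follows. Exponential storm windows, cell lifetimes, and intensities are one common variant, and the parameter values are illustrative, not those calibrated for Uccle:

```python
import random

def bartlett_lewis(horizon, storm_rate, cell_rate, storm_dur,
                   cell_dur, cell_intensity, rng):
    """Bartlett-Lewis rectangular pulses: storms arrive as a Poisson
    process; each storm opens an exponential activity window in which
    rain cells arrive as a secondary Poisson process (one cell at the
    storm origin); each cell is a rectangular pulse with exponential
    duration and intensity.  Returns (start, end, intensity) pulses."""
    pulses, t = [], 0.0
    while True:
        t += rng.expovariate(storm_rate)
        if t > horizon:
            return pulses
        window = rng.expovariate(1.0 / storm_dur)
        cell_starts, s = [t], 0.0
        while True:
            s += rng.expovariate(cell_rate)
            if s > window:
                break
            cell_starts.append(t + s)
        for cs in cell_starts:
            dur = rng.expovariate(1.0 / cell_dur)
            depth = rng.expovariate(1.0 / cell_intensity)
            pulses.append((cs, cs + dur, depth))

def rainfall_at(pulses, time):
    """Total intensity at a time point: sum of all overlapping pulses."""
    return sum(i for (a, b, i) in pulses if a <= time < b)

rng = random.Random(3)
pulses = bartlett_lewis(horizon=1000.0, storm_rate=0.1, cell_rate=0.5,
                        storm_dur=5.0, cell_dur=2.0, cell_intensity=1.0,
                        rng=rng)
```

The stationarity the abstract blames for the failure shows up here as one fixed parameter set over the whole horizon; a seasonally varying parameterization would relax it.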

