Cox Process
Recently Published Documents


TOTAL DOCUMENTS: 66 (FIVE YEARS: 20)

H-INDEX: 9 (FIVE YEARS: 1)

PLoS ONE ◽ 2021 ◽ Vol 16 (12) ◽ pp. e0260051
Author(s): Glenna Nightingale, Megan Laxton, Janine B. Illian

Objectives: To model the risk of COVID-19 mortality in British care homes conditional on the community-level risk. Methods: A two-stage ("doubly latent") modeling process combining a Besag-York-Mollié (BYM) model and a log Gaussian Cox process. The BYM model is used to estimate the community-level risks, which are then incorporated into the log Gaussian Cox process to estimate their impact on the risk in care homes. Results: An increase in risk at the community level is associated with the number of COVID-19 related deaths in the associated care home being multiplied by exp(0.833) ≈ 2.3. This result is based on a simulated dataset. In the context of COVID-19 related deaths, the study illustrates how the risk to care homes can be estimated in the presence of background community risk; this approach is useful for identifying the most vulnerable care homes and for predicting risk to new care homes. Conclusions: Modeling the two latent processes is successfully facilitated by the BYM and log Gaussian Cox process models. Community COVID-19 risk affects the risk in the care homes embedded in those communities.
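As a rough illustration of the two-stage idea (not the authors' implementation), the sketch below treats a community-level log risk surface as a fixed covariate in the log intensity of a Cox process for care-home deaths, using the coefficient 0.833 quoted in the abstract; the grid, baseline intensity and the stand-in for the Gaussian field are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 50 x 50 grid over the study region (unit square).
n = 50
cell_area = (1.0 / n) ** 2

# Stage 1 stand-in: community-level log relative risk per cell. In the paper
# this comes from a Besag-York-Mollié (BYM) model; here it is just a smooth
# surface invented for illustration.
xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
community_log_risk = 0.5 * np.sin(2 * np.pi * xx) * np.cos(2 * np.pi * yy)

# Stage 2: log Gaussian Cox process intensity for care-home deaths, with the
# community risk as a covariate. beta = 0.833 is the coefficient quoted in
# the abstract; the baseline and the noise level are made-up values.
beta = 0.833
baseline = np.log(200.0)                             # assumed baseline rate
gaussian_field = 0.3 * rng.standard_normal((n, n))   # crude stand-in for a correlated Gaussian field
log_intensity = baseline + beta * community_log_risk + gaussian_field

# Given the intensity, the Cox process yields Poisson counts per grid cell.
counts = rng.poisson(np.exp(log_intensity) * cell_area)
print("simulated care-home deaths:", counts.sum())
```

With this parameterisation, cells whose community log risk is one unit higher receive exp(0.833) ≈ 2.3 times the expected number of deaths, which is the interpretation of the coefficient reported above.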


2021 ◽ pp. 101518
Author(s): Abdel Karim Ajami, Hussein Ammar, Hassan Artail

Author(s): Mikko Kuronen, Aila Särkkä, Matti Vihola, Mari Myllymäki

Abstract: We propose a hierarchical log Gaussian Cox process (LGCP) for point patterns, where a set of points x affects another set of points y but not vice versa. We use the model to investigate the effect of large trees on the locations of seedlings. In the model, every point in x has a parametric influence kernel or signal, and these together form an influence field. Conditionally on the parameters, the influence field acts as a spatial covariate in the intensity of the model, and the intensity itself is a non-linear function of the parameters. Points outside the observation window may affect the influence field inside the window; we propose an edge correction to account for this missing data. The parameters of the model are estimated in a Bayesian framework using Markov chain Monte Carlo, with a Laplace approximation for the Gaussian field of the LGCP model. The proposed model is used to analyze the effect of large trees on the success of regeneration in uneven-aged forest stands in Finland.
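A minimal sketch of the influence-field idea follows, under assumed choices that are not taken from the paper (a Gaussian kernel with a fixed scale and a negative sign for suppression): kernels centred at the large-tree locations are summed into a field, which then enters the log intensity of the seedling pattern as a covariate. The correlated Gaussian field of the LGCP and the edge correction are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parent points x (large trees) on the unit square.
x_parents = rng.uniform(0, 1, size=(20, 2))

def influence_field(grid, parents, scale=0.05, strength=-1.0):
    """Sum of parametric kernels ("signals") centred at the parent points.

    A Gaussian kernel with fixed parameters is used purely for illustration;
    in the paper the kernel and its parameters are fitted within the model.
    """
    d2 = ((grid[:, None, :] - parents[None, :, :]) ** 2).sum(axis=2)
    return strength * np.exp(-d2 / (2 * scale ** 2)).sum(axis=1)

# Evaluate the field on a grid and use it as a covariate in the log intensity.
n = 100
g = np.linspace(0, 1, n)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
field = influence_field(grid, x_parents)

# Conditional intensity of the offspring pattern y (seedlings):
# log lambda(s) = beta0 + field(s) (+ Gaussian field, omitted here).
beta0 = np.log(500.0)                        # assumed baseline seedling intensity
lam = np.exp(beta0 + field)
counts = rng.poisson(lam.reshape(n, n) / n**2)   # cell area = 1/n^2
print("simulated seedlings:", counts.sum())
```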


2021 ◽ pp. 100509
Author(s): Patrick E. Brown, Jamie Stafford

2020 ◽ Vol 9 (12) ◽ pp. 2121-2125
Author(s): Vishnu Vardhan Chetlur, Harpreet S. Dhillon

2020 ◽ Vol 181 (6) ◽ pp. 2109-2130
Author(s): Vishnu Vardhan Chetlur, Harpreet S. Dhillon, Carl P. Dettmann

2020 ◽ Vol 0 (0)
Author(s): Devan G. Becker, Douglas G. Woolford, Charmaine B. Dean

Abstract: Spatial point processes have been successfully used to model the relative efficiency of shot locations for each player in professional basketball games. Those analyses were possible because each player makes enough baskets to reliably fit a point process model. Goals in hockey are rare enough that a point process cannot be fit to each player’s goal locations, so novel techniques are needed to obtain measures of shot efficiency for each player. A Log-Gaussian Cox Process (LGCP) is used to model all shot locations, including goals, of each NHL player who took at least 500 shots during the 2011–2018 seasons. Each player’s LGCP surface is treated as an image, and these images are then used in an unsupervised statistical learning algorithm that decomposes them into a linear combination of spatial basis functions. The coefficients of these basis functions are shown to be a very useful tool for comparing players. To incorporate goals, the locations of all shots that resulted in a goal are treated as a “perfect player” and used in the same algorithm (goals are further split into perfect forwards, perfect centres and perfect defence). These perfect players are compared to other players as a measure of shot efficiency. This analysis provides a map of common shooting locations, identifies regions with the most goals relative to the number of shots, and demonstrates how each player’s shot locations differ from scoring locations.
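The abstract does not name the unsupervised decomposition; non-negative matrix factorization (NMF) is one common way to express non-negative intensity surfaces as a linear combination of spatial basis functions, and the sketch below uses it on randomly generated stand-in surfaces (the real inputs would be each player's fitted LGCP intensity image).

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)

# Stand-in data: one intensity surface per player, flattened to a vector.
# In the paper these come from LGCP fits to each player's shot locations;
# here they are random non-negative images purely to show the decomposition.
n_players, height, width = 30, 20, 40
surfaces = rng.gamma(shape=2.0, scale=1.0, size=(n_players, height * width))

# Decompose the surfaces into a small number of spatial basis functions.
# NMF is an assumption; the abstract only says an unsupervised algorithm
# producing a linear combination of basis functions was used.
model = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
coefficients = model.fit_transform(surfaces)          # (players, components)
basis_functions = model.components_.reshape(5, height, width)

# Players (and the "perfect player" built from goal locations) can then be
# compared through their coefficient vectors.
print(coefficients.shape, basis_functions.shape)
```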


2020 ◽ pp. 1-22
Author(s): Jiwook Jang, Rosy Oh

Abstract: The Poisson process is an essential building block for more complicated counting processes, such as the Cox (“doubly stochastic Poisson”) process, the Hawkes (“self-exciting”) process, the exponentially decaying shot-noise Poisson (simply “shot-noise Poisson”) process and the dynamic contagion process. The Cox process provides flexibility by letting the intensity depend not only on time but also be a stochastic process. The Hawkes process has the self-exciting property and clustering effects. The shot-noise Poisson process extends the Poisson process by capturing the frequency, magnitude and duration of the effect of points. The dynamic contagion process is a point process whose intensity generalises the Hawkes process and the Cox process with exponentially decaying shot-noise intensity. To facilitate the use of these processes in practice, we revisit the distributional properties of the Poisson, Cox, Hawkes, shot-noise Poisson and dynamic contagion processes and their compound processes. We provide simulation algorithms for these processes, which are useful for statistical analysis, business applications and further research. As an application of the compound processes, numerical comparisons of value-at-risk and tail conditional expectation are made.
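As a hedged example of the kind of simulation algorithm the abstract refers to (not necessarily the authors' algorithm), the sketch below simulates a Cox process whose intensity is an exponentially decaying shot-noise process, thinning a homogeneous Poisson process against a global bound; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Cox process with exponentially decaying shot-noise intensity (illustrative).
T = 10.0          # observation window [0, T]
lambda0 = 1.0     # baseline intensity
rho = 0.5         # arrival rate of shots (jumps in the intensity)
delta = 2.0       # exponential decay rate of each shot
mean_mark = 3.0   # mean jump size

# Step 1: simulate the shot arrivals and their (exponential) marks.
n_shots = rng.poisson(rho * T)
shot_times = np.sort(rng.uniform(0, T, n_shots))
marks = rng.exponential(mean_mark, n_shots)

def intensity(t):
    """Shot-noise intensity lambda(t) = lambda0 + sum of decayed past shots."""
    past = shot_times <= t
    return lambda0 + np.sum(marks[past] * np.exp(-delta * (t - shot_times[past])))

# Step 2: simulate the Cox process by thinning a homogeneous Poisson process
# with rate lambda_max >= lambda(t) for all t (each shot contributes at most
# its mark, so lambda0 + sum(marks) is a valid upper bound).
lambda_max = lambda0 + marks.sum()
candidates = np.sort(rng.uniform(0, T, rng.poisson(lambda_max * T)))
points = np.array([t for t in candidates if rng.uniform() < intensity(t) / lambda_max])

print(f"{len(points)} points simulated; {n_shots} shots drove the intensity")
```

A Hawkes process can be simulated with the same thinning scheme by letting the accepted points themselves add decaying jumps to the intensity, which is the self-exciting mechanism contrasted with the Cox process above.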

