Gaussian Markov Random Fields
Recently Published Documents


TOTAL DOCUMENTS: 139 (FIVE YEARS: 31)

H-INDEX: 20 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Lindsay Morris

<p>To assess marine stock levels, an accurate estimate of the current year's population abundance must be formulated. Standardized catch per unit of effort (CPUE) values are, in theory, proportional to population abundance. However, this holds only if the species' catchability is constant over time; in almost all cases it is not, due to spatial and temporal variation. In this thesis, we fit various models to test different combinations and structures of spatial and temporal autocorrelation within hoki (Macruronus novaezelandiae) CPUE. A Bayesian approach was taken, and the spatial and temporal components were modelled using Gaussian Markov random fields. The data were collected from summer research trawl surveys carried out by the National Institute of Water and Atmospheric Research (NIWA) and the Ministry for Primary Industries (MPI), allowing us to model spatial distribution using both areal and point-reference approaches. To fit the models, we used the software Stan (Gelman et al., 2015), which implements Hamiltonian Monte Carlo. Model comparison was carried out using the Watanabe-Akaike information criterion (WAIC; Watanabe, 2010). We found that trawl year was the most important factor for explaining variation in research survey hoki CPUE. Furthermore, the areal approach provided better indices of abundance than the point-reference approach.</p>
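A Gaussian Markov random field encodes temporal (or spatial) autocorrelation through a sparse precision matrix whose nonzero entries link only neighbouring sites. As a minimal, illustrative sketch of the idea (not the thesis's actual Stan models; the function names, `tau`, and the jitter value are assumptions of this example), a first-order random-walk prior over trawl years can be built and sampled like this:

```python
import numpy as np

def rw1_precision(n, tau=1.0):
    """Precision matrix of a first-order random-walk GMRF on n time points.

    Only neighbouring time points share nonzero entries: the Markov
    property that keeps GMRF computations sparse.
    """
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i] += tau
        Q[i + 1, i + 1] += tau
        Q[i, i + 1] -= tau
        Q[i + 1, i] -= tau
    return Q

def sample_gmrf(Q, jitter=1e-6, seed=None):
    """Draw one sample from N(0, Q^{-1}) via a Cholesky factor of Q.

    The RW1 precision is intrinsic (singular), so a small jitter is
    added to the diagonal to make the factorization well defined.
    """
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Q + jitter * np.eye(Q.shape[0]))
    # Solving L^T x = z gives Cov(x) = (L L^T)^{-1}, i.e. the (jittered) Q^{-1}
    return np.linalg.solve(L.T, rng.standard_normal(Q.shape[0]))

# One draw of a smooth "year effect" over ten trawl years
year_effect = sample_gmrf(rw1_precision(10, tau=2.0), seed=0)
```

The rows of a random-walk precision matrix sum to zero, which is why such intrinsic GMRFs are used as priors on smooth effects rather than as proper sampling distributions on their own.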



2021 ◽  
pp. 118693
Author(s):  
Ian Hough ◽  
Ron Sarafian ◽  
Alexandra Shtein ◽  
Bin Zhou ◽  
Johanna Lepeule ◽  
...  

2021 ◽  
pp. 1-66
Author(s):  
Adam Vaccaro ◽  
Julien Emile-Geay ◽  
Dominique Guillot ◽  
Resherle Verna ◽  
Colin Morice ◽  
...  

Surface temperature is a vital metric of Earth's climate state, but it is incompletely observed in both space and time: over half of the monthly values are missing from the widely used HadCRUT4.6 global surface temperature dataset. Here we apply GraphEM, a recently developed imputation method, to construct a spatially complete estimate of HadCRUT4.6 temperatures. GraphEM leverages Gaussian Markov random fields (also known as Gaussian graphical models) to better estimate covariance relationships within a climate field, detecting anisotropic features such as land/ocean contrasts, orography, ocean currents, and wave-propagation pathways. This detection leads to improved estimates of missing values compared to methods (such as kriging) that assume isotropic covariance relationships, as we show with real and synthetic data.

This interpolated analysis of HadCRUT4.6 data is available as a 100-member ensemble, propagating information about sampling variability available from the original HadCRUT4.6 dataset. A comparison of NINO3.4 and global mean monthly temperature series with published datasets reveals similarities and differences due in part to the spatial interpolation method. Notably, the GraphEM-completed HadCRUT4.6 global temperature displays a stronger early twenty-first-century warming trend than its uninterpolated counterpart, consistent with recent analyses using other datasets. Known events like the 1877/1878 El Niño are recovered with greater fidelity than with kriging, and result in different assessments of changes in ENSO variability through time. Gaussian Markov random fields provide a more geophysically motivated way to impute missing values in climate fields, and the associated graph provides a powerful tool for analyzing the structure of teleconnection patterns. We close with a discussion of wider applications of Markov random fields in climate science.
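The GMRF fact underlying graphical-model imputation is that the conditional distribution of missing entries given observed ones is available directly from blocks of the precision matrix. The following is a hedged sketch of that single conditional-mean step (not the full GraphEM algorithm, which also iteratively re-estimates the precision matrix under a graphical penalty; the function name and arguments are illustrative):

```python
import numpy as np

def impute_conditional_mean(x, Q, missing):
    """Fill missing entries of a field vector x with their conditional
    mean under a zero-mean Gaussian with precision matrix Q.

    For a GMRF, E[x_m | x_o] = -Q_mm^{-1} Q_mo x_o: only blocks of the
    (sparse) precision matrix are needed, never the full covariance.
    """
    m = np.asarray(missing, dtype=bool)
    o = ~m
    Q_mm = Q[np.ix_(m, m)]   # precision block of the missing entries
    Q_mo = Q[np.ix_(m, o)]   # cross block linking missing to observed
    filled = x.astype(float).copy()
    filled[m] = -np.linalg.solve(Q_mm, Q_mo @ x[o])
    return filled
```

On a chain-graph precision, for example, this fills each missing point with (approximately) the average of its observed neighbours, which is the behaviour one would expect of a nearest-neighbour Markov model.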


2021 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
Marie Turčičová ◽  
Jan Mandel ◽  
Kryštof Eben

<p style='text-indent:20px;'>We present an ensemble filtering method based on a linear model for the precision matrix (the inverse of the covariance), with the parameters determined by Score Matching Estimation. The method provides a rigorous covariance regularization when the underlying random field is a Gaussian Markov random field. The parameters are found by solving a system of linear equations. The analysis step uses the inverse formulation of the Kalman update. Several filter versions, differing in the construction of the analysis ensemble, are proposed, as well as a Score Matching version of the Extended Kalman Filter.</p>
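For a zero-mean Gaussian with precision Q(θ) = Σ_k θ_k B_k, the score-matching objective is quadratic in θ, which is why the parameters reduce to a linear system, as the abstract states. A small sketch under that assumption (the basis matrices and function name are illustrative, not the paper's code):

```python
import numpy as np

def score_matching_precision(X, basis):
    """Fit Q = sum_k theta_k B_k to samples X (n x d) by score matching.

    For a zero-mean Gaussian, the score-matching objective is
        J(theta) = 0.5 * E||Q x||^2 - tr(Q),
    quadratic in theta, so the minimizer solves A theta = b with
        A[k, l] = tr(B_k S B_l),  b[k] = tr(B_k),
    where S is the sample second-moment matrix.
    """
    S = X.T @ X / X.shape[0]
    K = len(basis)
    A = np.empty((K, K))
    b = np.empty(K)
    for k, Bk in enumerate(basis):
        b[k] = np.trace(Bk)
        for l, Bl in enumerate(basis):
            A[k, l] = np.trace(Bk @ S @ Bl)
    theta = np.linalg.solve(A, b)  # the "system of linear equations"
    return sum(t * B for t, B in zip(theta, basis))
```

With a basis of the identity plus a first-off-diagonal matrix, this recovers a tridiagonal (chain-graph GMRF) precision from simulated data, illustrating how a linear model for Q doubles as a structural regularizer.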


Author(s):  
Mark Semelhago ◽  
Barry L. Nelson ◽  
Eunhye Song ◽  
Andreas Wächter

Inference-based optimization via simulation, which substitutes Gaussian process (GP) learning for the structural properties exploited in mathematical programming, is a powerful paradigm that has been shown to be remarkably effective in problems of modest feasible-region size and decision-variable dimension. The limitation to "modest" problems is a result of the computational overhead and numerical challenges encountered in computing the GP conditional (posterior) distribution on each iteration. In this paper, we substantially expand the size of discrete-decision-variable optimization-via-simulation problems that can be attacked in this way by exploiting a particular GP (discrete Gaussian Markov random fields) and carefully tailored computational methods. The result is the rapid Gaussian Markov Improvement Algorithm (rGMIA), an algorithm that delivers both a global convergence guarantee and finite-sample optimality-gap inference for significantly larger problems. Between infrequent evaluations of the global conditional distribution, rGMIA applies the full power of GP learning to rapidly search smaller sets of promising feasible solutions that need not be spatially close. We carefully document the computational savings via complexity analysis and an extensive empirical study. Summary of Contribution: The broad topic of the paper is optimization via simulation, which means optimizing some performance measure of a system that may only be estimated by executing a stochastic, discrete-event simulation. Stochastic simulation is a core topic and method of operations research. The focus of this paper is on significantly speeding up the computations underlying an existing method based on Gaussian process learning, where the underlying Gaussian process is a discrete Gaussian Markov random field. This speed-up is accomplished by employing smart computational linear algebra, state-of-the-art algorithms, and a careful divide-and-conquer evaluation strategy. As illustrations, we solve problems of significantly greater size than any other existing algorithm with similar guarantees can handle.
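The GMRF property that makes such conditioning cheap is that the conditional precision of the unsampled nodes is simply a subblock of Q, so no inversion of the full matrix is ever required. A minimal sketch of this conditioning step (illustrative names only; not the rGMIA implementation, which additionally exploits sparse factorizations and a global/local partition):

```python
import numpy as np

def gmrf_conditional(mu, Q, sampled_idx, sampled_vals):
    """Condition a GMRF N(mu, Q^{-1}) on exactly observed nodes.

    The Markov structure gives the conditional of the unsampled nodes
    directly in precision form:
        precision = Q_uu                          (a subblock of Q)
        mean      = mu_u - Q_uu^{-1} Q_us (x_s - mu_s)
    """
    s = np.zeros(len(mu), dtype=bool)
    s[sampled_idx] = True
    u = ~s
    Q_uu = Q[np.ix_(u, u)]   # conditional precision: read off, not inverted
    Q_us = Q[np.ix_(u, s)]
    cond_mean = mu[u] - np.linalg.solve(Q_uu, Q_us @ (sampled_vals - mu[s]))
    return cond_mean, Q_uu
```

On a three-node chain with the endpoints observed, the conditional mean of the middle node is the average of its neighbours, which matches the intuition that a discrete GMRF smooths estimates toward nearby sampled solutions.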

