On a Class of Tensor Markov Fields

Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 451
Author(s):  
Enrique Hernández-Lemus

Here, we introduce a class of Tensor Markov Fields, intended as probabilistic graphical models for random variables defined over multiplexed contexts. These fields extend Markov random fields to tensor-valued random variables. By extending the results of Dobrushin, Hammersley and Clifford to such tensor-valued fields, we prove that Tensor Markov Fields are indeed Gibbs fields whenever strictly positive probability measures are considered. Hence, there is a direct relationship with many results from theoretical statistical mechanics. We show how this class of Markov fields can be built from statistical dependency structures inferred on information-theoretic grounds from empirical data. Thus, aside from their purely theoretical interest, the Tensor Markov Fields described here may be useful for mathematical modeling and data analysis due to their intrinsic simplicity and generality.
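For orientation, the classical (scalar) Hammersley–Clifford correspondence that the paper generalizes can be sketched as follows; the notation below is standard but not taken from the abstract:

```latex
% Gibbs representation guaranteed by Hammersley--Clifford for a strictly
% positive Markov random field X on a graph G with clique set \mathcal{C}:
\[
  P(X = x) \;=\; \frac{1}{Z}\,
  \exp\!\Big(-\sum_{C \in \mathcal{C}} V_C(x_C)\Big),
  \qquad
  Z \;=\; \sum_{x'} \exp\!\Big(-\sum_{C \in \mathcal{C}} V_C(x'_C)\Big),
\]
% where each potential V_C depends only on the restriction x_C of the
% configuration to the clique C. The paper's contribution is the analogous
% statement when the variables are tensor-valued.
```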

1998 ◽  
Vol 35 (3) ◽ 
pp. 608-621
Author(s):  
Francis Comets ◽  
Martin Janžura

We prove a central limit theorem for conditionally centred random fields, under a moment condition and strict positivity of the empirical variance per observation. We use a random normalization, which fits non-stationary situations. The theorem applies directly to Markov random fields, including the cases of phase transition and lack of stationarity. One consequence is the asymptotic normality of the maximum pseudo-likelihood estimator for Markov fields in complete generality.
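Schematically, a central limit theorem with random normalization of this kind has the following shape; the notation here is assumed for illustration, not drawn from the abstract:

```latex
% Conditionally centred sum, normalized by the empirical variance V_n:
\[
  \frac{\displaystyle\sum_{i \in \Lambda_n}
        \big( X_i - \mathbb{E}[\,X_i \mid X_j,\ j \neq i\,] \big)}
       {\sqrt{V_n}}
  \;\xrightarrow{\ d\ }\; \mathcal{N}(0,1),
  \qquad
  V_n = \sum_{i \in \Lambda_n} \operatorname{Var}(X_i \mid X_j,\ j \neq i),
\]
% where \Lambda_n is the observation window. Because the normalizer V_n is
% itself random, the statement does not require stationarity of the field.
```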


2021 ◽  
pp. 1-66
Author(s):  
Adam Vaccaro ◽  
Julien Emile-Geay ◽  
Dominique Guillot ◽  
Resherle Verna ◽  
Colin Morice ◽  
...  

Surface temperature is a vital metric of Earth’s climate state, but is incompletely observed in both space and time: over half of the monthly values are missing from the widely used HadCRUT4.6 global surface temperature dataset. Here we apply GraphEM, a recently developed imputation method, to construct a spatially complete estimate of HadCRUT4.6 temperatures. GraphEM leverages Gaussian Markov random fields (a.k.a. Gaussian graphical models) to better estimate covariance relationships within a climate field, detecting anisotropic features such as land/ocean contrasts, orography, ocean currents and wave-propagation pathways. This detection leads to improved estimates of missing values compared to methods (such as kriging) that assume isotropic covariance relationships, as we show with real and synthetic data. This interpolated analysis of HadCRUT4.6 data is available as a 100-member ensemble, propagating information about sampling variability available from the original HadCRUT4.6 dataset. A comparison of NINO3.4 and global mean monthly temperature series with published datasets reveals similarities and differences due in part to the spatial interpolation method. Notably, the GraphEM-completed HadCRUT4.6 global temperature displays a stronger early twenty-first-century warming trend than its uninterpolated counterpart, consistent with recent analyses using other datasets. Known events like the 1877/1878 El Niño are recovered with greater fidelity than with kriging, and result in different assessments of changes in ENSO variability through time. Gaussian Markov random fields provide a more geophysically motivated way to impute missing values in climate fields, and the associated graph provides a powerful tool to analyze the structure of teleconnection patterns. We close with a discussion of wider applications of Markov random fields in climate science.
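A minimal sketch of the idea behind this style of imputation, not the published GraphEM implementation: alternate between filling missing entries with their conditional means under the current Gaussian model and re-estimating a sparse precision matrix. Here scikit-learn's GraphicalLasso stands in for the paper's graph-guided covariance estimate, and a full EM step would also accumulate conditional covariances, which this sketch omits for brevity.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso  # l1-penalized sparse precision estimate

def graph_em(X, alpha=0.1, n_iter=20):
    """X: (time, space) array with NaNs at missing entries (hypothetical interface)."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # initial fill with column means
    for _ in range(n_iter):
        gl = GraphicalLasso(alpha=alpha).fit(X)       # M-step: sparse precision Theta
        Theta = gl.precision_
        mu = X.mean(axis=0)
        for t in np.where(miss.any(axis=1))[0]:       # E-step: conditional-mean imputation
            m = miss[t]
            # For a Gaussian: E[x_m | x_o] = mu_m - Theta_mm^{-1} Theta_mo (x_o - mu_o)
            rhs = Theta[np.ix_(m, ~m)] @ (X[t, ~m] - mu[~m])
            X[t, m] = mu[m] - np.linalg.solve(Theta[np.ix_(m, m)], rhs)
    return X, gl
```

The sparse precision matrix is what encodes the anisotropic conditional-dependence graph; kriging-style methods instead impose an isotropic covariance determined by distance alone.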


Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1311
Author(s):  
Sangkyun Lee ◽  
Piotr Sobczyk ◽  
Małgorzata Bogdan

In this paper, we propose a new estimation procedure for discovering the structure of Gaussian Markov random fields (MRFs) with false discovery rate (FDR) control, making use of the sorted ℓ1-norm (SL1) regularization. A Gaussian MRF is an undirected graph representing a multivariate Gaussian distribution, where nodes are random variables and edges represent conditional dependence between the connected nodes. Since it is possible to learn the edge structure of Gaussian MRFs directly from data, they provide an excellent way to understand complex data by revealing the dependence structure among many input features, such as genes, sensors, users or documents. In learning the graphical structure of a Gaussian MRF, the goal is to discover the actual edges of the underlying but unknown probabilistic graphical model; this becomes harder as the number of random variables (features) p grows relative to the number of data points n. In particular, when p ≫ n, it is statistically unavoidable for any estimation procedure to include false edges. There have therefore been many attempts to reduce the false detection of edges, in particular using different types of regularization on the learning parameters. Our method makes use of SL1 regularization, introduced recently for model selection in linear regression. We focus on a key benefit of SL1 regularization: it can be used to control the FDR of detecting important random variables. Adapting SL1 to probabilistic graphical models, we show that it can be used for the structure learning of Gaussian MRFs via our suggested procedure nsSLOPE (neighborhood selection Sorted L-One Penalized Estimation), controlling the FDR of edge detection.
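For reference, the sorted ℓ1 norm that underlies SLOPE-type methods is defined as follows; this is the standard definition from the SLOPE literature, not quoted from the abstract:

```latex
% Sorted \ell_1 (SL1) norm with a non-increasing weight sequence \lambda:
\[
  J_{\lambda}(\beta) \;=\; \sum_{i=1}^{p} \lambda_i \,|\beta|_{(i)},
  \qquad
  \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0,
\]
% where |\beta|_{(1)} \ge \cdots \ge |\beta|_{(p)} are the entries of \beta
% sorted by absolute value. Penalizing larger coefficients more heavily is
% what enables FDR control; with all \lambda_i equal, J_\lambda reduces to
% the ordinary \ell_1 norm.
```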


1979 ◽  
Vol 16 (3) ◽ 
pp. 554-566
Author(s):  
Roy Saunders ◽  
Richard J. Kryscio ◽  
Gerald M. Funk

In this article we give limiting results for arrays {X_ij(m, n) : (i, j) ∈ D_mn} of binary random variables distributed as particular types of Markov random fields over m × n rectangular lattices D_mn. Under some sparseness conditions, which restrict the number of the X_ij(m, n) that are equal to one, we show that the random variables (l = 1, ···, r) converge to independent Poisson random variables for 0 < d_1 < d_2 < ··· < d_r as m → ∞ and n → ∞. The particular types of Markov random fields considered here provide clustering (or repulsion) alternatives to randomness and involve several parameters. The limiting results are used to consider statistical inference for these parameters. Finally, a simulation study is presented which examines the adequacy of the Poisson approximation and the inference techniques when the lattice dimensions are only moderately large.


Author(s):  
Sriram Srinivasan ◽  
Behrouz Babaki ◽  
Golnoosh Farnadi ◽  
Lise Getoor

Statistical relational learning models are powerful tools that combine ideas from first-order logic with probabilistic graphical models to represent complex dependencies. Despite their success in encoding large problems with a compact set of weighted rules, performing inference over these models is often challenging. In this paper, we show how to effectively combine two powerful ideas for scaling inference in large graphical models. The first idea, lifted inference, is a well-studied approach to speeding up inference in graphical models by exploiting symmetries in the underlying problem. The second idea is to frame maximum a posteriori (MAP) inference as a convex optimization problem and use the alternating direction method of multipliers (ADMM) to solve it in parallel. A well-studied relaxation of the combinatorial optimization problem defined for logical Markov random fields gives rise to a hinge-loss Markov random field (HL-MRF), for which MAP inference is a convex optimization problem. We show how the formalism introduced for coloring weighted bipartite graphs using a color refinement algorithm can be integrated with the ADMM optimization technique to take advantage of the sparse dependency structures of HL-MRFs. Our proposed approach, lifted hinge-loss Markov random fields (LHL-MRFs), preserves the structure of the original problem after lifting and solves lifted inference as distributed convex optimization with ADMM. In our empirical evaluation on real-world problems, we observe up to a three-fold speed-up in inference over HL-MRFs.
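Schematically, MAP inference in a hinge-loss MRF takes the form of the following convex program; the notation is the standard HL-MRF formulation, assumed here rather than quoted from the paper:

```latex
% MAP inference in a hinge-loss MRF over continuous variables y in [0,1]^n:
\[
  \min_{y \in [0,1]^n} \;\sum_{j=1}^{m} w_j \,
  \big( \max\{\ell_j(y),\, 0\} \big)^{p_j},
  \qquad w_j \ge 0,\; p_j \in \{1, 2\},
\]
% where each \ell_j is a linear function obtained by relaxing a weighted
% logical rule. Convexity of each hinge term is what lets ADMM split the
% objective and solve the pieces in parallel; lifting then merges symmetric
% pieces detected by color refinement.
```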


2013 ◽  
Vol 142 (1) ◽  
pp. 227-242
Author(s):  
Nishant Chandgotia ◽  
Guangyue Han ◽  
Brian Marcus ◽  
Tom Meyerovitch ◽  
Ronnie Pavlov

Author(s):  
You Lu ◽  
Zhiyuan Liu ◽  
Bert Huang

Traditional learning methods for training Markov random fields require running inference over all variables to compute the likelihood gradient, so their iteration complexity scales with the size of the graphical model. In this paper, we propose block belief propagation learning (BBPL), which uses block-coordinate updates of approximate marginals to compute approximate gradients, removing the need to run inference over the entire graphical model. Thus, the iteration complexity of BBPL does not scale with the size of the graph. We prove that, despite these approximations, the method converges to the same solution as that obtained by running full inference per iteration, and we empirically demonstrate its scalability improvements over standard training methods.
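A minimal sketch of the training-loop structure this describes; the graph helpers (uniform_marginal, bp_update_block, expected_stat) are hypothetical placeholders standing in for a belief propagation library, not the authors' API:

```python
import random

def bbpl_train(graph, blocks, theta, data_stats, lr=0.01, n_iters=1000):
    """Block belief propagation learning sketch: theta maps features to weights,
    data_stats holds empirical feature expectations from the training data."""
    beliefs = {v: graph.uniform_marginal(v) for v in graph.variables}
    for _ in range(n_iters):
        b = random.choice(blocks)                           # pick one block of variables
        beliefs = graph.bp_update_block(b, theta, beliefs)  # local BP sweep on that block only
        # Approximate likelihood gradient: data expectations minus belief expectations.
        grad = {f: data_stats[f] - graph.expected_stat(f, beliefs)
                for f in graph.features}
        for f in theta:
            theta[f] += lr * grad[f]                        # ascend the approximate gradient
    return theta, beliefs
```

The point of the block-coordinate structure is that each iteration touches only one block's marginals, so per-iteration cost is governed by the block size rather than the full graph.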

