pseudo likelihood
Recently Published Documents


TOTAL DOCUMENTS: 180 (FIVE YEARS: 38)

H-INDEX: 18 (FIVE YEARS: 2)

2021 ◽  
pp. 65-73
Author(s):  
Xudong Dong ◽  
Xiaofei Zhang ◽  
Jun Zhao ◽  
Meng Sun ◽  
Jianfeng Li

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Aurelien Decelle ◽  
Sungmin Hwang ◽  
Jacopo Rocchi ◽  
Daniele Tantari

Abstract: We propose an efficient algorithm to solve inverse problems in the presence of binary clustered datasets. We consider the paradigmatic Hopfield model in a teacher-student scenario, where this situation arises in the retrieval phase. This problem has been widely analyzed through various methods, such as mean-field approaches or pseudo-likelihood optimization. Our approach is based on estimating the posterior using the Thouless–Anderson–Palmer (TAP) equations in a parallel updating scheme. Unlike other methods, it allows the original patterns of the teacher dataset to be retrieved, and thanks to the parallel update it can be applied to large system sizes. We tackle the same problem using a restricted Boltzmann machine (RBM) and discuss analogies and differences between our algorithm and RBM learning.
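As a hedged illustration of the parallel TAP updating the abstract mentions (not the authors' exact posterior-estimation algorithm for the inverse problem), the sketch below runs a damped, fully parallel TAP fixed-point iteration for the magnetizations of an Ising/Hopfield system with Hebbian couplings. All function names, the damping scheme, and the parameter choices are assumptions.

```python
import numpy as np

def tap_parallel_update(J, h, beta=1.0, m0=None, n_iter=500, damping=0.5, tol=1e-8):
    """Parallel TAP fixed-point iteration:
    m_i = tanh( beta*(h_i + sum_j J_ij m_j) - beta**2 * m_i * sum_j J_ij**2 * (1 - m_j**2) ).
    The second term is the Onsager reaction correction; damping keeps the
    fully parallel update stable."""
    n = J.shape[0]
    m = np.zeros(n) if m0 is None else m0.astype(float).copy()
    for _ in range(n_iter):
        onsager = beta**2 * m * ((J**2) @ (1.0 - m**2))
        m_new = np.tanh(beta * (h + J @ m) - onsager)
        m_next = damping * m + (1.0 - damping) * m_new
        if np.max(np.abs(m_next - m)) < tol:
            return m_next
        m = m_next
    return m

# Toy example: Hebbian couplings built from a few teacher patterns,
# initialized near a noisy copy of the first pattern.
rng = np.random.default_rng(0)
N, P = 200, 3
xi = rng.choice([-1.0, 1.0], size=(P, N))          # teacher patterns
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)
m0 = 0.4 * xi[0] + 0.05 * rng.standard_normal(N)
m = tap_parallel_update(J, h=np.zeros(N), beta=1.5, m0=m0)
print("overlaps with teacher patterns:", np.round(xi @ m / N, 2))
```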


2021 ◽  
Vol 5 (3) ◽  
pp. 129
Author(s):  
Guofei Pang ◽  
Wanrong Cao

Although stochastic fractional partial differential equations have received increasing attention in the last decade, parameter estimation for these equations has seldom been reported in the literature. In this paper, we propose a pseudo-likelihood approach to estimating the parameters of stochastic time-fractional diffusion equations, whose forward solver was investigated very recently by Gunzburger, Li, and Wang (2019). Our approach can accurately recover the fractional order, the diffusion coefficient, and the noise magnitude given discrete observation data corresponding to only one realization of the driving noise. When only partial data are available, our approach can still attain acceptable results at intermediate observation sparsity.
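The general recipe the abstract describes (maximize a pseudo-likelihood of the observations over the fractional order, diffusion coefficient, and noise scale) can be sketched as follows. This is only a schematic, assuming a Gaussian pseudo-likelihood over residuals and a toy single-mode decay standing in for the actual time-fractional diffusion forward solver; it is not the authors' estimator.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def toy_forward(alpha, kappa, t):
    """Toy stand-in for the forward model: a single-mode subdiffusive decay.
    A real application would call a time-fractional diffusion solver here."""
    return np.exp(-kappa * t**alpha)

def neg_log_pl(theta, t, y):
    alpha, kappa, log_sigma = theta
    sigma = np.exp(log_sigma)                    # noise magnitude, kept positive
    resid = y - toy_forward(alpha, kappa, t)
    # Pseudo-likelihood: residuals at the observation times treated as independent Gaussians.
    return -np.sum(norm.logpdf(resid, scale=sigma))

rng = np.random.default_rng(1)
t = np.linspace(0.05, 2.0, 60)
y = toy_forward(0.6, 1.3, t) + 0.02 * rng.standard_normal(t.size)   # one noisy realization

res = minimize(neg_log_pl, x0=np.array([0.8, 1.0, np.log(0.1)]),
               args=(t, y), method="Nelder-Mead")
alpha_hat, kappa_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(alpha_hat, kappa_hat, sigma_hat)
```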


2021 ◽  
Vol 31 (6) ◽  
Author(s):  
Kimmo Suotsalo ◽  
Yingying Xu ◽  
Jukka Corander ◽  
Johan Pensar

Abstract: Learning vector autoregressive models from multivariate time series is conventionally approached through least squares or maximum likelihood estimation. These methods typically assume a fully connected model, which provides no direct insight into the model structure and may lead to highly noisy parameter estimates. Because of these limitations, there has been increasing interest in methods that produce sparse estimates through penalized regression. However, such methods are computationally intensive and may become prohibitively time-consuming as the number of variables in the model increases. In this paper we adopt an approximate Bayesian approach to the learning problem by combining fractional marginal likelihood and pseudo-likelihood. We propose a novel method, PLVAR, that is both faster and produces more accurate estimates than the state-of-the-art methods based on penalized regression. We prove the consistency of the PLVAR estimator and demonstrate the attractive performance of the method on both simulated and real-world data.
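The pseudo-likelihood idea here is that a VAR factorizes into per-variable conditional regressions on lagged values, so structure learning can be done node by node. The sketch below illustrates that decomposition with a node-wise exhaustive parent-set search under a BIC-style Gaussian score; PLVAR itself uses a fractional marginal likelihood score and a more scalable search, so treat every name and scoring choice here as an assumption.

```python
import numpy as np
from itertools import combinations

def node_score(y, X, parents):
    """Gaussian log-likelihood of one node's lagged regression plus a BIC penalty.
    (Illustrative score; PLVAR uses a fractional marginal likelihood instead.)"""
    n = y.size
    if parents:
        Xp = X[:, parents]
        beta, *_ = np.linalg.lstsq(Xp, y, rcond=None)
        resid = y - Xp @ beta
    else:
        resid = y
    sigma2 = np.mean(resid**2) + 1e-12
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return loglik - 0.5 * len(parents) * np.log(n)

def fit_sparse_var1(data, max_parents=3):
    """Node-wise structure search for a VAR(1): for each variable, pick the
    set of lagged predictors with the best penalized score."""
    X, Y = data[:-1], data[1:]               # lagged predictors and responses
    d = data.shape[1]
    parents_of = {}
    for i in range(d):
        best, best_set = -np.inf, ()
        for k in range(max_parents + 1):
            for cand in combinations(range(d), k):
                s = node_score(Y[:, i], X, list(cand))
                if s > best:
                    best, best_set = s, cand
        parents_of[i] = best_set
    return parents_of

# Example on a small simulated sparse VAR(1)
rng = np.random.default_rng(2)
A = np.array([[0.6, 0.0, 0.0], [0.3, 0.5, 0.0], [0.0, 0.0, 0.7]])
data = np.zeros((400, 3))
for t in range(1, 400):
    data[t] = A @ data[t - 1] + 0.1 * rng.standard_normal(3)
print(fit_sparse_var1(data))
```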


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254103
Author(s):  
Daniele de Brito Trindade ◽  
Patrícia Leone Espinheira ◽  
Klaus Leite Pinto Vasconcellos ◽  
Jalmar Manuel Farfán Carrasco ◽  
Maria do Carmo Soares de Lima

We propose in this paper a general class of nonlinear beta regression models with measurement errors. The motivation for proposing this model arose from a real problem that we discuss here. The application concerns a typical oil refinery process in which the main covariate is the concentration of a reagent that is usually measured with error and the response is the percentage of crystallinity of a catalyst involved in the process. Such data have previously been modeled by nonlinear beta and simplex regression models. Here we propose a nonlinear beta model that allows the chemical reagent concentration to be measured with error. The model parameters are estimated by different methods. We perform Monte Carlo simulations to evaluate the performance of point and interval estimators of the model parameters. Both the simulation results and the application favor estimation by maximum pseudo-likelihood approximation.
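For orientation, the sketch below fits a plain nonlinear beta regression (logit link on the mean, precision parameter phi) by maximizing the log-likelihood; it deliberately omits the paper's measurement-error component and its pseudo-likelihood correction. The predictor form, variable names, and data are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist
from scipy.special import expit

def neg_loglik(theta, x, y):
    b0, b1, log_phi = theta
    mu = expit(b0 + b1 * np.log(x))        # illustrative nonlinear predictor with logit link
    phi = np.exp(log_phi)                  # precision, kept positive
    # Beta density parameterized by mean mu and precision phi: shape parameters (mu*phi, (1-mu)*phi)
    return -np.sum(beta_dist.logpdf(y, a=mu * phi, b=(1 - mu) * phi))

rng = np.random.default_rng(3)
x = rng.uniform(0.5, 5.0, size=200)                        # reagent concentration (error-free here)
mu_true = expit(-0.5 + 0.8 * np.log(x))
phi_true = 50.0
y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true) # crystallinity proportion in (0, 1)

res = minimize(neg_loglik, x0=np.array([0.0, 0.5, np.log(10.0)]),
               args=(x, y), method="Nelder-Mead")
print("coefficients:", res.x[:2], "precision:", np.exp(res.x[2]))
```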


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Joseph M. Lukens ◽  
Kody J. H. Law ◽  
Ryan S. Bennink

Abstract: The method of classical shadows proposed by Huang, Kueng, and Preskill heralds remarkable opportunities for quantum estimation with limited measurements. Yet its relationship to established quantum tomographic approaches, particularly those based on likelihood models, remains unclear. In this article, we investigate classical shadows through the lens of Bayesian mean estimation (BME). In direct tests on numerical data, BME is found to attain significantly lower error on average, but classical shadows prove remarkably more accurate in specific situations (such as high-fidelity ground truth states) which are improbable in a fully uniform Hilbert space. We then introduce an observable-oriented pseudo-likelihood that successfully emulates the dimension-independence and state-specific optimality of classical shadows, but within a Bayesian framework that ensures only physical states. Our research reveals how classical shadows effect important departures from conventional thinking in quantum state estimation, as well as the utility of Bayesian methods for uncovering and formalizing statistical assumptions.
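As a minimal point of reference for the classical-shadow protocol the paper compares against, the single-qubit sketch below draws random Pauli-basis measurements and averages the inverse-channel snapshots 3|b⟩⟨b| − I, which gives an unbiased estimate of the state. The BME and observable-oriented pseudo-likelihood components of the paper are not sketched here, and all names are illustrative.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
# Pauli eigenbases: columns are the +1 and -1 eigenvectors of X, Y, Z.
BASES = {
    "X": np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),
    "Y": np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2),
    "Z": np.array([[1, 0], [0, 1]], dtype=complex),
}

def classical_shadow_estimate(rho, n_samples=5000, rng=None):
    """Single-qubit classical shadows with random Pauli measurements.
    Each snapshot is the inverse-channel estimate 3*|psi_b><psi_b| - I."""
    rng = rng or np.random.default_rng()
    est = np.zeros((2, 2), dtype=complex)
    for _ in range(n_samples):
        U = BASES[rng.choice(list(BASES))]                       # random measurement basis
        probs = np.real(np.array([np.conj(U[:, b]) @ rho @ U[:, b] for b in range(2)]))
        b = rng.choice(2, p=probs / probs.sum())                 # simulate the outcome
        psi = U[:, b][:, None]
        est += 3 * (psi @ psi.conj().T) - I2                     # inverse-channel snapshot
    return est / n_samples

# Example: shadow estimate of a slightly mixed state close to |0><0|
rho = np.array([[0.95, 0.1], [0.1, 0.05]], dtype=complex)
est = classical_shadow_estimate(rho, rng=np.random.default_rng(5))
print(np.round(est, 2))
```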


Author(s):  
Harald Hruschka

Abstract: We introduce the conditional restricted Boltzmann machine as a method to analyze brand-level market basket data of individual households. The conditional restricted Boltzmann machine includes marketing variables and household attributes as independent variables. To our knowledge this is the first study comparing the conditional restricted Boltzmann machine to homogeneous and heterogeneous multivariate logit models for brand-level market basket data across several product categories. We explain how to estimate the conditional restricted Boltzmann machine starting from a restricted Boltzmann machine without independent variables. The conditional restricted Boltzmann machine turns out to outperform all the other investigated models in terms of log pseudo-likelihood for holdout data. We interpret the selected conditional restricted Boltzmann machine based on the coefficients linking purchases to hidden variables, the interdependences between brand pairs, and the own and cross effects of the marketing variables. The conditional restricted Boltzmann machine indicates pairwise relationships between brands that are more varied than those of the multivariate logit model. Based on the pairwise interdependences inferred from the restricted Boltzmann machine, we determine the competitive structure of the brands by means of cluster analysis. Using counterfactual simulations, we investigate what three different models (independent logit, heterogeneous multivariate logit, conditional restricted Boltzmann machine) imply with respect to the retailer's revenue if each brand is put on display. Finally, we mention possibilities for further research, such as applying the conditional restricted Boltzmann machine to other areas of marketing or retailing.
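The holdout criterion used above, log pseudo-likelihood, has a closed form for an RBM via free-energy differences: log P(v_i | v_-i) = log sigmoid(F(v with bit i flipped) - F(v)). The sketch below computes it for a plain RBM on binary baskets; the conditioning on marketing variables and household attributes is omitted, and the parameters and data are random placeholders.

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)

def free_energy(V, W, b_vis, c_hid):
    """RBM free energy F(v) = -v.b - sum_j softplus(c_j + v.W_j), row-wise for a data matrix V."""
    return -V @ b_vis - softplus(V @ W + c_hid).sum(axis=-1)

def log_pseudo_likelihood(V, W, b_vis, c_hid):
    """Average over rows of sum_i log P(v_i | v_-i), using
    log P(v_i | v_-i) = log sigmoid(F(v flipped at i) - F(v)) = -softplus(-(F_flip - F))."""
    n, d = V.shape
    F = free_energy(V, W, b_vis, c_hid)
    total = 0.0
    for i in range(d):
        V_flip = V.copy()
        V_flip[:, i] = 1.0 - V_flip[:, i]
        F_flip = free_energy(V_flip, W, b_vis, c_hid)
        total += (-softplus(-(F_flip - F))).sum()
    return total / n

# Example with random parameters and random binary "baskets"
rng = np.random.default_rng(4)
d_vis, d_hid = 12, 5                       # e.g. 12 brands, 5 hidden units
W = 0.1 * rng.standard_normal((d_vis, d_hid))
b_vis = np.zeros(d_vis)
c_hid = np.zeros(d_hid)
V = (rng.random((200, d_vis)) < 0.3).astype(float)
print(log_pseudo_likelihood(V, W, b_vis, c_hid))
```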

