Risk Bounds: Recently Published Documents

TOTAL DOCUMENTS: 80 (five years: 17)
H-INDEX: 13 (five years: 2)

Entropy, 2021, Vol. 23 (3), pp. 313
Author(s): Imon Banerjee, Vinayak A. Rao, Harsha Honnappa

Datasets displaying temporal dependencies abound in science and engineering applications, with Markov models representing a simplified and popular view of the temporal dependence structure. In this paper, we consider Bayesian settings that place prior distributions over the parameters of the transition kernel of a Markov model, and seek to characterize the resulting, typically intractable, posterior distributions. We present a Probably Approximately Correct (PAC)-Bayesian analysis of variational Bayes (VB) approximations to tempered Bayesian posterior distributions, bounding the model risk of the VB approximations. Tempered posteriors are known to be robust to model misspecification, and their variational approximations do not suffer the usual problem of overconfidence. Our results tie the risk bounds to the mixing and ergodic properties of the Markov data-generating model. We illustrate the PAC-Bayes bounds through a number of example Markov models, and also consider the situation where the Markov model is misspecified.
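The tempering idea in this abstract can be sketched concretely for a discrete-state chain: with a Dirichlet prior on each row of the transition kernel, raising the likelihood to a temperature eta < 1 simply scales the observed transition counts, by conjugacy. The state space, prior strength `alpha`, and temperature `eta` below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state Markov chain (illustrative, not from the paper).
P_true = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.6, 0.2],
                   [0.1, 0.3, 0.6]])

# Simulate a chain of length T.
T = 2000
states = [0]
for _ in range(T - 1):
    states.append(rng.choice(3, p=P_true[states[-1]]))

# Transition counts n[i, j].
n = np.zeros((3, 3))
for s, t in zip(states[:-1], states[1:]):
    n[s, t] += 1

# Dirichlet(alpha) prior on each row; tempering the likelihood by
# eta < 1 just scales the counts (Dirichlet-multinomial conjugacy),
# so the tempered posterior stays a product of row-wise Dirichlets.
alpha, eta = 1.0, 0.5
posterior_params = alpha + eta * n

# Posterior-mean estimate of the transition kernel.
P_hat = posterior_params / posterior_params.sum(axis=1, keepdims=True)
```

Because the tempered posterior remains conjugate here, no variational approximation is needed; the VB machinery in the paper matters when the posterior is intractable.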


2020, Vol. 69, pp. 733-764
Author(s): Ata Kaban, Robert J. Durrant

We prove risk bounds for halfspace learning when the data dimensionality is allowed to be larger than the sample size, using a notion of compressibility by random projection. In particular, we give upper bounds for the empirical risk minimizer learned efficiently from randomly projected data, as well as uniform upper bounds in the full high-dimensional space. Our main findings are the following: i) In both settings, the obtained bounds are able to discover and take advantage of benign geometric structure, which turns out to depend on the cosine similarities between the classifier and points of the input space, and provide a new interpretation of margin-distribution-type arguments. ii) Furthermore, our bounds allow us to draw new connections between several existing successful classification algorithms, and we also demonstrate that our theory is predictive of empirically observed performance in numerical simulations and experiments. iii) Taken together, these results suggest that the study of compressive learning can improve our understanding of which benign structural traits, if possessed by the data generator, make it easier to learn an effective classifier from a sample.
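The compressive setting described above can be sketched in a few lines: generate data with dimensionality d larger than the sample size n, project it down with a random Gaussian matrix, and learn a halfspace in the compressed space. The Gaussian projection and the least-squares surrogate for empirical risk minimization below are my assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# High-dimensional data (d > n), labeled by a hypothetical true halfspace.
n, d, k = 100, 500, 20
w_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_star)

# Random Gaussian projection to k dimensions, scaled so that inner
# products (hence cosine similarities) are preserved in expectation.
R = rng.standard_normal((k, d)) / np.sqrt(k)
X_proj = X @ R.T

# Learn a halfspace in the compressed space; least squares on the
# +/-1 labels is a cheap stand-in for the ERM studied in the paper.
w_hat, *_ = np.linalg.lstsq(X_proj, y, rcond=None)
train_err = np.mean(np.sign(X_proj @ w_hat) != y)
```

The point of the paper's bounds is that the achievable error after projection is governed by the cosine similarities between the classifier and the data points, not by the ambient dimension d.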


2020, Vol. 94, pp. 9-24
Author(s): Carole Bernard, Rodrigue Kazzi, Steven Vanduffel

Author(s): Weihua Zhao, Xiaoyu Zhang, Heng Lian

We focus on regression problems in which the predictors are naturally in the form of matrices. Reduced-rank regression and related regularized methods have been adapted to matrix regression. However, linear methods are restrictive in their expressive power. In this work, we consider a class of semiparametric additive models based on series estimation of nonlinear functions, which interestingly induces a problem of third-order tensor regression with transformed predictors. Risk bounds for the estimator are derived, and some simulation results are presented to illustrate the performance of the proposed method.
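The reduction mentioned in this abstract, from an additive model over matrix entries to third-order tensor regression, can be sketched as follows: expanding each entrywise nonlinear function in K basis functions turns an n x p x q matrix predictor into an n x p x q x K array, so the coefficient is a p x q x K tensor. The polynomial basis, dimensions, and plain least-squares fit below are illustrative stand-ins for the paper's regularized series estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: n samples of p x q matrix predictors,
# K basis functions per entry (all values illustrative).
n, p, q, K = 200, 4, 3, 5
X = rng.uniform(-1, 1, size=(n, p, q))

# Series (here: polynomial) basis of each scalar entry,
# phi_k(x) = x**(k+1); the transformed predictor is n x p x q x K.
Phi = np.stack([X ** (k + 1) for k in range(K)], axis=-1)

# A sparse additive "truth" for illustration: the coefficient is a
# third-order (p x q x K) tensor, as the abstract describes.
B_true = np.zeros((p, q, K))
B_true[0, 0, 1] = 2.0   # f_{00}(x) = 2 x^2
B_true[1, 2, 0] = -1.0  # f_{12}(x) = -x
y = np.tensordot(Phi, B_true, axes=3) + 0.1 * rng.standard_normal(n)

# Vectorize and fit by plain least squares (the paper studies
# regularized estimators; this is just a minimal stand-in).
Phi_mat = Phi.reshape(n, -1)
b_hat, *_ = np.linalg.lstsq(Phi_mat, y, rcond=None)
B_hat = b_hat.reshape(p, q, K)
```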


2020, Vol. 48 (1), pp. 205-229
Author(s): Adityanand Guntuboyina, Donovan Lieu, Sabyasachi Chatterjee, Bodhisattva Sen

2020, Vol. 32 (2), pp. 447-484
Author(s): Kishan Wimalawarne, Makoto Yamada, Hiroshi Mamitsuka

Recently, a set of tensor norms known as coupled norms has been proposed as a convex solution to coupled tensor completion. Coupled norms have been designed by combining low-rank-inducing tensor norms with the matrix trace norm. Though coupled norms have shown good performance, they have two major limitations: they provide no mechanism to control the regularization of coupled modes relative to uncoupled modes, and they are not optimal for couplings among higher-order tensors. In this letter, we propose a method that scales the regularization of coupled components against uncoupled components to properly induce low-rankness on the coupled mode. We also propose coupled norms for higher-order tensors by combining the square norm with coupled norms. Using an excess-risk-bound analysis, we demonstrate that our proposed methods lead to lower risk bounds than existing coupled norms. We demonstrate the robustness of our methods through simulation and real-data experiments.
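The scaling idea in this abstract can be sketched as a toy penalty: concatenate the coupled-mode unfolding of a tensor with the matrix it is coupled to, penalize the trace (nuclear) norm of that concatenation scaled by a factor gamma, and penalize the uncoupled unfoldings unscaled. The shapes, the factor gamma, and this particular combination are assumptions for illustration; the letter's precise norm definitions differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coupled pair: a third-order tensor T (I x J x K)
# sharing mode 1 with a matrix M (I x L); sizes are illustrative.
I, J, K, L = 10, 8, 6, 7
Tten = rng.standard_normal((I, J, K))
M = rng.standard_normal((I, L))

def nuclear(A):
    # Trace (nuclear) norm: sum of singular values.
    return np.linalg.svd(A, compute_uv=False).sum()

def coupled_penalty(Tten, M, gamma=2.0):
    # Coupled mode: concatenate the mode-1 unfolding of T with M,
    # and scale its trace-norm penalty by gamma.
    unfold1 = Tten.reshape(Tten.shape[0], -1)
    coupled = nuclear(np.hstack([unfold1, M]))
    # Uncoupled modes: unscaled trace norms of remaining unfoldings.
    unfold2 = np.moveaxis(Tten, 1, 0).reshape(Tten.shape[1], -1)
    unfold3 = np.moveaxis(Tten, 2, 0).reshape(Tten.shape[2], -1)
    return gamma * coupled + nuclear(unfold2) + nuclear(unfold3)
```

Raising gamma pushes the optimizer toward low rank on the shared mode, which is the control the abstract says earlier coupled norms lacked.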

