convex loss
Recently Published Documents


TOTAL DOCUMENTS: 69 (FIVE YEARS 2)
H-INDEX: 7 (FIVE YEARS 0)

Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 178
Author(s):  
Hossein Taheri ◽  
Ramtin Pedarsani ◽  
Christos Thrampoulidis

We study convex empirical risk minimization for high-dimensional inference in binary linear classification, under both discriminative binary linear models and generative Gaussian-mixture models. Our first result sharply predicts the statistical performance of such estimators in the proportional asymptotic regime under isotropic Gaussian features. Importantly, the predictions hold for a wide class of convex loss functions, which we exploit to prove bounds on the best achievable performance. Notably, we show that the proposed bounds are tight for popular binary models (such as signed and logistic) and for the Gaussian-mixture model by constructing appropriate loss functions that achieve them. Our numerical simulations suggest that the theory is accurate even for relatively small problem dimensions and that it enjoys a certain universality property.
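
As an illustration of the setting, the sketch below fits binary linear classifiers by convex empirical risk minimization on synthetic Gaussian-mixture data and compares two losses from the class the predictions cover. It is a minimal sketch, not the authors' code: the problem sizes, the mean vector, and the choice of logistic and square losses are placeholders.

    import numpy as np
    from scipy.optimize import minimize

    # Synthetic Gaussian-mixture data in the proportional regime (n and d comparable).
    rng = np.random.default_rng(0)
    n, d = 500, 100                       # placeholder problem sizes
    mu = rng.normal(size=d) / np.sqrt(d)  # class mean with O(1) norm
    y = rng.choice([-1.0, 1.0], size=n)
    X = y[:, None] * mu + rng.normal(size=(n, d))   # isotropic Gaussian features

    def empirical_risk(w, loss):
        return loss(y * (X @ w)).mean()

    logistic = lambda m: np.logaddexp(0.0, -m)  # numerically stable log(1 + exp(-m))
    square = lambda m: (1.0 - m) ** 2

    for name, loss in [("logistic", logistic), ("square", square)]:
        w_hat = minimize(empirical_risk, np.zeros(d), args=(loss,), method="L-BFGS-B").x
        # Classification error of sign(x . w_hat) on fresh data from the same mixture.
        y_te = rng.choice([-1.0, 1.0], size=5000)
        X_te = y_te[:, None] * mu + rng.normal(size=(5000, d))
        err = float(np.mean(np.sign(X_te @ w_hat) != y_te))
        print(name, "loss: test error =", round(err, 3))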



2020 ◽  
Vol 157 ◽  
pp. 590-601 ◽  
Author(s):  
Alessandro Buccini ◽  
Omar De la Cruz Cabrera ◽  
Marco Donatelli ◽  
Andrea Martinelli ◽  
Lothar Reichel


2020 ◽  
Vol 75 (1) ◽  
pp. 34-45
Author(s):  
Yunnan Xu ◽  
Pang Du ◽  
Ryan Senger ◽  
John Robertson ◽  
James L. Pirkle

A critical step in Raman spectroscopy is baseline correction, which eliminates the background signals generated by residual Rayleigh scattering or fluorescence. Baseline correction procedures relying on asymmetric loss functions have been employed recently. They apply a reduced penalty to positive spectral deviations, which essentially pushes the baseline estimate down so that it does not invade Raman peak areas. However, their coupling with polynomial fitting may not be suitable over the whole spectral domain and can yield inconsistent baselines. The need to specify a threshold and the non-convexity of the corresponding objective function further complicate the computation. Learning from their pros and cons, we have developed a novel baseline correction procedure called iterative smoothing-splines with root error adjustment (ISREA) that has three distinct advantages. First, ISREA estimates the baseline with smoothing splines, which are more flexible than polynomials and capable of capturing complicated trends over the whole spectral domain. Second, ISREA mimics the asymmetric square root loss and removes the need for a threshold. Finally, ISREA avoids direct optimization of a non-convex loss function by iteratively updating prediction errors and refitting baselines. Through extensive numerical experiments on a wide variety of spectra, including simulated spectra, mineral spectra, and dialysate spectra, we show that ISREA is simple, fast, and yields consistent and accurate baselines that preserve all the meaningful Raman peaks.
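
The iterate-and-refit idea can be sketched as follows. This is a schematic illustration, not the published ISREA procedure: the smoothing parameter, the number of iterations, and the exact root-style adjustment of points lying above the current baseline are assumptions.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def isrea_like_baseline(wavenumbers, intensity, n_iter=20, s=None):
        """Iteratively refit a smoothing spline, shrinking fitting targets that lie
        above the current baseline so that peaks do not pull the fit upward."""
        target = intensity.astype(float).copy()
        for _ in range(n_iter):
            base = UnivariateSpline(wavenumbers, target, s=s)(wavenumbers)
            resid = intensity - base
            above = resid > 0
            # Root-style adjustment (assumed form): peak points contribute ~ sqrt(residual).
            target[above] = base[above] + np.sqrt(resid[above])
            target[~above] = intensity[~above]
        return base

    # Toy spectrum: smooth fluorescence-like background plus two narrow peaks.
    x = np.linspace(400, 1800, 1400)
    background = 0.002 * (x - 400) + 5.0 * np.exp(-((x - 1600) / 600) ** 2)
    peaks = 8.0 * np.exp(-((x - 1000) / 8) ** 2) + 5.0 * np.exp(-((x - 1450) / 10) ** 2)
    spectrum = background + peaks + np.random.default_rng(1).normal(0, 0.05, x.size)

    corrected = spectrum - isrea_like_baseline(x, spectrum, s=5.0 * x.size)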



2020 ◽  
Vol 357 (11) ◽  
pp. 7069-7091
Author(s):  
Kuaini Wang ◽  
Huimin Pei ◽  
Jinde Cao ◽  
Ping Zhong


Author(s):  
Raman Sankaran ◽  
Francis Bach ◽  
Chiranjib Bhattacharyya

Subquadratic norms have been studied recently in the context of structured sparsity, where they have been shown to be more beneficial than conventional regularizers in applications such as image denoising, compressed sensing, and banded covariance estimation. While existing works have succeeded in learning structured sparse models such as trees and graphs, the associated optimization procedures have been inefficient because the proximal operators of these norms are hard to evaluate. In this paper, we study the computational aspects of learning with subquadratic norms in a general setup. Our main contributions are two proximal-operator-based algorithms, ADMM-η and CP-η, which apply generically to these learning problems with convex loss functions and achieve a proven convergence rate of O(1/T) after T iterations. These algorithms are derived in a primal-dual framework, which has not previously been examined for subquadratic norms. We illustrate the efficiency of the developed algorithms in the context of tree-structured sparsity, where they comprehensively outperform relevant baselines.
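
For context, the sketch below shows the generic proximal template such algorithms build on, in the simpler case of a norm whose proximal operator has a closed form (a non-overlapping group norm). It does not reproduce ADMM-η or CP-η, and the problem sizes and regularization weight are placeholders.

    import numpy as np

    def prox_group_l2(v, groups, t):
        """Proximal operator of t * sum_g ||v_g||_2 for non-overlapping groups
        (block soft-thresholding)."""
        out = v.copy()
        for g in groups:
            norm = np.linalg.norm(v[g])
            out[g] = 0.0 if norm <= t else (1.0 - t / norm) * v[g]
        return out

    def proximal_gradient(X, y, groups, lam=0.05, n_iter=500):
        """Minimize 0.5 * ||X w - y||^2 / n + lam * sum_g ||w_g||_2."""
        n, d = X.shape
        step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
        w = np.zeros(d)
        for _ in range(n_iter):
            grad = X.T @ (X @ w - y) / n
            w = prox_group_l2(w - step * grad, groups, step * lam)
        return w

    # Toy problem with block-structured support.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 60))
    groups = [np.arange(i, i + 10) for i in range(0, 60, 10)]   # six groups of ten
    w_true = np.zeros(60)
    w_true[:10], w_true[20:30] = 1.0, -0.5
    y = X @ w_true + 0.1 * rng.normal(size=200)

    w_hat = proximal_gradient(X, y, groups)
    print([round(float(np.linalg.norm(w_hat[g])), 2) for g in groups])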



Author(s):  
Xiao Zhang ◽  
Shizhong Liao

Online kernel selection in a continuous kernel space is more complex than in a discrete kernel set. Existing online kernel selection approaches for continuous kernel spaces have per-round computational complexities that grow linearly with the number of rounds, and they lack sublinear regret guarantees because of the continuum of candidate kernels. To address these issues, we propose a novel hypothesis sketching approach to online kernel selection in continuous kernel space, which has constant per-round computational complexity and enjoys a sublinear regret bound. The main idea is to maintain the orthogonality of the basis functions and the prediction accuracy of the hypothesis sketches in a time-varying reproducing kernel Hilbert space. We first present an efficient dependency condition for maintaining the basis functions of the hypothesis sketches under a computational budget. We then update the weights and the optimal kernels by minimizing the instantaneous loss of the hypothesis sketches using online gradient descent with a compensation strategy. We prove that the proposed hypothesis sketching approach enjoys a regret bound of order O(√T) for online kernel selection in continuous kernel space, which is optimal for convex loss functions, where T is the number of rounds, and that it reduces the per-round computational complexity from linear to constant in the number of rounds. Experimental results demonstrate that the proposed approach significantly improves the efficiency of online kernel selection in continuous kernel space while retaining comparable predictive accuracy.
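
To make the budgeted, constant-per-round flavor concrete, the sketch below runs online gradient descent over a kernel expansion whose basis set is capped by a simple novelty test. It is only a rough analogue of the described approach: the coherence threshold, bandwidth, budget, and squared loss are assumptions, and it implements neither the hypothesis sketching nor the compensation strategy.

    import numpy as np

    def gaussian_kernel(a, B, gamma):
        """k(a, b) = exp(-gamma * ||a - b||^2) for every row b of B."""
        return np.exp(-gamma * np.sum((B - a) ** 2, axis=1))

    def budgeted_online_kernel(Xs, ys, gamma=1.0, eta=0.1, tau=0.5, budget=50):
        """Online kernel regression with a capped basis set: a point is admitted as a
        new basis only if it fails a simple novelty test, and the expansion weights
        follow online gradient descent on the squared loss."""
        dictionary, alpha, losses = [], [], []
        for x, y in zip(Xs, ys):
            if dictionary:
                k = gaussian_kernel(x, np.array(dictionary), gamma)
                pred = float(np.dot(alpha, k))
            else:
                k, pred = None, 0.0
            losses.append(0.5 * (pred - y) ** 2)
            if dictionary:
                # Gradient step on the existing coefficients.
                alpha = list(np.array(alpha) - eta * (pred - y) * k)
            if len(dictionary) < budget and (k is None or k.max() < tau):
                dictionary.append(x)
                alpha.append(-eta * (pred - y))   # step along the new coefficient's gradient
        return np.array(dictionary), np.array(alpha), float(np.mean(losses))

    rng = np.random.default_rng(0)
    Xs = rng.uniform(-3, 3, size=(1000, 1))
    ys = np.sin(Xs[:, 0]) + 0.1 * rng.normal(size=1000)
    D, a, avg_loss = budgeted_online_kernel(Xs, ys)
    print(len(D), round(avg_loss, 4))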



2020 ◽  
Vol 34 (01) ◽  
pp. 694-701
Author(s):  
Mengdi Huai ◽  
Di Wang ◽  
Chenglin Miao ◽  
Jinhui Xu ◽  
Aidong Zhang

Pairwise learning has received much attention recently because it is better suited to modeling the relative relationship between pairs of samples. Many machine learning tasks, such as AUC maximization and metric learning, can be categorized as pairwise learning. Existing techniques for pairwise learning all fail to take into consideration a critical issue in their design: the protection of sensitive information in the training set. Models learned by such algorithms can implicitly memorize the details of sensitive information, which gives malicious parties an opportunity to infer it from the learned models. To address this challenging issue, in this paper we propose several differentially private pairwise learning algorithms for both online and offline settings. Specifically, for the online setting, we first introduce a differentially private algorithm (called OnPairStrC) for strongly convex loss functions. We then extend this algorithm to general convex loss functions and give another differentially private algorithm (called OnPairC). For the offline setting, we also present two differentially private algorithms (called OffPairStrC and OffPairC) for strongly convex and general convex loss functions, respectively. These algorithms not only learn the model effectively from the data but also provide strong privacy guarantees for sensitive information in the training set. Extensive experiments on real-world datasets are conducted to evaluate the proposed algorithms, and the experimental results support our theoretical analysis.
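
As a rough analogue of gradient perturbation for pairwise learning (not the proposed OnPairStrC/OnPairC/OffPairStrC/OffPairC algorithms), the sketch below clips per-pair gradients of a pairwise logistic loss and adds Gaussian noise before each update. The clipping norm, noise scale, and learning rate are placeholders and are not calibrated to a specific (ε, δ) budget.

    import numpy as np

    def dp_pairwise_sgd(X, y, epochs=5, lr=0.1, clip=1.0, sigma=1.0, seed=0):
        """Gradient perturbation for a pairwise logistic loss (AUC-style): the
        per-pair gradient is clipped to norm `clip` and Gaussian noise of scale
        sigma * clip is added before each update."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        pos, neg = np.where(y == 1)[0], np.where(y == -1)[0]
        for _ in range(epochs):
            for i, j in zip(rng.permutation(pos), rng.permutation(neg)):
                diff = X[i] - X[j]                       # positive example should outscore negative
                grad = -diff / (1.0 + np.exp(w @ diff))  # gradient of log(1 + exp(-w . diff))
                norm = np.linalg.norm(grad)
                if norm > clip:
                    grad = grad * (clip / norm)
                w -= lr * (grad + rng.normal(0.0, sigma * clip, size=d))
        return w

    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 10))
    y = np.where(X @ rng.normal(size=10) > 0, 1, -1)
    w_priv = dp_pairwise_sgd(X, y)
    # Empirical AUC (pairwise ranking accuracy) of the private model.
    scores = X @ w_priv
    auc = np.mean([scores[i] > scores[j] for i in np.where(y == 1)[0] for j in np.where(y == -1)[0]])
    print(round(float(auc), 3))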



2020 ◽  
Vol 19 (8) ◽  
pp. 3973-4005
Author(s):  
Baohuai Sheng ◽  
Huanxiang Liu ◽  
Huimin Wang


2019 ◽  
Vol 27 (5) ◽  
pp. 1991-2003
Author(s):  
Arne De Keyser ◽  
Hendrik Vansompel ◽  
Guillaume Crevecoeur

