ramp loss
Recently Published Documents

TOTAL DOCUMENTS: 27 (FIVE YEARS: 8)
H-INDEX: 8 (FIVE YEARS: 1)

2020 ◽ Vol 2020 ◽ pp. 1-15
Author(s): Cuiqing Zhang, Maojun Zhang, Xijun Liang, Zhonghang Xia, Jiangxia Nan

Due to its wide applicability and learning efficiency, online ordinal regression using perceptron algorithms with interval labels (PRIL) has been increasingly applied to ordinal ranking problems. However, handling noisy labels remains a challenge for PRIL, since the ranking results may change dramatically in their presence. To tackle this problem, we propose noise-resilient online learning algorithms based on the ramp loss function, called PRIL-RAMP, together with a nonlinear variant, K-PRIL-RAMP, to improve the performance of PRIL on noisy data streams. The proposed algorithms iteratively optimize the decision function under the online gradient descent (OGD) framework, and we justify them by showing that the order of the thresholds is preserved. Experiments on real-world datasets validate that both approaches are more robust to label noise and more efficient than state-of-the-art online ordinal regression algorithms.
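The abstract does not include code, but the core mechanism it describes (an OGD update for threshold-based ordinal regression in which a clipped, ramp-style loss stops gradients from margin violations that are too large to be trusted) can be illustrated with a minimal sketch. This is not the authors' exact PRIL-RAMP algorithm: the function names, the clipping point s, the learning rate, and the interval-label convention below are assumptions.

```python
import numpy as np

def ramp_grad(z, s=-1.0):
    """Derivative w.r.t. z of the ramp loss r(z) = min(1 - s, max(0, 1 - z)).
    It is -1 only on the slope s < z < 1 and 0 elsewhere, so examples that
    violate the margin by more than |s| (likely label noise) stop driving updates."""
    return -1.0 if s < z < 1.0 else 0.0

def pril_ramp_step(w, thresholds, x, y_lo, y_hi, lr=0.1):
    """One OGD step for ordinal regression with an interval label [y_lo, y_hi]
    over ranks 1..K (len(thresholds) == K - 1).  For every threshold k strictly
    below the interval the score w @ x should exceed thresholds[k-1]; for every
    k at or above y_hi it should fall below it."""
    score = float(w @ x)
    g_w = 0.0
    g_t = np.zeros_like(thresholds)
    for k in range(1, len(thresholds) + 1):
        sign = 1.0 if k < y_lo else (-1.0 if k >= y_hi else 0.0)
        if sign == 0.0:
            continue  # thresholds inside the label interval incur no loss
        g = ramp_grad(sign * (score - thresholds[k - 1]))
        g_w += g * sign          # d loss / d score
        g_t[k - 1] -= g * sign   # d loss / d threshold_k
    # the paper argues threshold order is preserved, so no explicit re-sorting here
    return w - lr * g_w * x, thresholds - lr * g_t

# toy usage with 5 features, 4 ranks, and an interval label {2, 3}
w, t = np.zeros(5), np.array([-1.0, 0.0, 1.0])
w, t = pril_ramp_step(w, t, np.random.randn(5), y_lo=2, y_hi=3)
```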


2020 ◽ Vol 286 (1) ◽ pp. 84-100
Author(s): Marta Baldomero-Naranjo, Luisa I. Martínez-Merino, Antonio M. Rodríguez-Chía

2020 ◽ pp. 1-20
Author(s): Hong Chen, Changying Guo, Huijuan Xiong, Yingjie Wang

Sparse additive machines (SAMs) have attracted increasing attention in high-dimensional classification due to their representational flexibility and interpretability. However, most existing methods are formulated under the Tikhonov regularization scheme with the hinge loss, which makes them susceptible to outliers. To circumvent this problem, we propose a sparse additive machine with ramp loss (called ramp-SAM) that tackles classification and variable selection simultaneously. A misclassification error bound is established for ramp-SAM with the help of a detailed error decomposition and a constructive hypothesis error analysis. To solve the nonsmooth and nonconvex ramp-SAM objective, a proximal block coordinate descent method with convergence guarantees is presented. The empirical effectiveness of our model is confirmed on simulated and benchmark datasets.
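As a quick illustration of why the ramp loss is less sensitive to outliers than the hinge loss used in standard SAMs, the following sketch (not taken from the paper; the clipping parameter s and the toy margins are assumptions) compares the two losses on a few margin values:

```python
import numpy as np

def hinge(z):
    """Standard hinge loss: grows without bound for badly misclassified points."""
    return np.maximum(0.0, 1.0 - z)

def ramp(z, s=-1.0):
    """Ramp loss: the hinge clipped at 1 - s, so a gross outlier with a very
    negative margin contributes a bounded penalty instead of dominating the fit."""
    return np.minimum(1.0 - s, np.maximum(0.0, 1.0 - z))

# margins y * f(x): the last point is a gross outlier
margins = np.array([2.0, 0.5, -0.2, -8.0])
print(hinge(margins))  # [0.  0.5 1.2 9. ]
print(ramp(margins))   # [0.  0.5 1.2 2. ]
```

Because the ramp loss is the difference of two convex (hinge-type) pieces, the resulting objective is nonsmooth and nonconvex, which is why the paper resorts to a proximal block coordinate descent solver rather than a standard convex QP.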


2020 ◽ Vol 51 (8) ◽ pp. 1448-1463
Author(s): Huiru Wang, Sijie Lu, Zhijian Zhou

2019 ◽ Vol 7 ◽ pp. 233-248
Author(s): Laura Jehl, Carolin Lawrence, Stefan Riezler

In many machine learning scenarios, supervision by gold labels is not available, and consequently neural models cannot be trained directly by maximum likelihood estimation. In a weak supervision scenario, metric-augmented objectives can be employed to assign feedback to model outputs, from which a supervision signal for training can be extracted. We present several objectives for two separate weakly supervised tasks, machine translation and semantic parsing. We show that objectives should actively discourage negative outputs in addition to promoting a surrogate gold structure. This notion of bipolarity is naturally present in ramp loss objectives, which we adapt to neural models. We show that bipolar ramp loss objectives outperform other, non-bipolar ramp loss objectives and minimum risk training on both weakly supervised tasks, as well as on a supervised machine translation task. Additionally, we introduce a novel token-level ramp loss objective, which outperforms even the best sequence-level ramp loss on both weakly supervised tasks.
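A minimal sketch of the bipolar idea, assuming an n-best list of candidate outputs with model log-probabilities and external metric scores. The hope/fear selection rule and the alpha weight follow standard sequence-level ramp loss formulations and are not taken verbatim from the paper, which also introduces a token-level variant.

```python
def bipolar_ramp_objective(candidates, alpha=1.0):
    """candidates: list of (model_log_prob, metric_score) pairs, e.g. from
    n-best decoding, where metric_score is the weak feedback (BLEU, parser
    execution reward, ...).  The 'hope' output scores high under both the
    model and the metric; the 'fear' output scores high under the model but
    low under the metric.  Minimizing the returned value raises the hope's
    probability and lowers the fear's, i.e. the bipolar behaviour described
    in the abstract."""
    hope = max(candidates, key=lambda c: c[0] + alpha * c[1])
    fear = max(candidates, key=lambda c: c[0] - alpha * c[1])
    return -hope[0] + fear[0]

# toy usage; in a neural setting the log-probs would be differentiable tensors
nbest = [(-2.0, 0.8), (-1.5, 0.2), (-3.0, 0.9)]
print(bipolar_ramp_objective(nbest))  # 0.5
```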


2018 ◽ Vol 310 ◽ pp. 223-235
Author(s): Yingjie Tian, Mahboubeh Mirzabagheri, Seyed Mojtaba Hosseini Bamakan, Huadong Wang, Qiang Qu
