A boosting algorithm for learning bipartite ranking functions with partially labeled data

Author(s):  
Massih-Reza Amini
Tuong Vinh Truong
Cyril Goutte
2011
Vol 175 (7-8)
pp. 1223-1250
Author(s):  
Willem Waegeman
Bernard De Baets

2015
Vol 50 (6)
pp. 608-618
Author(s):  
Laure Gonnord
David Monniaux
Gabriel Radanne

2021
Vol 5 (1)
Author(s):  
Osman Mamun
Madison Wenzlick
Arun Sathanur
Jeffrey Hawk
Ram Devanathan

Abstract
The Larson–Miller parameter (LMP) offers an efficient and fast scheme to estimate the creep rupture life of alloy materials for high-temperature applications; however, poor generalizability and dependence on the constant C often result in sub-optimal performance. In this work, we show that direct parameterization of rupture life, without the intermediate LMP parameterization, using a gradient boosting algorithm can train ML models that predict rupture life very accurately across a variety of alloys (Pearson correlation coefficient >0.9 for 9–12% Cr steels and >0.8 for austenitic stainless steels). In addition, Shapley values were used to quantify feature importance, making the model interpretable by identifying the effect of each feature on model performance. Finally, a variational autoencoder-based generative model, conditioned on the experimental dataset, was built to sample hypothetical synthetic candidate alloys from the learnt joint distribution that appear in neither the 9–12% Cr ferritic–martensitic alloy dataset nor the austenitic stainless steel dataset.
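
A minimal sketch of the direct rupture-life regression described above, using synthetic data and hypothetical composition/test-condition features (not the authors' dataset or code); it fits a gradient-boosted regressor and reports the Pearson correlation, the metric quoted in the abstract:

```python
# Minimal sketch (not the authors' code): direct rupture-life regression with
# gradient boosting on hypothetical composition/condition features.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: Cr and Mo content (wt%), test temperature (K), stress (MPa).
X = np.column_stack([
    rng.uniform(9, 12, n),      # Cr
    rng.uniform(0.5, 2.0, n),   # Mo
    rng.uniform(800, 1000, n),  # temperature
    rng.uniform(50, 300, n),    # stress
])
# Synthetic log rupture life standing in for measured creep data.
y = 30 - 0.02 * X[:, 2] - 0.01 * X[:, 3] + 0.3 * X[:, 0] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3, random_state=0)
model.fit(X_tr, y_tr)

# Pearson correlation between predicted and "measured" rupture life.
r, _ = pearsonr(model.predict(X_te), y_te)
print(f"Pearson r = {r:.3f}")

# The interpretability step would use Shapley values, e.g. via the shap
# package, whose TreeExplainer supports sklearn gradient boosting:
# import shap
# shap_values = shap.TreeExplainer(model).shap_values(X_te)
```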


Author(s):  
Hai Tao
Maria Habib
Ibrahim Aljarah
Hossam Faris
Haitham Abdulmohsin Afan
...  

2021
Vol 0 (0)
Author(s):  
Colin Griesbach
Benjamin Säfken
Elisabeth Waldmann

Abstract
Gradient boosting from the field of statistical learning is widely known as a powerful framework for estimating and selecting predictor effects in various regression models, adapting concepts from classification theory. Current boosting approaches also offer methods that account for random effects and thus enable prediction with mixed models for longitudinal and clustered data. However, these approaches suffer from several flaws: unbalanced effect selection with falsely induced shrinkage and a low convergence rate on the one hand, and biased estimates of the random effects on the other. We therefore propose a new boosting algorithm that explicitly accounts for the random structure by excluding it from the selection procedure, properly correcting the random-effects estimates and, in addition, providing likelihood-based estimation of the random-effects variance structure. The new algorithm offers an organic and unbiased fitting approach, as shown via simulations and data examples.
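
The following is a minimal sketch of the core mechanism under simplifying assumptions (random intercepts only, variance components treated as known), not the authors' implementation: componentwise selection runs over the fixed effects only, while the random intercepts are updated outside the competition with a BLUP-style shrunken mean:

```python
# Minimal sketch (not the authors' implementation): componentwise L2-boosting
# for the fixed effects of a random-intercept model, with the random intercepts
# re-estimated outside the selection step instead of competing as base learners.
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_per = 30, 10
cluster = np.repeat(np.arange(n_clusters), n_per)
X = rng.normal(size=(n_clusters * n_per, 5))
b = rng.normal(0, 1.0, n_clusters)           # true random intercepts
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + b[cluster] + rng.normal(0, 0.5, len(cluster))

nu, n_iter = 0.1, 200                        # step length, boosting iterations
beta = np.zeros(X.shape[1])                  # fixed-effect coefficients
gamma = np.zeros(n_clusters)                 # random-intercept estimates
sigma2, tau2 = 1.0, 1.0                      # assumed variance components

for _ in range(n_iter):
    resid = y - X @ beta - gamma[cluster]
    # Componentwise selection over fixed effects only: pick the single
    # covariate whose least-squares fit to the residuals reduces loss most.
    fits = np.array([np.dot(X[:, j], resid) / np.dot(X[:, j], X[:, j])
                     for j in range(X.shape[1])])
    losses = [np.sum((resid - fits[j] * X[:, j]) ** 2)
              for j in range(X.shape[1])]
    j_star = int(np.argmin(losses))
    beta[j_star] += nu * fits[j_star]
    # Random-effects update outside the selection: ridge-shrunken cluster
    # means of the current residuals (BLUP-style correction).
    resid = y - X @ beta
    for g in range(n_clusters):
        m = cluster == g
        gamma[g] = resid[m].sum() / (m.sum() + sigma2 / tau2)

print("beta:", np.round(beta, 2))
```

In the paper the variance components are estimated by likelihood-based updates rather than fixed in advance; holding them constant here just keeps the sketch short.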


Author(s):  
Fabio Sigrist

Abstract
We introduce a novel boosting algorithm called 'KTBoost', which combines kernel boosting and tree boosting. In each boosting iteration, the algorithm adds either a regression tree or a reproducing kernel Hilbert space (RKHS) regression function to the ensemble of base learners. Intuitively, the idea is that discontinuous trees and continuous RKHS regression functions complement each other, and that this combination allows for better learning of functions that have parts with varying degrees of regularity, such as discontinuities and smooth parts. We empirically show that KTBoost significantly outperforms both tree and kernel boosting in terms of predictive accuracy on a wide array of data sets.
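
A minimal sketch of the KTBoost idea (not Sigrist's released package): at each iteration, fit both a shallow tree and a kernel ridge learner to the current residuals and keep whichever reduces the squared loss more:

```python
# Minimal sketch of the KTBoost idea: greedy choice per iteration between a
# discontinuous tree and a continuous RKHS (kernel ridge) base learner.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(300, 1))
# Target with a smooth part and a jump: the mixed-regularity regime the
# paper argues benefits from combining the two base-learner families.
y = np.sin(X[:, 0]) + (X[:, 0] > 1.0) * 2.0 + rng.normal(0, 0.2, 300)

nu, n_iter = 0.1, 100                  # shrinkage, boosting iterations
pred = np.full_like(y, y.mean())
ensemble = []                          # kept learners, for later prediction

for _ in range(n_iter):
    resid = y - pred
    tree = DecisionTreeRegressor(max_depth=2).fit(X, resid)
    kern = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(X, resid)
    cand = [(tree, tree.predict(X)), (kern, kern.predict(X))]
    # Keep whichever base learner lowers the squared loss more.
    learner, update = min(cand, key=lambda c: np.sum((resid - nu * c[1]) ** 2))
    pred += nu * update
    ensemble.append(learner)

print("train MSE:", np.mean((y - pred) ** 2))
```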


2021
Vol 544
pp. 500-518
Author(s):  
Can Gao
Jie Zhou
Duoqian Miao
Jiajun Wen
Xiaodong Yue
