reject option
Recently Published Documents


TOTAL DOCUMENTS: 103 (five years: 13)

H-INDEX: 16 (five years: 2)

2021
Author(s): Baek Hwan Cho, Borum Nam, Joo Young Kim, In Young Kim

BACKGROUND In any healthcare system, both the classification of data and the confidence level of that classification are important. A selective prediction model is therefore needed to classify time-series health data according to the confidence level of the prediction.

OBJECTIVE The aim of this study is to develop a method for time-series health data classification using long short-term memory (LSTM) models with a reject option.

METHODS To implement a reject option for the classification output of LSTM models, an existing selective prediction method was adopted. However, a conventional selection-function approach applied to LSTM does not achieve acceptable performance at the learning stage. To tackle this problem, we propose unit-wise batch standardization (UBS), which attempts to normalize each hidden unit in the LSTM to reflect the structural characteristics of LSTM with respect to the selection function.

RESULTS The ability of our method to approximate the target confidence level was evaluated by coverage violation on two time-series health datasets, one of human activity and one of arrhythmia. For both datasets, our approach yielded lower average coverage violations (0.98% and 1.79%, respectively) than the conventional approach. In addition, classification performance with the reject option was compared against other normalization methods. Our method demonstrates superior performance with respect to selective risk (12.63% and 17.82%), false-positive rate (2.09% and 5.80%), and false-negative rate (10.58% and 17.24%).

CONCLUSIONS We conclude that our normalization approach can help make selective predictions for time-series health data. We expect this technique to give users more confidence in classification systems and to improve collaboration between humans and artificial intelligence in the medical field through classification that reflects confidence.
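The abstract does not spell out the UBS normalization itself, but the selective-prediction quantities it reports (coverage, coverage violation, selective risk) follow a standard recipe. The sketch below is a minimal illustration of that recipe, not the authors' method: it calibrates a confidence threshold to a target coverage and reports the achieved coverage and selective risk; all names and data are hypothetical.

import numpy as np

def calibrate_threshold(confidences, target_coverage):
    """Pick a confidence threshold so that roughly `target_coverage`
    of the calibration examples are accepted (not rejected)."""
    # Accept the top `target_coverage` fraction by confidence.
    return np.quantile(confidences, 1.0 - target_coverage)

def selective_metrics(confidences, correct, threshold):
    """Coverage = fraction accepted; selective risk = error rate
    on the accepted subset only."""
    accepted = confidences >= threshold
    coverage = accepted.mean()
    selective_risk = (~correct[accepted]).mean() if accepted.any() else 0.0
    return coverage, selective_risk

# Toy usage: confidences and correctness flags would come from a
# validation split of the (hypothetical) classifier.
rng = np.random.default_rng(0)
cal_conf = rng.uniform(0.5, 1.0, size=1000)
cal_correct = rng.uniform(size=1000) < cal_conf   # more confident -> more often correct
thr = calibrate_threshold(cal_conf, target_coverage=0.9)
cov, risk = selective_metrics(cal_conf, cal_correct, thr)
print(f"coverage={cov:.3f}, selective risk={risk:.3f}")
# Coverage violation = |achieved coverage - target coverage| on held-out data.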


2020, Vol 50 (10), pp. 3090-3100
Author(s): Lei Lei, Yafei Song, Xi Luo

Abstract When training base classifiers with ternary Error-Correcting Output Codes (ECOC), it is well known that some classes are ignored. As a result, a non-competent classifier emerges when it classifies an instance whose true label does not belong to its meta-subclasses. Meanwhile, classic ECOC dichotomizers can only produce binary outputs and have no capability of rejection during classification. To overcome the non-competence problem and better model the multi-class problem so as to reduce the classification cost, we embed a reject option into ECOC and present a new variant of the ECOC algorithm called Reject-Option-based Re-encoding ECOC (ROECOC). We build a cost-sensitive classification model and a cost-loss function based on the Receiver Operating Characteristic (ROC) curve. The optimal reject threshold values are obtained by combining the condition that minimizes the loss function with the ROC convex hull. In this way, the reject option (t1, t2) provides a three-symbol output that makes the dichotomizers more competent and ROECOC more universal and practical for cost-sensitive classification. Experimental results on two kinds of datasets show that our scheme, with a low degree of freedom in the initialized ECOC, effectively improves accuracy and reduces cost.
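For illustration only (not the ROECOC re-encoding or its ROC-based threshold selection), the following sketch shows how a reject option (t1, t2) turns raw dichotomizer scores into the three-symbol outputs {-1, 0, +1} mentioned above, and how a ternary code matrix can then be decoded while skipping abstained positions; the code matrix, thresholds, and scores are made up.

import numpy as np

def ternary_outputs(scores, t1, t2):
    """Map raw dichotomizer scores to three-symbol outputs {-1, 0, +1}.
    Scores falling between the reject thresholds (t1, t2) become 0 (abstain)."""
    out = np.zeros_like(scores)
    out[scores <= t1] = -1
    out[scores >= t2] = +1
    return out

def decode(code_matrix, outputs):
    """Pick the class whose codeword disagrees least with the outputs,
    skipping positions where either the codeword or the output is 0."""
    losses = []
    for codeword in code_matrix:
        mask = (codeword != 0) & (outputs != 0)
        losses.append(np.sum(codeword[mask] != outputs[mask]))
    return int(np.argmin(losses))

# Toy example: 3 classes, 4 dichotomizers (ternary code matrix).
M = np.array([[+1, +1,  0, -1],
              [-1,  0, +1, +1],
              [ 0, -1, -1, +1]])
scores = np.array([0.9, 0.1, -0.8, 0.7])   # hypothetical dichotomizer scores
print(decode(M, ternary_outputs(scores, t1=-0.3, t2=0.3)))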


2020, Vol 34 (04), pp. 5652-5659
Author(s): Kulin Shah, Naresh Manwani

Active learning is an important technique for reducing the number of labeled examples needed in supervised learning. Active learning for binary classification has been well studied in machine learning; however, active learning of reject option classifiers remains unaddressed. In this paper, we propose novel algorithms for active learning of reject option classifiers. We develop an active learning algorithm using the double ramp loss function and provide mistake bounds for it. We also propose a new loss function for the reject option, called the double sigmoid loss, along with a corresponding active learning algorithm, and we offer a convergence guarantee for this algorithm. We provide extensive experimental results showing the effectiveness of the proposed algorithms, which efficiently reduce the number of labeled examples required.
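As a rough companion to this abstract, the sketch below shows the generic reject-option prediction rule (predict sign(f(x)) outside a rejection band of width rho, abstain inside it) together with a simple margin-based query heuristic. The actual query conditions in the paper are derived from the double ramp and double sigmoid losses, so the heuristic here is only an assumed stand-in.

import numpy as np

def predict_with_reject(f_x, rho):
    """Reject-option prediction rule: predict sign(f(x)) when |f(x)| > rho,
    otherwise abstain (return 0)."""
    return np.where(np.abs(f_x) > rho, np.sign(f_x), 0)

def should_query(f_x, rho, band=0.5):
    """Generic margin-based query heuristic (not the paper's rule): ask for a
    label only when the score lies close to one of the decision surfaces
    f(x) = +/- rho or f(x) = 0, i.e. where the current classifier is least certain."""
    dist = np.minimum(np.abs(np.abs(f_x) - rho), np.abs(f_x))
    return dist < band

scores = np.array([-2.1, -0.6, 0.1, 0.9, 2.4])   # hypothetical classifier scores
print(predict_with_reject(scores, rho=1.0))       # [-1, 0, 0, 0, 1]
print(should_query(scores, rho=1.0))              # query only the ambiguous ones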


2020, Vol 34 (04), pp. 5684-5691
Author(s): Song-Qing Shen, Bin-Bin Yang, Wei Gao

Making an erroneous decision may cause serious consequences in mission-critical tasks such as medical diagnosis and bioinformatics. Previous work focuses on classification with a reject option, i.e., abstaining rather than classifying an instance of low confidence. Most mission-critical tasks are accompanied by class imbalance and cost sensitivity, where AUC has been shown to be a more suitable measure than accuracy for classification. In this work, we propose a framework for AUC optimization with a reject option; the basic idea is to withhold, at a lower cost, the decision of ranking a pair of positive and negative instances rather than mis-rank them. We obtain the Bayes optimal solution for ranking, and we learn the reject function and the scoring function simultaneously. We develop an online algorithm for AUC optimization with a reject option based on a convex relaxation and a plug-in rule. We verify, both theoretically and empirically, the effectiveness of the proposed algorithm.
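The following is a simplified empirical illustration of the pairwise view described above, not the paper's Bayes-optimal solution or its online convex-relaxation algorithm: a positive-negative pair is withheld at a fixed lower cost whenever either instance is rejected, and mis-ranked pairs otherwise cost 1 (ties cost 0.5). The scores, labels, and reject rule are invented.

import numpy as np

def pairwise_reject_loss(scores, labels, reject, cost=0.3):
    """Empirical pairwise loss for ranking with a reject option:
    a (positive, negative) pair is withheld at cost `cost` if either
    instance is rejected; otherwise a mis-ranked pair costs 1 and a
    tie costs 0.5."""
    pos = np.where(labels == 1)[0]
    neg = np.where(labels == 0)[0]
    total, n_pairs = 0.0, len(pos) * len(neg)
    for i in pos:
        for j in neg:
            if reject[i] or reject[j]:
                total += cost          # abstain on this pair
            elif scores[i] < scores[j]:
                total += 1.0           # mis-ranked pair
            elif scores[i] == scores[j]:
                total += 0.5
    return total / n_pairs

# Toy usage with hypothetical scores and a confidence-based reject rule.
scores = np.array([0.9, 0.4, 0.6, 0.2])
labels = np.array([1, 1, 0, 0])
reject = np.abs(scores - 0.5) < 0.15   # reject low-confidence instances
print(pairwise_reject_loss(scores, labels, reject))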


Author(s): Kulin Shah, Naresh Manwani

In this paper, we propose an approach for learning sparse reject option classifiers using the double ramp loss Ldr. We use DC programming to find the risk minimizer; the algorithm solves a sequence of linear programs to learn the reject option classifier. We show that the loss Ldr is Fisher consistent and that the excess risk of the loss Ld is upper bounded by the excess risk of Ldr. We derive generalization error bounds for the proposed approach and demonstrate its effectiveness through experiments on several real-world datasets. The proposed approach not only performs comparably to the state of the art but also successfully learns sparse classifiers.
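For reference, here is a short sketch of the standard reject-option (0-d-1) loss, which is presumably what the excess-risk bound above denotes by Ld (an assumption on our part, not stated in the abstract); a prediction of 0 denotes rejection and the data are toy values.

import numpy as np

def zero_d_one_risk(predictions, labels, d=0.2):
    """Empirical risk under the standard reject-option (0-d-1) loss:
    0 for a correct prediction, d for a rejection (prediction 0),
    and 1 for a misclassification."""
    rejected = predictions == 0
    wrong = (~rejected) & (predictions != labels)
    return (d * rejected + 1.0 * wrong).mean()

preds  = np.array([+1, 0, -1, +1, 0])      # 0 means "reject"
labels = np.array([+1, -1, -1, -1, +1])
print(zero_d_one_risk(preds, labels, d=0.2))   # (0 + 0.2 + 0 + 1 + 0.2) / 5 = 0.28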

