Parametric and Nonparametric Classification for Minimizing Misclassification Errors

Author(s):  
Sushma Nagdeote ◽  
Sujata Chiwande
Author(s):  
Edgar Santos‐Fernandez ◽  
Erin E. Peterson ◽  
Julie Vercelloni ◽  
Em Rushworth ◽  
Kerrie Mengersen

2019 ◽  
Vol 11 (6) ◽  
pp. 1716 ◽  
Author(s):  
Luciano Raso ◽  
Jan Kwakkel ◽  
Jos Timmermans

Climate change raises serious concerns for policymakers who want to ensure the success of long-term policies. To guarantee satisfactory decisions in the face of deep uncertainty, adaptive policy pathways can be used. Adaptive policy pathways are designed so that actions are taken according to how the future actually unfolds. In adaptive pathways, a monitoring system collects the evidence required for activating the next adaptive action. This monitoring system consists of signposts and triggers. Signposts are indicators that track the performance of the pathway. When a signpost reaches a pre-specified trigger value, the next action on the pathway is implemented. The effectiveness of the monitoring system is pivotal to the success of adaptive policy pathways; decision-makers therefore want sufficient confidence in the future capacity to adapt on time. "On time" means activating the next action on a pathway neither so early that it incurs unnecessary costs, nor so late that it incurs avoidable damages. In this paper, we show how mapping the relations between triggers and the probability of misclassification errors informs the level of confidence that a monitoring system for adaptive policy pathways can provide. Specifically, we present the "trigger-probability" mapping and the "trigger-consequences" mapping. The former displays the interplay between trigger values for a given signpost and the level of confidence regarding whether change occurs and adaptation is needed. The latter displays the interplay between trigger values for a given signpost and the consequences of misclassification errors, both when the policy is adapted and when it is not. In a case study, we illustrate how these mappings can be used to test the effectiveness of a monitoring system and how they can be integrated into the process of designing an adaptive policy.
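The idea behind the trigger-probability and trigger-consequences mappings can be sketched numerically. The following is a minimal, hypothetical illustration only: the signpost distributions under "no change" and "change", the trigger grid, and the unit costs are all invented for the example and are not taken from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical signpost model (illustrative only):
# under "no change" the signpost ~ Normal(0, 1); under "change" ~ Normal(1.5, 1).
no_change = stats.norm(loc=0.0, scale=1.0)
change = stats.norm(loc=1.5, scale=1.0)

triggers = np.linspace(-1.0, 3.0, 81)  # candidate trigger values for the signpost

# Trigger-probability mapping:
# false alarm  = P(signpost >= trigger | no change)  -> adapting too early
# missed alarm = P(signpost <  trigger | change)     -> adapting too late
p_false_alarm = no_change.sf(triggers)
p_missed_alarm = change.cdf(triggers)

# Trigger-consequences mapping (illustrative unit costs of each error type):
cost_early, cost_late = 1.0, 3.0
expected_consequence = cost_early * p_false_alarm + cost_late * p_missed_alarm

best = triggers[np.argmin(expected_consequence)]
print(f"Trigger minimizing expected consequences (toy example): {best:.2f}")
```

In this toy setting, raising the trigger lowers the false-alarm probability but raises the missed-alarm probability, which is the trade-off the two mappings are meant to make visible to decision-makers.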


2012 ◽  
Vol 41 (10) ◽  
pp. 1813-1832 ◽  
Author(s):  
Lupércio F. Bessegato ◽  
Roberto C. Quinino ◽  
Luiz H. Duczmal ◽  
Linda Lee Ho

Author(s):  
Lacramioara Balan ◽  
Rajesh Paleti

Traditional crash databases that record police-reported injury severity data are prone to misclassification errors. Ignoring these errors in the discrete ordered response models used for analyzing injury severity can lead to biased and inconsistent parameter estimates. In this study, a mixed generalized ordered response (MGOR) model was developed that quantifies misclassification rates in the injury severity variable and adjusts the bias in parameter estimates associated with misclassification. The proposed model does this by treating the observed injury severity outcome as a realization from a discrete random variable that depends on the true latent injury severity, which is unobservable to the analyst. The model was used to analyze misclassification rates in police-reported injury severity in the 2014 General Estimates System (GES) data. The model found that only 68.23% and 62.75% of possible and non-incapacitating injuries, respectively, were correctly recorded in the GES data. Moreover, comparative analysis showed that the MGOR model that ignores misclassification not only provides a poorer fit to the data but also exhibits considerable bias in both the parameter and elasticity estimates. The model developed in this study can be used to analyze misclassification errors in ordinal response variables in other empirical contexts.
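The key mechanism, marginalizing the observed (recorded) outcome over the unobserved true severity, can be sketched as follows. This is a simplified illustration using a standard ordered-probit kernel rather than the full MGOR specification; the coefficients, thresholds, and misclassification matrix below are invented for the example and are not the estimates reported in the study.

```python
import numpy as np
from scipy.stats import norm

# Ordered-probit probabilities for the TRUE (latent) severity category.
# Categories: 0 = no injury, 1 = possible, 2 = non-incapacitating, 3 = incapacitating/fatal.
def true_category_probs(x, beta, thresholds):
    """P(true severity = k | x) under an ordered probit with cut points `thresholds`."""
    xb = x @ beta
    cuts = np.concatenate(([-np.inf], thresholds, [np.inf]))
    return np.array([norm.cdf(cuts[k + 1] - xb) - norm.cdf(cuts[k] - xb)
                     for k in range(len(cuts) - 1)])

# Hypothetical misclassification matrix M[k, j] = P(recorded = j | true = k).
# Rows sum to 1; the diagonal is the probability that a category is correctly recorded.
M = np.array([
    [0.90, 0.07, 0.02, 0.01],
    [0.15, 0.70, 0.12, 0.03],
    [0.05, 0.20, 0.65, 0.10],
    [0.01, 0.04, 0.15, 0.80],
])

def observed_category_probs(x, beta, thresholds, M):
    """P(recorded = j | x) = sum_k P(true = k | x) * P(recorded = j | true = k)."""
    return true_category_probs(x, beta, thresholds) @ M

# Toy usage: one crash record with two covariates (constant plus one indicator).
x = np.array([1.0, 0.5])             # illustrative covariates
beta = np.array([0.2, 0.8])          # illustrative coefficients
thresholds = np.array([0.5, 1.2, 2.0])
print(observed_category_probs(x, beta, thresholds, M))
```

Estimation would then maximize the likelihood built from these observed-category probabilities, so that both the response coefficients and the misclassification rates are inferred jointly rather than taking the recorded severity at face value.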

