Understanding performance measures for validating default risk models: a review of performance metrics

2007 ◽  
Vol 1 (2) ◽  
pp. 61-79
Author(s):  
Jorge Sobehart ◽  
Sean Keenan


Author(s):  
Lauren-Brooke Eisen ◽  
Miriam Aroni Krinsky

Local prosecutors are responsible for 95 percent of criminal cases in the United States, and their charging decisions hold enormous influence over the number of people incarcerated and the length of sentences served. Performance metrics are a tool that can align the vision of elected prosecutors with the tangible actions of their offices' line attorneys. The right metrics can give individual line attorneys clarity about the mission of the office and the goals of their job. Historically, however, prosecutors' offices have relied on evaluation metrics that incentivize individual attorneys to prioritize punitive responses and volume-driven activity, such as tracking the number of cases processed, indictments, guilty pleas, convictions, and sentence lengths. Under these past approaches, funding, budgeting, and promotional decisions were frequently linked to regressive measures that fail to account for just results. As more Americans have embraced the need to end mass incarceration, a new wave of reform-minded district attorneys has won election. To ensure they are accountable to the voters who elected them to office and achieve the changes they championed, these prosecutors must align measures of success with the new priorities of their offices. New performance metrics predicated on the goals of reducing incarceration and enhancing fairness can shrink prison and jail populations while improving public trust and promoting healthier and safer communities. The authors propose a new set of metrics for elected prosecutors to consider in designing performance evaluations, both for their offices and for individual attorneys. They also suggest that, for these new performance measures to effectively drive decarceration practices, they must be coupled with careful, thoughtful implementation and critical data-management infrastructure.


2021 ◽  
Vol 8 (3) ◽  
pp. 135-142
Author(s):  
Shamsulhadi Bandi

This communication presents an assessment of IJBES's performance since 2015, based on metrics data from Clarivate and the OJS Report Generator. Raw data were analyzed to report to readers on the journal's performance using the performance metrics available to the editor. Key performance metrics such as submissions, acceptance and rejection rates, and citation trends over time were reported. Ensuring balanced content and continuously developing a niche are among the journal's priorities. It is also necessary to attract relevant, high-quality manuscripts so that the journal's articles are cited more often in other publications. Despite these challenges, the journal, though relatively young, has withstood the initial test of time and improved its visibility in the scientific community.


Author(s):  
Fang Zhao

The ongoing success of e-partnership requires the constant monitoring and measuring of its progress and outcomes. Many companies rushed into e-partnerships in order to exploit complementary resources that they lacked, but knew little about how to make their partnerships work or how to effectively monitor and measure their performance. Even today, many partnerships are left to drift without a system in place to assess the quality of partnerships. So, how can the productivity and health of a partnership be monitored and measured? The biggest challenge relating to performance measurement for e-partnerships is that e-partners are often independent business firms and legal entities with different stakeholders and different business objectives and goals. In the supply chain, for example, one firm can rarely control the entire supply chain's performance. However, performance measures that can be extended across firm boundaries and processes are needed to measure inter-organizational e-partnerships. The uncertainty and intangibility of e-business and information technology add further complexity and challenges to the measurement of e-partnership performance. A look at the current literature shows that the development and implementation of performance measurement systems for inter-firm collaboration is still in its infancy. Overall, traditional performance measures do not focus on the key inter-firm activities needed to monitor extended enterprise performance. This chapter reviews and discusses various concepts, models and issues of performance measurement. On that basis, the chapter proposes, taking a balanced scorecard approach, a new set of performance metrics for managers to assess the process and outcome of e-partnerships in a comprehensive manner. The chapter will also help e-partners to benchmark against best practices and determine future direction and priorities in their e-business partnerships.


Author(s):  
Johannes Schobel ◽  
Thomas Probst ◽  
Manfred Reichert ◽  
Winfried Schlee ◽  
Marc Schickler ◽  
...  

To deal with the drawbacks of paper-based data collection procedures, the QuestionSys approach empowers researchers with little or no programming knowledge to flexibly configure mobile data collection applications on demand. The mobile application approach of QuestionSys mainly aims to mitigate existing drawbacks of paper-based collection procedures in mHealth scenarios. Importantly, researchers should be enabled to gather data in an efficient way. To evaluate the applicability of QuestionSys, several studies have been carried out to measure the effort required when using the framework in practice. In this work, the results of a study that investigated psychological insights on the mental effort required to configure the mobile applications are presented. Specifically, the mental effort for creating data collection instruments was validated in a study with N = 80 participants across two sessions. Participants were categorized into novices and experts based on prior knowledge of process modeling, which is a fundamental pillar of the developed approach. Each participant modeled 10 instruments during the course of the study, while several performance measures (e.g., time needed or errors made) were concurrently assessed. The results of these measures were then compared to the self-reported mental effort with respect to the tasks that had to be modeled. On the one hand, the obtained results reveal a strong correlation between mental effort and the performance measures. On the other hand, the self-reported mental effort decreased significantly over the course of the study, and therefore had a positive impact on the measured performance metrics. Altogether, this study indicates that novices with no prior knowledge gain enough experience over a short amount of time to successfully model data collection instruments on their own. Therefore, QuestionSys is a helpful instrument for properly dealing with large-scale data collection scenarios like clinical trials.


2016 ◽  
Vol 69 ◽  
pp. 436-459 ◽  
Author(s):  
Cristina Arellano ◽  
Lilia Maliar ◽  
Serguei Maliar ◽  
Viktor Tsyrennikov

2011 ◽  
Vol 15 (1) ◽  
pp. 31
Author(s):  
Robert C. Kee ◽  
Michael T. Dugan

In a recent article appearing in this journal, Foster, Sullivan, and Ward (FSW) examined the assertion of the theory of constraints (TOC) and just-in-time that holding inventory is harmful or a liability to a firm's operations. In this comment we demonstrate that inventory is not inherently a liability but rather a symptom of more fundamental problems within many firms' operations. Therefore, addressing these problems rather than inventory per se is the primary means of relieving a firm's financial distress. In this comment we also examine the FSW assertion that more detailed inventory information should be reported to enable financial statement users to construct the performance measures of the TOC. The performance metrics of the TOC are short-term measures of economic performance and represent a small subset of the information used to guide managerial decisions. Consequently, external financial statement users, who have a longer decision horizon and who do not have access to the firm-specific information with which the TOC is used, would derive limited benefit from TOC performance measures.


Author(s):  
Gleeson Simon

This chapter discusses trading book models. Risk models come in a variety of types; for market risk purposes, however, only a limited number of types may be used within the regulatory framework. The simplest is the 'CAD 1' model, named after the first Capital Adequacy Directive, which permitted such models to be used in the calculation of regulatory capital. VaR models, permitted by Basel 2, were more complex, and this complexity was increased by Basel 2.5, which required the use of 'stressed VaR'. In due course all of this will be replaced by the Basel 3 FRTB calculation, which rejects VaR and is based on the calculation of an expected shortfall (ES) market risk charge, a VaR-based default risk charge (DRC) (for those exposures where the bank is exposed to the default of a third party), and a stressed ES-based capital add-on.
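The shift from VaR to expected shortfall can be made concrete with a minimal historical-simulation sketch. This is an illustrative toy on synthetic P&L, not the FRTB methodology (which prescribes liquidity horizons, stressed calibration, and a 97.5% confidence level); the function names and the Gaussian P&L series are assumptions for the example.

```python
import random
import statistics

def value_at_risk(pnl, alpha=0.975):
    # Historical-simulation VaR: the loss at the alpha quantile.
    losses = sorted((-x for x in pnl), reverse=True)  # losses as positives
    tail_n = max(1, int(len(losses) * (1 - alpha)))
    return losses[tail_n - 1]

def expected_shortfall(pnl, alpha=0.975):
    # ES: the average of the losses in the worst (1 - alpha) tail,
    # so it reflects how bad the tail is, not just where it starts.
    losses = sorted((-x for x in pnl), reverse=True)
    tail_n = max(1, int(len(losses) * (1 - alpha)))
    return statistics.mean(losses[:tail_n])

random.seed(0)
pnl = [random.gauss(0, 1_000_000) for _ in range(10_000)]  # toy daily P&L
var = value_at_risk(pnl)
es = expected_shortfall(pnl)
print(f"97.5% VaR = {var:,.0f}   97.5% ES = {es:,.0f}")
```

By construction ES is at least as large as VaR at the same confidence level, which is one reason FRTB prefers it: it is sensitive to the severity of tail losses that VaR ignores.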


2020 ◽  
Vol 7 (6) ◽  
pp. 191649
Author(s):  
J. D. Turiel ◽  
T. Aste

Logistic regression (LR) and support vector machine algorithms, together with linear and nonlinear deep neural networks (DNNs), are applied to lending data in order to replicate lender acceptance of loans and predict the likelihood of default of issued loans. A two-phase model is proposed: the first phase predicts loan rejection, while the second predicts default risk for approved loans. LR was found to be the best performer for the first phase, with a test set macro recall score of 77.4%. DNNs were applied to the second phase only, where they achieved the best performance, with a test set recall score of 72% for defaults. This shows that artificial intelligence can improve current credit risk models, reducing the default risk of issued loans by as much as 70%. The models were also applied to loans taken out by small businesses alone. The first phase of the model performs significantly better when trained on the whole dataset, whereas the second phase performs significantly better when trained on the small business subset. This suggests a potential discrepancy between how these loans are screened and how they should be analysed in terms of default prediction.
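The two-phase structure can be sketched with scikit-learn on synthetic data. This is a hedged illustration of the idea only: the dataset, features, and hyperparameters below are placeholders, not the paper's lending data or tuned models, and the second phase (a DNN in the paper) is only stubbed out.

```python
# Phase 1 mimics the lender's accept/reject decision with logistic regression,
# scored with macro-averaged recall as in the paper's first-phase evaluation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for loan applications; label 1 = rejected (assumption).
X, y_reject = make_classification(n_samples=5000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y_reject, random_state=0)

phase1 = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = phase1.predict(X_te)
rec = recall_score(y_te, pred, average="macro")
print(f"phase-1 macro recall: {rec:.3f}")

# Phase 2 (illustrative stub): a default-risk model would be trained only on
# the loans phase 1 accepts, i.e. the rows where phase1 predicts class 0.
accepted = pred == 0
X_accepted = X_te[accepted]  # inputs for a second, default-prediction model
```

Macro recall averages recall over both classes, so it is not inflated by the majority class; this matters in lending data, where rejections and defaults are typically imbalanced.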


2006 ◽  
Vol 25 (1) ◽  
pp. 19-21 ◽  
Author(s):  
K A Mundt

The US Environmental Protection Agency (EPA) recently issued a Staff Paper that articulates current risk assessment practices. In section 4.1.3, the EPA states, “...effects that appear to be adaptive, non-adverse, or beneficial may not be mentioned.” This statement may be perceived as precluding risk assessments based on non-default risk models, including the hormetic (or biphasic) dose-response model. This commentary examines several potential interpretations of this statement and the anticipated impact of ignoring hormesis, if present, in light of the conservatism necessary for protecting human and environmental health, and the potential for employing alternative risk assessment approaches.

