Empirics of Korean Shipping Companies’ Default Predictions

Risks ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 159
Author(s):  
Sunghwa Park ◽  
Hyunsok Kim ◽  
Janghan Kwon ◽  
Taeil Kim

In this paper, we use a logit model to predict the probability of default for Korean shipping companies. We explore numerous financial ratios to find predictors of a shipping firm’s failure and construct four default prediction models. The results suggest that a model with industry-specific indicators outperforms the other models in predictive ability. This finding indicates that utilizing information about the unique financial characteristics of the shipping industry may enhance the performance of default prediction models. Given the importance of the shipping industry in the Korean economy, this study can benefit both policymakers and market participants.
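A minimal sketch of the logit approach described above. The data, ratio names, and coefficients here are synthetic stand-ins, not the paper's actual predictors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical standardized financial ratios: leverage, liquidity, and an
# industry-specific indicator (e.g. a charter-rate exposure proxy).
X = rng.normal(size=(n, 3))
# Simulate default: higher leverage and industry exposure raise the PD.
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.8 * X[:, 2] - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = LogisticRegression().fit(X, y)
pd_hat = model.predict_proba(X)[:, 1]  # estimated probability of default
```

The fitted coefficients can then be inspected to see which ratios drive the estimated PD.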

2018 ◽  
Vol 35 (4) ◽  
pp. 542-563 ◽  
Author(s):  
Linda Gabbianelli

Purpose The purpose of this paper is to test whether qualitative variables regarding the territory and the firm–territory relationship can improve the accuracy rates of small business default prediction models. Design/methodology/approach The authors apply logistic regression to a sample of 141 small Italian enterprises located in the Marche region and build two different default prediction models: one using only financial ratios and one using financial ratios jointly with variables related to the relationship between firm and territory. Findings Including variables regarding the relationships between firms and their territory significantly improves the accuracy rates of the default prediction model. Research limitations/implications The qualitative data collected are affected by the subjective judgments of the respondents at the firms studied. In addition, other qualitative variables (such as those regarding competitive strategies or managerial skills) are not included, nor are variables regarding the relationships between firms and financial institutions. Practical implications The study suggests that financial institutions should include qualitative territory variables, and above all qualitative variables regarding the firm–territory relationship, when constructing business default prediction models. Including this type of variable could reduce the tendency to place unnecessary restrictions on credit. Originality/value Business failure prediction modelling using variables regarding the firm–territory relationship is a largely unexplored area with very few studies.
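The two-model comparison can be sketched as follows, with synthetic data and a made-up binary territory variable standing in for the paper's firm–territory indicators:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
ratios = rng.normal(size=(n, 2))               # financial ratios
territory = rng.binomial(1, 0.5, size=(n, 1))  # e.g. membership of a local network
logits = ratios[:, 0] - ratios[:, 1] + 1.2 * territory[:, 0] - 0.5
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Model 1: financial ratios only; Model 2: ratios + territory variable.
acc_fin = cross_val_score(LogisticRegression(), ratios, y, cv=5).mean()
acc_all = cross_val_score(LogisticRegression(),
                          np.hstack([ratios, territory]), y, cv=5).mean()
```

Comparing the cross-validated accuracy rates of the two models mirrors the paper's test of whether the territory variables add predictive value.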


2018 ◽  
Vol 13 (4) ◽  
pp. 57
Author(s):  
Francesco Ciampi

This study aims to verify the potential of combining prior payment behavior variables and financial ratios for SE default prediction modelling. Logistic regression was applied to a sample of 980 Italian SEs in order to calculate and compare two categories of default prediction models, one exclusively based on financial ratios and the other also based on company payment behavior related variables. The main findings are: i) using prior payment behavior variables significantly improves the effectiveness of SE default prediction modelling; ii) the longer the forecast horizon and/or the smaller the size of the firms under analysis, the greater the improvements in prediction accuracy obtained by also using prior payment behavior variables as default predictors; iii) SE default prediction modelling should be implemented separately for different firm size groups.


2015 ◽  
Vol 8 (3) ◽  
pp. 1-23
Author(s):  
Vandana Gupta

This paper attempts to evaluate the predictive ability of three default prediction models: the market-based KMV model, the Z-score model using discriminant analysis (DA), and the logit model; and identifies the key default drivers. The research extends prior empirical work by modeling and testing the impact of financial ratios, macroeconomic factors, corporate governance and firm-specific variables in predicting default. For the market-based model, the author has extended the work of KMV in developing a suitable algorithm for determining the probability of default (PD). While for the KMV model the continuous observations of PD are used as the dependent variable, for the accounting-based models the assigned ratings are the proxy for default (firms rated ‘D’ are defaulted, and those rated ‘AAA’ and ‘A’ are solvent). The research findings largely support the hypothesis that solvency, profitability and liquidity ratios do impact default risk, but adding other covariates improves the predictive ability of the models. Through this study, the author argues that accounting-based models and market-based models are conceptually different. While market-based models are forward looking and the inclusion of market data makes the default risk quantifiable, to make the PD more exhaustive it is important to factor in the information provided in the financial statements. The conclusions drawn are that the disclosures in financial statements can help predict default risk, as financial distress risk is likely to evolve over time and will be reflected in financial statements beyond accounting ratios. Moreover, this can also help divulge “creative accounting” practices by corporates.
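KMV-style market-based models build on Merton's distance-to-default. As an illustration of the idea (a textbook Merton calculation, not the author's actual algorithm, with invented asset values):

```python
import numpy as np
from scipy.stats import norm

def merton_pd(V, D, mu, sigma, T=1.0):
    """Distance to default (DD) and PD in a simple Merton setting:
    market value of assets V, face value of debt D, asset drift mu,
    asset volatility sigma, horizon T in years."""
    dd = (np.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return dd, norm.cdf(-dd)  # PD = probability assets fall below debt

# Hypothetical firm: assets 120, debt 100, 5% drift, 25% volatility.
dd, pd_est = merton_pd(V=120.0, D=100.0, mu=0.05, sigma=0.25)
```

The continuous PD produced this way is what the accounting-based models are benchmarked against in the study.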


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Menelaos Pavlou ◽  
Gareth Ambler ◽  
Rumana Z. Omar

Abstract Background Clustered data arise in research when patients are clustered within larger units. Generalised estimating equations (GEE) and generalised linear mixed models (GLMM) can be used to provide marginal and cluster-specific inference and predictions, respectively. Methods Confounding by cluster (CBC) and informative cluster size (ICS) are two complications that may arise when modelling clustered data. CBC can arise when the distribution of a predictor variable (termed ‘exposure’) varies between clusters, causing confounding of the exposure-outcome relationship. ICS means that the cluster size conditional on covariates is not independent of the outcome. In both situations, standard GEE and GLMM may provide biased or misleading inference, and modifications have been proposed. However, both CBC and ICS are routinely overlooked in the context of risk prediction, and their impact on the predictive ability of the models has been little explored. We study the effect of CBC and ICS on the predictive ability of risk models for binary outcomes when GEE and GLMM are used. We examine whether two simple approaches to handle CBC and ICS, which involve adjusting for the cluster mean of the exposure and the cluster size, respectively, can improve the accuracy of predictions. Results Both CBC and ICS can be viewed as violations of the assumptions of the standard GLMM: the random effects are correlated with the exposure under CBC and with the cluster size under ICS. Based on these principles, we simulated data subject to CBC/ICS. The simulation studies suggested that the predictive ability of models derived using standard GLMM and GEE while ignoring CBC/ICS was affected. Marginal predictions were found to be mis-calibrated. Adjusting for the cluster mean of the exposure or the cluster size improved the calibration, discrimination and overall predictive accuracy of marginal predictions by explaining part of the between-cluster variability.
The presence of CBC/ICS did not affect the accuracy of conditional predictions. We illustrate these concepts using real data from a multicentre study with potential CBC. Conclusion Ignoring CBC and ICS when developing prediction models for clustered data can affect the accuracy of marginal predictions. Adjusting for the cluster mean of the exposure or the cluster size can improve the predictive accuracy of marginal predictions.
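A sketch of the cluster-mean adjustment for CBC on simulated data. A plain logistic model stands in for the GEE/GLMM machinery, and all variable names and effect sizes are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_clusters, m = 50, 20
cluster = np.repeat(np.arange(n_clusters), m)
u = rng.normal(size=n_clusters)  # cluster random effects
# Confounding by cluster: the exposure's cluster mean tracks the random effect.
x = 0.8 * u[cluster] + rng.normal(size=n_clusters * m)
logits = 1.0 * x + u[cluster]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Handle CBC by adding the cluster mean of the exposure as a covariate,
# separating the within-cluster and between-cluster effects.
x_bar = np.array([x[cluster == c].mean() for c in range(n_clusters)])[cluster]
X_adj = np.column_stack([x, x_bar])
fit = LogisticRegression().fit(X_adj, y)
p_marg = fit.predict_proba(X_adj)[:, 1]  # marginal-style predictions
```

The cluster-mean term absorbs part of the between-cluster variability, which is the mechanism the paper credits for the improved calibration of marginal predictions.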


2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Michelle Louise Gatt ◽  
Maria Cassar ◽  
Sandra C. Buttigieg

Purpose The purpose of this paper is to identify and analyse the readmission risk prediction tools reported in the literature and their benefits to healthcare organisations and management. Design/methodology/approach Readmission risk prediction is a growing topic of interest, with the aim of identifying patients, in particular those suffering from chronic diseases such as congestive heart failure, chronic obstructive pulmonary disease and diabetes, who are at risk of readmission. Several models have been developed with different levels of predictive ability. A structured and extensive literature search of several databases was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) strategy, and this yielded a total of 48,984 records. Findings Forty-three articles were selected for full-text review after the screening process and according to the eligibility criteria. Thirty-four unique readmission risk prediction models were identified, whose predictive ability ranged from poor to good (c-statistic 0.5–0.86). Readmission rates ranged between 3.1% and 74.1% depending on the risk category. This review shows that readmission risk prediction is a complex process that is still relatively new as a concept and poorly understood. It confirms that readmission prediction models can achieve meaningful accuracy in identifying patients at higher risk of such an event within specific contexts. Research limitations/implications Since most prediction models were developed for specific populations, conditions or hospital settings, the generalisability and transferability of the predictions across wider or other contexts may be difficult to achieve. Therefore, the value of prediction models remains limited to hospital management. Future research is indicated in this regard. Originality/value This review is the first to cover readmission risk prediction tools published in the literature since 2011, thereby providing an assessment of the relevance of this crucial KPI to health organisations and managers.


Author(s):  
Eva–Maria Walz ◽  
Marlon Maranan ◽  
Roderick van der Linden ◽  
Andreas H. Fink ◽  
Peter Knippertz

Abstract Current numerical weather prediction models show limited skill in predicting low-latitude precipitation. To aid future improvements, be it with better dynamical or statistical models, we propose a well-defined benchmark forecast. We use arguably the best currently available high-resolution, gauge-calibrated, gridded precipitation product, the Integrated Multi-Satellite Retrievals for GPM (Global Precipitation Measurement) (IMERG) “final run”, in a ±15-day window around the date of interest to build an empirical climatological ensemble forecast. This window size is an optimal compromise between statistical robustness and the flexibility to represent seasonal changes. We refer to this benchmark as Extended Probabilistic Climatology (EPC) and compute it on a 0.1°×0.1° grid for 40°S–40°N and the period 2001–2019. In order to reduce and standardize information, a mixed Bernoulli-Gamma distribution is fitted to the empirical EPC, which hardly affects predictive performance. The EPC is then compared to 1-day ensemble predictions from the European Centre for Medium-Range Weather Forecasts (ECMWF) using standard verification scores. With respect to rainfall amount, ECMWF performs only slightly better than EPC over most of the low latitudes and worse over high-mountain and dry oceanic areas as well as over tropical Africa, where the lack of skill is also evident in independent station data. For rainfall occurrence, EPC is superior over most oceanic, coastal, and mountain regions, although the better potential predictive ability of ECMWF indicates that this is mostly due to calibration problems. To encourage the use of the new benchmark, we provide the data, scripts, and an interactive webtool to the scientific community.
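One way the mixed Bernoulli-Gamma fit might look for a single grid cell, using synthetic rainfall in place of the pooled IMERG window data (the shape and scale values are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic daily rainfall pooled over a ±15-day window across years:
# many dry days (zeros) plus gamma-distributed wet-day amounts (mm).
wet = rng.random(600) < 0.4
rain = np.where(wet, rng.gamma(shape=0.8, scale=6.0, size=600), 0.0)

p_wet = float((rain > 0).mean())                           # Bernoulli part
shape, _, scale = stats.gamma.fit(rain[rain > 0], floc=0)  # Gamma part

def prob_exceed(x):
    """P(rain > x) under the fitted mixed Bernoulli-Gamma model."""
    return p_wet * stats.gamma.sf(x, shape, scale=scale)
```

The fitted pair (occurrence probability, gamma parameters) compresses the empirical ensemble into three numbers per grid cell, which is the standardization step the abstract describes.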


2021 ◽  
Vol 9 (4) ◽  
pp. 65
Author(s):  
Daniela Rybárová ◽  
Helena Majdúchová ◽  
Peter Štetka ◽  
Darina Luščíková

The aim of this paper is to assess the reliability of alternative default prediction models in local conditions and subsequently compare them with other generally known and globally disseminated default prediction models, such as Altman’s Z-score, the Quick Test, the Creditworthiness Index, and Taffler’s Model. The comparison was carried out on a sample of 90 companies operating in the Slovak Republic over a period of three years (2016, 2017, and 2018), with a narrower focus on three sectors: construction, retail, and tourism, using alternative default prediction models, such as the CH-index, G-index, Binkert’s Model, HGN2 Model, M-model, Gulka’s Model, Hurtošová’s Model, and the Model of Delina and Packová. To verify the reliability of these models, type I and type II error rates were used. According to the research results, the highest reliability and accuracy were achieved by the alternative local Model of Delina and Packová. The least reliable results within the list of models were reported by the most globally disseminated model, Altman’s Z-score. Significant differences between sectors were identified.
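Type I and type II error rates for a default prediction model can be computed directly from its classifications. A small sketch with invented labels, using one common convention from the credit literature (note that which error is called "type I" varies by study):

```python
import numpy as np

def error_rates(y_true, y_pred):
    """Type I: a failed firm classified as sound (a miss); type II: a sound
    firm classified as failing (a false alarm). Conventions vary by study."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    type1 = ((y_pred == 0) & (y_true == 1)).sum() / (y_true == 1).sum()
    type2 = ((y_pred == 1) & (y_true == 0)).sum() / (y_true == 0).sum()
    return type1, type2

y_true = [1, 1, 1, 0, 0, 0, 0, 0]   # 1 = defaulted firm
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]   # model classifications
t1, t2 = error_rates(y_true, y_pred)
```

Ranking models by these two rates, as the study does, matters because the two errors have very different costs for a lender.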


Stroke ◽  
2015 ◽  
Vol 46 (suppl_1) ◽  
Author(s):  
Blessing Jaja ◽  
Hester Lingsma ◽  
Ewout Steyerberg ◽  
R. Loch Macdonald

Background: Aneurysmal subarachnoid hemorrhage (SAH) is a cerebrovascular emergency. Currently, clinicians have limited tools to estimate outcomes early after hospitalization. We aimed to develop novel prognostic scores using large cohorts of patients reflecting experience from different settings. Methods: Logistic regression analysis was used to develop prediction models for mortality and unfavorable outcome according to the 3-month Glasgow Outcome Score after SAH, based on readily obtained parameters at hospital admission. The development cohort was derived from 10 prospective studies involving 10,936 patients in the Subarachnoid Hemorrhage International Trialists (SAHIT) repository. Model performance was assessed by bootstrap internal validation and by cross-validation with omission of each of the 10 studies, using the R2 statistic, the area under the receiver operating characteristic curve (AUC), and calibration plots. Prognostic scores were developed from the regression coefficients. Results: The predictor variable with the strongest prognostic strength was neurologic status (partial R2 = 12.03%), followed by age (1.91%), treatment modality (1.25%), Fisher grade of CT clot burden (0.65%), history of hypertension (0.37%), aneurysm size (0.12%), and aneurysm location (0.06%). These predictors were combined to develop 3 sets of hierarchical scores based on the coefficients of the regression models. The AUC was 0.79-0.80 at bootstrap validation and 0.64-0.85 at cross-validation. Calibration plots demonstrated satisfactory agreement between predicted and observed probabilities of the outcomes. Conclusions: The novel prognostic scores have good predictive ability and potential for broad application, as they have been developed from prospective cohorts reflecting experience from different centers globally.
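A sketch of bootstrapping the AUC of a prognostic score on synthetic data. This shows the confidence-interval side of the idea only; full bootstrap internal validation as used in the study would also refit the model in each resample to estimate optimism:

```python
import numpy as np

def auc(y, score):
    """AUC via the rank (Mann-Whitney) formulation; assumes untied scores."""
    ranks = np.empty(len(score))
    ranks[np.argsort(score)] = np.arange(1, len(score) + 1)
    pos = y == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

rng = np.random.default_rng(4)
n = 400
y = rng.binomial(1, 0.3, n)           # 1 = unfavorable outcome
score = 1.2 * y + rng.normal(size=n)  # synthetic prognostic score

aucs = []
for _ in range(200):
    idx = rng.integers(0, n, n)       # resample patients with replacement
    if 0 < y[idx].sum() < n:          # need both outcomes in the resample
        aucs.append(auc(y[idx], score[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])  # percentile interval for the AUC
```

The spread of the bootstrap distribution gives a sense of how stable a reported AUC such as 0.79-0.80 is.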


Author(s):  
G. A. Rekha Pai ◽  
G. A. Vijayalakshmi Pai

Industrial bankruptcy is a rampant problem which does not occur overnight; when it occurs, it can cause acute financial embarrassment to governments and financial institutions as well as threaten the very viability of firms. It is therefore essential to help industries identify impending trouble early. Several statistical and soft computing based bankruptcy prediction models that make use of financial ratios as indicators have been proposed. The majority of these models use a selective set of financial ratios chosen according to criteria framed by the individual investigators. In contrast, this study considers any number of financial ratios irrespective of industrial category and size, and makes use of principal component analysis (PCA) to extract their principal components to be used as predictors, thereby dispensing with the cumbersome selection procedures used by its predecessors. An evolutionary neural network (ENN) and a backpropagation neural network with the Levenberg–Marquardt training rule (BPN) have been employed as classifiers, and their performance has been compared using receiver operating characteristic (ROC) analyses. Termed the PCA-ENN and PCA-BPN models, the predictive potential of the two models has been analyzed over a financial database (1997-2000) pertaining to 34 sick and 38 non-sick Indian manufacturing companies, with 21 financial ratios as predictor variables.
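The PCA-then-classify pipeline can be sketched as below, with synthetic data of the same shape (72 firms, 21 ratios) and a small feed-forward network standing in for the paper's BPN; the evolutionary network has no off-the-shelf sklearn equivalent:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n, k = 72, 21                       # 34 sick + 38 non-sick firms, 21 ratios
M = rng.normal(size=(k, k))
X = rng.normal(size=(n, k)) @ M     # correlated "financial ratios"
y = (X[:, 0] > 0).astype(int)       # synthetic sick / non-sick label

# Keep the principal components explaining 95% of the variance, then
# classify with a small backpropagation-trained network.
clf = make_pipeline(PCA(n_components=0.95),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000,
                                  random_state=0))
clf.fit(X, y)
preds = clf.predict(X)
```

Because PCA decorrelates the ratios automatically, no hand-picked ratio subset is needed, which is the study's central design choice.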


2017 ◽  
Vol 12 (12) ◽  
pp. 251 ◽  
Author(s):  
Francesco Ciampi

The existing literature has proved the effectiveness of financial ratios for company default prediction modelling. However, such research rarely focuses on small enterprises (SEs) as specific units of analysis. The aim of this paper is to demonstrate that SE default prediction should be modelled separately from that of large and medium-sized firms. A multivariate discriminant analysis was applied to a sample of 2,200 small manufacturing firms located in Central Italy, and an SE default prediction model was developed based on a selected group of financial ratios specifically constructed to capture the specificities of SEs’ risk profiles. Subsequently, the prediction accuracy rates obtained by this model were compared with those obtained from a second model based on a sample of 3,200 manufacturing firms situated in Central Italy belonging to all size classes. The findings are the following: 1) evaluating the probability of default of SEs separately from that of larger firms improves prediction performance; 2) the predictive power of the discriminant function improves if it takes into account the different profiles of firms operating in different industry sectors; 3) this improvement is much greater for SEs than for larger firms.
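A sketch of the discriminant-analysis setup on synthetic data, with sklearn's LDA standing in for the paper's multivariate discriminant analysis; the ratios, class separation, and sample sizes here are invented:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
n = 400
y = rng.binomial(1, 0.25, n)  # 1 = defaulted small firm
# Four synthetic financial ratios with class means separated, as
# discriminant analysis assumes.
X = rng.normal(size=(n, 4)) + y[:, None] * np.array([1.0, -0.8, 0.5, 0.0])

lda = LinearDiscriminantAnalysis().fit(X, y)
scores = lda.decision_function(X)  # the firm-level discriminant score
acc = lda.score(X, y)              # in-sample classification accuracy
```

Fitting one such discriminant function per size group (and per industry sector) is what the paper's separate-modelling recommendation amounts to in practice.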

