Predicting Non-performing Loans by Financial Ratios for Small and Medium Entities in Lebanon

2015 ◽  
Vol 1 (2) ◽  
pp. 115
Author(s):  
Samih Antoine Azar ◽  
Marybel Nasr

This study examines the ability of financial ratios to predict the financial state of small and medium entities (SMEs) in Lebanon, where that state is either one of well-performing loans or one of non-performing loans. An empirical study is conducted using the financial statements of 222 SMEs in Lebanon for the years 2011 and 2012, of which 187 currently have well-performing loans and 35 currently have non-performing loans. Altman Z-scores are calculated, independent-samples t-tests are performed, and models are developed using binary logistic regression. Empirical evidence shows that the Altman Z-scores predict the solvent state of SMEs with well-performing loans well, but are unable to accurately predict the bankruptcy state of SMEs with non-performing loans. The independent-samples t-tests revealed that five financial ratios differ in a statistically significant way between SMEs with well-performing loans and those with non-performing loans. Finally, a logistic regression model is developed for each year under study, with limited success. In all cases, accuracy results are reported showing the percentage of companies correctly classified as solvent or bankrupt, in addition to the two standard measures of error: Type I errors and Type II errors. Although high accuracy is achieved in correctly classifying non-distressed and distressed firms, the Type I errors are in general relatively large; by contrast, the Type II errors are in general relatively low.
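The scoring-and-classification exercise described above can be sketched in a few lines. The coefficients and cut-offs below are Altman's published original (1968) values for public firms; the study may well use a variant calibrated for private SMEs, and the function names and toy error-rate convention are mine, not the paper's:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman's original (1968) Z-score from five financial ratios:
    working capital, retained earnings, EBIT, market value of equity,
    and sales, each scaled by total assets (or total liabilities)."""
    return 1.2*wc_ta + 1.4*re_ta + 3.3*ebit_ta + 0.6*mve_tl + 1.0*sales_ta

def zone(z):
    """Conventional cut-offs: distress below 1.81, safe above 2.99."""
    return "distress" if z < 1.81 else "safe" if z > 2.99 else "grey"

def error_rates(actual_distressed, predicted_distressed):
    """Bankruptcy-prediction convention: Type I = a distressed firm
    classified as healthy (missed failure); Type II = a healthy firm
    classified as distressed (false alarm)."""
    pairs = list(zip(actual_distressed, predicted_distressed))
    distressed = [p for a, p in pairs if a]
    healthy = [p for a, p in pairs if not a]
    type1 = sum(not p for p in distressed) / len(distressed)
    type2 = sum(p for p in healthy) / len(healthy)
    return type1, type2
```

Note that the Type I/Type II labelling depends on which hypothesis is taken as the null; the convention above (Type I = missed bankruptcy) is the one common in the bankruptcy-prediction literature and appears consistent with the abstract's finding of large Type I errors.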

2019 ◽  
Vol 8 (4) ◽  
pp. 1849-1853

Nowadays many people seek bank loans to meet their needs, but banks cannot lend to everyone, so they apply measures to identify eligible customers. Sensitivity and specificity are widely used to measure the performance of categorical classifiers in medicine and, tangentially, in econometrics. Even with such measures, granting loans to customers who cannot repay, or denying loans to customers who can, produces Type I and Type II errors. To minimize these errors, this study first explains how to determine whether sensitivity is large or small, and second studies benchmarks for forecasting with the model through a fuzzy analysis based on fuzzy weights, which is compared with the sensitivity analysis.
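The sensitivity and specificity measures invoked here are standard and map directly onto the two error types: with "positive" meaning a customer who repays, sensitivity = 1 − (Type II error rate) and specificity = 1 − (Type I error rate). A minimal sketch (the function name and label convention are mine):

```python
def sensitivity_specificity(actual, predicted):
    """Compute sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP)
    from parallel lists of booleans, where True = the positive class
    (e.g., a customer who repays the loan)."""
    tp = sum(a and p for a, p in zip(actual, predicted))
    tn = sum(not a and not p for a, p in zip(actual, predicted))
    fp = sum(not a and p for a, p in zip(actual, predicted))
    fn = sum(a and not p for a, p in zip(actual, predicted))
    return tp / (tp + fn), tn / (tn + fp)
```

Which misclassification counts as Type I versus Type II depends on which hypothesis is taken as the null, so the mapping above should be read under the stated convention.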


2019 ◽  
Vol 100 (10) ◽  
pp. 1987-2007 ◽  
Author(s):  
Thomas Knutson ◽  
Suzana J. Camargo ◽  
Johnny C. L. Chan ◽  
Kerry Emanuel ◽  
Chang-Hoi Ho ◽  
...  

Abstract: An assessment was made of whether detectable changes in tropical cyclone (TC) activity are identifiable in observations and whether any changes can be attributed to anthropogenic climate change. Overall, historical data suggest detectable TC activity changes in some regions associated with TC track changes, while data quality and quantity issues create greater challenges for analyses based on TC intensity and frequency. A number of specific published conclusions (case studies) about possible detectable anthropogenic influence on TCs were assessed using the conventional approach of preferentially avoiding type I errors (i.e., overstating anthropogenic influence or detection). We conclude there is at least low to medium confidence that the observed poleward migration of the latitude of maximum intensity in the western North Pacific is detectable, or highly unusual compared to expected natural variability. Opinion on the author team was divided on whether any observed TC changes demonstrate discernible anthropogenic influence, or whether any other observed changes represent detectable changes. The issue was then reframed by assessing evidence for detectable anthropogenic influence while seeking to reduce the chance of type II errors (i.e., missing or understating anthropogenic influence or detection). For this purpose, we used a much weaker “balance of evidence” criterion for assessment. This leads to a number of more speculative TC detection and/or attribution statements, which we recognize have substantial potential for being false alarms (i.e., overstating anthropogenic influence or detection) but which may be useful for risk assessment. Several examples of these alternative statements, derived using this approach, are presented in the report.


1990 ◽  
Vol 15 (3) ◽  
pp. 237-247 ◽  
Author(s):  
Rand R. Wilcox

Let X and Y be dependent random variables with variances σ²x and σ²y. Recently, McCulloch (1987) suggested a modification of the Morgan-Pitman test of H₀: σ²x = σ²y. But, as this paper describes, there are situations where McCulloch’s procedure is not robust. A subsample approach, similar to the Box-Scheffé test, is also considered and found to give conservative results, in terms of Type I errors, for all situations considered, but it yields relatively low power. New results on the Sandvik-Olsson procedure are also described, but the procedure is found to be nonrobust in situations not previously considered, and its power can be low relative to the two other techniques considered here. A modification of the Morgan-Pitman test based on the modified maximum likelihood estimate of a correlation is also considered. This last procedure appears to be robust in situations where the Sandvik-Olsson (1982) and McCulloch procedures are robust, and it can have more power than the Sandvik-Olsson. But it too gives unsatisfactory results in certain situations. Thus, in terms of power, McCulloch’s procedure is found to be best, with the advantage of being simple to use. But, it is concluded that, in terms of controlling both Type I and Type II errors, a satisfactory solution does not yet exist.
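The classical Morgan-Pitman test mentioned above rests on the identity Cov(X+Y, X−Y) = Var(X) − Var(Y), so equality of the two variances is equivalent to zero correlation between the pairwise sums and differences. A minimal sketch of the basic (unmodified) statistic, with function names of my choosing:

```python
import math

def pearson(u, v):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

def morgan_pitman_stat(x, y):
    """t statistic (df = n-2) for H0: Var(X) = Var(Y) with paired data,
    computed as the correlation test between X+Y and X-Y."""
    s = [a + b for a, b in zip(x, y)]
    d = [a - b for a, b in zip(x, y)]
    r = pearson(s, d)
    n = len(x)
    return r * math.sqrt((n - 2) / (1 - r * r))
```

McCulloch's and the other modifications discussed in the abstract replace or robustify this correlation step; the sketch shows only the baseline test they modify.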


1993 ◽  
Vol 76 (2) ◽  
pp. 407-412 ◽  
Author(s):  
Donald W. Zimmerman

This study investigated violations of random sampling and random assignment in data analyzed by nonparametric significance tests. A computer program induced correlations within groups, as well as between groups, and performed one-sample and two-sample versions of the Mann-Whitney-Wilcoxon test on the resulting scores. Nonindependence of observations within groups spuriously inflated the probability of Type I errors and depressed the probability of Type II errors, and nonindependence between groups had the reverse effect. This outcome, which parallels the influence of nonindependence on parametric tests, can be explained by the equivalence of the Mann-Whitney-Wilcoxon test and the Student t test performed on ranks replacing the initial scores.
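The mechanism reported above can be reproduced with a small Monte Carlo sketch: a shared within-group component violates independence under a true null and inflates the rejection rate of the rank-sum test. This is my own illustrative simulation (normal-approximation z test, parameters arbitrary), not Zimmerman's program:

```python
import math
import random

def ranksum_z(x, y):
    """Normal-approximation z statistic for the Mann-Whitney-Wilcoxon
    rank-sum test (no tie correction; suitable for continuous data)."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    for rank, (_, i) in enumerate(combined, start=1):
        ranks[i] = rank
    n1, n2 = len(x), len(y)
    N = n1 + n2
    w = sum(ranks[:n1])                      # rank sum of the first sample
    mean = n1 * (N + 1) / 2
    sd = math.sqrt(n1 * n2 * (N + 1) / 12)
    return (w - mean) / sd

def type1_rate(group_effect_sd, reps=2000, n=15, crit=1.96, seed=1):
    """Fraction of null datasets rejected at nominal 5%. A shared
    within-group random component (group_effect_sd > 0) correlates
    observations within each group while both group means stay 0."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        gx, gy = rng.gauss(0, group_effect_sd), rng.gauss(0, group_effect_sd)
        x = [gx + rng.gauss(0, 1) for _ in range(n)]
        y = [gy + rng.gauss(0, 1) for _ in range(n)]
        if abs(ranksum_z(x, y)) > crit:
            rejections += 1
    return rejections / reps
```

With `group_effect_sd = 0` the empirical rate sits near the nominal 5%; with a nonzero shared component it climbs far above it, matching the paper's finding for within-group nonindependence.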


2021 ◽  
Author(s):  
Antonia Vehlen ◽  
William Standard ◽  
Gregor Domes

Advances in eye tracking technology have enabled the development of interactive experimental setups to study social attention. Since these setups differ substantially from the eye tracker manufacturer’s test conditions, validation is essential with regard to data quality and other factors potentially threatening data validity. In this study, we evaluated the impact of data accuracy and areas of interest (AOIs) size on the classification of simulated gaze data. We defined AOIs of different sizes using the Limited-Radius Voronoi-Tessellation (LRVT) method, and simulated gaze data for facial target points with varying data accuracy. As hypothesized, we found that data accuracy and AOI size had strong effects on gaze classification. In addition, these effects were not independent and differed for falsely classified gaze inside AOIs (Type I errors) and falsely classified gaze outside the predefined AOIs (Type II errors). The results indicate that smaller AOIs generally minimize false classifications as long as data accuracy is good enough. For studies with lower data accuracy, Type II errors can still be compensated to some extent by using larger AOIs, but at the cost of an increased probability of Type I errors. Proper estimation of data accuracy is therefore essential for making informed decisions regarding the size of AOIs.
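The Limited-Radius Voronoi-Tessellation idea can be sketched as nearest-center assignment with a radius cut-off: a gaze sample belongs to the closest AOI center, but only if it lies within the limiting radius, otherwise it counts as outside all AOIs. This is my simplified reading of the method, with hypothetical AOI names and coordinates:

```python
import math

def classify_gaze(point, aoi_centers, radius):
    """Assign a gaze sample (x, y) to the nearest AOI center if it lies
    within `radius` of that center (LRVT-style limited Voronoi cell);
    return None when the sample falls outside every limited cell."""
    px, py = point
    name, (cx, cy) = min(
        aoi_centers.items(),
        key=lambda kv: math.hypot(px - kv[1][0], py - kv[1][1]),
    )
    return name if math.hypot(px - cx, py - cy) <= radius else None
```

Shrinking `radius` reduces false inclusions (Type I, in the paper's terms) but, with poor data accuracy, pushes genuinely on-target samples outside the cell (Type II), which is exactly the trade-off the study quantifies.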


1992 ◽  
Vol 75 (3) ◽  
pp. 1011-1020 ◽  
Author(s):  
Donald W. Zimmerman ◽  
Richard H. Williams ◽  
Bruno D. Zumbo

A computer-simulation study examined the one-sample Student t test under violation of the assumption of independent sample observations. The probability of Type I errors increased, and the probability of Type II errors decreased, spuriously elevating the entire power function. The magnitude of the change depended on the correlation between pairs of sample values as well as the number of sample values that were pairwise correlated. A modified t statistic, derived from an unbiased estimate of the population variance that assumed only exchangeable random variables instead of independent, identically distributed random variables, effectively corrected for nonindependence for all degrees of correlation and restored the probability of Type I and Type II errors to their usual values.
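The inflation mechanism can be made concrete for the special case of equicorrelated normal observations: with common pairwise correlation ρ, Var(X̄) = σ²(1 + (n−1)ρ)/n while E[s²] = σ²(1−ρ), so the usual t statistic is too large by the factor √((1+(n−1)ρ)/(1−ρ)). The sketch below rescales t accordingly; note this illustration takes ρ as known, which is not Zimmerman et al.'s exact modified statistic:

```python
import math
import random

def one_sample_t(xs, mu0=0.0):
    """Ordinary one-sample Student t statistic."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)
    return (m - mu0) / math.sqrt(s2 / n)

def corrected_t(xs, rho, mu0=0.0):
    """Rescale t for equicorrelated observations with known common
    correlation rho (illustration of the nonindependence correction)."""
    n = len(xs)
    return one_sample_t(xs, mu0) * math.sqrt((1 - rho) / (1 + (n - 1) * rho))

def rejection_rate(rho, reps=2000, n=10, crit=2.262, corrected=False, seed=7):
    """Monte Carlo Type I rate at nominal 5% (crit = t_{9, 0.975});
    equicorrelation is induced by a shared random component."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        shared = rng.gauss(0, math.sqrt(rho))
        xs = [shared + rng.gauss(0, math.sqrt(1 - rho)) for _ in range(n)]
        t = corrected_t(xs, rho) if corrected else one_sample_t(xs)
        if abs(t) > crit:
            hits += 1
    return hits / reps
```

Under this construction each observation has unit variance and pairwise correlation ρ; the uncorrected t rejects a true null far too often, while the rescaled statistic restores the rate to roughly its nominal value, mirroring the paper's result.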


2009 ◽  
Vol 84 (5) ◽  
pp. 1395-1428 ◽  
Author(s):  
Joseph V. Carcello ◽  
Ann Vanstraelen ◽  
Michael Willenborg

ABSTRACT: We study going-concern (GC) reporting in Belgium to examine the effects associated with a shift toward rules-based audit standards. Beginning in 2000, a major revision in Belgian GC audit standards took effect. Among its changes, auditors must ascertain whether their clients are in compliance with two “financial-juridical criteria” for board of directors' GC disclosures. In a study of a sample of private Belgian companies, we report two major findings. First, there is a decrease in auditor Type II errors, particularly by non-Big 6/5 auditors for their clients that fail both criteria. Second, there is an increase in Type I errors, again particularly for companies that fail both criteria. We also conduct an ex post analysis of the decrease in Type II errors and the increase in Type I errors. Our findings suggest the standard engenders both favorable and unfavorable effects, the net of which depends on the priorities assigned to the affected parties (creditors, auditors, companies, and employees).


2018 ◽  
Vol 18 (1) ◽  
pp. 29-52 ◽  
Author(s):  
Nathan R. Berglund ◽  
Donald R. Herrmann ◽  
Bradley P. Lawson

ABSTRACT: Current audit guidance directs the auditor to modify their opinion in the presence of significant doubt about their client's ability to continue as a going concern. This paper examines whether managerial ability influences the accuracy of auditors' going concern information signal. Following prior literature, we assess accuracy based on the subsequent viability of the client. We find that, while managerial ability decreases the risk of Type I errors (the auditor issues a going concern opinion for a firm that subsequently remains viable), managerial ability increases the risk of Type II errors (the auditor issues a standard unqualified report for a firm that subsequently files for bankruptcy). Considering prior research indicates that the auditor's opinion provides important information to the market, this finding has important public interest implications regarding the signaling of bankruptcy risk to investors and creditors by auditors' going concern opinion.


Author(s):  
Paul Zarowin

This article reviews recent research on the estimation of discretionary accruals and the detection of earnings management. There has been an explosive growth in research on accrual earnings management over the past twenty years, and almost all has used the Jones (1991) model or one of its close derivatives. Nevertheless, a growing literature has addressed the model’s problems and attempted to improve its estimation of discretionary accruals. The model’s incomplete characterization of how nondiscretionary accruals are determined by the firm’s operations can cause either Type I or Type II errors. This article categorizes recent articles into four groups based on their focus and solution, and while there is no panacea for the problems and no consensus on a new model or method, research offers hope that accrual earnings management is more likely to be detected when it exists and is less likely to be erroneously detected when it is absent (i.e., lower Type II and Type I errors, respectively).
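The Jones (1991) model at the center of this literature regresses scaled total accruals on the inverse of lagged assets, the scaled change in revenues, and scaled gross PPE, and takes the residuals as discretionary accruals. A minimal cross-sectional sketch (the function name and toy data shape are mine; real applications estimate by industry-year and often use the modified-Jones revenue adjustment):

```python
import numpy as np

def jones_discretionary_accruals(ta, lag_assets, d_rev, ppe):
    """Jones (1991) model sketch: regress TA_t / A_{t-1} on
    1/A_{t-1}, dREV_t / A_{t-1}, and PPE_t / A_{t-1} by OLS;
    the residuals are the estimated discretionary accruals."""
    y = ta / lag_assets
    X = np.column_stack([1.0 / lag_assets,
                         d_rev / lag_assets,
                         ppe / lag_assets])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef
```

The Type I/Type II error trade-off discussed above arises precisely because these residuals absorb any nondiscretionary accrual variation the three regressors fail to capture.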


2014 ◽  
Vol 29 (2) ◽  
pp. 161-170 ◽  
Author(s):  
Gabriel Constantino Blain

The Pre-Whitening (PW), the Trend-Free Pre-Whitening (TFPW) and the Modified Trend-Free Pre-Whitening (MTFPW) were developed to remove the influence of serial correlations on the Mann-Kendall trend test. The main purpose of this study was to compare the performance of these algorithms for evaluating trends in auto-correlated series. The PW, TFPW and MTFPW were also applied to the monthly values of the rainfall (Pre), minimum (Tmin) and maximum (Tmax) air temperature data obtained from the weather station of Ribeirão Preto, State of São Paulo, Brazil. Sets of Monte Carlo simulations were carried out to evaluate the occurrence of the type I and the type II errors obtained from these three algorithms. The TFPW has the highest power. However, it also presented the highest occurrence of type I errors. The PW clearly limits the influence of serial correlation on the occurrence of type I errors. Nevertheless, this feature is accomplished at the cost of a great reduction of its ability to detect trends. The MTFPW leads to a better balance between the probabilities of both statistical errors. It was also concluded that the hypothesis that no climate change is present at the location of Ribeirão Preto cannot be accepted.
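The classic PW step and the Mann-Kendall S statistic underlying all three algorithms can be sketched briefly. The sketch below shows only basic pre-whitening, y'_t = y_t − r₁·y_{t−1} with r₁ the lag-1 autocorrelation; TFPW and MTFPW additionally remove an estimated trend before whitening and restore it afterwards, which is omitted here:

```python
def lag1_autocorr(x):
    """Sample lag-1 autocorrelation coefficient r1."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def prewhiten(x):
    """Basic pre-whitening: remove the estimated lag-1 AR component
    before trend testing (output is one value shorter than the input)."""
    r1 = lag1_autocorr(x)
    return [x[t] - r1 * x[t - 1] for t in range(1, len(x))]

def mann_kendall_s(x):
    """Mann-Kendall S statistic: the sum over all pairs i < j of the
    sign of x[j] - x[i]; large |S| indicates a monotonic trend."""
    return sum((x[j] > x[i]) - (x[j] < x[i])
               for i in range(len(x)) for j in range(i + 1, len(x)))
```

As the abstract notes, whitening suppresses spurious trend detections (type I errors) caused by positive autocorrelation, but it also dampens genuine trends, which is why the detrending variants were proposed.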

