Forecasting the Risks of an Organization Operating Natural Gas Vehicles Using a Scoring Model of Logistic Regression in the Presence of Expert Restrictions

Author(s): Andrey Evstifeev

The paper proposes a method and describes a mathematical model for express analysis of how attractive operating vehicles running on natural gas is for a motor transport company. The proposed solution is based on a logistic regression scoring model of the kind banks use to assess the creditworthiness of a borrower. To improve the quality of the results, the model is extended with a set of expert restrictions formulated as rules. During the analysis, attributes were identified that require quantization, since individual intervals of their values turned out to be associated with risk in different ways. The mathematical model is implemented as software in a high-level programming language, its data are stored in a database management system, and it is integrated with an information system that supports management decisions when operating vehicles on natural gas. The developed mathematical model was tested on a training sample. The results showed satisfactory accuracy of the proposed model at the level of 77% without expert restrictions and 79% with them. At the same time, the share of Type II errors was 2.7% and of Type I errors 7.2%, which indicates that the model is rather conservative: a relatively high proportion of vehicles that meet the requirements were rejected.
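As a rough illustration of the kind of pipeline the abstract describes (not the author's implementation), the sketch below fits a logistic-regression risk score on a few invented fleet attributes, quantizes one attribute into intervals, and then layers hard expert rules on top of the score; every feature name, threshold, and data value is an assumption made for the example.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
fleet = pd.DataFrame({
    "annual_mileage_1000km": rng.normal(60, 15, n),        # hypothetical attribute
    "vehicle_age_years": rng.integers(0, 15, n),            # hypothetical attribute
    "distance_to_cng_station_km": rng.exponential(20, n),   # hypothetical attribute
})
# Hypothetical binary target: 1 = operating the NGV proved risky for the firm.
risk = rng.integers(0, 2, n)

# Quantize (bin) an attribute whose relationship with risk is non-monotonic,
# as the abstract reports for some attributes.
fleet["age_bin"] = pd.cut(fleet["vehicle_age_years"],
                          bins=[-1, 3, 8, 15], labels=False)

X = fleet[["annual_mileage_1000km", "age_bin", "distance_to_cng_station_km"]]
model = LogisticRegression(max_iter=1000).fit(X, risk)
score = model.predict_proba(X)[:, 1]                        # scoring-model risk

# Expert restrictions layered on top of the score as hard rules
# (purely illustrative thresholds).
rejected = (score > 0.5) | (fleet["distance_to_cng_station_km"] > 100)
print(f"rejected {rejected.mean():.1%} of vehicles")
```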

2015, Vol 1 (2), pp. 115
Author(s): Samih Antoine Azar, Marybel Nasr

This study examines the ability of financial ratios to predict the financial state of small and medium entities (SMEs) in Lebanon. This financial state can be either one of well-performing loans or one of non-performing loans. An empirical study is conducted using the financial statements of 222 SMEs in Lebanon for the years 2011 and 2012, of which 187 currently have well-performing loans and 35 have non-performing loans. Altman Z-scores are calculated, independent-samples t-tests are performed, and models are developed using binary logistic regression. Empirical evidence shows that the Altman Z-scores predict the solvent state of SMEs with well-performing loans well, but cannot accurately predict the bankruptcy state of SMEs with non-performing loans. The independent-samples t-tests revealed that five financial ratios differ statistically significantly between SMEs with well-performing loans and those with non-performing loans. Finally, a logistic regression model is developed for each year under study, with limited success. In all cases accuracy results are reported, showing the percentage of companies correctly classified as solvent or bankrupt, together with the two standard measures of error, Type I errors and Type II errors. Although high accuracy is achieved in correctly classifying non-distressed and distressed firms, the Type I errors are in general relatively large. By contrast, the Type II errors are in general relatively low.
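For reference, the classic Altman (1968) Z-score applied in the study combines five financial ratios with fixed weights; the sketch below computes it for made-up balance-sheet figures (private SMEs are sometimes assessed with Altman's private-firm variants, which use different coefficients).

```python
# Original five-ratio Z-score for public manufacturing firms; example numbers are invented.
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

z = altman_z(working_capital=150, retained_earnings=300, ebit=120,
             market_value_equity=400, sales=900,
             total_assets=1_000, total_liabilities=600)
# Conventional cut-offs: Z > 2.99 "safe", 1.81-2.99 "grey zone", Z < 1.81 "distress".
print(f"Z = {z:.2f}")
```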


2018, Vol 7 (10), pp. 409
Author(s): Youqiang Dong, Ximin Cui, Li Zhang, Haibin Ai

The progressive TIN (triangular irregular network) densification (PTD) filter algorithm is widely used for filtering point clouds. In the PTD algorithm, the iterative densification parameters become smaller over the course of filtering. This causes the algorithm to perform poorly, especially in terms of type I errors, on point clouds with high density and high standard variance. Hence, an improved PTD filtering algorithm for point clouds with high density and variance is proposed in this paper. The improved method divides the iterative densification process into two stages. In the first stage, the iterative densification process of the original PTD algorithm is used, and the two densification parameters become smaller. When the density of points belonging to the TIN exceeds a certain value (defined in this paper as the standard variance intervention density), the iterative densification process moves into the second stage. In the second stage, a new multi-scale iterative densification strategy is applied, and the angle threshold becomes larger. The experimental results show that the improved PTD algorithm reduces the type I errors and total errors of dense image matching (DIM) point clouds by 7.53% and 4.09%, respectively, compared with the original PTD algorithm. Although the type II errors increase slightly in the improved method, the wrongly added object points have little effect on the accuracy of the generated DSM. In short, the improved PTD method refines the classical PTD method and offers a better solution for filtering point clouds with high density and standard variance.
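The acceptance test at the heart of PTD checks a candidate point's distance to the TIN facet beneath it and its angles to that facet's vertices against the current iteration thresholds. The fragment below is a simplified, self-contained sketch of that test only: seed-point selection, the iterative threshold schedule, and the paper's two-stage multi-scale strategy are omitted, and the angle is computed simply as the angle between the facet plane and the line from the point to each vertex.

```python
import numpy as np
from scipy.spatial import Delaunay

def accepts(point, tri_xyz, d_max, angle_max_deg):
    """point: (3,) candidate; tri_xyz: (3, 3) facet vertices."""
    a, b, c = tri_xyz
    normal = np.cross(b - a, c - a)
    normal = normal / np.linalg.norm(normal)
    dist = abs(np.dot(point - a, normal))           # distance to facet plane
    # angle between the facet plane and the line point -> vertex
    angles = [np.degrees(np.arcsin(min(1.0, dist / np.linalg.norm(point - v))))
              for v in tri_xyz]
    return dist <= d_max and max(angles) <= angle_max_deg

# Toy seed ground points and one candidate (all coordinates invented).
ground_seeds = np.array([[0, 0, 0.0], [10, 0, 0.2], [0, 10, 0.1], [10, 10, 0.3]])
tin = Delaunay(ground_seeds[:, :2])                 # 2-D triangulation of the seeds

candidate = np.array([4.0, 4.0, 0.4])
simplex = tin.find_simplex(candidate[:2])
facet = ground_seeds[tin.simplices[simplex]]
print(accepts(candidate, facet, d_max=0.5, angle_max_deg=8.0))
```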


Risks, 2021, Vol 9 (11), pp. 200
Author(s): Youssef Zizi, Amine Jamali-Alaoui, Badreddine El Goumi, Mohamed Oudgou, Abdeslam El Moudden

In the face of rising defaults and limited studies on the prediction of financial distress in Morocco, this article aims to determine the most relevant predictors of financial distress and to identify the optimal prediction models in a normal Moroccan economic context over a two-year horizon. To achieve these objectives, logistic regression and neural networks are used, based on financial ratios selected by lasso and stepwise techniques. Our empirical results highlight the significant role of two predictors, interest to sales and return on assets, in predicting financial distress. The results show that the logistic regression models obtained by stepwise selection outperform the other models, with an overall accuracy of 93.33% two years before financial distress and 95.00% one year prior to financial distress. The results also show that our models classify distressed SMEs better than healthy SMEs, with Type I errors lower than Type II errors.
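A minimal sketch of the two-step modelling idea, lasso-style screening followed by a plain logistic fit, is shown below on simulated data; the variable set, penalty strength, and labels are placeholders rather than the paper's ratios or sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n, p = 300, 10
X = rng.normal(size=(n, p))                        # stand-ins for financial ratios
y = (X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)  # "distress" label

Xs = StandardScaler().fit_transform(X)
# L1-penalized logistic regression screens the ratios (lasso-style selection).
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xs, y)
selected = np.flatnonzero(lasso.coef_[0] != 0)

# Unpenalized logistic model refit on the surviving predictors.
final = LogisticRegression().fit(Xs[:, selected], y)
print("selected ratio indices:", selected)
print("in-sample accuracy:", final.score(Xs[:, selected], y))
```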


2019, Vol 8 (4), pp. 1849-1853

Nowadays many people apply for bank loans to meet their needs, but banks cannot lend to everyone, so they use various measures to identify eligible customers. Sensitivity and specificity are widely used to assess the performance of classifiers of categorical variables in medicine and, to a lesser extent, in econometrics. Even with such measures, lending to the wrong customers, who may be unable to repay, and refusing customers who could repay lead to Type I and Type II errors. To minimize these errors, this study explains, first, how to judge whether sensitivity is large or small and, second, how to benchmark the forecasting model using fuzzy analysis based on fuzzy weights, which is then compared with the sensitivity analysis.
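For readers less familiar with the terminology, the snippet below computes sensitivity and specificity from an invented loan confusion matrix; note that which misclassification is called "Type I" versus "Type II" depends on how the null hypothesis about the customer is framed.

```python
def sens_spec(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    return sensitivity, specificity

# Invented counts: 180 good loans approved, 20 good loans rejected,
#                  12 bad loans approved, 38 bad loans rejected.
sens, spec = sens_spec(tp=180, fn=20, fp=12, tn=38)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
print(f"false negative rate = {1 - sens:.2f}, false positive rate = {1 - spec:.2f}")
```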


2019, Vol 100 (10), pp. 1987-2007
Author(s): Thomas Knutson, Suzana J. Camargo, Johnny C. L. Chan, Kerry Emanuel, Chang-Hoi Ho, ...

An assessment was made of whether detectable changes in tropical cyclone (TC) activity are identifiable in observations and whether any changes can be attributed to anthropogenic climate change. Overall, historical data suggest detectable TC activity changes in some regions associated with TC track changes, while data quality and quantity issues create greater challenges for analyses based on TC intensity and frequency. A number of specific published conclusions (case studies) about possible detectable anthropogenic influence on TCs were assessed using the conventional approach of preferentially avoiding type I errors (i.e., overstating anthropogenic influence or detection). We conclude there is at least low to medium confidence that the observed poleward migration of the latitude of maximum intensity in the western North Pacific is detectable, or highly unusual compared to expected natural variability. Opinion among the author team was divided on whether any observed TC changes demonstrate discernible anthropogenic influence, or whether any other observed changes represent detectable changes. The issue was then reframed by assessing evidence for detectable anthropogenic influence while seeking to reduce the chance of type II errors (i.e., missing or understating anthropogenic influence or detection). For this purpose, we used a much weaker “balance of evidence” criterion for assessment. This leads to a number of more speculative TC detection and/or attribution statements, which we recognize have substantial potential for being false alarms (i.e., overstating anthropogenic influence or detection) but which may be useful for risk assessment. Several examples of these alternative statements, derived using this approach, are presented in the report.


1990, Vol 15 (3), pp. 237-247
Author(s): Rand R. Wilcox

Let X and Y be dependent random variables with variances σ²X and σ²Y. Recently, McCulloch (1987) suggested a modification of the Morgan-Pitman test of H0: σ²X = σ²Y. But, as this paper describes, there are situations where McCulloch’s procedure is not robust. A subsample approach, similar to the Box-Scheffé test, is also considered and found to give conservative results, in terms of Type I errors, for all situations considered, but it yields relatively low power. New results on the Sandvik-Olsson procedure are also described, but that procedure is found to be nonrobust in situations not previously considered, and its power can be low relative to the two other techniques considered here. A modification of the Morgan-Pitman test based on the modified maximum likelihood estimate of a correlation is also considered. This last procedure appears to be robust in situations where the Sandvik-Olsson (1982) and McCulloch procedures are robust, and it can have more power than the Sandvik-Olsson procedure. But it too gives unsatisfactory results in certain situations. Thus, in terms of power, McCulloch’s procedure is found to be best, with the advantage of being simple to use. But it is concluded that, in terms of controlling both Type I and Type II errors, a satisfactory solution does not yet exist.
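The Morgan-Pitman test that these procedures modify rests on the identity cov(X+Y, X−Y) = var(X) − var(Y) for paired variables, so testing equality of variances reduces to testing zero correlation between the sum and the difference. The sketch below runs that basic, unmodified normal-theory version on simulated data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 60
x = rng.normal(0, 1.0, n)
y = 0.6 * x + rng.normal(0, 1.3, n)       # dependent on x, with a larger variance

# Zero correlation between (x + y) and (x - y) is equivalent to var(x) = var(y).
r, p = pearsonr(x + y, x - y)
print(f"r = {r:.3f}, p = {p:.4f}")        # a small p suggests unequal variances
```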


1993, Vol 76 (2), pp. 407-412
Author(s): Donald W. Zimmerman

This study investigated violations of random sampling and random assignment in data analyzed by nonparametric significance tests. A computer program induced correlations within groups, as well as between groups, and performed one-sample and two-sample versions of the Mann-Whitney-Wilcoxon test on the resulting scores. Nonindependence of observations within groups spuriously inflated the probability of Type I errors and depressed the probability of Type II errors, and nonindependence between groups had the reverse effect. This outcome, which parallels the influence of nonindependence on parametric tests, can be explained by the equivalence of the Mann-Whitney-Wilcoxon test and the Student t test performed on ranks replacing the initial scores.
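A small simulation in the spirit of the study (with arbitrary settings) illustrates the within-group effect: equicorrelated observations inside each group, with no true group difference, push the Mann-Whitney-Wilcoxon rejection rate well above the nominal 5% level.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
n, rho, reps, alpha = 15, 0.3, 2000, 0.05
cov = (1 - rho) * np.eye(n) + rho * np.ones((n, n))   # equicorrelation within a group

rejections = 0
for _ in range(reps):
    g1 = rng.multivariate_normal(np.zeros(n), cov)    # null hypothesis is true:
    g2 = rng.multivariate_normal(np.zeros(n), cov)    # both groups have mean 0
    _, p = mannwhitneyu(g1, g2, alternative="two-sided")
    rejections += p < alpha

print(f"empirical Type I error rate: {rejections / reps:.3f}")  # typically well above 0.05
```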


2021
Author(s): Antonia Vehlen, William Standard, Gregor Domes

Advances in eye tracking technology have enabled the development of interactive experimental setups to study social attention. Since these setups differ substantially from the eye tracker manufacturer’s test conditions, validation is essential with regard to data quality and other factors potentially threatening data validity. In this study, we evaluated the impact of data accuracy and areas of interest (AOIs) size on the classification of simulated gaze data. We defined AOIs of different sizes using the Limited-Radius Voronoi-Tessellation (LRVT) method, and simulated gaze data for facial target points with varying data accuracy. As hypothesized, we found that data accuracy and AOI size had strong effects on gaze classification. In addition, these effects were not independent and differed for falsely classified gaze inside AOIs (Type I errors) and falsely classified gaze outside the predefined AOIs (Type II errors). The results indicate that smaller AOIs generally minimize false classifications as long as data accuracy is good enough. For studies with lower data accuracy, Type II errors can still be compensated to some extent by using larger AOIs, but at the cost of an increased probability of Type I errors. Proper estimation of data accuracy is therefore essential for making informed decisions regarding the size of AOIs.
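The sketch below mimics the logic of the described simulation in simplified form: gaze samples are scattered around target points with Gaussian noise standing in for data accuracy, and each sample is assigned to the nearest AOI centre within a radius cap, a crude stand-in for the limited-radius Voronoi tessellation. All coordinates, radii, and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
aoi_centres = np.array([[0.0, 0.0], [0.0, 3.0], [-1.5, 1.5], [1.5, 1.5]])  # invented facial AOIs
radius = 1.0                      # AOI size (same units as the accuracy offset)
accuracy_sd = 0.7                 # simulated data accuracy

def classify(points):
    d = np.linalg.norm(points[:, None, :] - aoi_centres[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    inside = d[np.arange(len(points)), nearest] <= radius
    return np.where(inside, nearest, -1)          # -1 = outside all AOIs

# Gaze aimed at AOI 0: samples classified as -1 or another AOI are Type II-style errors.
on_target = aoi_centres[0] + rng.normal(0, accuracy_sd, size=(5000, 2))
print("missed despite looking at AOI 0:", np.mean(classify(on_target) != 0))

# Gaze aimed at a non-AOI location: samples captured by any AOI are Type I-style errors.
off_target = np.array([0.0, -2.5]) + rng.normal(0, accuracy_sd, size=(5000, 2))
print("falsely captured by an AOI:", np.mean(classify(off_target) != -1))
```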


2017
Author(s): Torrin Liddell, John K. Kruschke

We surveyed all articles in the Journal of Personality and Social Psychology (JPSP), Psychological Science (PS), and the Journal of Experimental Psychology: General (JEP:G) that mentioned the term "Likert," and found that 100% of the articles that analyzed ordinal data did so using a metric model. We present novel evidence that analyzing ordinal data as if they were metric can systematically lead to errors. We demonstrate false alarms (i.e., detecting an effect where none exists, Type I errors) and failures to detect effects (i.e., loss of power, Type II errors). We demonstrate systematic inversions of effects, for which treating ordinal data as metric indicates the opposite ordering of means than the true ordering of means. We show the same problems (false alarms, misses, and inversions) for interactions in factorial designs and for trend analyses in regression. We demonstrate that averaging across multiple ordinal measurements does not solve or even ameliorate these problems. We provide simple graphical explanations of why these mistakes occur. Moreover, we point out that there is no sure-fire way to detect these problems by treating the ordinal values as metric, and instead we advocate use of ordered-probit models (or similar) because they will better describe the data. Finally, although frequentist approaches to some ordered-probit models are available, we use Bayesian methods because of their flexibility in specifying models and their richness and accuracy in providing parameter estimates.
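A toy version of the latent-variable argument behind the false-alarm result: two groups share the same latent mean but differ in latent spread, and discretizing through unevenly placed thresholds pulls their ordinal means apart, so a metric t test on the 1-5 codes rejects far more often than 5%. The thresholds, sample sizes, and spreads below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
thresholds = [0.5, 1.0, 1.5, 2.0]            # asymmetric relative to the latent mean of 0

def to_ordinal(latent):
    return np.digitize(latent, thresholds) + 1   # ordinal categories 1..5

false_alarms = 0
reps = 2000
for _ in range(reps):
    a = to_ordinal(rng.normal(0, 1.0, 80))   # same latent mean...
    b = to_ordinal(rng.normal(0, 2.0, 80))   # ...different latent SD
    _, p = ttest_ind(a, b)                   # metric analysis of the ordinal codes
    false_alarms += p < 0.05

print(f"false alarm rate treating ordinal as metric: {false_alarms / reps:.2f}")
```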


1992, Vol 75 (3), pp. 1011-1020
Author(s): Donald W. Zimmerman, Richard H. Williams, Bruno D. Zumbo

A computer-simulation study examined the one-sample Student t test under violation of the assumption of independent sample observations. The probability of Type I errors increased, and the probability of Type II errors decreased, spuriously elevating the entire power function. The magnitude of the change depended on the correlation between pairs of sample values as well as the number of sample values that were pairwise correlated. A modified t statistic, derived from an unbiased estimate of the population variance that assumed only exchangeable random variables instead of independent, identically distributed random variables, effectively corrected for nonindependence for all degrees of correlation and restored the probability of Type I and Type II errors to their usual values.
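A companion simulation (settings arbitrary) shows the mechanism: with equicorrelated, exchangeable observations the variance of the sample mean is σ²[1 + (n−1)ρ]/n rather than σ²/n, so the ordinary one-sample t test rejects a true null far more often than the nominal 5%.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(6)
n, rho, reps = 20, 0.2, 5000
cov = (1 - rho) * np.eye(n) + rho * np.ones((n, n))      # equicorrelated observations

rejections = 0
for _ in range(reps):
    sample = rng.multivariate_normal(np.zeros(n), cov)   # true mean is 0
    _, p = ttest_1samp(sample, popmean=0.0)
    rejections += p < 0.05

print(f"empirical Type I error rate: {rejections / reps:.3f}")   # well above 0.05
```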

