Use of Data Reduction Process to Bankruptcy Prediction
2018 ◽  
pp. 956-979

Author(s):  
Morteza Shafiee Sardasht ◽  
Saeed Saheb

Predicting corporate bankruptcy has long been an important and challenging research topic in accounting and finance. In bankruptcy prediction, researchers typically confront a large set of observations and variables, often a vast number of financial ratios. By reducing the number of variables and selecting the relevant data from a given dataset, the data reduction process can improve bankruptcy prediction. This study addresses four well-known data reduction methods, the t-test, correlation analysis, principal component analysis (PCA), and factor analysis (FA), and evaluates them for bankruptcy prediction on the Tehran Stock Exchange (TSE). To this end, starting from 35 financial ratios, the results of each data reduction method were separately used to train a Support Vector Machine (SVM) as a powerful prediction model. According to the empirical results, the t-test leads to the highest prediction rate, with 97.1% accuracy, followed by PCA with 95.1%.
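
As a hedged illustration of this pipeline, the sketch below applies two of the four reduction methods (a per-ratio t-test and PCA) to a synthetic 35-ratio matrix and trains an SVM on each reduced set with scikit-learn; the data, the number of retained ratios, and the kernel settings are placeholders, not the authors' TSE sample or tuning.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 35))        # 35 financial ratios per firm (synthetic)
y = rng.integers(0, 2, size=200)      # 1 = bankrupt, 0 = healthy (synthetic)

# t-test reduction: keep the 10 ratios whose class means differ most significantly
_, p_values = ttest_ind(X[y == 1], X[y == 0], axis=0)
X_ttest = X[:, np.argsort(p_values)[:10]]

# PCA reduction: keep enough components to explain 90% of the variance
ttest_svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
pca_svm = make_pipeline(StandardScaler(), PCA(n_components=0.90), SVC(kernel="rbf"))

print("t-test + SVM accuracy:", cross_val_score(ttest_svm, X_ttest, y, cv=5).mean())
print("PCA + SVM accuracy:   ", cross_val_score(pca_svm, X, y, cv=5).mean())
```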


Author(s):  
G. A. Rekha Pai ◽  
G. A. Vijayalakshmi Pai

Industrial bankruptcy is a rampant problem that does not occur overnight; when it occurs it can cause acute financial embarrassment to governments and financial institutions and threaten the very viability of the affected firms. It is therefore essential to help industries identify impending trouble early. Several statistical and soft-computing-based bankruptcy prediction models that use financial ratios as indicators have been proposed. The majority of these models rely on a selective set of financial ratios chosen according to criteria framed by the individual investigators. In contrast, this study considers any number of financial ratios, irrespective of industrial category and size, and uses Principal Component Analysis to extract their principal components for use as predictors, thereby dispensing with the cumbersome selection procedures of its predecessors. An Evolutionary Neural Network (ENN) and a Backpropagation Neural Network with the Levenberg-Marquardt training rule (BPN) have been employed as classifiers, and their performance has been compared using Receiver Operating Characteristic (ROC) analysis. Termed the PCA-ENN and PCA-BPN models, the predictive potential of the two models has been analyzed over a financial database (1997-2000) pertaining to 34 sick and 38 non-sick Indian manufacturing companies, with 21 financial ratios as predictor variables.
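
As a rough sketch of the PCA-plus-classifier idea, the code below feeds principal components of a synthetic 21-ratio matrix to a feed-forward network and scores it by ROC AUC; scikit-learn's MLPClassifier stands in for the paper's BPN, the evolutionary network (ENN) is not reproduced, and the data are placeholders rather than the 1997-2000 database.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(72, 21))           # 21 financial ratios for 72 firms (synthetic)
y = np.r_[np.ones(34), np.zeros(38)]    # 34 "sick", 38 "non-sick" labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),             # principal components as predictors
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=1),
)
model.fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print("ROC AUC on held-out firms:", roc_auc_score(y_te, scores))
```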


1984 ◽  
Vol 78 ◽  
pp. 197-202
Author(s):  
J-Y. Le Gall ◽  
M. Saisse

This paper presents the HIPPARCOS satellite payload, which mainly consists of a Schmidt telescope, and explains a possible way to approximate the elliptic deformation profile of the Schmidt mirror. The signal expected from the optical chain is then briefly described, and a residual chromatic effect that may introduce errors into the measurements is exhibited. To conclude, numerical values of this effect are given, showing that it must be taken into account in the data reduction process.


2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Luis López-Martín

The reduction of integral field spectroscopy (IFS) data requires several stages and many repetitive operations to convert raw data into, typically, a large number of spectra. Although several semiautomatic data reduction tools are available, here we present the data reduction process using some of the Image Reduction and Analysis Facility (IRAF) tasks devoted to reducing spectroscopic data. After explaining the whole process, we illustrate the power of this instrumental technique with some results obtained for the object HH202 in the Orion Nebula (Mesa-Delgado et al., 2009).
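
IRAF is driven through its own task interface, so the following is only a rough sketch, using astropy and numpy, of the kind of repetitive per-frame operations such tools automate (bias subtraction and flat-field division applied before spectra are extracted); the file names are hypothetical and this is not the IRAF-based pipeline used in the paper.

```python
import numpy as np
from astropy.io import fits

# Build master calibration frames from (hypothetical) bias and flat exposures.
bias_frames = [fits.getdata(f"bias_{i:02d}.fits") for i in range(5)]
flat_frames = [fits.getdata(f"flat_{i:02d}.fits") for i in range(5)]

master_bias = np.median(bias_frames, axis=0)
master_flat = np.median(flat_frames, axis=0) - master_bias
master_flat /= np.mean(master_flat)            # normalise the flat field

# Apply the calibrations to a raw science frame.
raw = fits.getdata("object_raw.fits")
reduced = (raw - master_bias) / master_flat
fits.writeto("object_reduced.fits", reduced, overwrite=True)
```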


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Zahra Moghaddasi ◽  
Hamid A. Jalab ◽  
Rafidah Md Noor ◽  
Saeed Aghabozorgi

Digital image forgery is becoming easier to perform because of the rapid development of various manipulation tools, and image splicing is one of the most prevalent techniques. Digital images have thus lost their trustworthiness, and researchers have exerted considerable effort to regain it, focusing mostly on detection algorithms. However, most of the proposed algorithms cannot handle the high dimensionality and redundancy of the extracted features, and they are limited by high computational time. This study focuses on improving one image splicing detection algorithm, the run length run number (RLRN) algorithm, by applying two dimension reduction methods, namely principal component analysis (PCA) and kernel PCA. A support vector machine is used to distinguish between authentic and spliced images. Results show that kernel PCA, a nonlinear dimension reduction method, has the best effect on the R, G, B, and Y channels and on gray-scale images.
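
As a hedged sketch of this dimension-reduction step, the code below compresses synthetic run-length-style feature vectors with kernel PCA before an SVM separates spliced from authentic images; the feature matrix, component count, and kernel choices are placeholders rather than the paper's RLRN output.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 60))        # run-length-style feature vectors (synthetic)
y = rng.integers(0, 2, size=300)      # 1 = spliced, 0 = authentic (synthetic)

model = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=15, kernel="rbf"),   # nonlinear dimension reduction
    SVC(kernel="rbf"),
)
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```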


1988 ◽  
Vol 133 ◽  
pp. 265-268 ◽  
Author(s):  
Daniel Egret ◽  
Erik Høg

The real-time attitude of the HIPPARCOS satellite (see Turon, this symposium) is monitored by a star mapper viewing a 40 arcmin wide band of the sky, ahead of the main field of view, through a system of four vertical and four inclined slits (fig. 1). The resulting on-board precision will be 1 arcsec rms, and on-ground attitude reconstitution will later improve this value to 0.1 arcsec rms. The TYCHO project (approved by ESA in 1981) consisted of introducing dichroic beam splitters and a pair of redundant photomultipliers (with a bandpass close to Johnson B and V) into the science payload. This configuration will give a complete survey of the sky down to 11th magnitude in B during the two and a half years of the mission. The data reduction is prepared by a scientific consortium (TDAC), which is described in more detail in Section 4. The work related to the TYCHO Input Catalogue is described in Section 2, an overview of the data reduction process is given in Section 3, and aspects of the TYCHO outputs are discussed in Section 5.

