On the fit and forecasting performance of grey prediction models for projecting educational attainment

Kybernetes ◽  
2016 ◽  
Vol 45 (9) ◽  
pp. 1387-1405 ◽  
Author(s):  
Hui-Wen Vivian Tang ◽  
Tzu-chin Rojoice Chou

Purpose The purpose of this paper is to evaluate the forecasting performance of grey prediction models for educational attainment vis-à-vis that of the exponential smoothing combined with multiple linear regression employed by the National Center for Education Statistics (NCES). Design/methodology/approach An out-of-sample forecasting experiment was carried out to compare the forecasting performance on educational attainment of GM(1,1), GM(1,1) rolling and FGM(1,1), derived from grey system theory, with that of exponential smoothing prediction combined with multivariate regression. The predictive power of each model was measured by MAD, MAPE, RMSE and a simple F-test of equal variance. Findings The forecasting efficiency evaluated by MAD, MAPE, RMSE and the simple F-test of equal variance revealed that the GM(1,1) rolling model displays promise for use in forecasting educational attainment. Research limitations/implications Since the possible inadequacy of MAD, MAPE, RMSE and the F-type test of equal variance has been documented in the literature, further large-scale forecasting comparison studies could test the predictive power of grey prediction and its competing out-of-sample forecasts with other, alternative measures of accuracy. Practical implications The findings of this study would be useful for the NCES and professional forecasters who are expected to provide government authorities and education policy makers with accurate information for planning future policy directions and optimizing decision-making. Originality/value As a continuing effort to evaluate the forecasting efficiency of grey prediction models, the present study provides accumulated evidence for the predictive power of grey prediction in short-term forecasts of educational statistics.
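The GM(1,1) model at the core of the comparison above, together with the MAD/MAPE/RMSE accuracy measures, can be sketched in a few lines. This is a minimal illustration on a synthetic near-exponential series (the NCES data used in the paper are not reproduced here):

```python
# Minimal GM(1,1) sketch with the MAD/MAPE/RMSE accuracy measures used in
# the paper. The data series below is synthetic, for illustration only.
import math

def gm11_forecast(x0, horizon):
    """Fit GM(1,1) to x0; return fitted + forecast values (len(x0)+horizon)."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                # 1-AGO series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    # Least-squares solution of x0[k] = -a*z1[k] + b (2x2 normal equations)
    m = n - 1
    sz, szz = sum(z1), sum(z * z for z in z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # Time-response function, then inverse AGO to restore the original scale
    x1_hat = [(x0[0] - b / a) * math.exp(-a * k) + b / a for k in range(n + horizon)]
    return [x0[0]] + [x1_hat[k] - x1_hat[k - 1] for k in range(1, n + horizon)]

def mad(actual, pred):
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

# Short, smooth, near-exponential series: the setting where GM(1,1) does well
series = [100 * 1.05 ** k for k in range(6)]
fit = gm11_forecast(series, horizon=1)
```

A rolling GM(1,1), the paper's best performer, simply refits this model on a sliding window, dropping the oldest observation as each new one arrives.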

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yi-Chung Hu ◽  
Peng Jiang ◽  
Hang Jiang ◽  
Jung-Fa Tsai

Purpose In the face of complex and challenging economic and business environments, developing and implementing approaches to predict bankruptcy has become important for firms. Bankruptcy prediction can be regarded as a grey system problem: factors such as a firm's liquidity, solvency and profitability influence whether it goes bankrupt, but the precise manner in which these factors discriminate between failed and non-failed firms is uncertain. In view of the applicability of multivariate grey prediction models (MGPMs), this paper aims to develop a grey bankruptcy prediction model (GBPM) based on the GM(1, N) (BP-GM(1, N)). Design/methodology/approach As the traditional GM(1, N) is designed for time series forecasting, an appropriate permutation of the firms in the financial data must be found so that the resulting sequences can be treated as time series. To solve this challenging problem, this paper proposes GBPMs that integrate genetic algorithms (GAs) into the GM(1, N). Findings Experimental results on the financial data of Taiwanese firms in the information technology industries demonstrate that the proposed BP-GM(1, N) performs well. Practical implications Among artificial intelligence (AI)-based techniques, GBPMs are capable of explaining, through the driving coefficients, which of the financial ratios has a stronger impact on bankruptcy prediction. Originality/value Applying MGPMs to a problem with no relation to time series is challenging. This paper focuses on bankruptcy prediction, a crucial issue in financial decision-making for businesses, and proposes several GBPMs.
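The idea of ordering firms so that a multivariate grey model can treat cross-sectional data as a pseudo-time-series can be sketched as follows. This is a much-simplified illustration with synthetic data: it uses N=2 (one driving series), scores each permutation by the in-sample error of the grey differential equation rather than the full time-response function, and replaces the paper's genetic algorithm with brute-force enumeration, which is feasible only for a handful of firms:

```python
# Hedged sketch: ordering firms for a GM(1,2) model. Synthetic data; the
# exhaustive permutation search stands in for the paper's genetic algorithm.
import itertools

def gm12_fit_error(y0, u0):
    """Mean absolute in-sample error of the GM(1,2) grey differential equation
    x0_1(k) = -a*z1_1(k) + b*x1_2(k), estimated by least squares."""
    n = len(y0)
    y1 = [sum(y0[:i + 1]) for i in range(n)]              # 1-AGO, dependent series
    u1 = [sum(u0[:i + 1]) for i in range(n)]              # 1-AGO, driving series
    z = [0.5 * (y1[k] + y1[k - 1]) for k in range(1, n)]  # background values
    v, y = u1[1:], y0[1:]
    suu = sum(zi * zi for zi in z)
    svv = sum(vi * vi for vi in v)
    suv = sum(zi * vi for zi, vi in zip(z, v))
    suy = sum(zi * yi for zi, yi in zip(z, y))
    svy = sum(vi * yi for vi, yi in zip(v, y))
    det = suu * svv - suv * suv
    if abs(det) < 1e-12:                                  # near-singular ordering
        return float("inf")
    a = (suv * svy - svv * suy) / det
    b = (suu * svy - suv * suy) / det
    fitted = [-a * zi + b * vi for zi, vi in zip(z, v)]
    return sum(abs(fi - yi) for fi, yi in zip(fitted, y)) / len(y)

# One financial ratio (driving) and a distress score (dependent) per firm;
# both columns are hypothetical numbers, not the paper's Taiwanese data.
firms = [(0.9, 1.2), (0.7, 1.0), (0.5, 0.9), (0.4, 0.6), (0.2, 0.3)]
best_perm, best_err = None, float("inf")
for perm in itertools.permutations(range(len(firms))):    # GA stand-in
    y0 = [firms[i][1] for i in perm]
    u0 = [firms[i][0] for i in perm]
    err = gm12_fit_error(y0, u0)
    if err < best_err:
        best_perm, best_err = perm, err
```

A real GA would search this permutation space stochastically instead of enumerating it, which is what makes the approach scale beyond toy problems.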


2015 ◽  
Vol 5 (2) ◽  
pp. 178-193 ◽  
Author(s):  
R.M. Kapila Tharanga Rathnayaka ◽  
D.M.K.N Seneviratna ◽  
Wei Jianguo

Purpose – Decision making in finance has been regarded as one of the biggest challenges in the modern economy, especially analysing and forecasting unstable data patterns with limited sample observations under numerous economic policies and reforms. The purpose of this paper is to propose a suitable grey-based forecasting approach for short-term predictions. Design/methodology/approach – Highly volatile fluctuations with unstable patterns are a common phenomenon in the Colombo Stock Exchange (CSE), Sri Lanka. Very few studies in the literature have focused on short-term forecasting for the CSE, so the current study attempts to identify the trends and a suitable forecasting model for predicting the future behaviour of the CSE during the period from October 2014 to March 2015. Because of the non-stationary behavioural patterns over the period, the grey operational models GM(1,1), GM(2,1), grey Verhulst and the non-linear grey Bernoulli model were used for comparison. Findings – The results disclose that grey prediction models generate smaller forecasting errors than traditional time series approaches when forecasting from limited data. Practical implications – The authors believe that improved grey hybrid methodologies could serve better in real-world modelling. Originality/value – For large-sample forecasting under normality assumptions, however, traditional time series methodologies remain more suitable than grey methodologies; in particular, GM(1,1) gives dramatically poorer results than the autoregressive integrated moving average (ARIMA) model in both the pre- and post-sample stages.


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Zhiming Hu ◽  
Chong Liu

Grey prediction models have been widely used in many fields due to their high prediction accuracy. Most existing grey models, however, are built for equidistant sequences, and research on nonequidistant sequences remains limited; the development of nonequidistant grey prediction models has been slow because of their complex modeling mechanism. To further expand grey system theory, a new nonequidistant grey prediction model, NEGM(1, 1, t2), is established in this paper. To further improve its prediction accuracy, the background values are optimized based on the Simpson formula, yielding an improved nonequidistant grey model abbreviated INEGM(1, 1, t2). To verify the validity of the proposed model, it is applied to two real-world cases in comparison with three benchmark models, and the modeling results are evaluated through several commonly used indicators. The results of both cases show that the INEGM(1, 1, t2) model has the best prediction performance among these competing models.
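The gain from a Simpson-based background value can be illustrated on a single integration step. In grey models, the background value approximates the integral of the accumulated series over one (possibly unequal) interval; classical models use the trapezoidal rule, while the abstract describes INEGM(1, 1, t2) as using the Simpson formula. A minimal sketch on a known exponential integrand, not the paper's actual model:

```python
# Why a Simpson-based background value can beat the trapezoidal one:
# compare both rules on one integration step of a known function.
import math

def trapezoid(f, a, b):
    # Classical grey background value: average of the endpoints
    return (b - a) * (f(a) + f(b)) / 2

def simpson(f, a, b):
    # Simpson's rule adds the midpoint for a higher-order estimate
    return (b - a) * (f(a) + 4 * f((a + b) / 2) + f(b)) / 6

f = math.exp
a, b = 1.0, 2.3                      # a non-unit, "nonequidistant-style" step
exact = math.exp(b) - math.exp(a)    # known antiderivative of exp
err_trap = abs(trapezoid(f, a, b) - exact)
err_simp = abs(simpson(f, a, b) - exact)
```

Since grey sequences grow roughly exponentially after accumulation, the trapezoidal rule systematically overshoots on convex data, which is exactly the error the optimized background value reduces.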


2018 ◽  
Author(s):  
A.G. Allegrini ◽  
S. Selzam ◽  
K. Rimfeld ◽  
S. von Stumm ◽  
J.B. Pingault ◽  
...  

Abstract Recent advances in genomics are producing powerful DNA predictors of complex traits, especially cognitive abilities. Here, we leveraged summary statistics from the most recent genome-wide association studies of intelligence and educational attainment to build prediction models of general cognitive ability and educational achievement. To this end, we compared the performance of multi-trait genomic and polygenic scoring methods. In a representative UK sample of 7,026 children at ages 12 and 16, we show that we can now predict up to 11 percent of the variance in intelligence and 16 percent in educational achievement. We also show that predictive power increases from age 12 to age 16 and that genomic predictions do not differ for girls and boys. Multivariate genomic methods were effective in boosting predictive power, and even though prediction accuracy varied across polygenic score approaches, results were similar using different multivariate and polygenic score methods. Polygenic scores for educational attainment and intelligence are the most powerful predictors in the behavioural sciences and exceed predictions that can be made from parental phenotypes such as educational attainment and occupational status.


2015 ◽  
Vol 5 (1) ◽  
pp. 41-53 ◽  
Author(s):  
Tianxiang Yao ◽  
Wenrong Cheng

Purpose – The purpose of this paper is to find a high-precision method for forecasting the energy consumption of China's manufacturing industry. The authors hope the predicted data can serve as a reference for the government's energy strategy and the sustained growth of China's economy. Design/methodology/approach – First, the authors use a regression prediction model and the grey system theory GM(1,1) model to construct single models based on data for 2001-2010, analyze the advantages and disadvantages of the single prediction models, and use the data for 2011 and 2012 to test them. Second, the authors propose a combination forecasting model of China's manufacturing energy consumption, using the standard variance to allocate the weights. Finally, this model is applied to forecast China's manufacturing energy consumption during 2013-2016. Findings – The results show that the combination model achieves higher accuracy and can be taken as an effective tool for predicting manufacturing energy consumption in China; the energy consumption of China's manufacturing industry continues to show a steady upward trend. Originality/value – This method takes full advantage of the effective information reflected by the single models and improves prediction accuracy.
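The variance-based weighting scheme described above can be sketched directly: each single model receives a weight inversely proportional to the variance of its historical forecast errors, so the more stable model dominates the combination. (Whether the paper weights by variance or by standard deviation, the principle is the same.) The error series below are synthetic, for illustration:

```python
# Inverse-variance combination of two single-model forecasts, as a sketch of
# the paper's variance-weighted combination model. All numbers are synthetic.
def error_variance(errors):
    m = sum(errors) / len(errors)
    return sum((e - m) ** 2 for e in errors) / len(errors)

def combination_weights(err_a, err_b):
    """Weights inversely proportional to each model's error variance (sum to 1)."""
    ia, ib = 1 / error_variance(err_a), 1 / error_variance(err_b)
    return ia / (ia + ib), ib / (ia + ib)

# Historical forecast errors of the two single models (regression vs GM(1,1))
reg_errors = [1.2, -0.8, 1.5, -1.1, 0.9]
gm_errors = [0.3, -0.2, 0.4, -0.3, 0.2]
w_reg, w_gm = combination_weights(reg_errors, gm_errors)
# Combined next-period forecast from the two single-model point forecasts
combined = w_reg * 102.0 + w_gm * 104.5
```

Because the GM(1,1) errors vary less in this example, the combination leans toward its forecast while still retaining some information from the regression model.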


2016 ◽  
Vol 6 (1) ◽  
pp. 80-95 ◽  
Author(s):  
Wei Meng ◽  
Qian Li ◽  
Bo Zeng

Purpose – The purpose of this paper is to derive the analytical expression of the fractional order reducing generation operator (the inverse accumulating generation operator) and study its properties. Design/methodology/approach – The derivation includes three main steps. First, by utilizing the Gamma function, which extends the integer factorial, the paper expands the first order reducing generation operator into integer order and fractional order reducing generation operators, and gives the analytical expression of the fractional order reducing generation operator. Second, it studies the commutative law and the exponential law of the fractional order reducing generation operator. Finally, it gives several examples of the fractional order reducing generation operator and verifies both laws. Findings – The authors derive the analytical expression of the fractional order reducing generation operator and verify that it satisfies the commutative law and the exponential law. Practical implications – Expanding the reducing generation operator helps develop grey prediction models with fractional order operators and widens the application fields of grey prediction models. Originality/value – The analytical expression of the fractional order reducing generation operator and its commutative and exponential laws are studied here for the first time.
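The operator and its two laws can be made concrete. A sketch, assuming the common grey-systems definition in which the r-order accumulating operator has generalized binomial coefficients C(r+j-1, j) = Γ(r+j)/(Γ(j+1)Γ(r)) and the reducing operator is the same map with order -r; the coefficients are built by a running product, which is equivalent to the Gamma-function form but avoids its poles at nonpositive integers:

```python
# Fractional order accumulating/reducing generation operator, with its
# exponential law D^r D^s = D^(r+s) and commutative law D^r D^s = D^s D^r.
def frac_order_op(x, r):
    """Apply the r-order accumulating generation operator to sequence x.
    r = 1 is the usual 1-AGO (cumulative sum); r = -1 is first differencing
    (the classical reducing operator); fractional r interpolates between them."""
    n = len(x)
    coef = [1.0]
    for i in range(1, n):
        # C(r+j-1, j) built incrementally: multiply by (r+i-1)/i at each step
        coef.append(coef[-1] * (r + i - 1) / i)
    return [sum(coef[j] * x[k - j] for j in range(k + 1)) for k in range(n)]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
# Exponential law: applying order 0.5 twice equals applying order 1.0 once
twice_half = frac_order_op(frac_order_op(x, 0.5), 0.5)
one_ago = frac_order_op(x, 1.0)
# Commutative law: the order of composition does not matter
ab = frac_order_op(frac_order_op(x, 0.3), -0.8)
ba = frac_order_op(frac_order_op(x, -0.8), 0.3)
```

Both laws follow from the fact that the coefficients are those of the formal power series (1-z)^(-r), and composing operators multiplies the series.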


2018 ◽  
Vol 17 (4) ◽  
pp. 482-497
Author(s):  
Rangga Handika ◽  
Dony Abdul Chalid

Purpose This paper aims to investigate whether the best statistical model also corresponds to the best empirical performance in the volatility modeling of financialized commodity markets. Design/methodology/approach The authors use various p and q values in Value-at-Risk (VaR) GARCH(p, q) estimation and perform backtesting at different confidence levels, different out-of-sample periods and different data frequencies for eight financialized commodities. Findings The authors find that the best-fitted GARCH(p,q) model tends to generate the best empirical performance for most financialized commodities. These findings are consistent across confidence levels and out-of-sample periods; the strong results hold for both daily and weekly return series, whereas only weak results are obtained for the monthly series. Research limitations/implications The research method is limited to the GARCH(p,q) model and the eight financialized commodities discussed. Practical implications Practitioners can continue to rely on the log-likelihood statistical criterion when choosing a GARCH(p,q) model in financialized commodity markets for daily and weekly forecasting horizons. Social implications The log-likelihood statistical criterion has strong predictive power for high-frequency (daily and weekly) GARCH series, which justifies the importance of statistical criteria in financial market modeling. Originality/value First, this paper investigates whether the best statistical model corresponds to the best empirical performance. Second, it provides an indirect test of the accuracy of volatility modeling through the VaR approach.
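The backtesting step of such an experiment can be sketched independently of the GARCH estimation itself (the paper does not name its software; in Python, a library such as `arch` would typically handle the GARCH(p,q) fits). Shown here is only the exceedance count and the standard Kupiec proportion-of-failures test commonly used to judge a VaR series at a given confidence level:

```python
# VaR backtesting sketch: exceedance counting plus the Kupiec POF test.
# The GARCH(p,q) estimation that produces the VaR series is assumed done.
import math

def var_exceedances(returns, var_series):
    """Count periods in which the return fell below the (negative) VaR bound."""
    return sum(1 for r, v in zip(returns, var_series) if r < v)

def kupiec_pof(n_exceed, n_obs, p):
    """Kupiec proportion-of-failures LR statistic, ~chi-square(1) under the
    null that the true exceedance probability equals the VaR tail level p."""
    x, n = n_exceed, n_obs
    log_l0 = (n - x) * math.log(1 - p) + x * math.log(p)
    if 0 < x < n:
        phat = x / n
        log_l1 = (n - x) * math.log(1 - phat) + x * math.log(phat)
    else:
        log_l1 = 0.0    # limit of the unrestricted likelihood at x = 0 or x = n
    return -2 * (log_l0 - log_l1)

# 12 exceedances over 1,000 days at the 99% level: close to the expected 10,
# so the statistic should fall well below the chi-square(1) 5% cutoff of 3.84
lr = kupiec_pof(12, 1000, 0.01)
```

Comparing this statistic across candidate (p, q) orders is one way to ask, as the paper does, whether the best log-likelihood fit also backtests best.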


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shi Quan Jiang ◽  
SiFeng Liu ◽  
ZhongXia Liu

Purpose The purpose of this paper is to study the grey decision model and a distance measuring method for general grey numbers. Design/methodology/approach First, the intuitionistic grey number (IGN) and the IGN set are defined through the grey number probability function. Second, each interval grey number in a general grey number is represented by an IGN, which converts the general grey number into an IGN set. Finally, the operation of two general grey numbers is defined as the operation between IGN sets, and the distance measure for general grey numbers is given. Findings A method of measuring distance and a grey decision model for general grey numbers are established; thus, the difficult problem of setting up a decision model for general grey numbers is solved to a certain degree. Research limitations/implications The method presented in this paper can be used to integrate information from different sources. The conversion of a general grey number to a set of IGNs could be extended to grey incidence analysis models, grey prediction models, grey clustering evaluation models and other models that involve general grey numbers. Originality/value The concepts of the IGN and the IGN set are proposed for the first time in this paper, and the operation of two general grey numbers is defined as the operation between IGN sets. On this basis, the algorithm for IGNs, the integration operator of IGNs and the distance measure between IGN sets are given.


2017 ◽  
Vol 43 (7) ◽  
pp. 774-793
Author(s):  
Walid Ben Omrane ◽  
Chao He ◽  
Zhongzhi Lawrence He ◽  
Samir Trabelsi

Purpose Forecasting the future movement of yield curves contains valuable information for both academic and practical issues such as bond pricing, portfolio management and government policy. The purpose of this paper is to develop a dynamic factor approach that can provide more precise and consistent forecasting results under various yield curve dynamics. Design/methodology/approach The paper develops a unified dynamic factor model based on the Diebold and Li (2006) and Nelson and Siegel (1987) three-factor models to forecast the future movement of yield curves. The authors apply the state-space model and the Kalman filter to estimate parameters and extract factors from US yield curve data. Findings The authors compare both the in-sample and out-of-sample performance of the dynamic approach with various existing models in the literature, and find that the dynamic factor model produces the best in-sample fit and dominates existing models in medium- and long-horizon yield curve forecasting. Research limitations/implications The dynamic factor model and the Kalman filter technique should be used with caution when forecasting short-maturity yields over a short time horizon, where the Kalman filter is prone to trading off out-of-sample robustness to maintain in-sample efficiency. Practical implications Bond analysts and portfolio managers can use the dynamic approach to forecast yield curve movements more accurately. Social implications The enhanced forecasting approach also equips the government with a valuable tool for setting macroeconomic policies. Originality/value The dynamic factor approach is original in capturing the level, slope and curvature of yield curves in that the decay rate is set as a free parameter to be estimated from the yield curve data, instead of being fixed as in the existing literature. The wider range of the estimated decay rate provides richer yield curve dynamics and is the key to the stronger forecasting performance.
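The Nelson-Siegel structure with the decay rate left free can be sketched directly. The three factor loadings (level, slope, curvature) at maturity τ depend on the decay parameter λ, which the dynamic factor model estimates from the data rather than fixing; the factor values below are hypothetical:

```python
# Nelson-Siegel factor loadings and model yield, with the decay rate lam
# treated as a free parameter (the paper's key departure from fixing it).
import math

def ns_loadings(tau, lam):
    """Level, slope and curvature loadings at maturity tau with decay lam."""
    level = 1.0
    slope = (1 - math.exp(-lam * tau)) / (lam * tau)
    curvature = slope - math.exp(-lam * tau)
    return level, slope, curvature

def ns_yield(tau, beta, lam):
    """Model yield; beta = (level, slope, curvature) factor values."""
    l, s, c = ns_loadings(tau, lam)
    return beta[0] * l + beta[1] * s + beta[2] * c

# Hypothetical factors: 5% long-run level, -2% slope, 1% curvature
beta = (0.05, -0.02, 0.01)
short = ns_yield(0.25, beta, lam=0.6)   # 3-month yield
long = ns_yield(30.0, beta, lam=0.6)    # 30-year yield
```

As maturity grows the slope and curvature loadings decay toward zero and the yield approaches the level factor, which is why estimating λ freely reshapes the whole curve's dynamics rather than a single point on it.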

