A neural network extension of the Lee–Carter model to multiple populations

2019 ◽  
pp. 1-21 ◽  
Author(s):  
Ronald Richman ◽  
Mario V. Wüthrich

Abstract: The Lee–Carter (LC) model is a basic approach to forecasting mortality rates of a single population. Although extensions of the LC model to forecasting rates for multiple populations have recently been proposed, the structure of these extended models is hard to justify and the models are often difficult to calibrate, relying on customised optimisation schemes. Based on the paradigm of representation learning, we extend the LC model to multiple populations using neural networks, which automatically select an optimal model structure. We fit this model to mortality rates since 1950 for all countries in the Human Mortality Database and observe that the out-of-sample forecasting performance of the model is highly competitive.
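The classical single-population LC fit that this extension builds on can be sketched in a few lines: centre the log-mortality matrix by age, then take the leading singular vectors. A minimal sketch on synthetic rates (the data, dimensions, and normalisation choices below are illustrative, not the authors'):

```python
import numpy as np

# Minimal Lee-Carter fit via SVD on centred log-mortality rates.
# m[x, t] is a hypothetical age-by-year matrix of synthetic death rates.
rng = np.random.default_rng(0)
ages, years = 10, 40
m = np.exp(-8 + 0.08 * np.arange(ages)[:, None]
           - 0.01 * np.arange(years)[None, :]
           + 0.02 * rng.standard_normal((ages, years)))

log_m = np.log(m)
a_x = log_m.mean(axis=1)                      # age profile
Z = log_m - a_x[:, None]                      # centred log-mortality
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                 # normalised so sum(b_x) = 1
k_t = s[0] * Vt[0] * U[:, 0].sum()            # period index, sums to ~0

# Rank-1 reconstruction: log m_{x,t} ~ a_x + b_x * k_t
fit = a_x[:, None] + np.outer(b_x, k_t)
print("mean abs residual:", float(np.abs(log_m - fit).mean()))
```

The neural-network extension in the paper replaces this fixed bilinear structure with a learned representation shared across populations.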

Author(s):  
Gerit Vogt

Summary: In recent years, several papers have been published that deal with the forecasting performance of indicators for the German economy. The real-time aspect, however, was largely neglected. This article analyses the information content of several ifo indicators (the business climate index for the manufacturing sector and its components, the current business situation and business expectations) for predicting the German index of production. The analysis is based on cross-correlations, Granger causality tests and different out-of-sample forecasts generated by subset VAR models. First, the out-of-sample forecasts are made, as in conventional studies, with the latest available data and a fixed model structure. Afterwards, the out-of-sample indicator properties are analysed in real time, i.e. with real-time data and a variable model structure. In general, the indicator properties deteriorate under real-time conditions. The indicator-based VAR models are not able to beat the forecast performance of a pure autoregressive model for forecast horizons of one and three months. But for forecast horizons of six, nine and twelve months, the indicators seem to be useful in predicting the index of production.
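The horse race described here, a pure autoregression versus an indicator-augmented model evaluated out of sample with an expanding window, can be illustrated on synthetic data. This sketch uses a single hypothetical leading indicator and plain OLS in place of the paper's subset-VAR machinery:

```python
import numpy as np

# One-step out-of-sample race: AR(1) benchmark vs. indicator-augmented
# regression. All data are synthetic; 'indicator' is a hypothetical
# leading series that Granger-causes the target by construction.
rng = np.random.default_rng(1)
T = 200
indicator = np.zeros(T)
target = np.zeros(T)
for t in range(1, T):
    indicator[t] = 0.7 * indicator[t - 1] + rng.standard_normal()
    target[t] = 0.3 * target[t - 1] + 0.4 * indicator[t - 1] + rng.standard_normal()

def ols_forecast(X, y, x_new):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return x_new @ beta

errs_ar, errs_ind = [], []
for t in range(100, T - 1):                   # expanding estimation window
    ones = np.ones(t - 1)
    X_ar = np.column_stack([ones, target[:t - 1]])
    X_in = np.column_stack([ones, target[:t - 1], indicator[:t - 1]])
    y = target[1:t]
    f_ar = ols_forecast(X_ar, y, np.array([1.0, target[t - 1]]))
    f_in = ols_forecast(X_in, y, np.array([1.0, target[t - 1], indicator[t - 1]]))
    errs_ar.append((target[t] - f_ar) ** 2)
    errs_ind.append((target[t] - f_in) ** 2)

print("RMSE AR:", np.sqrt(np.mean(errs_ar)))
print("RMSE AR+indicator:", np.sqrt(np.mean(errs_ind)))
```

A genuine real-time exercise would additionally re-select the model and use only the data vintage available at each forecast origin, which is exactly the refinement the article studies.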


Author(s):  
Ana Debón ◽  
Steven Haberman ◽  
Francisco Montes ◽  
Edoardo Otranto

The parametric model introduced by Lee and Carter in 1992 for modeling mortality rates in the USA was a seminal development in forecasting life expectancies and has been widely used since then. Different extensions of this model, using different hypotheses about the data, constraints on the parameters, and appropriate estimation methods, have improved the model’s fit to historical data and its forecasts of the future. This paper’s main objective is to evaluate whether differences between models are reflected in the forecasts of different mortality indicators. To this end, nine sets of indicator predictions were generated by crossing three models and three block-bootstrap samples, each of size fifty. The predicted mortality indicators were then compared using functional ANOVA. The models and block-bootstrap procedures are applied to Spanish mortality data. Results show model, block-bootstrap, and interaction effects for all mortality indicators. Although it was not our main objective, it is essential to point out that the sample effect should not be present, since the samples are realizations of the same population and the procedure should therefore yield samples that do not influence the results. Regarding the significant model effect, it follows that, although the addition of terms improves the fit of the probabilities and translates into an effect on the mortality indicators, the model’s predictions must be checked both in terms of the probabilities and in terms of the mortality indicators of interest.
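The block-bootstrap ingredient used here can be sketched as follows: resampling contiguous blocks of a series preserves the short-range dependence that i.i.d. resampling would destroy. The block length, series, and sample count below are illustrative, not the paper's settings:

```python
import numpy as np

# Moving-block bootstrap: draw random contiguous blocks and concatenate
# them until the resample has the original length.
rng = np.random.default_rng(42)
series = np.cumsum(rng.standard_normal(100))  # stand-in for a mortality index

def block_bootstrap(x, block_len, rng):
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [x[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]         # trim to original length

# Fifty resamples, matching the sample size mentioned in the abstract.
samples = [block_bootstrap(series, block_len=10, rng=rng) for _ in range(50)]
print(len(samples), samples[0].shape)
```

Each resample would then be refitted with each mortality model, and the resulting indicator curves fed into the functional ANOVA.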


2012 ◽  
Vol 50 (1) ◽  
pp. 85-93 ◽  
Author(s):  
Rosella Giacometti ◽  
Marida Bertocchi ◽  
Svetlozar T. Rachev ◽  
Frank J. Fabozzi

PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245904
Author(s):  
Viviane Naimy ◽  
Omar Haddad ◽  
Gema Fernández-Avilés ◽  
Rim El Khoury

This paper provides a thorough overview and further clarification of the volatility behavior of the six major cryptocurrencies (Bitcoin, Ripple, Litecoin, Monero, Dash and Dogecoin) with respect to world currencies (Euro, British Pound, Canadian Dollar, Australian Dollar, Swiss Franc and Japanese Yen), the relative performance of diverse GARCH-type specifications, namely the SGARCH, IGARCH (1,1), EGARCH (1,1), GJR-GARCH (1,1), APARCH (1,1), TGARCH (1,1) and CGARCH (1,1), and the forecasting performance of the Value at Risk measure. The sample period extends from October 13th 2015 to November 18th 2019. The findings evidence the superiority of the IGARCH model, in both the in-sample and out-of-sample contexts, for forecasting the volatility of world currencies, namely the British Pound, Canadian Dollar, Australian Dollar, Swiss Franc and Japanese Yen. The CGARCH alternative modeled the Euro almost perfectly during both periods. Advanced GARCH models better depicted asymmetries in cryptocurrencies’ volatility and revealed persistence and “intensifying” levels in their volatility. The IGARCH was the best-performing model for Monero. As for the remaining cryptocurrencies, the GJR-GARCH model proved superior during the in-sample period, while the CGARCH and TGARCH specifications were optimal in the out-of-sample interval. The VaR forecasting performance is enhanced by the use of asymmetric GARCH models. The VaR results provided a very accurate measure of the level of downside risk to which the selected exchange currencies are exposed at all confidence levels. The outcomes, however, were far from uniform for the selected cryptocurrencies: convincing for Dash and Dogecoin, acceptable for Litecoin and Monero, and unconvincing for Bitcoin and Ripple, for which the (optimal) model was not rejected only at the 99% confidence level.
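The common core of all the GARCH variants compared here is the conditional-variance recursion; a one-day Gaussian VaR then follows directly from the forecast variance. A minimal GARCH(1,1) sketch with illustrative (not fitted) parameters on synthetic returns:

```python
import numpy as np

# GARCH(1,1) variance recursion and a one-day 99% VaR under conditional
# normality. Parameters (omega, alpha, beta) are illustrative, not estimates.
rng = np.random.default_rng(7)
returns = 0.01 * rng.standard_normal(500)     # stand-in for daily returns

omega, alpha, beta = 1e-6, 0.08, 0.90         # alpha + beta < 1 (IGARCH sets = 1)
sigma2 = np.empty(len(returns))
sigma2[0] = returns.var()
for t in range(1, len(returns)):
    sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]

# One-day-ahead volatility and 99% VaR: VaR = Phi^{-1}(0.01) * sigma_{t+1}
sigma_next = np.sqrt(omega + alpha * returns[-1] ** 2 + beta * sigma2[-1])
z_99 = -2.3263478740408408                    # standard normal 1% quantile
var_99 = z_99 * sigma_next
print("next-day sigma:", sigma_next, "99% VaR:", var_99)
```

Asymmetric variants such as GJR-GARCH add a term that lets negative shocks raise the next-period variance more than positive shocks of the same size, which is what improves the VaR forecasts in the paper.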


2008 ◽  
Vol 13 (1) ◽  
pp. 57-85 ◽  
Author(s):  
Falak Sher ◽  
Eatzaz Ahmad

This study analyzes the future prospects of wheat production in Pakistan. Parameters of the forecasting model are obtained by estimating a Cobb-Douglas production function for wheat, while future values of the various inputs are obtained as dynamic forecasts based on separate ARIMA estimates for each input and each province. The input forecasts and the parameters of the wheat production function are then used to generate wheat forecasts. The results show that the most important variables for predicting wheat production per hectare (in order of importance) are: lagged output, labor force, use of tractors, and cumulative rainfall from November to March. The null hypotheses of common coefficients across provinces cannot be rejected for most of the variables, implying that these variables play the same role in wheat production in all four provinces. The forecasting performance of the model, based on out-of-sample forecasts for the period 2005-06, is highly satisfactory, with a mean absolute error of 1.81%. The forecasts for the period 2007-15 show steady growth of 1.6%, indicating that Pakistan will face a slight shortage of wheat output in the future.
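The first stage of this two-step procedure, estimating the Cobb-Douglas elasticities, reduces to OLS after taking logs, since ln Y = ln A + α ln L + β ln K + ε. A sketch on synthetic inputs (the variables and true elasticities are invented for illustration, not the study's estimates):

```python
import numpy as np

# Cobb-Douglas production function Y = A * L^alpha * K^beta is linear in
# logs, so the elasticities can be recovered by ordinary least squares.
rng = np.random.default_rng(3)
n = 60
L = np.exp(rng.normal(3.0, 0.3, n))           # labour input (synthetic)
K = np.exp(rng.normal(2.0, 0.3, n))           # capital input, e.g. tractors
Y = 1.5 * L ** 0.6 * K ** 0.3 * np.exp(0.05 * rng.standard_normal(n))

X = np.column_stack([np.ones(n), np.log(L), np.log(K)])
beta_hat, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
print("ln A, alpha, beta:", beta_hat)  # should be near (ln 1.5, 0.6, 0.3)
```

In the study's second stage, ARIMA forecasts of each input are plugged into the estimated function to produce the wheat forecasts.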


Author(s):  
Błażej Mazur ◽  
Mateusz Pipień

Abstract: We demonstrate that the analysis of long series of daily returns should take into account potential long-term variation not only in volatility, but also in the parameters that describe asymmetry or tail behaviour. However, it is necessary to use a conditional distribution that is flexible enough to allow separate modelling of tail asymmetry and skewness, which requires going beyond the skew-t form. Empirical analysis of 60 years of S&P500 daily returns suggests evidence for tail asymmetry (but not for skewness). Moreover, tail thickness and tail asymmetry are not time-invariant. Tail asymmetry became much stronger at the beginning of the Great Moderation period and weakened after 2005, indicating important differences between the 1987 and 2008 crashes. This is confirmed by our analysis of out-of-sample density forecasting performance (using LPS and CRPS measures) within two recursive expanding-window experiments covering these events. We also demonstrate the consequences of accounting for long-term changes in shape features for risk assessment.
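The two scoring rules mentioned, the log predictive score (LPS) and the continuous ranked probability score (CRPS), both have closed forms for a Gaussian predictive density. A minimal sketch (the Gaussian predictive is an illustrative stand-in for the paper's flexible conditional distributions):

```python
import math

# Density-forecast scores for a Gaussian predictive N(mu, sigma^2).
# Both are written as losses here: lower is better.
def log_score(y, mu, sigma):
    # negative log predictive density at the realised outcome y
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2 * sigma ** 2)

def crps_gaussian(y, mu, sigma):
    # closed-form CRPS for a normal forecast (Gneiting-Raftery formula)
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# A forecast centred on the outcome scores better than a biased one.
print(crps_gaussian(0.0, 0.0, 1.0), crps_gaussian(0.0, 2.0, 1.0))
```

In an expanding-window experiment such as the paper's, these scores are averaged over all out-of-sample days and compared across model specifications.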


2018 ◽  
Vol 11 (4) ◽  
pp. 84 ◽  
Author(s):  
Naseem Al Rahahleh ◽  
Robert Kao

The purpose of this paper is to evaluate the forecasting performance of linear and non-linear generalized autoregressive conditional heteroskedasticity (GARCH)-class models in terms of their in-sample and out-of-sample forecasting accuracy for the Tadawul All Share Index (TASI) and the Tadawul Industrial Petrochemical Industries Share Index (TIPISI). We use daily price data for the TASI and the TIPISI for the period of 10 September 2007 to 26 February 2015. The results suggest that the Asymmetric Power ARCH (APARCH) model is the most accurate model in the GARCH class for forecasting the volatility of both the TASI and the TIPISI, as this model outperforms the other models in model estimation and in daily out-of-sample volatility forecasting of the two indices. This study is useful for the dataset examined because the results provide a basis for traders, policy-makers, and international investors to make decisions using this model to forecast the risks associated with investing in the Saudi stock market, within certain limitations.


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-17 ◽  
Author(s):  
Semin Chun ◽  
Tae-Hyoung Kim

In this study, a novel, easy-to-use meta-heuristic method for the simultaneous identification of model structure and the associated parameters of linear systems is developed. This is achieved via a constrained multidimensional particle swarm optimization (PSO) mechanism developed by hybridizing two main methodologies: one for removing the limitation of fixed particle dimensions within the PSO process, and another for enhancing the exploration ability of the particles by adopting a cyclic neighborhood topology for the swarm. This optimizer consecutively searches for the dimensional optimum of the particles and then for the positional optimum in the search space whose dimension is specified by the explored optimal dimension. The dimensional optimum provides the optimal model structure, while the positional optimum provides the optimal model parameters. Typical numerical examples are considered for evaluation purposes, which clearly demonstrate that the proposed PSO scheme is a reliable and powerful tool for the simultaneous identification of model structure and unknown model parameters. Furthermore, identification experiments are conducted on a magnetic levitation system and a robotic manipulator with joint flexibility to demonstrate the effectiveness of the proposed strategy in practical applications.
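The position/velocity update at the heart of any PSO variant can be shown with a bare-bones global-best swarm minimising a quadratic; the paper's method adds variable particle dimensions and a cyclic neighbourhood topology on top of this skeleton. A minimal sketch (objective, swarm size, and coefficients are illustrative):

```python
import numpy as np

# Bare-bones global-best PSO minimising the sphere function sum(x^2).
rng = np.random.default_rng(5)

def sphere(x):
    return float(np.sum(x ** 2))

n_particles, dim, iters = 20, 3, 200
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()                            # personal bests
pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()      # global best

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    # velocity update: inertia + pull toward personal and global bests
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best value found:", sphere(gbest))
```

In the paper's setting, each particle would additionally carry a candidate model order, so the swarm explores structure and parameters at the same time.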

