Social Security Benefit Valuation, Risk, and Optimal Retirement

Risks ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 124
Author(s):  
Yassmin Ali ◽  
Ming Fang ◽  
Pablo A. Arrutia Sota ◽  
Stephen Taylor ◽  
Xun Wang

We develop valuation and risk techniques for the future benefits of a retiree who participates in the American Social Security program, based on their chosen date of retirement, the term structure of interest rates, and forecasted life expectancy. These valuation methods are then used to determine the optimal retirement time of a beneficiary with a specific wage history and health profile, in the sense of maximizing the present value of cash flows received during retirement years. We then examine how a number of risk factors, including interest rates, disease diagnosis, and mortality, impact benefit value. Specifically, we utilize principal component analysis to assess both interest rate and mortality risk. We then conduct numerical studies to examine how such risks range over distinct income and demographic groups, and finally summarize future research directions.
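The valuation step can be illustrated with a minimal sketch: discount survival-weighted benefit cash flows off a zero curve. The benefit level, geometric mortality decay, and flat 3% curve below are placeholder assumptions for illustration, not the paper's calibrated inputs.

```python
import numpy as np

def benefit_present_value(annual_benefit, survival_probs, zero_rates, years):
    """PV at retirement of annual benefits, weighted by the probability
    of surviving to each payment date and discounted off a zero curve."""
    discount = np.exp(-zero_rates * years)        # continuous compounding
    return float(np.sum(annual_benefit * survival_probs * discount))

# Placeholder inputs: retirement at 65, payments to age 100, flat 3% curve,
# crude geometric survival decay in place of a real life table.
years = np.arange(1, 36)
pv = benefit_present_value(
    annual_benefit=24_000.0,
    survival_probs=0.985 ** years,
    zero_rates=np.full(years.shape, 0.03),
    years=years,
)
print(f"PV of benefits at retirement: ${pv:,.0f}")
```

Comparing this present value across candidate retirement dates, with the benefit level adjusted for early or delayed claiming, gives the optimization the abstract describes.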

2009 ◽  
Vol 12 (06) ◽  
pp. 811-832 ◽  
Author(s):  
PILAR ABAD ◽  
SONIA BENITO

This work compares the accuracy of different measures of Value at Risk (VaR) of fixed income portfolios calculated on the basis of different multi-factor empirical models of the term structure of interest rates (TSIR). Three models are included in the comparison: (1) regression models, (2) principal component models, and (3) parametric models. In addition, the cartography system used by RiskMetrics is included. Since calculating a VaR estimate with any of these models requires a volatility measurement, this work uses three types of measurements: exponential moving averages, equally weighted moving averages, and GARCH models. Consequently, the comparison of the accuracy of VaR estimates has two dimensions: the multi-factor model and the volatility measurement. With respect to multi-factor models, the presented evidence indicates that the RiskMetrics model, or cartography system, is the most accurate when VaR estimates are calculated at the 5% level. By contrast, at the 1% level, the parametric model (the Nelson and Siegel model) yields the most accurate VaR estimates. With respect to the volatility measurements, the results indicate that, as a general rule, no measurement works systematically better than the rest. All the results obtained are independent of the time horizon for which VaR is calculated, i.e., either one or ten days.
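The common core of these approaches, a volatility estimate mapped to a return quantile, can be sketched as follows. The exponentially weighted estimator below uses the standard RiskMetrics decay of 0.94 on simulated returns; it stands in for the volatility step only, not the paper's multi-factor TSIR models.

```python
import numpy as np
from scipy.stats import norm

def ewma_volatility(returns, lam=0.94):
    """Exponentially weighted moving-average volatility (RiskMetrics decay)."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return np.sqrt(var)

def parametric_var(returns, alpha=0.01, horizon_days=1):
    """Parametric VaR at tail probability alpha, as a positive loss fraction."""
    sigma = ewma_volatility(np.asarray(returns))
    return -norm.ppf(alpha) * sigma * np.sqrt(horizon_days)

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.004, size=500)   # simulated portfolio returns
print(f"1-day 1% VaR:  {parametric_var(daily_returns, alpha=0.01):.4%}")
print(f"10-day 5% VaR: {parametric_var(daily_returns, alpha=0.05, horizon_days=10):.4%}")
```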


Kybernetes ◽  
2017 ◽  
Vol 46 (4) ◽  
pp. 621-637
Author(s):  
Tanja Salamon ◽  
Borut Milfelner ◽  
Jernej Belak

Purpose: Poor payment discipline is a persistent problem for European companies and has only worsened with the recent global economic crisis. Even though new legislation has been adopted several times at the European level, the situation has not improved. This research aims to determine the relationship between a company's ethical culture and its payment discipline.

Design/methodology/approach: The factor structure of Kaptein's (2008) instrument for measuring ethical culture was analyzed using principal component analysis with varimax rotation, as sketched in the code below. This factor analysis yielded six factors with eigenvalues over 1.00. The reliabilities of the individual constructs were as follows: clarity (α = 0.891), feasibility (α = 0.918), discussability (α = 0.955), supportability (α = 0.956), sanctionability (α = 0.879) and transparency (α = 0.801). These six factors explained 78 per cent of the total variance. All six factors were named according to Kaptein's (2008) proposal, whose factor analysis yielded, in addition to these six, two further factors: "Congruence of supervisors" and "Congruence of management". Both represent the ethical culture dimension that Kaptein (1998) called "Congruence", which refers to the extent to which superiors' and managers' acts are in line with their declared ethics.

Findings: The results showed that two dimensions of ethical culture, sanctionability and feasibility, improve payment discipline.

Research limitations/implications: The results of this study provide an important link between ethical culture and late payments. However, the research has some limitations. The first is the response rate of only 9.1 per cent. The next is geographical location; the results in other European countries could differ. The third arises from the data collection, because ethical culture was evaluated by one person from each enterprise, and the average payment delay was calculated based only on a sample of invoices. Future research should therefore attempt to confirm the relationship between ethical culture and payment discipline in other European countries. It would be interesting to compare findings among different European countries to determine whether there are major differences among companies in the field of payment discipline.

Originality/value: Good payment discipline can be defined as settling obligations to the customer on time. Late payments have been one of the biggest problems in many European economies. Trade credit becomes even more important during economic crises (Guariglia and Mateut, 2006), when investments are in decline, trading volume is reduced, bank credit is harder to obtain and interest rates rise (Vojinović et al., 2013; Lin and Martin, 2010). Because customers do not fulfill their obligations on time, even enterprises with healthy sales growth encounter cash flow problems (Tsai, 2011). The empirical research was carried out in Slovenia because it has some of the worst payment discipline among European countries. Such research is unique in Slovenia and beyond.
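A schematic of the measurement steps reported above (PCA with the Kaiser eigenvalue-greater-than-one rule, varimax rotation, and Cronbach's alpha), run on random stand-in data rather than the survey responses:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=20, tol=1e-6):
    """Varimax rotation of a factor-loading matrix (standard algorithm)."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0))))
        R = u @ vt
        d_new = s.sum()
        if d > 0 and d_new / d < 1 + tol:
            break
        d = d_new
    return loadings @ R

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
responses = rng.normal(size=(200, 12))            # stand-in survey data
corr = np.corrcoef(responses, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
keep = eigvals > 1.0                              # Kaiser criterion
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
rotated = varimax(loadings)
print("factors retained:", keep.sum())
print("alpha of first three items:", round(cronbach_alpha(responses[:, :3]), 3))
```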


2016 ◽  
Vol 14 (1) ◽  
pp. 414-432
Author(s):  
Adalto Barbaceia Gonçalves ◽  
Felipe Tumenas Marques

Forecasting the term structure of interest rates plays a fundamental role in the fixed income and bond markets. The development of dynamic modeling, especially after Nelson and Siegel's (1987) work, with parsimonious models based on a few parameters, shed light on a new path for market players. Despite the extensive literature on term structure modeling and the existence in the Brazilian market of various yield curves from different traded asset classes, the literature has focused only on the fixed-rate curve. In this work we expand the existing literature on modeling the term structure of Brazilian interest rates by evaluating all the yield curves of the Brazilian market using the methodology proposed by Nelson and Siegel. We use Nonlinear Least Squares (NLLS) to estimate the model parameters for almost 10 years of monthly data and model these parameters with the traditional VAR/VEC model. The results show that it is possible to estimate the Nelson-Siegel model for the Brazilian curves. Modeling their variances, as well as developing a global Brazilian model with a Kalman filter following the Diebold, Li, and Yue (2006) approach, remains for future research.
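A minimal sketch of the NLLS estimation step: fit the four Nelson-Siegel parameters to one cross-section of yields. The maturities and yields below are illustrative round numbers, not Brazilian market quotes.

```python
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel (1987) yield at maturity tau (in years)."""
    x = tau / lam
    slope = (1 - np.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

# Illustrative cross-section of zero-coupon yields (decimal, per year).
maturities = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
yields = np.array([0.132, 0.128, 0.124, 0.120, 0.118, 0.117, 0.1165, 0.116])

params, _ = curve_fit(nelson_siegel, maturities, yields,
                      p0=[0.11, 0.02, -0.01, 1.5], maxfev=10_000)
beta0, beta1, beta2, lam = params
print(f"level={beta0:.4f}  slope={beta1:.4f}  curvature={beta2:.4f}  lambda={lam:.3f}")
```

Repeating the fit month by month yields the parameter time series that the paper then models with a VAR/VEC.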


2015 ◽  
Vol 13 (4) ◽  
pp. 650
Author(s):  
Felipe Stona ◽  
Jean Amann ◽  
Maurício Delago Morais ◽  
Divanildo Triches ◽  
Igor Clemente Morais

This article investigates the relationship between the term structure of interest rates and macroeconomic factors in selected Latin American countries, namely Brazil, Chile and Mexico, between 2006 and 2014, using a vector autoregressive model. Specifically, we perform Nelson-Siegel, Diebold-Li and principal component estimations to test how changes in macroeconomic factors, e.g. inflation, production and unemployment levels, affect the yield curves. For Brazil and Mexico, the GDP and inflation variables are relevant in changing the yield curves, with the former shifting the level more and the latter having greater influence on the slope. For Chile, inflation had the greatest impact on the level and, specifically for Mexico, the unemployment variable also changed the slope of the yield curve.
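The VAR step can be sketched as follows: stack yield-curve factors and macro series in one system and trace impulse responses. The data here are simulated placeholders; the paper's series (GDP, inflation, unemployment, fitted curve factors) would take their place.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated monthly data, 2006-2014 (108 observations), as placeholders.
rng = np.random.default_rng(42)
n = 108
data = pd.DataFrame({
    "level": 10 + np.cumsum(rng.normal(0, 0.10, n)),   # yield-curve level factor
    "slope": -1 + np.cumsum(rng.normal(0, 0.05, n)),   # yield-curve slope factor
    "inflation": rng.normal(0.4, 0.2, n),
    "output_growth": rng.normal(0.2, 0.5, n),
})

results = VAR(data).fit(maxlags=6, ic="aic")    # lag order chosen by AIC
irf = results.irf(12)                           # 12-month impulse responses
# Response of the curve level to an inflation shock:
print(irf.irfs[:, data.columns.get_loc("level"),
               data.columns.get_loc("inflation")].round(4))
```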


2018 ◽  
Vol 8 (3) ◽  
pp. 275-296 ◽  
Author(s):  
Pan Feng ◽  
Junhui Qian

Purpose: The purpose of this paper is to analyze and forecast the Chinese term structure of interest rates using functional principal component analysis (FPCA).

Design/methodology/approach: The authors propose an FPCA-K model using FPCA. Forecasting of the yield curve is based on modeling the functional principal component (FPC) scores as standard scalar time series models. The authors evaluate out-of-sample forecast performance using the root mean square and mean absolute errors.

Findings: Monthly yield data from January 2002 to December 2016 are used in this paper. The authors find that in the full sample, the first two FPCs account for 98.68 percent of the total variation in the yield curve. The authors then construct an FPCA-K model using the leading principal components, and find that it compares favorably with the functional signal plus noise model, the dynamic Nelson-Siegel models and the random walk model in out-of-sample forecasting.

Practical implications: The authors propose a functional approach to analyzing and forecasting the yield curve, which effectively utilizes the smoothness assumption and conveniently addresses the missing-data issue.

Originality/value: To the best of the authors' knowledge, they are the first to use FPCA in the modeling and forecasting of yield curves.
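On a common maturity grid, the FPCA step reduces to an eigendecomposition of the sample covariance of the observed curves; the sketch below omits the smoothing and missing-data handling that a full functional treatment adds, and runs on simulated curves.

```python
import numpy as np

def functional_pca(curves, n_components=2):
    """Discretized FPCA of curves observed on a common maturity grid."""
    mean = curves.mean(axis=0)
    centered = curves - mean
    cov = centered.T @ centered / (len(curves) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]          # discretized eigenfunctions
    scores = centered @ components          # FPC scores, one row per month
    explained = eigvals[order] / eigvals.sum()
    return mean, components, scores, explained

# Simulated monthly yield curves on a 12-point maturity grid.
rng = np.random.default_rng(1)
grid = np.linspace(0.25, 10, 12)
curves = (0.03 + 0.01 * rng.normal(size=(180, 1))               # level shocks
          + 0.005 * rng.normal(size=(180, 1)) * (grid / 10))    # slope shocks
mean, comps, scores, explained = functional_pca(curves)
print("share of variance explained by first two FPCs:", explained.round(4))
```

Forecasting then amounts to fitting scalar time series models (e.g., AR models) to each column of `scores` and reconstructing the curve from the forecast scores.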


2018 ◽  
Vol 31 (1) ◽  
pp. 21-52 ◽  
Author(s):  
Surabhi Verma

Big data (BD) is one of the emerging topics in the field of information systems. This article uses citation and co-citation analysis to explore research articles in the field of BD and examine the scientific development of the area. The research data, consisting of 366 articles, was retrieved from the WOS database for the period between 2005 and June 2016. In the citation analysis, this article relies on degree centrality and betweenness centrality to identify 38 important papers in BD. In the co-citation analysis, a principal component factor analysis of the co-citation matrix is employed to identify six major research themes: foundations, BD applications, techniques and technologies, challenges, adoption and impacts, and literature review. This literature review is one of the first studies to examine the knowledge structure of BD research in the information systems discipline using evidence-based analysis methods. Recommendations for future research directions in BD are provided based on the analysis and results of this study.
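The centrality step can be sketched with a toy citation graph; the node names below are hypothetical stand-ins for the 366 WOS records.

```python
import networkx as nx

# Toy directed citation graph: an edge u -> v means paper u cites paper v.
G = nx.DiGraph([
    ("paper_A", "paper_C"), ("paper_B", "paper_C"),
    ("paper_C", "paper_D"), ("paper_B", "paper_D"),
    ("paper_E", "paper_C"), ("paper_D", "paper_F"),
])

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

# Rank papers by each measure, as the citation-analysis step does.
for name, scores in [("degree", degree), ("betweenness", betweenness)]:
    top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3]
    print(name, top)
```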


2013 ◽  
Vol 8 (1) ◽  
pp. 99-130 ◽  
Author(s):  
Şule Şahin ◽  
Andrew J.G. Cairns ◽  
Torsten Kleinow ◽  
A. David Wilkie

This paper develops a term structure model for UK nominal, real and implied inflation spot zero-coupon rates simultaneously. We start by fitting a descriptive yield curve model proposed by Cairns (1998) to fill the missing values at certain given days and maturities in the yield curve data provided by the Bank of England. We compare four different fixed 'exponential rate' parameter sets and choose the set of parameters which fits the data best. With the chosen set of parameters we fit the Cairns model to the daily values of the term structures. By applying principal component analysis to the hybrid data (Bank of England data and fitted spot rates for the missing values) we find three principal components, which can be described as 'level', 'slope' and 'curvature', for each of these series. We explore the relation between these principal components to construct a 'yield-only' model for actuarial applications. The main contribution of this paper is that the models developed here enable practitioners to forecast the three term structures simultaneously, providing forecasts for whole term structures rather than just the short and long ends of the yield curves.
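A rough sketch of the daily fitting step, under the assumption that the Cairns (1998) descriptive model takes a sum-of-exponentials form with the exponential rates held fixed, so that each day's fit reduces to linear least squares. The rate values and the stand-in curve below are illustrative, not one of the paper's four candidate parameter sets.

```python
import numpy as np

# Assumed fixed exponential rates (illustrative values only).
fixed_rates = np.array([0.2, 0.4, 0.8, 1.6])
maturities = np.array([1, 2, 3, 5, 7, 10, 15, 20, 25], dtype=float)
observed = 0.04 - 0.01 * np.exp(-0.3 * maturities)   # stand-in spot rates

# Design matrix: constant plus one exponential column per fixed rate.
X = np.column_stack([np.ones_like(maturities)]
                    + [np.exp(-c * maturities) for c in fixed_rates])
coefs, *_ = np.linalg.lstsq(X, observed, rcond=None)

# The fitted curve can then be evaluated at unquoted maturities,
# filling the gaps before the PCA stage.
fitted = X @ coefs
print("coefficients:", coefs.round(5))
print("max abs fit error:", np.abs(fitted - observed).max())
```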


2009 ◽  
Vol 12 (04) ◽  
pp. 465-489 ◽  
Author(s):  
OLIVER BLASKOWITZ ◽  
HELMUT HERWARTZ

In this study, we forecast the term structure of EURIBOR swap rates by means of rolling vector autoregressive (VAR) models. Beforehand, a principal component analysis (PCA) is adopted to reduce the dimensionality of the term structure. To statistically assess the forecasting performance for particular rates and for the level, slope and curvature of the swap term structure, we rely on the Henriksson–Merton statistic. Economic performance is investigated by means of cash flows implied by alternative trading strategies. Finally, a data-driven, adaptive model selection strategy to "predict the best forecasting model" out of a set of 100 alternative PCA/VAR implementations is shown to outperform forecasting schemes that rely on global homogeneity of the term structure.
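The Henriksson–Merton evaluation is a nonparametric test of directional forecast skill: conditional on the number of realized and predicted up-moves, the count of correctly predicted up-moves is hypergeometric under the null of no timing ability. A sketch on simulated forecasts:

```python
import numpy as np
from scipy.stats import hypergeom

def henriksson_merton_pvalue(predicted_up, realized_up):
    """One-sided Henriksson-Merton test for directional forecast skill."""
    predicted_up = np.asarray(predicted_up, dtype=bool)
    realized_up = np.asarray(realized_up, dtype=bool)
    N = len(realized_up)                         # total observations
    N1 = int(realized_up.sum())                  # realized up-moves
    n = int(predicted_up.sum())                  # predicted up-moves
    x = int((predicted_up & realized_up).sum())  # correct up-calls
    return hypergeom.sf(x - 1, N, N1, n)         # P(X >= x) under the null

rng = np.random.default_rng(7)
realized = rng.random(100) > 0.5
predicted = np.where(rng.random(100) < 0.7, realized, ~realized)  # ~70% accurate
print(f"HM p-value: {henriksson_merton_pvalue(predicted, realized):.4f}")
```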




2003 ◽  
Vol 06 (08) ◽  
pp. 885-903 ◽  
Author(s):  
CAIO IBSEN RODRIGUES DE ALMEIDA ◽  
ANTONIO MARCOS DUARTE ◽  
CRISTIANO AUGUSTO COELHO FERNANDES

Principal Component Analysis (PCA) has traditionally been used to identify the most important factors driving movements of term structures of interest rates. Once one maps the term structure dynamics, it can be used in many applications: portfolio allocation, asset/liability models, and risk management, among others. This approach has a very simple implementation algorithm whenever a time series of the term structure is available. Nevertheless, in markets where no database of discount bond yields is available, this approach cannot be applied. In this article, we exploit properties of an orthogonal decomposition of the term structure to sequentially estimate, along time, term structures of interest rates in emerging markets. The methodology, named the Legendre Dynamic Model (LDM), consists of building the dynamics of the term structure by using Legendre polynomials to drive its movements. We propose applying the LDM to obtain time series for term structures of interest rates and to study their behavior through the behavior of the Legendre coefficient levels and first differences, properly normalized (Legendre factors). Under the hypothesis of stationarity and serial independence of the Legendre factors, we show that there is asymptotic equivalence between the LDM and PCA, concluding that the LDM captures PCA as a particular case. As a numerical example, we apply our technique to the Brazilian Brady and Global Bond markets, briefly study the time series characteristics of their term structures, and identify the intensity of the most important basic movements of these term structures.
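A minimal sketch of the decomposition idea: project a yield curve onto Legendre polynomials on a maturity interval mapped to [-1, 1], where the first few polynomials play the roles of level, slope and curvature movements. The curve below is a stand-in, not Brady or Global Bond data.

```python
import numpy as np
from numpy.polynomial import legendre

# Map maturities to [-1, 1], the natural domain of Legendre polynomials.
maturities = np.linspace(0.5, 10, 20)
x = 2 * (maturities - maturities.min()) / (maturities.max() - maturities.min()) - 1
curve = 0.12 - 0.02 * np.exp(-0.5 * maturities)   # stand-in emerging-market curve

coeffs = legendre.legfit(x, curve, deg=3)         # Legendre coefficients
fitted = legendre.legval(x, coeffs)
print("coefficients (level, slope, curvature, cubic):", coeffs.round(5))
print("max abs fit error:", np.abs(fitted - curve).max())
```

Tracking these coefficients day by day, in levels and first differences, gives the Legendre factors whose dynamics the LDM models.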

