INSURANCE VALUATION: A TWO-STEP GENERALISED REGRESSION APPROACH

2021, pp. 1-35
Author(s): Karim Barigou, Valeria Bignozzi, Andreas Tsanakas

Abstract: Current approaches to fair valuation in insurance often follow a two-step approach, combining quadratic hedging with the application of a risk measure on the residual liability to obtain a cost-of-capital margin. In such approaches, the preferences represented by the regulatory risk measure are not reflected in the hedging process. We address this issue with an alternative two-step hedging procedure, based on generalised regression arguments, which leads to portfolios that are neutral with respect to a risk measure such as Value-at-Risk or the expectile. First, a portfolio of traded assets aimed at replicating the liability is determined by local quadratic hedging. Second, the residual liability is hedged using an alternative objective function. The risk margin is then defined as the cost of the capital required to hedge the residual liability. When quantile regression is used in the second step, yearly solvency constraints are naturally satisfied; furthermore, the portfolio is a risk minimiser among all hedging portfolios that satisfy such constraints. We present a neural network algorithm for the valuation and hedging of insurance liabilities based on a backward iteration scheme. The algorithm is fairly general and easily applicable, as it only requires simulated paths of risk drivers.
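Purely for orientation, the sketch below mimics the two steps on simulated one-period data, with an ordinary least-squares fit as the quadratic hedge and a quantile regression (via statsmodels, an assumption) for the residual. It is a static, simplified stand-in, not the paper's dynamic neural-network algorithm; the asset payoffs, the 99.5% level and the 6% cost-of-capital rate are likewise assumptions.

```python
# Minimal one-period sketch of a quadratic hedge followed by a quantile-regression hedge.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_scen = 10_000
assets = rng.normal(size=(n_scen, 3))           # simulated payoffs of 3 traded assets
liability = 2.0 + assets @ np.array([1.0, 0.5, -0.3]) + 0.8 * rng.standard_t(df=4, size=n_scen)

# Step 1: quadratic hedge = ordinary least squares of the liability on traded assets plus cash.
X = sm.add_constant(assets)
theta_quad = sm.OLS(liability, X).fit().params
residual = liability - X @ theta_quad

# Step 2: hedge the residual with a quantile regression (pinball loss) at level alpha,
# which makes the combined position VaR-neutral at that level.
alpha = 0.995
theta_q = sm.QuantReg(residual, X).fit(q=alpha).params

# The intercept of the quantile fit plays the role of the capital held against the residual;
# the risk margin is its cost-of-capital charge (the rate eta is an assumption).
eta = 0.06
risk_margin = eta * theta_q[0]
print(f"quadratic hedge weights: {theta_quad[1:]}, capital: {theta_q[0]:.3f}, margin: {risk_margin:.3f}")
```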

2021, pp. 1-29
Author(s): Yanhong Chen

Abstract: In this paper, we study the optimal reinsurance contracts that minimize the convex combination of the Conditional Value-at-Risk (CVaR) of the insurer's loss and the reinsurer's loss over the class of ceded loss functions such that the retained loss function is increasing and the ceded loss function satisfies the Vajda condition. The optimal solutions are obtained for a general class of reinsurance premium principles satisfying the properties of risk loading and convex order preservation. Our results show that the optimal ceded loss functions take the form of five interconnected segments for general reinsurance premium principles, and they can be further simplified to four interconnected segments if additional properties are imposed on the reinsurance premium principles. Finally, we derive the optimal parameters for the expected value premium principle and present a numerical study analyzing the impact of the weighting factor on the optimal reinsurance.
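As a rough illustration of the objective being minimised, the Monte Carlo sketch below evaluates a convex combination of the insurer's and the reinsurer's CVaR for a simple stop-loss ceded function under the expected value premium principle. The loss distribution, weighting factor and loading are assumptions, and the treaty is restricted to a stop-loss form rather than the paper's five-segment optimum.

```python
# Monte Carlo sketch: weighted CVaR of insurer and reinsurer under a stop-loss treaty.
import numpy as np

def cvar(sample, alpha=0.95):
    """Empirical Conditional Value-at-Risk: average of losses beyond the alpha-quantile."""
    var = np.quantile(sample, alpha)
    return sample[sample >= var].mean()

rng = np.random.default_rng(1)
loss = rng.lognormal(mean=1.0, sigma=0.8, size=200_000)    # ground-up insurance loss X

def objective(deductible, weight=0.5, loading=0.2, alpha=0.95):
    ceded = np.maximum(loss - deductible, 0.0)             # ceded loss f(X) = (X - d)_+
    retained = loss - ceded                                 # retained loss, increasing in X
    premium = (1.0 + loading) * ceded.mean()                # expected value premium principle
    insurer = retained + premium                            # insurer's total loss
    reinsurer = ceded - premium                              # reinsurer's total loss
    return weight * cvar(insurer, alpha) + (1 - weight) * cvar(reinsurer, alpha)

# Crude grid search over the deductible of the stop-loss treaty.
grid = np.linspace(0.0, 20.0, 201)
best = min(grid, key=objective)
print(f"best deductible on the grid: {best:.2f}, objective: {objective(best):.3f}")
```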


2021, Vol 14 (5), pp. 201
Author(s): Yuan Hu, W. Brent Lindquist, Svetlozar T. Rachev

This paper investigates performance attribution measures as a basis for constraining portfolio optimization. We employ optimizations that minimize conditional value-at-risk and investigate two performance attributes, asset allocation (AA) and the selection effect (SE), as constraints on asset weights. The test portfolio consists of stocks from the Dow Jones Industrial Average index. Values for the performance attributes are established relative to two benchmarks, equi-weighted and price-weighted portfolios of the same stocks. Performance of the optimized portfolios is judged using comparisons of cumulative price and the risk measures: maximum drawdown, Sharpe ratio, Sortino–Satchell ratio and Rachev ratio. The results suggest that achieving SE performance thresholds requires larger turnover values than those required for achieving comparable AA thresholds. The results also suggest a positive role in price and risk-measure performance for the imposition of constraints on AA and SE.
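For readers who want to see the optimisation machinery, the sketch below minimises CVaR via the standard Rockafellar–Uryasev linear program on simulated returns, with a simple band around an equally weighted benchmark standing in for the paper's AA/SE attribution constraints. The asset universe, scenario model and band width are assumptions.

```python
# CVaR-minimising long-only portfolio via the Rockafellar–Uryasev LP with scipy.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n_assets, n_scen, alpha = 5, 2_000, 0.95
returns = rng.normal(0.0005, 0.01, size=(n_scen, n_assets))   # simulated scenario returns

# Decision vector x = [w (n_assets), t (1), u (n_scen)]; objective = t + mean excess term.
c = np.concatenate([np.zeros(n_assets), [1.0], np.ones(n_scen) / ((1 - alpha) * n_scen)])

# Scenario constraints: u_s >= -r_s·w - t  <=>  -r_s·w - t - u_s <= 0.
A_ub = np.hstack([-returns, -np.ones((n_scen, 1)), -np.eye(n_scen)])
b_ub = np.zeros(n_scen)

# Fully invested portfolio: weights sum to one.
A_eq = np.concatenate([np.ones(n_assets), [0.0], np.zeros(n_scen)])[None, :]
b_eq = [1.0]

# Long-only weights within +/- 0.10 of the 1/n benchmark (stand-in for an attribution constraint).
w_bench, band = 1.0 / n_assets, 0.10
bounds = [(max(0.0, w_bench - band), w_bench + band)] * n_assets + [(None, None)] + [(0.0, None)] * n_scen

sol = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
weights = sol.x[:n_assets]
print("optimal weights:", np.round(weights, 3), " CVaR:", round(sol.fun, 5))
```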


2009, Vol 39 (2), pp. 591-613
Author(s): Andreas Kull

Abstract: We revisit the relative retention problem originally introduced by de Finetti using concepts recently developed in risk theory and quantitative risk management. Instead of using the variance as a risk measure, we consider the Expected Shortfall (Tail Value-at-Risk), include capital costs, and take constraints on risk capital into account. Starting from a risk-based capital allocation, the paper presents an optimization scheme for sharing risk in a multi-risk-class environment. Risk sharing takes place between two portfolios, and the pricing of risk transfer reflects both portfolio structures. This allows us to shed more light on how optimal risk sharing is characterized in a situation where risk transfer takes place between parties employing similar risk and performance measures. Recent developments in the regulatory domain (‘risk-based supervision’) pushing for common, insurance-industry-wide risk measures underline the importance of this question. The paper includes a simple non-life insurance example illustrating optimal risk transfer in terms of retentions of common reinsurance structures.
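The toy sketch below conveys the flavour of such a retention problem for a single risk class: choose an excess-of-loss retention that maximises expected profit net of a cost-of-capital charge on Expected Shortfall, subject to a risk-capital limit. The loss model, loadings, cost-of-capital rate and capital limit are assumptions, and the paper's two-portfolio risk-sharing structure is not reproduced.

```python
# Single-risk-class retention choice under Expected Shortfall with a capital constraint.
import numpy as np

def expected_shortfall(losses, alpha=0.99):
    """Average loss in the worst (1 - alpha) share of scenarios."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(3)
gross = rng.pareto(a=2.5, size=200_000) * 10.0            # simulated gross portfolio losses
premium_gross = gross.mean() * 1.3                         # gross premium with a 30% loading
reins_loading, coc, capital_limit = 0.25, 0.08, 60.0       # assumed loading, cost of capital, ES limit

def net_result(retention):
    ceded = np.maximum(gross - retention, 0.0)             # excess-of-loss recovery
    reins_premium = (1.0 + reins_loading) * ceded.mean()
    retained = gross - ceded
    es = expected_shortfall(retained)
    profit = premium_gross - reins_premium - retained.mean() - coc * es
    return profit, es

feasible = []
for r in np.linspace(5.0, 100.0, 96):
    profit, es = net_result(r)
    if es <= capital_limit:                                # risk-capital constraint
        feasible.append((r, profit, es))

best = max(feasible, key=lambda t: t[1])
print(f"optimal retention: {best[0]:.1f}, expected profit: {best[1]:.2f}, ES: {best[2]:.2f}")
```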


2021, Vol 14 (11), pp. 540
Author(s): Eyden Samunderu, Yvonne T. Murahwa

Developments in the world of finance have led the authors to assess the adequacy of relying on the normal distribution assumption alone in measuring risk. Cushioning against risk has always created a plethora of complexities and challenges; hence, this paper analyses the statistical properties of various risk measures under non-normal distributions and provides a financial blueprint for managing risk. Relying on the old assumption of normality alone is inaccurate and has led to the use of models that do not give accurate risk measures. Our empirical design first examined the use of returns in measuring risk and assessed the current financial environment. As an alternative to conventional measures, the paper employs a mosaic of risk techniques, reflecting the fact that there is no single universal risk measure. The next step examined currently adopted risk proxy measures, such as the Gaussian-based value at risk (VaR) measure. Furthermore, the authors analysed multiple alternative approaches that do not rely on the normality assumption, such as other variations of VaR, as well as econometric models that can be used in risk measurement and forecasting. Value at risk (VaR) is a widely used measure of financial risk, which provides a way of quantifying and managing the risk of a portfolio; arguably, it represents the most important tool for evaluating market risk as one of the several threats to the global financial system. Following an extensive literature review, a data set composed of three main asset classes, bonds, equities and hedge funds, was analysed. The first step was to determine the extent to which returns are not normally distributed. Hypothesis testing showed that the majority of returns are not normally distributed, instead exhibiting skewness and kurtosis greater or less than three. The study then applied various VaR methods to measure risk and to identify the most efficient ones. Different timelines were used to carry out stressed value-at-risk calculations, showing that the volatility of asset returns was higher during periods of crisis. The steps that followed examined the relationships between the variables through correlation tests and time series analysis, which led to forecasts of the returns. It was noted that these methods cannot be used in isolation, so we adopted a mosaic of all the VaR measures, which included studying the behaviour of the assets and their relations to each other. Furthermore, we also examined the environment as a whole and then applied forecasting models to value returns accurately; this gave a much more accurate and relevant risk measure than the initial assumption of normality.
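A minimal sketch of the kind of comparison described, on simulated fat-tailed returns rather than the paper's bond, equity and hedge-fund data: test normality via skewness, kurtosis and the Jarque–Bera statistic, then contrast a Gaussian (variance–covariance) VaR with a historical-simulation VaR.

```python
# Normality diagnostics plus Gaussian vs. historical 99% VaR on simulated returns.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
returns = rng.standard_t(df=4, size=2_500) * 0.01      # fat-tailed daily returns (toy data)

# Sample skewness, Pearson kurtosis (benchmark value 3 under normality) and Jarque–Bera test.
skew = stats.skew(returns)
kurt = stats.kurtosis(returns, fisher=False)
jb = stats.jarque_bera(returns)
print(f"skewness {skew:.3f}, kurtosis {kurt:.3f}, Jarque-Bera p-value {jb.pvalue:.4f}")

# 99% one-day VaR, reported as a positive loss number.
alpha = 0.99
var_gaussian = -(returns.mean() + returns.std(ddof=1) * stats.norm.ppf(1 - alpha))
var_historical = -np.quantile(returns, 1 - alpha)
print(f"Gaussian VaR: {var_gaussian:.4f}, historical VaR: {var_historical:.4f}")
```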


PLoS ONE, 2021, Vol 16 (1), pp. e0245904
Author(s): Viviane Naimy, Omar Haddad, Gema Fernández-Avilés, Rim El Khoury

This paper provides a thorough overview and further clarification of the volatility behavior of six major cryptocurrencies (Bitcoin, Ripple, Litecoin, Monero, Dash and Dogecoin) with respect to world currencies (Euro, British Pound, Canadian Dollar, Australian Dollar, Swiss Franc and the Japanese Yen), the relative performance of diverse GARCH-type specifications, namely the SGARCH, IGARCH (1,1), EGARCH (1,1), GJR-GARCH (1,1), APARCH (1,1), TGARCH (1,1) and CGARCH (1,1), and the forecasting performance of the Value at Risk measure. The sample period extends from October 13th, 2015 to November 18th, 2019. The findings evidenced the superiority of the IGARCH model, in both the in-sample and the out-of-sample contexts, for forecasting the volatility of world currencies, namely the British Pound, Canadian Dollar, Australian Dollar, Swiss Franc and the Japanese Yen. The CGARCH alternative modeled the Euro almost perfectly during both periods. Advanced GARCH models better depicted asymmetries in cryptocurrencies’ volatility and revealed persistence and “intensifying” levels in their volatility. The IGARCH was the best-performing model for Monero. As for the remaining cryptocurrencies, the GJR-GARCH model proved to be superior during the in-sample period, while the CGARCH and TGARCH specifications were the optimal ones in the out-of-sample interval. The VaR forecasting performance is enhanced with the use of the asymmetric GARCH models. The VaR results provided a very accurate measure of the level of downside risk to which the selected exchange currencies are exposed at all confidence levels. However, the outcomes were far from uniform for the selected cryptocurrencies: convincing for Dash and Dogecoin, acceptable for Litecoin and Monero, and unconvincing for Bitcoin and Ripple, where the (optimal) model was not rejected only at the 99% confidence level.
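As a sketch of this workflow (the paper does not specify its software; the `arch` Python package is used here as an assumption), the snippet fits GJR-GARCH(1,1) and EGARCH(1,1) models with Student-t innovations to a simulated return series and backs out a one-day 99% VaR from the variance forecast.

```python
# Fit asymmetric GARCH models and derive a one-step-ahead 99% VaR (toy data).
import numpy as np
from arch import arch_model
from scipy.stats import norm

rng = np.random.default_rng(5)
returns = rng.standard_t(df=5, size=1_500)        # simulated daily returns, percent-like scale

gjr = arch_model(returns, vol="GARCH", p=1, o=1, q=1, dist="t").fit(disp="off")
egarch = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="t").fit(disp="off")
print(gjr.params)
print(egarch.params)

# One-step-ahead 99% VaR from the GJR-GARCH variance forecast (normal quantile for simplicity).
fcast = gjr.forecast(horizon=1)
sigma = np.sqrt(fcast.variance.values[-1, 0])
var_99 = -(gjr.params["mu"] + norm.ppf(0.01) * sigma)
print(f"one-day 99% VaR: {var_99:.3f}")
```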


2021
Author(s): Xuecheng Yin, Esra Buyuktahtakin

Existing compartmental-logistics models in epidemic control are limited in their ability to optimize the allocation of vaccines and treatment resources under a risk-averse objective. In this paper, we present a data-driven, mean-risk, multi-stage, stochastic epidemics-vaccination-logistics model that evaluates various disease growth scenarios under the Conditional Value-at-Risk (CVaR) risk measure to optimize the distribution of treatment centers, resources, and vaccines, while minimizing the total expected number of infections, deaths, and close contacts of infected people under a limited budget. We integrate a new ring vaccination compartment into a Susceptible-Infected-Treated-Recovered-Funeral-Burial epidemics-logistics model. Our formulation involves uncertainty both in the vaccine supply and in the disease transmission rate. Here, we also consider the risk of experiencing scenarios that lead to adverse outcomes in terms of the number of people infected and dead due to the epidemic. Combining the risk-neutral objective with a risk measure allows for a trade-off between the weighted expected impact of the outbreak and the expected risks associated with experiencing extremely disastrous scenarios. We incorporate human mobility into the model and develop a new method to estimate the migration rate between regions when data on migration rates are not available. We apply our multi-stage stochastic mixed-integer programming model to the case of controlling the 2018-2020 Ebola Virus Disease (EVD) outbreak in the Democratic Republic of the Congo (DRC) using real data. Our results show that increasing risk aversion by emphasizing potentially disastrous outbreak scenarios reduces the expected risk related to adverse scenarios at the price of an increased expected number of infections and deaths over all possible scenarios. We also find that isolating and treating infected individuals are the most efficient ways to slow the transmission of the disease, while vaccination is supplementary to primary interventions in reducing the number of infections. Furthermore, our analysis indicates that vaccine acceptance rates affect the optimal vaccine allocation only at the initial stages of the vaccine rollout under a tight vaccine supply.
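The snippet below isolates just the mean-risk idea behind the objective: candidate policies are scored by expected impact plus a risk-aversion weight times the CVaR of that impact, so a heavier weight favours policies with lighter tails even if their expected impact is higher. The scenario impacts and weights are placeholders, not outputs of the full multi-stage stochastic program.

```python
# Mean-risk scoring of candidate policies: E[impact] + lambda * CVaR(impact).
import numpy as np

def cvar(values, alpha=0.95):
    """Average of the worst (1 - alpha) share of scenario outcomes."""
    threshold = np.quantile(values, alpha)
    return values[values >= threshold].mean()

rng = np.random.default_rng(6)
# Impact (e.g., infections) of two candidate resource-allocation policies across scenarios.
policy_a = rng.gamma(shape=4.0, scale=250.0, size=10_000)      # lower mean, heavier tail
policy_b = rng.normal(loc=1_150.0, scale=120.0, size=10_000)   # higher mean, thin tail

for lam in (0.0, 0.1, 1.0):                                    # increasing risk aversion
    score_a = policy_a.mean() + lam * cvar(policy_a)
    score_b = policy_b.mean() + lam * cvar(policy_b)
    chosen = "A" if score_a < score_b else "B"
    print(f"lambda={lam:>3}: score A={score_a:8.1f}, score B={score_b:8.1f} -> choose {chosen}")
```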


2012, Vol 3 (1), pp. 150-157
Author(s): Suresh Andrew Sethi, Mike Dalton

Abstract: Traditional measures that quantify variation in natural resource systems, such as the standard deviation or the coefficient of variation, treat both upside and downside deviations as contributing to variability. Here we introduce three risk measures from investment theory that quantify variability in natural resource systems by analyzing either upside or downside outcomes and typical or extreme outcomes separately: semideviation, conditional value-at-risk, and probability of ruin. Risk measures can be custom tailored to frame variability as a performance measure in terms directly meaningful to specific management objectives, such as presenting risk as the harvest expected in an extreme bad year, or characterizing risk as the probability of fishery escapement falling below a prescribed threshold. In this paper, we present formulae, empirical examples from commercial fisheries, and R code to calculate the three risk measures. In addition, we evaluated risk measure performance with simulated data and found that risk measures can provide unbiased estimates at small sample sizes. Because they decompose complex variability into quantitative metrics, we envision risk measures being useful across a range of wildlife management scenarios, including policy decision analyses, comparative analyses across systems, and tracking the state of natural resource systems through time.
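Python sketches of the three measures described (the paper itself supplies the formulae and R code); the harvest series and ruin threshold below are made-up illustrations.

```python
# Semideviation, lower-tail CVaR and probability of ruin for a natural-resource series.
import numpy as np

def semideviation(x):
    """Downside semideviation: spread of the outcomes that fall below the mean."""
    x = np.asarray(x, dtype=float)
    downside = x[x < x.mean()] - x.mean()
    return np.sqrt(np.mean(downside**2)) if downside.size else 0.0

def cvar_lower(x, alpha=0.10):
    """Expected outcome in the worst alpha share of years (e.g., harvest in an extreme bad year)."""
    x = np.asarray(x, dtype=float)
    cutoff = np.quantile(x, alpha)
    return x[x <= cutoff].mean()

def probability_of_ruin(x, threshold):
    """Share of years in which the outcome (e.g., escapement) falls below a prescribed threshold."""
    x = np.asarray(x, dtype=float)
    return np.mean(x < threshold)

harvest = np.array([820, 910, 640, 1_050, 380, 990, 720, 860, 450, 1_120], dtype=float)
print(f"semideviation: {semideviation(harvest):.1f}")
print(f"10% CVaR (expected harvest in an extreme bad year): {cvar_lower(harvest):.1f}")
print(f"probability of ruin below 500: {probability_of_ruin(harvest, 500.0):.2f}")
```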


2019, Vol 12 (4), pp. 159
Author(s): Yuyang Cheng, Marcos Escobar-Anel, Zhenxian Gong

This paper proposes and investigates a multivariate 4/2 Factor Model. The name 4/2 comes from the superposition of a CIR term and a 3/2-model component. Our model becomes multidimensional along the lines of a principal component and factor covariance decomposition. We find conditions for well-defined changes of measure, and we also obtain two key characteristic functions in closed form, which help with pricing and risk measure calculations. In a numerical example, we demonstrate the significant impact of the newly added 3/2 component (parameter b) and the common factor (a), both with respect to changes in the implied volatility surface (up to 100%) and with respect to two risk measures: value-at-risk and expected shortfall, where an increase of up to 29% was detected.
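For orientation, a single-factor version of the 4/2 specification can be written as below, where the instantaneous volatility superposes a CIR (Heston-type) term a√V_t and a 3/2-type term b/√V_t; the notation is generic and the paper's multivariate factor and covariance structure is not reproduced.

```latex
% Single-factor 4/2 dynamics (generic notation; b = 0 recovers Heston, a = 0 the 3/2 model)
dS_t = \mu S_t \, dt + \left( a\sqrt{V_t} + \frac{b}{\sqrt{V_t}} \right) S_t \, dW_t^{S},
\qquad
dV_t = \kappa(\theta - V_t)\, dt + \sigma \sqrt{V_t}\, dW_t^{V},
\qquad
d\langle W^{S}, W^{V} \rangle_t = \rho \, dt .
```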

