Distributional Uncertainty for Spectral Risk Measures

Author(s):  
Mohammed Berkhouch

Spectral risk measures, originally introduced as an extension of expected shortfall, constitute a prominent class of risk measures that account for the decision-maker's risk aversion. In practice, risk measures are often estimated from data distributions, and due to the uncertain character of financial markets, one has no specific criterion for picking the appropriate distribution. Risk assessment under different possible scenarios (such as financial crises or disease outbreaks) is therefore a source of uncertainty that may lead to concerning financial losses. The chapter addresses this issue, first, by adapting a robust framework for spectral risk measures that considers the whole set of possible scenarios instead of making a specific choice. Second, the author proposes a variability-type approach as an alternative way of quantifying uncertainty, since measuring uncertainty tells us how strongly the risk measurement process could be affected by it. Finally, the stated theory is illustrated with a practical example from the NASDAQ index.
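A spectral risk measure is a quantile-weighted average of losses, ρ_φ(X) = ∫₀¹ φ(p) VaR_p(X) dp, where the spectrum φ is non-negative, non-decreasing, and integrates to one. A minimal numerical sketch, assuming the common exponential risk-aversion spectrum φ(p) = k·e^(−k(1−p))/(1−e^(−k)) (a standard textbook choice, not necessarily the one used in the chapter):

```python
import numpy as np

def spectral_risk(losses, k=5.0, grid=10_000):
    """Discretize rho_phi(X) = integral_0^1 phi(p) * VaR_p(X) dp on a
    midpoint grid, with the exponential spectrum
    phi(p) = k * exp(-k * (1 - p)) / (1 - exp(-k))."""
    p = (np.arange(grid) + 0.5) / grid           # midpoints of (0, 1)
    phi = k * np.exp(-k * (1 - p)) / (1 - np.exp(-k))
    var = np.quantile(losses, p)                 # empirical quantile function
    return np.mean(phi * var)                    # midpoint-rule integral

rng = np.random.default_rng(0)
losses = rng.normal(0.0, 1.0, 100_000)
print(spectral_risk(losses))   # above the mean: phi overweights tail losses
```

A larger `k` puts more weight on the worst quantiles, so the measure increases with risk aversion.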

Author(s):  
Nicole Bäuerle ◽  
Alexander Glauner

Abstract. We study the minimization of a spectral risk measure of the total discounted cost generated by a Markov Decision Process (MDP) over a finite or infinite planning horizon. The MDP is assumed to have Borel state and action spaces and the cost function may be unbounded above. The optimization problem is split into two minimization problems using an infimum representation for spectral risk measures. We show that the inner minimization problem can be solved as an ordinary MDP on an extended state space and give sufficient conditions under which an optimal policy exists. Regarding the infinite dimensional outer minimization problem, we prove the existence of a solution and derive an algorithm for its numerical approximation. Our results include the findings in Bäuerle and Ott (Math Methods Oper Res 74(3):361–379, 2011) in the special case that the risk measure is Expected Shortfall. As an application, we present a dynamic extension of the classical static optimal reinsurance problem, where an insurance company minimizes its cost of capital.
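In the Expected Shortfall special case mentioned above, the infimum representation is the well-known Rockafellar–Uryasev formula ES_α(X) = inf_q { q + (1−α)⁻¹ E[(X−q)⁺] }, whose infimum is attained at the α-quantile. A sketch of that representation on sample data (illustrative only; names and data are ours, not from the paper):

```python
import numpy as np

def es_infimum(losses, alpha=0.95):
    """Expected Shortfall via the Rockafellar-Uryasev representation
    ES_a(X) = inf_q { q + E[(X - q)^+] / (1 - alpha) }.
    The infimum is attained at the alpha-quantile, so evaluate there."""
    q = np.quantile(losses, alpha)
    return q + np.mean(np.maximum(losses - q, 0.0)) / (1 - alpha)

rng = np.random.default_rng(1)
x = rng.standard_t(df=5, size=200_000)     # heavy-tailed sample losses
print(es_infimum(x))   # exceeds the 95% quantile, as a tail average must
```

The same value is obtained by averaging the losses beyond the 95% quantile, which is what makes the representation useful for optimization: the inner variable `q` can be carried along as an extra state.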


Criminology ◽  
2021 ◽  
Author(s):  
James C. Oleson

The evidence-based practice (EBP) movement can be traced to a 1992 article in the Journal of the American Medical Association, although decision-making with empirical evidence (rather than tradition, anecdote, or intuition) is obviously much older. Nevertheless, for the last twenty-five years, EBP has played a pivotal role in criminal justice, particularly within community corrections. While the prediction of recidivism in parole or probation decisions has attracted relatively little attention, the use of risk measures by sentencing judges is controversial. This might be because sentencing typically involves both backward-looking decisions, related to the blameworthiness of the crime, and forward-looking decisions, about the offender's prospective risk of recidivism. Evidence-based sentencing quantifies the predictive aspects of decision-making by incorporating an assessment of risk factors (which increase recidivism risk), protective factors (which reduce recidivism risk), criminogenic needs (impairments that, if addressed, will reduce recidivism risk), the measurement of recidivism risk, and the identification of optimal recidivism-reducing sentencing interventions. Proponents of evidence-based sentencing claim that it can allow judges to "sentence smarter" by using data to distinguish high-risk offenders (who might be imprisoned to mitigate their recidivism risk) from low-risk offenders (who might be released into the community with relatively little danger). This, proponents suggest, can reduce unnecessary incarceration, decrease costs, and enhance community safety. Critics, however, note that risk assessment typically looks beyond criminal conduct, incorporating demographic and socioeconomic variables. Even if a risk factor is facially neutral (e.g., criminal history), it might operate as a proxy for a constitutionally protected category (e.g., race).
The same objectionable variables are used widely in presentence reports, but their incorporation into an actuarial risk score has greater potential to obfuscate facts and reify underlying disparities. The evidence-based sentencing literature is dynamic and rapidly evolving, but this bibliography identifies sources that might prove useful. It first outlines the theoretical foundations of traditional (non-evidence-based) sentencing, identifying resources and overviews. It then identifies sources related to decision-making and prediction, risk assessment logic, criminogenic needs, and responsivity. The bibliography then describes and defends evidence-based sentencing, and identifies works on sentencing variables and risk assessment instruments. It then relates evidence-based sentencing to big data and identifies data issues. Several works on constitutional problems are listed, the proxies problem is described, and sources on philosophical issues are identified. The bibliography concludes with a description of validation research, the politics of evidence-based sentencing, and the identification of several current initiatives.


2021 ◽  
pp. 1-16
Author(s):  
Tingrong Qin ◽  
Guoliang Ma ◽  
Dongyang Li ◽  
Xinjie Zhou ◽  
Xingjie He ◽  
...  

Abstract. A ship's perception of risk is an important basis for collision avoidance. To improve such perception, several risk measurement parameters on the ship domain are determined, including the approach factor, the time to domain violation (TDV) and the possible collision domain. Then, a risk hierarchy prewarning (RHP) model based on the violation detection of a ship domain is proposed, in which a two-level alarm scheme is adopted. A low-intensity alarm is activated by reaching the minimum approach factor and the TDV threshold, and a high-intensity alarm is activated by the factor of the possible collision domain and the TDV threshold. Subsequently, a novel guard zone in ARPA radar utilising the RHP model is developed to establish a ship's risk perception system for officers on watch at sea. The model proposed in this paper can not only enhance the accuracy of risk assessment around one's own ship, but also serve as a decision support system for collision avoidance.
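The two-level alarm scheme above can be sketched as a simple decision rule. The threshold values and parameter names below are illustrative assumptions, not figures from the paper:

```python
def alarm_level(approach_factor, tdv, in_possible_collision_domain,
                af_min=0.5, tdv_threshold=6.0):
    """Two-level prewarning rule: a low-intensity alarm when the approach
    factor drops to its minimum while TDV is within threshold; a
    high-intensity alarm when the possible collision domain is reached
    within the TDV threshold. Thresholds (af_min, and tdv_threshold in
    minutes) are hypothetical placeholders."""
    if in_possible_collision_domain and tdv <= tdv_threshold:
        return "high"
    if approach_factor <= af_min and tdv <= tdv_threshold:
        return "low"
    return "none"

print(alarm_level(0.4, 5.0, False))   # low-intensity alarm
print(alarm_level(0.4, 5.0, True))    # escalates to high-intensity
```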


2021 ◽  
Vol 20 (10) ◽  
pp. 1933-1950
Author(s):  
Nikolai V. FIROV ◽  
Sergei A. SOROKIN

Subject. The article addresses scientific and technical risk and the customer's financial losses in the process of research and development works on the creation of complex technical systems. Objectives. The study aims to construct and analyze the dependence of scientific and technical risk and the customer's financial losses on the planned volume of development works and the financial resources invested in them. Methods. We apply methods of probability theory and mathematical statistics, system and regression analysis, and risk assessment and management. The paper rests on data on completed development projects for the creation of complex technical systems. Results. We formulated methodological provisions for assessing the scientific and technical risk arising in the process of development works on complex technical systems. The paper presents an algorithm for calculating the expected financial losses from the implementation of the works. The problem of minimizing financial losses associated with scientific and technical risk is formulated and formalized. The feasibility of the proposed provisions and recommendations is confirmed by a practical example. Conclusions. To assess risks, it is important to consider how the degree of difference between the main characteristics of the developed product and its prototype affects the required amount of work at the development stage. This makes it possible to build regression dependencies of the volume of work at the development stage on the specified factor, which are later used to assess the scientific and technical risk and the associated financial losses.


Author(s):  
Inés Jiménez ◽  
Andrés Mora-Valencia ◽  
Trino-Manuel Ñíguez ◽  
Javier Perote

The semi-nonparametric (SNP) modeling of the return distribution has proven to be a flexible and accurate methodology for portfolio risk management that allows two-step estimation of the dynamic conditional correlation (DCC) matrix. For this SNP-DCC model, we propose a stepwise procedure to compute pairwise conditional correlations under bivariate marginal SNP distributions, overcoming the curse of dimensionality. The procedure is compared to the assumption of Dynamic Equicorrelation (DECO), which is a parsimonious model when correlations among the assets are not significantly different but requires joint estimation of the multivariate SNP model. The risk assessment of both methodologies is tested on a portfolio of cryptocurrencies by implementing backtesting techniques for different risk measures: Value-at-Risk, Expected Shortfall and Median Shortfall. The results support our proposal, showing that the SNP-DCC model performs better for a smaller confidence level than the SNP-DECO model, although both models perform similarly at higher confidence levels.
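The three risk measures used in the backtest can be estimated empirically from a return sample as follows. This is a plain historical-simulation sketch for illustration, not the SNP-DCC estimator itself:

```python
import numpy as np

def historical_risk(returns, alpha=0.975):
    """Empirical Value-at-Risk, Expected Shortfall and Median Shortfall
    of the loss distribution L = -returns at confidence level alpha."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)        # Value-at-Risk: alpha-quantile
    tail = losses[losses >= var]            # losses at or beyond VaR
    return {"VaR": var,
            "ES": tail.mean(),              # mean of the tail losses
            "MS": np.median(tail)}          # median of the tail losses

rng = np.random.default_rng(2)
r = rng.normal(0.0005, 0.02, 50_000)        # synthetic daily returns
m = historical_risk(r)
print(m)
```

By construction ES and Median Shortfall are at least as large as VaR, since both summarize the losses beyond the VaR cutoff; backtesting then compares realized exceedances against these estimates.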

