Distorted Copula-Based Probability Distribution of a Counting Hierarchical Variable: A Credit Risk Application

2016 ◽  
Vol 15 (02) ◽  
pp. 285-310 ◽  
Author(s):  
Enrico Bernardi ◽  
Silvia Romagnoli

In this paper, we propose a novel approach to computing the probability distribution of a counting variable linked to a multivariate hierarchical Archimedean copula function. The hierarchy has a twofold impact: it acts on the aggregation step and also determines the arrival policy of the random event. The novelty of this work is to introduce this policy into the model, formalized as an arrival matrix, i.e., a random matrix of dependent 0–1 random variables. This arrival matrix represents the set of combinatorial distributions of the event, i.e., of the most probable scenarios, distorted by the policy itself. To this distorted version of the [Formula: see text] approach [see Refs. 7 and 27], we are now able to apply a pure hierarchical Archimedean dependence structure among variables. As an empirical application, we study the problem of evaluating the probability distribution of losses related to the default of various types of counterparties in a structured portfolio exposed to the credit risk of a selected set of major banks in the European area and to the correlations among these risks.

Risks ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 114
Author(s):  
Paritosh Navinchandra Jha ◽  
Marco Cucculelli

The paper introduces a novel approach to ensemble modeling as a weighted model average technique. The proposed idea is prudent, simple to understand, and easy to implement compared to the Bayesian and frequentist approaches. The paper provides both theoretical and empirical contributions for assessing credit risk (probability of default) effectively in a new way, by creating an ensemble model as a weighted linear combination of machine learning models. The idea can be generalized to any classification problem in other domains where ensemble-type modeling is of interest, and is not limited to unbalanced datasets or credit risk assessment. The results suggest better forecasting performance compared to the single best of well-known parametric, non-parametric, and other ensemble machine learning models. As a future research direction, the approach can be extended by estimating the weights differently, which may further enhance the performance of the model average.
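As a rough illustration of the weighted-model-average idea (not the authors' implementation), a minimal NumPy sketch is shown below; the probabilities and weights are made up, and in practice the weights might be derived from each model's validation performance:

```python
import numpy as np

def weighted_ensemble(prob_list, weights):
    """Combine per-model default probabilities as a weighted linear average.

    prob_list : list of 1-D arrays, each a model's predicted P(default)
    weights   : non-negative model weights (normalized internally)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize so weights sum to one
    P = np.vstack(prob_list)             # shape (n_models, n_obligors)
    return w @ P                         # weighted linear combination

# Toy example: three hypothetical models scoring four obligors.
p1 = np.array([0.10, 0.80, 0.30, 0.55])
p2 = np.array([0.20, 0.70, 0.25, 0.60])
p3 = np.array([0.15, 0.90, 0.35, 0.50])

# Hypothetical weights, e.g. proportional to each model's validation score.
ens = weighted_ensemble([p1, p2, p3], weights=[0.5, 0.3, 0.2])
```

Because the combination is linear with weights summing to one, the ensemble score always lies between the smallest and largest individual model scores for each obligor.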


2006 ◽  
Vol 05 (03) ◽  
pp. 483-493 ◽  
Author(s):  
PING LI ◽  
HOUSHENG CHEN ◽  
XIAOTIE DENG ◽  
SHUNMING ZHANG

Default correlation is the key issue in the pricing of multi-name credit derivatives. In this paper, we apply copulas to characterize the dependence structure of defaults, determine the joint default distribution, and give the price for a specific kind of multi-name credit derivative, the collateralized debt obligation (CDO). We also analyze two important factors influencing the pricing of multi-name credit derivatives: recovery rates and the copula function. Finally, in a numerical example, we apply the Clayton copula to simulate default times under specific and average recovery rates, price the tranches of a given CDO, and analyze the results.
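A sketch of the default-time simulation step via the Clayton copula is given below, using the standard Marshall–Olkin gamma-frailty construction with exponential marginals; the number of names, hazard rate, and dependence parameter are all hypothetical, and the paper's CDO tranche pricing on top of this is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(42)

def clayton_default_times(n_sims, n_names, theta, hazard):
    """Simulate dependent default times via the Clayton copula
    (Marshall-Olkin frailty construction) with exponential marginals."""
    V = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n_sims, 1))  # gamma frailty
    E = rng.exponential(size=(n_sims, n_names))                    # iid Exp(1)
    U = (1.0 + E / V) ** (-1.0 / theta)     # Clayton-dependent uniforms
    return -np.log(1.0 - U) / hazard        # exponential inverse CDF

tau = clayton_default_times(n_sims=10000, n_names=5, theta=2.0, hazard=0.02)
# Fraction of scenarios with at least one default within 5 years:
p_any_default_5y = np.mean((tau < 5.0).any(axis=1))
```

Larger `theta` strengthens lower-tail dependence, so joint early defaults become more likely even though each name's marginal default probability is unchanged.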


2013 ◽  
Vol 12 (3) ◽  
pp. 651-676 ◽  
Author(s):  
Bryden Cais ◽  
Jordan S. Ellenberg ◽  
David Zureick-Brown

Abstract: We describe a probability distribution on isomorphism classes of principally quasi-polarized $p$-divisible groups over a finite field $k$ of characteristic $p$ which can reasonably be thought of as a ‘uniform distribution’, and we compute the distribution of various statistics ($p$-corank, $a$-number, etc.) of $p$-divisible groups drawn from this distribution. It is then natural to ask to what extent the $p$-divisible groups attached to a randomly chosen hyperelliptic curve (respectively, curve; respectively, abelian variety) over $k$ are uniformly distributed in this sense. This heuristic is analogous to conjectures of Cohen–Lenstra type for $\operatorname{char} k \neq p$, in which case the random $p$-divisible group is defined by a random matrix recording the action of Frobenius. Extensive numerical investigation reveals some cases of agreement with the heuristic and some interesting discrepancies. For example, plane curves over $\mathbf{F}_3$ appear substantially less likely to be ordinary than hyperelliptic curves over $\mathbf{F}_3$.


2011 ◽  
Vol 317-319 ◽  
pp. 681-684
Author(s):  
Yi Sheng Huang ◽  
Ho Shan Chiang

A novel approach to probabilistic timed structures is proposed, based on combining the formalisms of timed automata and probabilistic automata to represent the system. Real-valued clocks measure the passage of time, and transitions can be probabilistic, expressed as a discrete probability distribution over the set of target states. The use of clock variables and the specification of the state space are illustrated with real-valued time applications. Transitions between states are triggered probabilistically by events that describe either the occurrence of faults or normal working conditions. Additionally, the passage of discrete time and the transitions can be treated probabilistically by means of the theory of expectation sets, yielding a unified measure-based reasoning strategy.


2020 ◽  
Vol 13 (6) ◽  
pp. 129
Author(s):  
Annalisa Di Clemente

This work illustrates an advanced quantitative methodology for measuring the credit risk of a loan portfolio that allows for diversification effects. This methodology can also allocate credit capital coherently to each counterparty in the portfolio. The analytical approach used for estimating the portfolio credit risk is of a binomial type based on Monte Carlo simulation. The method takes into account the default correlations among the credit counterparties in the portfolio by following a copula approach and utilizing the asset return correlations of the obligors, as estimated by rigorous statistical methods. Moreover, the model treats the recovery rates as stochastic and dependent on each other and on the times until default. The methodology for coherently allocating credit capital estimates the marginal contribution of each obligor to the overall risk of the loan portfolio in terms of Expected Shortfall (ES), a risk measure more coherent and conservative than the traditional Value-at-Risk (VaR). Finally, this analytical framework is applied to a hypothetical, but typical, loan portfolio of an Italian commercial bank operating across the whole national territory. The national loan portfolio is composed of 17 sub-portfolios, or geographic clusters of credit exposures, covering 10,500 non-financial firms (or corporates) in each geo-cluster. The outcomes, in terms of correlations, portfolio risk measures and capital allocations, are compared with the results obtained by implementing the Internal Ratings-Based (IRB) approach of Basel II and III. Our chief conclusion is that the IRB model is unable to capture the real credit risk of loan portfolios because it does not take into account the actual dependence structure among the default events, and between the recovery rates and the default events. We underline that the adoption of this regulatory model can produce a dangerous underestimation of the portfolio credit risk, especially when economic uncertainty and the volatility of financial markets increase.
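The final risk-measurement step can be sketched as follows: given a Monte Carlo sample of portfolio losses, VaR is a quantile of the loss distribution and ES averages the losses beyond it, which is why ES is the more conservative measure. The loss model here is a deliberately simplified independent binomial (a copula model, as in the paper, would correlate the defaults and fatten the tail); all parameters are illustrative:

```python
import numpy as np

def var_es(losses, alpha=0.99):
    """Value-at-Risk and Expected Shortfall of a simulated loss sample.
    ES averages the losses at or beyond VaR, so ES >= VaR always holds."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return var, es

rng = np.random.default_rng(0)
# Hypothetical portfolio: 1,000 loans, 2% default probability, unit loss each,
# simulated 50,000 times under independence for illustration only.
defaults = rng.binomial(n=1000, p=0.02, size=50000).astype(float)
var99, es99 = var_es(defaults, alpha=0.99)
```

With dependent defaults (the paper's copula setting), the same `var_es` function applies unchanged; only the simulated loss sample changes.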


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Nachatchapong Kaewsompong ◽  
Paravee Maneejuk ◽  
Woraphon Yamaka

We propose a high-dimensional copula to model the dependence structure of the seemingly unrelated quantile regression. As the conventional model rests on the strong assumptions of a multivariate normal distribution and a linear dependence structure, we apply the multivariate exchangeable copula function to relax these assumptions. As there are many parameters to be estimated, we use a Bayesian Markov chain Monte Carlo approach to estimate the parameters of interest in the model. Four simulation studies are conducted to assess the performance of our proposed model and the Bayesian estimation. The satisfactory simulation results suggest the good performance and reliability of the Bayesian method used in our proposed model. A real-data analysis is also provided, and the empirical comparison indicates that our proposed model outperforms the conventional models at all considered quantile levels.


2015 ◽  
Vol 4 (4) ◽  
pp. 188
Author(s):  
HERLINA HIDAYATI ◽  
KOMANG DHARMAWAN ◽  
I WAYAN SUMARJAYA

Copulas are already widely used for financial assets, especially in risk management, owing to their ability to capture the nonlinear dependence structure of multivariate assets. In addition, using a copula function does not require the assumption of a normal distribution, so it is well suited to financial data. Managing risk requires measurement tools that help mitigate it. One measure that can be used is Value at Risk (VaR). Although VaR is very popular, it has several weaknesses; to overcome them, an alternative risk measure called CVaR can be used. The purpose of this study is to estimate CVaR using the Gaussian copula. The data used are the closing prices of Facebook and Twitter stocks. The calculations show that the potential risk is 4.7% at the 90% confidence level, 6.1% at the 95% confidence level, and 10.6% at the 99% confidence level.
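A minimal sketch of the Gaussian-copula CVaR estimate for a two-asset portfolio is given below. For simplicity the marginals are taken as normal, in which case the Gaussian copula construction reduces to a bivariate normal simulation; the study itself fits marginals to the actual Facebook and Twitter returns, and all parameter values here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_copula_cvar(mu, sigma, rho, alpha, n=100000):
    """Monte Carlo VaR/CVaR of an equally weighted two-asset portfolio whose
    dependence is a Gaussian copula with correlation rho (normal marginals
    assumed here for simplicity)."""
    cov = [[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
           [rho * sigma[0] * sigma[1], sigma[1] ** 2]]
    r = rng.multivariate_normal(mu, cov, size=n)    # joint returns
    loss = -r.mean(axis=1)                          # portfolio loss
    var = np.quantile(loss, alpha)
    cvar = loss[loss >= var].mean()                 # average of tail losses
    return var, cvar

# Hypothetical daily return parameters for the two stocks.
var95, cvar95 = gaussian_copula_cvar(mu=[0.001, 0.0005],
                                     sigma=[0.02, 0.03],
                                     rho=0.5, alpha=0.95)
```

CVaR is always at least as large as VaR at the same level, which is the sense in which it is the more conservative measure.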


2014 ◽  
Vol 40 (8) ◽  
pp. 758-769
Author(s):  
Weiou Wu ◽  
David G. McMillan

Purpose – The purpose of this paper is to examine the dynamic dependence structure in credit risk between the money market and the derivatives market during 2004-2009. The authors use the TED spread to measure credit risk in the money market and the CDS index spread for the derivatives market. Design/methodology/approach – The dependence structure is measured by a time-varying Gaussian copula. A copula is a function that joins one-dimensional distribution functions together to form multivariate distribution functions. The copula contains all the information on the dependence structure of the random variables while also removing the linear correlation restriction. It therefore provides a straightforward way of modelling non-linear and non-normal joint distributions. Findings – The results show that the correlation between these two markets, while fluctuating with a general upward trend prior to 2007, was noticeably higher after 2007. This points to evidence of credit contagion during the crisis. Three different phases are identified for the crisis period, which sheds light on the nature of contagion mechanisms in financial markets. The correlation of the two spreads fell in early 2009, although it remained higher than the pre-crisis level. This is partly due to policy intervention that lowered the TED spread, while the CDS spread remained higher due to the Eurozone sovereign debt crisis. Originality/value – The paper examines the relationship between the TED and CDS spreads, which measure credit risk in an economy. This paper contributes to the literature on dynamic co-movement, contagion effects and risk linkages.


Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 724 ◽  
Author(s):  
Fuqiang Sun ◽  
Wendi Zhang ◽  
Ning Wang ◽  
Wei Zhang

Degradation analysis has been widely used in reliability modeling problems of complex systems. A system with complex structure and various functions may have multiple degradation features, and any of them may be a cause of product failure. Typically, these features are not independent of each other, and the dependence of multiple degradation processes in a system cannot be ignored. Therefore, the premise of multivariate degradation modeling is to capture and measure the dependence among multiple features. To address this problem, this paper adopts copula entropy, which is a combination of the copula function and information entropy theory, to measure the dependence among different degradation processes. The copula function was employed to identify the complex dependence structure of performance features, and information entropy theory was used to quantify the degree of dependence. An engineering case was utilized to illustrate the effectiveness of the proposed method. The results show that this method is valid for the dependence measurement of multiple degradation processes.
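The dependence measure described above can be sketched numerically: copula entropy equals the negative of mutual information, so it is near zero for independent features and increasingly negative as dependence strengthens. The toy estimator below (rank transform to pseudo-observations, then a 2-D histogram) is a crude illustration with invented data, not the estimator used in the paper:

```python
import numpy as np

def copula_entropy(x, y, bins=20):
    """Histogram estimate of the copula entropy of (x, y).
    Copula entropy is minus the mutual information: ~0 under independence,
    increasingly negative as dependence strengthens."""
    n = len(x)
    # Rank-transform each margin to pseudo-observations on [0, 1).
    u = np.argsort(np.argsort(x)) / n
    v = np.argsort(np.argsort(y)) / n
    h, _, _ = np.histogram2d(u, v, bins=bins, range=[[0, 1], [0, 1]])
    p = h / n
    p = p[p > 0]
    # Entropy of the empirical copula density (each cell has area 1/bins**2).
    return -np.sum(p * np.log(p * bins * bins))

rng = np.random.default_rng(2)
x = rng.normal(size=5000)
noise = rng.normal(size=5000)
ce_dep = copula_entropy(x, 0.9 * x + 0.1 * noise)   # strongly dependent pair
ce_ind = copula_entropy(x, rng.normal(size=5000))   # independent pair
```

The rank transform makes the measure invariant to the marginal distributions, which is exactly why the copula isolates the dependence structure from the individual degradation laws.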


2016 ◽  
Author(s):  
Damian Brzyski ◽  
Christine B. Peterson ◽  
Piotr Sobczyk ◽  
Emmanuel J. Candès ◽  
Malgorzata Bogdan ◽  
...  

Abstract: With the rise of both the number and the complexity of traits of interest, control of the false discovery rate (FDR) in genetic association studies has become an increasingly appealing and accepted target for multiple comparison adjustment. While a number of robust FDR-controlling strategies exist, the nature of this error rate is intimately tied to the precise way in which discoveries are counted, and the performance of FDR-controlling procedures is satisfactory only if there is a one-to-one correspondence between what scientists describe as unique discoveries and the number of rejected hypotheses. The presence of linkage disequilibrium between markers in genome-wide association studies (GWAS) often leads researchers to consider the signal associated with multiple neighboring SNPs as indicating the existence of a single genomic locus with possible influence on the phenotype. This a posteriori aggregation of rejected hypotheses results in inflation of the relevant FDR. We propose a novel approach to FDR control that is based on pre-screening to identify the level of resolution of distinct hypotheses. We show how FDR-controlling strategies can be adapted to account for this initial selection, both with theoretical results and with simulations that mimic the dependence structure to be expected in GWAS. We demonstrate that our approach is versatile and useful when the data are analyzed using both single-marker tests and multivariate regression. We provide an R package that allows practitioners to apply our procedure to standard GWAS-format data, and illustrate its performance on lipid traits in the NFBC66 cohort study.
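For context, the baseline procedure that such strategies adapt is the classical Benjamini–Hochberg step-up rule; the sketch below implements the standard rule on invented p-values, not the pre-screened variant proposed in the paper:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Classical Benjamini-Hochberg step-up procedure: reject the hypotheses
    with the k smallest p-values, where k is the largest i such that the
    i-th smallest p-value satisfies p_(i) <= i * q / m."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m          # step-up thresholds
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

# Hypothetical p-values from nine association tests.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.74, 0.9]
rej = benjamini_hochberg(pvals, q=0.05)
```

The paper's point is that when several rejected hypotheses are later merged into one "locus", the effective number of discoveries shrinks and the nominal FDR guarantee of this rule no longer matches what is reported.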

