Tail Conditional Expectations Based on Kumaraswamy Dispersion Models

Mathematics ◽  
2021 ◽  
Vol 9 (13) ◽  
pp. 1478
Author(s):  
Indranil Ghosh ◽  
Filipe J. Marques

Recently, there seems to be an increasing amount of interest in the use of the tail conditional expectation (TCE) as a useful measure of risk associated with a production process, for example, in the measurement of risk associated with stock returns corresponding to the manufacturing industry, such as the production of electric bulbs, investment in housing development, and financial institutions offering loans to small-scale industries. Companies typically face three types of risk (and associated losses from each of these sources): strategic (S), operational (O), and financial (F) (insurance companies additionally face insurance risk), and these risks come from multiple sources. For asymmetric and bounded losses (properly adjusted as necessary) that are continuous in nature, we conjecture that risk assessment via the univariate/bivariate Kumaraswamy distribution will be efficient in the sense that the resulting TCE based on bivariate Kumaraswamy-type copulas does not depend on the marginals. In fact, almost all classical measures of tail dependence share this property, but they assess tail dependence only along the main diagonal of the copula, which often has little in common with the concentration of extremes over the copula’s domain of definition. In this article, we examine this risk measure in the case of univariate and bivariate Kumaraswamy (KW) portfolio risks and compute the TCE based on bivariate KW-type copulas. For illustrative purposes, a well-known stock indices data set is re-analyzed by computing the TCE for the bivariate KW-type copulas to determine which pairs produce minimum risk in a two-component risk scenario.
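For readers who want a concrete handle on the quantity being studied, the sketch below (our own illustration, not the authors’ code) computes the TCE, TCE_q(X) = E[X | X > x_q], for a univariate Kumaraswamy loss by numerical integration; the shape parameters and confidence level are arbitrary placeholders.

```python
# Minimal sketch (not the authors' code): tail conditional expectation
# TCE_q(X) = E[X | X > x_q] for a univariate Kumaraswamy(a, b) loss on (0, 1),
# computed by numerical integration. Parameter values are illustrative only.
import numpy as np
from scipy.integrate import quad

def kw_pdf(x, a, b):
    """Kumaraswamy density f(x) = a*b*x^(a-1)*(1 - x^a)^(b-1) on (0, 1)."""
    return a * b * x**(a - 1) * (1 - x**a)**(b - 1)

def kw_quantile(q, a, b):
    """Inverse CDF of the Kumaraswamy distribution, F(x) = 1 - (1 - x^a)^b."""
    return (1 - (1 - q)**(1 / b))**(1 / a)

def kw_tce(q, a, b):
    """TCE at level q: integral of x*f(x) over (x_q, 1), divided by 1 - q."""
    x_q = kw_quantile(q, a, b)
    num, _ = quad(lambda x: x * kw_pdf(x, a, b), x_q, 1.0)
    return num / (1.0 - q)

if __name__ == "__main__":
    a, b, q = 2.0, 3.0, 0.95          # illustrative shape parameters and level
    print(f"VaR at {q}: {kw_quantile(q, a, b):.4f}")
    print(f"TCE at {q}: {kw_tce(q, a, b):.4f}")
```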

Risks ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 60
Author(s):  
Cláudia Simões ◽  
Luís Oliveira ◽  
Jorge M. Bravo

Protecting against unexpected yield curve, inflation, and longevity shifts is among the most critical issues institutional and private investors must solve when managing post-retirement income benefits. This paper empirically investigates the performance of alternative immunization strategies for funding targeted multiple liabilities that are fixed in timing but random in size (inflation-linked), i.e., that change stochastically according to consumer price or wage level indexes. The immunization procedure is based on a targeted minimax strategy using the M-Absolute as the interest rate risk measure. We investigate to what extent the inflation-hedging properties of inflation-linked bonds (ILBs) in asset-liability management strategies targeted to immunize multiple liabilities of random size are superior to those of nominal bonds. We use two alternative datasets comprising daily closing prices for U.S. Treasuries and U.S. inflation-linked bonds from 2000 to 2018. The immunization performance is tested over 3-year and 5-year investment horizons, uses real rather than simulated bond data, and takes into consideration the impact of transaction costs both on the performance of immunization strategies and on the selection of optimal investment strategies. The results show that the multiple liability immunization strategy using inflation-linked bonds outperforms the equivalent strategy using nominal bonds and is robust even in a nearly zero interest rate scenario. These results have important implications for the design and structuring of ALM liability-driven investment strategies, particularly for retirement income providers such as pension schemes or life insurance companies.
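As background for the risk measure named above, a minimal sketch (our illustration, not the authors’ implementation) of the M-Absolute measure of Nawalkha and Chambers is given below: it is the present-value-weighted average absolute distance of cash flow times from the investment horizon, and the minimax immunization idea is to select the portfolio with the smallest M-Absolute. The bond and yield figures are placeholders.

```python
# Minimal sketch (illustrative, not the authors' implementation): the M-Absolute
# measure of a bond, M^A = sum_t |t - H| * w_t, where w_t is the share of the
# bond's present value paid at time t and H is the investment horizon.
import numpy as np

def m_absolute(cash_flow_times, cash_flows, horizon, flat_yield):
    """M-Absolute of a bond under a flat, continuously compounded yield."""
    t = np.asarray(cash_flow_times, dtype=float)
    cf = np.asarray(cash_flows, dtype=float)
    pv = cf * np.exp(-flat_yield * t)          # discounted cash flows
    weights = pv / pv.sum()                    # present-value weights
    return float(np.sum(np.abs(t - horizon) * weights))

if __name__ == "__main__":
    # 5-year, 4% annual-coupon bond, face value 100, 3-year horizon, 2% yield
    times = np.arange(1, 6)
    flows = np.array([4, 4, 4, 4, 104], dtype=float)
    print(f"M-Absolute: {m_absolute(times, flows, horizon=3.0, flat_yield=0.02):.4f}")
```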


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 675
Author(s):  
Xuze Zhang ◽  
Saumyadipta Pyne ◽  
Benjamin Kedem

In disease modeling, a key statistical problem is the estimation of lower and upper tail probabilities of health events from given data sets of small size and limited range. Assuming such constraints, we describe a computational framework for the systematic fusion of observations from multiple sources to compute tail probabilities that could not be obtained otherwise due to a lack of lower or upper tail data. The estimation of multivariate lower and upper tail probabilities from a given small reference data set that lacks complete information about such tail data is addressed in terms of pertussis case count data. Fusion of data from multiple sources, in conjunction with the density ratio model, is used to give probability estimates that are not obtainable from the empirical distribution. Based on a density ratio model with variable tilts, we first present a univariate fit and, subsequently, improve it with a multivariate extension. In the multivariate analysis, we select the best model in terms of the Akaike Information Criterion (AIC). Regional prediction of the number of pertussis cases in Washington State is approached by providing joint probabilities using fused data from several relatively small samples following the selected density ratio model. The model is validated by a graphical goodness-of-fit plot comparing the estimated reference distribution obtained from the fused data with the empirical distribution obtained from the reference sample only.
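For orientation, the density ratio model underlying the fusion can be written schematically as follows (notation ours, not necessarily the paper’s; “variable tilts” refers to tilt functions that may differ across samples):

```latex
g_j(x) = \exp\!\left\{ \alpha_j + \beta_j^{\top} h_j(x) \right\} g_0(x),
\qquad j = 1, \dots, q,
```

where g_0 is the reference density; tail probabilities such as P(X > x) = 1 - G_0(x) are then estimated from all fused observations rather than from the small reference sample alone.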


2021 ◽  
Vol 503 (2) ◽  
pp. 2688-2705
Author(s):  
C Doux ◽  
E Baxter ◽  
P Lemos ◽  
C Chang ◽  
A Alarcon ◽  
...  

ABSTRACT Beyond-ΛCDM physics or systematic errors may cause subsets of a cosmological data set to appear inconsistent when analysed assuming ΛCDM. We present an application of internal consistency tests to measurements from the Dark Energy Survey Year 1 (DES Y1) joint probes analysis. Our analysis relies on computing the posterior predictive distribution (PPD) for these data under the assumption of ΛCDM. We find that the DES Y1 data have an acceptable goodness of fit to ΛCDM, with a probability of finding a worse fit by random chance of p = 0.046. Using numerical PPD tests, supplemented by graphical checks, we show that most of the data vector appears completely consistent with expectations, although we observe a small tension between large- and small-scale measurements. A small part (roughly 1.5 per cent) of the data vector shows an unusually large departure from expectations; excluding this part of the data has negligible impact on cosmological constraints, but does significantly improve the p-value to 0.10. The methodology developed here will be applied to test the consistency of the DES Year 3 joint probes data sets.
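The generic flavour of such a test can be illustrated with the following sketch (a simplified stand-in, not the DES analysis pipeline): draw parameters from the posterior, simulate replicated data vectors, and report the fraction of replicas whose chi-square discrepancy exceeds that of the observed data.

```python
# Minimal sketch (not the DES pipeline): a posterior predictive goodness-of-fit
# check. For each posterior sample theta, simulate a replicated data vector,
# compute a chi-square discrepancy for the replica and for the observed data,
# and report p = fraction of replicas that fit worse than the observations.
import numpy as np

def ppd_pvalue(posterior_samples, model, data, cov, rng=None):
    """posterior_samples: (n, p) array; model(theta) -> predicted data vector."""
    rng = np.random.default_rng(rng)
    inv_cov = np.linalg.inv(cov)
    worse = 0
    for theta in posterior_samples:
        mu = model(theta)
        replica = rng.multivariate_normal(mu, cov)      # simulated data vector
        chi2_rep = (replica - mu) @ inv_cov @ (replica - mu)
        chi2_obs = (data - mu) @ inv_cov @ (data - mu)
        worse += chi2_rep >= chi2_obs
    return worse / len(posterior_samples)

if __name__ == "__main__":
    # Toy example: a 3-point "data vector" that is a linear function of theta.
    rng = np.random.default_rng(0)
    model = lambda theta: theta[0] * np.array([1.0, 2.0, 3.0])
    cov = np.diag([0.1, 0.1, 0.1])
    samples = rng.normal(1.0, 0.05, size=(500, 1))
    data = np.array([1.0, 2.1, 2.9])
    print(f"PPD p-value: {ppd_pvalue(samples, model, data, cov, rng):.3f}")
```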


2021 ◽  
Vol 13 (3) ◽  
pp. 1013
Author(s):  
Whisper Maisiri ◽  
Liezl van Dyk ◽  
Rojanette Coeztee

Industry 4.0 (I4.0) adoption in the manufacturing industry is on the rise across the world, resulting in increased empirical research on barriers and drivers to I4.0 adoption in specific country contexts. However, no similar studies are available that focus on the South African manufacturing industry. Our small-scale, interview-based qualitative descriptive study aimed to identify factors that may inhibit the sustainable adoption of I4.0 in the country’s manufacturing industry. The study probed the views and opinions of 16 managers and specialists in the industry, as well as others in supportive roles. Two themes emerged from the thematic analysis: factors that inhibit sustainable adoption of I4.0 and strategies that promote I4.0 adoption in the South African manufacturing industry. The interviews highlighted cultural constructs, structural inequalities, high youth unemployment, a fragmented task environment, and deficiencies in the education system as key inhibitors. Key strategies identified to promote sustainable adoption of I4.0 include understanding context and applying relevant technologies, strengthening the policy and regulatory space, overhauling the education system, and focusing on primary manufacturing. The study offers direction for broader investigations of the specific inhibitors to sustainable I4.0 adoption in sub-Saharan African developing countries and the strategies for overcoming them.


2021 ◽  
Vol 14 (11) ◽  
pp. 540
Author(s):  
Eyden Samunderu ◽  
Yvonne T. Murahwa

Developments in the world of finance have led the authors to assess the adequacy of using the normal distribution assumption alone in measuring risk. Cushioning against risk has always created a plethora of complexities and challenges; hence, this paper attempts to analyse the statistical properties of various risk measures under non-normal distributions and to provide a financial blueprint for how to manage risk. It is assumed that relying on the old assumption of normality alone is not accurate, which has led to the use of models that do not give accurate risk measures. Our empirical design first examined an overview of the use of returns in measuring risk and an assessment of the current financial environment. As an alternative to conventional measures, our paper employs a mosaic of risk techniques in order to establish that there is no one universal risk measure. The next step involved looking at the current risk proxy measures adopted, such as the Gaussian-based value at risk (VaR) measure. Furthermore, the authors analysed multiple alternative approaches that do not rely on the normality assumption, such as other variations of VaR, as well as econometric models that can be used in risk measurement and forecasting. Value at risk (VaR) is a widely used measure of financial risk, which provides a way of quantifying and managing the risk of a portfolio. Arguably, VaR represents the most important tool for evaluating market risk as one of the several threats to the global financial system. Following an extensive literature review, a data set composed of three main asset classes was used: bonds, equities, and hedge funds. The first part was to determine to what extent returns are not normally distributed. After testing the hypothesis, it was found that the majority of returns are not normally distributed but instead exhibit skewness and kurtosis greater than or less than three. The study then applied various VaR methods to measure risk in order to determine the most efficient ones. Different time windows were used to compute stressed value at risk, and it was seen that during periods of crisis the volatility of asset returns was higher. The steps that followed examined the relationships between the variables through correlation tests and time series analysis, which led to the forecasting of returns. It was noted that these methods could not be used in isolation. We therefore adopted a mosaic of all the VaR methods, which included studying the behaviour of the assets and their relations with one another. Furthermore, we also examined the environment as a whole and then applied forecasting models to value returns accurately; this gave a much more accurate and relevant risk measure than the initial assumption of normality.
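As an illustration of the contrast the paper draws between Gaussian and non-parametric risk measures, the sketch below (synthetic data, not the paper’s bond/equity/hedge-fund data set) compares a parametric Gaussian VaR with a historical-simulation VaR and applies a Jarque-Bera normality test.

```python
# Minimal sketch (synthetic data, not the paper's data set): compare a Gaussian
# (parametric) value at risk with a historical-simulation VaR, and test the
# normality assumption with the Jarque-Bera statistic.
import numpy as np
from scipy import stats

def gaussian_var(returns, alpha=0.99):
    """Parametric VaR assuming normally distributed returns (positive = loss)."""
    mu, sigma = returns.mean(), returns.std(ddof=1)
    return -(mu + sigma * stats.norm.ppf(1 - alpha))

def historical_var(returns, alpha=0.99):
    """Non-parametric VaR as the empirical (1 - alpha) return quantile."""
    return -np.quantile(returns, 1 - alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Heavy-tailed synthetic daily returns (Student-t) to mimic non-normality.
    returns = 0.01 * rng.standard_t(df=4, size=2500)
    jb_stat, jb_p = stats.jarque_bera(returns)
    print(f"Jarque-Bera p-value: {jb_p:.4f} (small => reject normality)")
    print(f"Gaussian   99% VaR: {gaussian_var(returns):.4%}")
    print(f"Historical 99% VaR: {historical_var(returns):.4%}")
```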


Author(s):  
K M Ahtesham Hossain Raju ◽  
Shinji Sato

The response of a sand dune overwashed by a tsunami or storm surge is investigated in a small-scale laboratory study. Dunes consisting of initially wet sand and initially dry sand are tested for three different sand grain sizes. Overtopping of water and the corresponding sediment transport are analyzed. This data set can be used to validate mathematical models of dune sediment transport as well as predictions of dune profiles.


2020 ◽  
Vol 23 (2) ◽  
pp. 161-172
Author(s):  
Prem Lal Adhikari

In finance, the relationship between stock returns and trading volume has been the subject of extensive research over the past years. The main motivation for these studies is the central role that trading volume plays in the pricing of financial assets when new information arrives. Since trading volume and stock returns are interrelated and interdependent, a study of their relationship seems vital. The area is well researched in developed markets; however, very little literature is available on the Nepalese stock market that explores the association between trading volume and stock returns. Realizing this fact, this paper aims to examine the empirical relationship between trading volume and stock returns in the Nepalese stock market using time series data. The study sample comprises 49 stocks traded on the Nepal Stock Exchange (NEPSE) from mid-July 2011 to mid-July 2018. This study examines the Granger causality relationship between stock returns and trading volume using the bivariate VAR model of de Medeiros and Van Doornik (2008). The study found that the overall Nepalese stock market does not have a causal relationship between trading volume and stock returns. In the sector-wise analysis, there is unidirectional causality running from trading volume to stock returns in commercial banks, and from stock returns to trading volume in finance companies, hydropower companies, and insurance companies. There is no indication of any causal effect in the development bank, hotel, and other sectors. This study also finds no evidence of a bidirectional causality relationship in any sector of the Nepalese stock market.
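The following sketch (synthetic data, not the NEPSE sample) shows how such pairwise Granger causality tests can be run with statsmodels; the lag order and the dependence injected into the toy series are arbitrary choices.

```python
# Minimal sketch (synthetic data, not the NEPSE data set): pairwise Granger
# causality tests between stock returns and trading volume changes, in the
# spirit of a bivariate VAR analysis.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
n = 500
volume_change = rng.normal(size=n)
# Construct returns that partly depend on lagged volume changes (illustrative).
returns = 0.3 * np.roll(volume_change, 1) + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"returns": returns[1:], "volume_change": volume_change[1:]})

# Null hypothesis: the second column does NOT Granger-cause the first column.
print("Does volume Granger-cause returns?")
grangercausalitytests(df[["returns", "volume_change"]], maxlag=2)

print("Do returns Granger-cause volume?")
grangercausalitytests(df[["volume_change", "returns"]], maxlag=2)
```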


Author(s):  
Tobias Götze ◽  
Marc Gürtler

Abstract Reinsurance and CAT bonds are two alternative risk management instruments used by insurance companies. Insurers should be indifferent between the two instruments in a perfect capital market. However, the theoretical literature suggests that insured risk characteristics and market imperfections may influence the effectiveness and efficiency of reinsurance relative to CAT bonds. CAT bonds may add value to insurers’ risk management strategies and may therefore substitute for reinsurance. Our study is the first to empirically analyse whether and under what circumstances CAT bonds can substitute for traditional reinsurance. Our analysis of a comprehensive data set comprising U.S. P&C insurers’ financial statements and CAT bond use shows that insurance companies’ choice of risk management instruments is not arbitrary. We find that the added value of CAT bonds stems mainly from non-indemnity bonds and reveal that (non-indemnity) CAT bonds are valuable under high reinsurer default risk, low basis risk, and in high-risk layers.


2020 ◽  
Author(s):  
Mieke Kuschnerus ◽  
Roderik Lindenbergh ◽  
Sander Vos

Abstract. Sandy coasts are constantly changing environments governed by complex interacting processes. Permanent laser scanning is a promising technique for monitoring such coastal areas and supporting the analysis of geomorphological deformation processes. This novel technique delivers 3D representations of a part of the coast at hourly temporal and centimetre spatial resolution and makes it possible to observe small-scale changes in elevation over extended periods of time. These observations have the potential to improve understanding and modelling of coastal deformation processes. However, to be of use to coastal researchers and coastal management, an efficient way to find and extract deformation processes from the large spatio-temporal data set is needed. In order to allow automated data mining, we extract time series of elevation or range and use unsupervised learning algorithms to derive a partitioning of the observed area according to change patterns. We compare three well-known clustering algorithms, k-means, agglomerative clustering and DBSCAN, identify areas that undergo similar evolution during one month, and test whether the algorithms fulfil our criteria for a suitable clustering method on our exemplary data set. The three clustering methods are applied to time series of 30 epochs (during one month) extracted from a data set of daily scans covering a part of the coast at Kijkduin, the Netherlands. A small section of the beach, where a pile of sand was accumulated by a bulldozer, is used to evaluate the performance of the algorithms against a ground truth. The k-means algorithm and agglomerative clustering deliver similar clusters, and both allow the identification of a fixed number of dominant deformation processes in sandy coastal areas, such as sand accumulation by a bulldozer or erosion in the intertidal area. The DBSCAN algorithm finds clusters for only about 44 % of the area and turns out to be more suitable for the detection of outliers, caused for example by temporary objects on the beach. Our study provides a methodology to efficiently mine a spatio-temporal data set for predominant deformation patterns and the associated regions where they occur.
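A minimal sketch of this clustering workflow is given below (synthetic elevation time series, not the Kijkduin data); it arranges one time series per location as a row and applies the three algorithms compared in the study.

```python
# Minimal sketch (synthetic data, not the Kijkduin scans): cluster per-location
# elevation time series (one row per location, one column per epoch) with the
# three algorithms compared in the study.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN

rng = np.random.default_rng(0)
epochs = 30
# Three synthetic change patterns: stable, gradual erosion, sudden accumulation.
stable = rng.normal(0.0, 0.02, size=(200, epochs))
erosion = np.linspace(0.0, -0.5, epochs) + rng.normal(0.0, 0.02, size=(200, epochs))
accretion = np.where(np.arange(epochs) > 15, 0.8, 0.0) + rng.normal(0.0, 0.02, size=(200, epochs))
series = np.vstack([stable, erosion, accretion])

labels = {
    "k-means": KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(series),
    "agglomerative": AgglomerativeClustering(n_clusters=3).fit_predict(series),
    "DBSCAN": DBSCAN(eps=0.5, min_samples=10).fit_predict(series),  # label -1 = outlier
}
for name, lab in labels.items():
    print(name, "-> cluster labels found:", sorted(set(lab)))
```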

