Validação de Modelos Internos no Brasil: Análise de Metodologias de Backtesting de VaR

2006 ◽  
Vol 4 (1) ◽  
pp. 97
Author(s):  
Alan Cosme Rodrigues da Silva ◽  
Claudio Henrique Da Silveira Barbedo ◽  
Gustavo Silva Araújo ◽  
Myrian Beatriz Eiras das Neves

The purpose of this paper is to analyze VaR backtesting methodologies, focusing on aspects such as suitability to volatile markets and limited data sets. From a regulatory standpoint, we examine tests that complement the Basel traffic-light results, using simulated and real data. The results indicate that tests based on the proportion of failures are not adequate for small samples, even for 1,000 observations. The Basel criterion is conservative and has low power, which does not invalidate its application, as the criterion is only one of the procedures adopted in the internal model validation process. Thus, it is suggested that tests capturing the shape of the returns distribution, such as the Kuiper test, be used in addition to the Basel criterion.
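
The failure-proportion test the abstract refers to is usually the Kupiec proportion-of-failures (POF) likelihood-ratio test; a minimal sketch is shown below, assuming a 99% VaR (expected exception rate p = 0.01). The function name and the 250-day example are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof_test(n_obs: int, n_exceptions: int, p: float = 0.01):
    """Likelihood-ratio test that the observed VaR exception rate equals p."""
    x, n = n_exceptions, n_obs
    pi_hat = x / n  # observed failure proportion
    # Log-likelihood under H0 (exception rate p) and under the observed rate pi_hat
    ll_h0 = (n - x) * np.log(1 - p) + x * np.log(p)
    ll_h1 = (n - x) * np.log(1 - pi_hat) + x * np.log(pi_hat) if 0 < x < n else 0.0
    lr = -2 * (ll_h0 - ll_h1)          # asymptotically chi-square with 1 df
    p_value = chi2.sf(lr, df=1)
    return lr, p_value

# Example: 250 trading days (one Basel year) with 5 exceptions at 99% VaR
print(kupiec_pof_test(250, 5))
```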

2018 ◽  
Vol 79 (2) ◽  
pp. 358-384 ◽  
Author(s):  
Thomas Jaki ◽  
Minjung Kim ◽  
Andrea Lamont ◽  
Melissa George ◽  
Chi Chang ◽  
...  

Regression mixture models are a statistical approach used for estimating heterogeneity in effects. This study investigates the impact of sample size on regression mixture’s ability to produce “stable” results. Monte Carlo simulations and analysis of resamples from an application data set were used to illustrate the types of problems that may occur with small samples in real data sets. The results suggest that (a) when class separation is low, very large sample sizes may be needed to obtain stable results; (b) it may often be necessary to consider a preponderance of evidence in latent class enumeration; (c) regression mixtures with ordinal outcomes result in even more instability; and (d) with small samples, it is possible to obtain spurious results without any clear indication of there being a problem.
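
The kind of instability described here is easy to reproduce by refitting a simple mixture of regressions on repeated small samples. Below is a minimal EM sketch for a two-component Gaussian regression mixture; it is a generic illustration under low class separation, not the authors' model or software, and all names and simulated data are made up.

```python
import numpy as np

def em_regression_mixture(x, y, k=2, n_iter=300, seed=0):
    """Very small EM for a k-component mixture of simple linear regressions."""
    rng = np.random.default_rng(seed)
    n = len(y)
    X = np.column_stack([np.ones(n), x])            # intercept + slope
    resp = rng.dirichlet(np.ones(k), size=n)        # random initial responsibilities
    betas = np.zeros((k, 2))
    sigmas = np.ones(k)
    for _ in range(n_iter):
        # M-step: weighted least squares per component
        for j in range(k):
            w = np.sqrt(resp[:, j] + 1e-12)
            betas[j], *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
            res = y - X @ betas[j]
            sigmas[j] = np.sqrt(np.average(res**2, weights=resp[:, j]) + 1e-12)
        weights = resp.mean(axis=0)
        # E-step: unnormalized normal densities (the 1/sqrt(2*pi) factor cancels)
        dens = np.column_stack([
            weights[j] / sigmas[j] * np.exp(-0.5 * ((y - X @ betas[j]) / sigmas[j])**2)
            for j in range(k)
        ])
        resp = dens / dens.sum(axis=1, keepdims=True)
    return betas, sigmas, weights

# Low class separation, small sample: repeated fits often land on different solutions
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = np.where(rng.random(100) < 0.5, 0.2 * x, 0.6 * x) + rng.normal(scale=1.0, size=100)
print(em_regression_mixture(x, y))
```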


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, Lorenz, Bonferroni and Zenga curves, probability weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters were estimated by the maximum likelihood criterion, and the behaviour of these estimates was examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
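
As a rough sketch of the maximum-likelihood step, the snippet below fits a three-parameter model numerically. It assumes one common exponentiated half-logistic-G construction, F(x) = [G(x)/(2 - G(x))]^a with a Lomax baseline G, which may differ from the paper's exact parameterization; the simulated data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lomax

def neg_log_lik(params, x):
    """Negative log-likelihood for an assumed EHL-Lomax form:
    f(x) = 2*a*g(x)*G(x)**(a-1) / (2 - G(x))**(a+1), with Lomax baseline G."""
    a, c, scale = params                      # shape a, Lomax shape c, Lomax scale
    if min(a, c, scale) <= 0:
        return np.inf
    G = lomax.cdf(x, c, scale=scale)
    g = lomax.pdf(x, c, scale=scale)
    logf = np.log(2 * a) + np.log(g) + (a - 1) * np.log(G) - (a + 1) * np.log(2 - G)
    return -np.sum(logf)

# Illustrative fit on simulated Lomax data (a placeholder for the real data set)
rng = np.random.default_rng(0)
x = lomax.rvs(2.5, scale=1.0, size=500, random_state=rng)
fit = minimize(neg_log_lik, x0=[1.0, 1.0, 1.0], args=(x,), method="Nelder-Mead")
print(fit.x)
```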


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of its probability density function and cumulative distribution function is considered using five estimation methods: the uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimators is compared through numerical simulations based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others and that, when the sample size is large enough, the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS, and PC estimators. Finally, a real data set is analyzed to illustrate the results.
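
To make the ML route concrete, here is a minimal sketch that fits the generalized inverted exponential distribution by numerical maximum likelihood and evaluates the estimated cdf, using the usual form F(x) = 1 - (1 - e^(-lambda/x))^alpha; the simulated data, sample size, and function names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def gie_logpdf(x, alpha, lam):
    """log f(x) for the generalized inverted exponential distribution."""
    z = np.exp(-lam / x)
    return np.log(alpha * lam) - 2 * np.log(x) - lam / x + (alpha - 1) * np.log1p(-z)

def gie_cdf(x, alpha, lam):
    return 1.0 - (1.0 - np.exp(-lam / x)) ** alpha

def fit_gie_ml(x):
    nll = lambda p: np.inf if min(p) <= 0 else -np.sum(gie_logpdf(x, *p))
    return minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead").x

# Illustrative data via inverse-CDF sampling (alpha=1.5, lambda=2.0), then ML estimates
rng = np.random.default_rng(0)
u = rng.random(300)
x = -2.0 / np.log(1.0 - (1.0 - u) ** (1.0 / 1.5))
alpha_hat, lam_hat = fit_gie_ml(x)
print(alpha_hat, lam_hat, gie_cdf(1.0, alpha_hat, lam_hat))
```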


2019 ◽  
Vol 14 (2) ◽  
pp. 148-156
Author(s):  
Nighat Noureen ◽  
Sahar Fazal ◽  
Muhammad Abdul Qadir ◽  
Muhammad Tanvir Afzal

Background: Specific combinations of Histone Modifications (HMs), contributing to the histone code hypothesis, lead to various biological functions. HM combinations have been used in various studies to divide the genome into different regions, classified as chromatin states. Mostly Hidden Markov Model (HMM) based techniques have been used for this purpose, with data from Next Generation Sequencing (NGS) platforms. Chromatin states based on histone modification combinatorics are annotated by mapping them to functional regions of the genome, and the numbers of states predicted by the HMM tools have so far been justified biologically. Objective: The present study aimed at providing a computational scheme to identify the number of underlying hidden states in the data under consideration. Methods: We propose a computational scheme, HCVS, based on a hierarchical clustering and visualization strategy. Results: We tested the proposed scheme on a real data set of nine cell types comprising nine chromatin marks. The approach successfully identified the state numbers for various possibilities, and the results show good correlation with one of the existing models. Conclusion: The HCVS model not only helps in deciding the optimal number of states for a particular data set but also justifies the results biologically, thereby correlating the computational and biological aspects.
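
The clustering step of a scheme like HCVS can be sketched with standard tools: hierarchically cluster the per-bin histone-mark combination vectors and inspect how bins distribute over candidate state numbers. This is a generic illustration, not the authors' implementation; the bin count, mark count, and distance metric are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Illustrative input: 2,000 genomic bins x 9 histone marks (binary presence calls)
rng = np.random.default_rng(0)
bins = rng.integers(0, 2, size=(2000, 9))

# Hierarchical clustering on Hamming distances between mark-combination vectors
dist = pdist(bins, metric="hamming")
tree = linkage(dist, method="average")

# Compare candidate state numbers by how the bins fall into clusters
for n_states in range(3, 10):
    labels = fcluster(tree, t=n_states, criterion="maxclust")
    sizes = np.bincount(labels)[1:]
    print(n_states, sizes)
```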


2021 ◽  
Vol 13 (9) ◽  
pp. 1703
Author(s):  
He Yan ◽  
Chao Chen ◽  
Guodong Jin ◽  
Jindong Zhang ◽  
Xudong Wang ◽  
...  

The traditional method of constant false-alarm rate detection is based on the assumption of an echo statistical model; against a background of sea clutter and other interference, its target recognition accuracy is low and its false-alarm rate is high. Therefore, computer vision techniques have been widely discussed as a way to improve detection performance. However, the majority of studies have focused on synthetic aperture radar because of its high resolution; for coastal defense radar, detection performance is unsatisfactory because of its low resolution. To this end, we herein propose a novel target detection method for coastal defense radar based on the faster region-based convolutional neural network (Faster R-CNN). The main processing steps are as follows: (1) Faster R-CNN is selected as the sea-surface target detector because of its high target detection accuracy; (2) the Faster R-CNN is modified to account for the sparsity and small target size in the data set; and (3) soft non-maximum suppression is exploited to eliminate possible overlapping detection boxes. Furthermore, detailed comparative experiments based on a real coastal defense radar data set are performed. The mean average precision of the proposed method is improved by 10.86% compared with that of the original Faster R-CNN.
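
Step (3), soft non-maximum suppression, is independent of the radar pipeline and can be sketched on its own. The version below uses the common Gaussian-decay variant; the box format, sigma, and thresholds are illustrative choices rather than the paper's settings.

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: decay the scores of boxes that overlap the current best
    box instead of discarding them outright. Boxes are [x1, y1, x2, y2]."""
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep = []
    idx = np.arange(len(scores))
    while idx.size > 0:
        best = idx[np.argmax(scores[idx])]
        keep.append(best)
        idx = idx[idx != best]
        if idx.size == 0:
            break
        # IoU of the best box with the remaining boxes
        x1 = np.maximum(boxes[best, 0], boxes[idx, 0])
        y1 = np.maximum(boxes[best, 1], boxes[idx, 1])
        x2 = np.minimum(boxes[best, 2], boxes[idx, 2])
        y2 = np.minimum(boxes[best, 3], boxes[idx, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
        iou = inter / (area(boxes[[best]])[0] + area(boxes[idx]) - inter)
        # Gaussian decay, then drop boxes whose score falls below the threshold
        scores[idx] *= np.exp(-(iou ** 2) / sigma)
        idx = idx[scores[idx] > score_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]])
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))   # the heavily overlapped second box is down-weighted
```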


2021 ◽  
Vol 1978 (1) ◽  
pp. 012047
Author(s):  
Xiaona Sheng ◽  
Yuqiu Ma ◽  
Jiabin Zhou ◽  
Jingjing Zhou

2021 ◽  
pp. 1-11
Author(s):  
Velichka Traneva ◽  
Stoyan Tranev

Analysis of variance (ANOVA), developed by Fisher, is an important method in data analysis. There are situations in which the data are imprecise. In order to analyze such data, the aim of this paper is to introduce, for the first time, an intuitionistic fuzzy two-factor ANOVA (2-D IFANOVA) without replication, as an extension of the classical ANOVA and the one-way IFANOVA, for the case where the data are intuitionistic fuzzy rather than real numbers. The proposed approach employs the apparatus of intuitionistic fuzzy sets (IFSs) and index matrices (IMs). The paper also analyzes a unique data set of daily ticket sales over a year in a multiplex of Cinema City Bulgaria, part of the Cineworld PLC Group, applying the two-factor ANOVA and the proposed 2-D IFANOVA to study the influence of the “season” and “ticket price” factors. A comparative analysis of the results obtained after applying ANOVA and 2-D IFANOVA to the real data set is also presented.
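
For reference, the crisp two-factor ANOVA without replication that 2-D IFANOVA extends can be computed directly from the row and column means of the data table. The sketch below uses a made-up seasons-by-price-band table; it illustrates only the classical ANOVA, not the intuitionistic fuzzy extension.

```python
import numpy as np
from scipy.stats import f as f_dist

def two_way_anova_no_replication(table):
    """Two-factor ANOVA without replication: rows = factor A, columns = factor B."""
    a, b = table.shape
    grand = table.mean()
    ss_a = b * np.sum((table.mean(axis=1) - grand) ** 2)   # rows
    ss_b = a * np.sum((table.mean(axis=0) - grand) ** 2)   # columns
    ss_total = np.sum((table - grand) ** 2)
    ss_err = ss_total - ss_a - ss_b
    df_a, df_b, df_err = a - 1, b - 1, (a - 1) * (b - 1)
    f_a = (ss_a / df_a) / (ss_err / df_err)
    f_b = (ss_b / df_b) / (ss_err / df_err)
    return {"F_rows": f_a, "p_rows": f_dist.sf(f_a, df_a, df_err),
            "F_cols": f_b, "p_cols": f_dist.sf(f_b, df_b, df_err)}

# Illustrative table: 4 seasons (rows) x 3 ticket-price bands (columns) of sales
sales = np.array([[120, 150, 90],
                  [200, 230, 160],
                  [180, 210, 140],
                  [130, 160, 100]], dtype=float)
print(two_way_anova_no_replication(sales))
```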


Genetics ◽  
1998 ◽  
Vol 149 (3) ◽  
pp. 1547-1555 ◽  
Author(s):  
Wouter Coppieters ◽  
Alexandre Kvasz ◽  
Frédéric Farnir ◽  
Juan-Jose Arranz ◽  
Bernard Grisart ◽  
...  

Abstract We describe the development of a multipoint nonparametric quantitative trait loci mapping method based on the Wilcoxon rank-sum test, applicable to outbred half-sib pedigrees. The method has been evaluated on a simulated data set and its efficiency compared with interval mapping by regression. The rank-based approach is slightly inferior to regression when the residual variance is homoscedastic normal; however, in three of the four other scenarios envisaged, i.e., residual variance heteroscedastic normal, homoscedastic skewed, and homoscedastic positively kurtosed, it outperforms regression. Both methods were applied to a real data set to analyze the effect of bovine chromosome 6 on milk yield and composition, using a 125-cM map comprising 15 microsatellites and a granddaughter design of 1,158 Holstein-Friesian sires.
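
The single-position idea behind such a rank-based scan can be illustrated simply: within a half-sib family, group offspring by the paternal marker allele they inherited and compare the two groups' phenotypes with a Wilcoxon rank-sum statistic. The sketch below is only schematic (simulated data, single marker), not the authors' multipoint method.

```python
import numpy as np
from scipy.stats import ranksums

# Simulated half-sib family: each offspring inherits paternal marker allele 0 or 1
rng = np.random.default_rng(0)
n_offspring = 200
paternal_allele = rng.integers(0, 2, size=n_offspring)
# A QTL linked to the marker shifts the phenotype for one allele group
phenotype = rng.normal(size=n_offspring) + 0.4 * paternal_allele

# Rank-sum test of the phenotype difference between the two allele groups
stat, p_value = ranksums(phenotype[paternal_allele == 0],
                         phenotype[paternal_allele == 1])
print(stat, p_value)
```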


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 934
Author(s):  
Yuxuan Zhang ◽  
Kaiwei Liu ◽  
Wenhao Gui

For the purpose of improving the statistical efficiency of estimators in life-testing experiments, generalized Type-I hybrid censoring has lately been implemented by guaranteeing that experiments only terminate after a certain number of failures occur. Given the wide application of bathtub-shaped distributions in engineering and the recently introduced generalized Type-I hybrid censoring scheme, and considering that no work has yet combined this type of censoring with a bathtub-shaped distribution, we consider parameter inference under generalized Type-I hybrid censoring. First, estimates of the unknown scale parameter and the reliability function are obtained with the Bayesian method, based on LINEX and squared error loss functions with a conjugate gamma prior. Estimates under the E-Bayesian method are compared for different prior distributions and loss functions. Additionally, Bayesian and E-Bayesian estimation with two unknown parameters is introduced. Furthermore, to verify the robustness of the above estimates, a Monte Carlo simulation study is conducted. Finally, the application of the discussed inference in practice is illustrated by analyzing a real data set.
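
Once posterior draws of a parameter are available, the two loss functions mentioned here lead to simple rules: squared error loss gives the posterior mean, while LINEX loss with parameter a gives -(1/a) log E[exp(-a*theta) | data]. The sketch below uses an arbitrary gamma sample as a stand-in posterior; it is not tied to the paper's censoring scheme or distribution.

```python
import numpy as np

def bayes_estimates(theta_draws, a=1.0):
    """Bayes estimates from posterior draws under squared error and LINEX loss."""
    sel = theta_draws.mean()                                   # posterior mean
    linex = -np.log(np.mean(np.exp(-a * theta_draws))) / a     # LINEX rule
    return sel, linex

# Placeholder posterior: gamma draws standing in for the scale parameter's posterior
rng = np.random.default_rng(0)
draws = rng.gamma(shape=5.0, scale=0.4, size=10_000)
print(bayes_estimates(draws, a=1.0))
```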


2020 ◽  
Vol 72 (1) ◽  
Author(s):  
Ryuho Kataoka

Abstract Statistical distributions are investigated for magnetic storms, sudden commencements (SCs), and substorms to identify the possible amplitude of one-in-100-year and one-in-1000-year events from a limited data set of less than 100 years. The lists of magnetic storms and SCs are provided by the Kakioka Magnetic Observatory, while the lists of substorms are obtained from SuperMAG. It is found that the majority of events essentially follow the log-normal distribution, as expected from the random output of a complex system. However, it is uncertain whether large-amplitude events follow the same log-normal distributions; they may instead follow power-law distributions. Based on the statistical distributions, the probable amplitudes of the 100-year (1000-year) events are estimated for magnetic storms, SCs, and substorms as approximately 750 nT (1100 nT), 230 nT (450 nT), and 5000 nT (6200 nT), respectively. The possible origin of these statistical distributions is also discussed with reference to other space weather phenomena such as solar flares, coronal mass ejections, and solar energetic particles.
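
The return-level logic can be sketched with a log-normal fit: estimate the amplitude distribution, then find the amplitude whose expected exceedance rate is once per 100 (or 1000) years given the observed event rate. The catalogue below is a simulated placeholder, not the Kakioka or SuperMAG data.

```python
import numpy as np
from scipy.stats import lognorm

# Placeholder catalogue: ~60 years of storm amplitudes (nT), a few events per year
rng = np.random.default_rng(0)
years_observed = 60
amplitudes = lognorm.rvs(s=0.8, scale=120.0, size=300, random_state=rng)
events_per_year = len(amplitudes) / years_observed

# Fit a log-normal to the amplitudes (location fixed at zero)
s, loc, scale = lognorm.fit(amplitudes, floc=0)

def return_amplitude(return_period_years):
    """Amplitude exceeded on average once per `return_period_years`."""
    exceed_prob = 1.0 / (return_period_years * events_per_year)
    return lognorm.isf(exceed_prob, s, loc=loc, scale=scale)

print(return_amplitude(100), return_amplitude(1000))
```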

