Formal Comparison of Copula-AR(1)-t-GARCH(1,1) Models for Sub-Indices of the Stock Index WIG

2016 ◽  
Vol 63 (2) ◽  
pp. 123-148
Author(s):  
Justyna Mokrzycka ◽  
Anna Pajor

Copulas have become one of the most popular tools for modelling the dependencies among financial time series. The main aim of the paper is to formally assess the relative explanatory power of competing bivariate Copula-AR-GARCH models, which differ in their assumptions on the conditional dependence structure represented by particular copulas. For the sake of comparison, the Copula-AR-GARCH models are estimated using the maximum likelihood method and are then informally compared and ranked according to the values of the Akaike (AIC) and Schwarz (BIC) information criteria. We apply these tools to the daily growth rates of four sub-indices of the stock index WIG published by the Warsaw Stock Exchange. Our results indicate that the informal use of the information criteria (AIC or BIC) leads to model rankings very similar to those obtained by formal Bayesian model comparison.
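The ranking step described above can be sketched in a few lines: compute AIC and BIC from each model's maximised log-likelihood and parameter count, then sort. The model names, log-likelihood values, and parameter counts below are hypothetical illustrations, not figures from the paper:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: smaller is better."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Schwarz (Bayesian) information criterion: smaller is better."""
    return k * math.log(n) - 2 * log_lik

def rank_models(fits, n):
    """Rank fitted models by AIC and by BIC.

    `fits` maps a model name to (maximised log-likelihood, number of
    free parameters); returns two lists of names, best model first.
    """
    by_aic = sorted(fits, key=lambda m: aic(fits[m][0], fits[m][1]))
    by_bic = sorted(fits, key=lambda m: bic(fits[m][0], fits[m][1], n))
    return by_aic, by_bic

# Hypothetical maximised log-likelihoods for three copula specifications
# fitted to the same n = 2500 pairs of daily growth rates.
fits = {"Gaussian": (-5120.4, 9), "Student-t": (-5098.7, 10), "Clayton": (-5131.0, 9)}
by_aic, by_bic = rank_models(fits, n=2500)
```

With these stand-in numbers the extra degree-of-freedom parameter of the Student-t copula is cheap relative to its likelihood gain, so both criteria rank it first.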

2020 ◽  
Vol 15 (4) ◽  
pp. 351-361
Author(s):  
Liwei Huang ◽  
Arkady Shemyakin

Skewed t-copulas have recently become popular as a tool for modeling non-linear dependence in statistics. In this paper we consider three different versions of skewed t-copulas, introduced by Demarta and McNeil; Smith, Gan and Kohn; and Azzalini and Capitanio. Each of these versions generalizes the symmetric t-copula model, allowing for a different treatment of the lower and upper tails, and each has certain advantages in mathematical construction, inferential tools, and interpretability. Our objective is to apply models based on the different types of skewed t-copulas to the same financial and insurance applications. We consider comovements of stock index returns and times-to-failure of related vehicle parts under the warranty period. In both cases the treatment of both the lower and upper tails of the joint distribution is of special importance. Skewed t-copula model performance is compared to the benchmark cases of the Gaussian and symmetric Student t-copulas. Instruments of comparison include information criteria, goodness-of-fit, and tail dependence. Special attention is paid to methods of estimating the copula parameters. Technical problems with the implementation of the maximum likelihood method and the method of moments suggest the use of Bayesian estimation, and we discuss the accuracy and computational efficiency of Bayesian estimation versus MLE. A Metropolis-Hastings algorithm with block updates is suggested to deal with the intractability of the conditional distributions.
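The block-update Metropolis-Hastings scheme mentioned above can be sketched as a random-walk sampler that proposes a joint move for one block of parameters at a time while holding the others fixed. The stand-in log-posterior below (a correlated bivariate normal) is purely illustrative and is not the skewed t-copula posterior from the paper:

```python
import math
import random

def log_target(theta):
    # Stand-in log-posterior: bivariate standard normal, correlation 0.8.
    x, y = theta
    return -0.5 * (x * x - 1.6 * x * y + y * y) / (1 - 0.8 ** 2)

def mh_block(log_target, blocks, theta0, n_iter, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings with block updates.

    `blocks` is a list of index tuples; each sweep proposes a joint
    Gaussian move for one block at a time and accepts or rejects it
    with the usual Metropolis ratio.
    """
    rng = random.Random(seed)
    theta = list(theta0)
    lp = log_target(theta)
    chain = []
    for _ in range(n_iter):
        for block in blocks:
            prop = list(theta)
            for i in block:
                prop[i] += rng.gauss(0.0, step)
            lp_prop = log_target(prop)
            # Accept with probability min(1, exp(lp_prop - lp)).
            if math.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
        chain.append(list(theta))
    return chain

chain = mh_block(log_target, blocks=[(0,), (1,)], theta0=[0.0, 0.0], n_iter=2000)
```

In a copula application each block would group related parameters (for instance, the correlation parameters in one block and the skewness parameters in another), so that only one intractable conditional is confronted per update.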


2020 ◽  
Vol 2020 (9) ◽  
Author(s):  
Aditi Krishak ◽  
Shantanu Desai

Abstract We perform an independent search for sinusoidal modulation in the recently released ANAIS-112 data, which could be induced by dark matter scatterings. We then evaluate this hypothesis against the null hypothesis that the data contain only background, using four different model comparison techniques: a frequentist test, Bayesian model comparison, and two information theory-based criteria (the Akaike and Bayesian information criteria). This analysis was done on both the residual data (obtained by subtracting the exponential fit from the ANAIS-112 Collaboration) and the total (non-background-subtracted) data. We find that, according to the Bayesian model comparison test, the null hypothesis of no modulation is decisively favored over a cosine-based annual modulation for the non-background-subtracted dataset in the 2–6 keV energy range. None of the other model comparison tests decisively favors any one hypothesis over another. This is the first application of Bayesian and information theory techniques to test the annual modulation hypothesis in ANAIS-112 data, extending our previous work on the DAMA/LIBRA and COSINE-100 data. Our analysis codes have also been made publicly available.
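The information-criterion part of such a comparison can be sketched as follows: fit a constant (null) model and a fixed-period cosine modulation model by least squares, then compare BIC values computed from the residual sums of squares. The synthetic data, bin width, period, and phase reference below are assumptions for illustration only, not the ANAIS-112 analysis:

```python
import math
import random

def fit_constant(y):
    """k = 1 model: constant rate; returns the residual sum of squares."""
    c = sum(y) / len(y)
    return sum((v - c) ** 2 for v in y)

def fit_cosine(t, y, period=365.25, t0=152.5):
    """k = 2 model: R(t) = C + A*cos(w*(t - t0)), fixed period and phase.

    Linear in (C, A), so solved exactly via 2x2 normal equations;
    returns the residual sum of squares.
    """
    w = 2 * math.pi / period
    c = [math.cos(w * (ti - t0)) for ti in t]
    n, sc, scc = len(t), sum(c), sum(ci * ci for ci in c)
    sy, syc = sum(y), sum(yi * ci for yi, ci in zip(y, c))
    det = n * scc - sc * sc
    C = (sy * scc - syc * sc) / det
    A = (n * syc - sc * sy) / det
    return sum((yi - C - A * ci) ** 2 for yi, ci in zip(y, c))

def bic_from_rss(rss, n, k):
    """BIC for Gaussian residuals with unknown variance; smaller is better."""
    return n * math.log(rss / n) + k * math.log(n)

rng = random.Random(0)
t = [float(d) for d in range(0, 1500, 5)]    # ~4 years of 5-day bins
y = [rng.gauss(0.0, 1.0) for _ in t]         # pure-background residuals
rss_null, rss_mod = fit_constant(y), fit_cosine(t, y)
bic_null = bic_from_rss(rss_null, len(y), 1)
bic_mod = bic_from_rss(rss_mod, len(y), 2)
```

The cosine model can never fit worse than the constant model, so the comparison hinges on whether its likelihood gain outweighs the BIC penalty for the extra parameter.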


2020 ◽  
Vol 38 (1) ◽  
Author(s):  
Farhan Ahmed ◽  
Salman Bahoo ◽  
Sohail Aslam ◽  
Muhammad Asif Qureshi

This paper aims to analyze the efficient stock market hypothesis in response to the 2016 American Presidential Election. A meta-analysis was conducted combining content analysis and event study methodology. All major newspapers, news channels, public polls, and the literature, together with five important indices, the Dow Jones Industrial Average (DJIA), the NASDAQ Composite Index (NASDAQ-COMP), the Standard & Poor's 500 Index (SPX-500), the New York Stock Exchange Composite Index (NYSE-COMP), and the Russell 2000 (RUT-2000), are critically examined and empirically analyzed. The findings from the content analysis reflect that the stunning win of Mr Trump of the Republican Party came as a shock to the American stock market. The findings from the event study confirm that all the major indices declined on the win of Mr Trump and the loss of Ms Clinton of the Democratic Party. The results are supported empirically and practically by a comparable political event, BREXIT, which shocked global stock indices and resulted in a loss of $2 trillion.
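The event study step can be sketched with the standard market model: estimate alpha and beta by OLS on an estimation window, then cumulate abnormal returns (actual minus expected) over the event window. The return series below are hypothetical, not the paper's data:

```python
def market_model_car(rm_est, r_est, rm_evt, r_evt):
    """Event study via the market model.

    Estimate alpha and beta by OLS of stock returns on market returns
    over the estimation window, then return the cumulative abnormal
    return (CAR) over the event window.
    """
    n = len(rm_est)
    mx = sum(rm_est) / n
    my = sum(r_est) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(rm_est, r_est))
            / sum((x - mx) ** 2 for x in rm_est))
    alpha = my - beta * mx
    # Abnormal return = actual return minus market-model expectation.
    ar = [r - (alpha + beta * m) for r, m in zip(r_evt, rm_evt)]
    return sum(ar)

# Hypothetical returns: the estimation window follows the market model
# exactly with alpha = 0.001, beta = 1.2; the event window adds shocks.
rm_est = [0.010, -0.020, 0.005, 0.000, 0.015, -0.010]
r_est = [0.001 + 1.2 * m for m in rm_est]
rm_evt = [-0.030, 0.010]
r_evt = [0.001 + 1.2 * m + s for m, s in zip(rm_evt, [-0.050, 0.020])]
car = market_model_car(rm_est, r_est, rm_evt, r_evt)   # CAR = -0.030
```

A negative CAR around the election date is what the abstract's finding of an index decline would correspond to in this framework.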


2014 ◽  
pp. 101-117
Author(s):  
Michael D. Lee ◽  
Eric-Jan Wagenmakers

Entropy ◽  
2019 ◽  
Vol 21 (5) ◽  
pp. 455 ◽  
Author(s):  
Hongjun Guan ◽  
Zongli Dai ◽  
Shuang Guan ◽  
Aiwu Zhao

In time series forecasting, how information is represented directly affects prediction efficiency. Most existing time series forecasting models follow logical rules derived from the relationships between neighboring states, without considering the inconsistency of fluctuations over a related period. In this paper, we propose a new perspective on the prediction problem, in which inconsistency is quantified and regarded as a key characteristic of prediction rules. First, a time series is converted to a fluctuation time series by comparing each current value with the corresponding previous value. Then, the upward trend of each fluctuation value is mapped to the truth-membership of a neutrosophic set, while a falsity-membership is used for the downward trend. The information entropy of the high-order fluctuation time series is introduced to describe the inconsistency of historical fluctuations and is mapped to the indeterminacy-membership of the neutrosophic set. Finally, an existing similarity measure for neutrosophic sets is used to find similar states during the forecasting stage, and a weighted arithmetic averaging (WAA) aggregation operator is applied to obtain the forecast from the corresponding similarities. Compared to existing forecasting models, the neutrosophic forecasting model based on information entropy (NFM-IE) can represent both fluctuation-trend and fluctuation-consistency information. To test its performance, we used the proposed model to forecast several real time series, such as the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX), the Shanghai Stock Exchange Composite Index (SHSECI), and the Hang Seng Index (HSI). The experimental results show that the proposed model predicts stably across different datasets, and comparison of its prediction error with other approaches demonstrates outstanding accuracy and broad applicability.
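The entropy-of-fluctuations idea can be sketched as follows: convert a price series to up/down flags, then compute the Shannon entropy of the order-k fluctuation patterns as a stand-in for the indeterminacy measure (the full neutrosophic construction in the paper is richer than this):

```python
import math
from collections import Counter

def fluctuation_series(xs):
    """Map a series to +1/-1/0 flags: the sign of each step x_t - x_{t-1}."""
    return [(x > p) - (x < p) for p, x in zip(xs, xs[1:])]

def pattern_entropy(flags, order):
    """Shannon entropy (bits) of the order-`order` fluctuation patterns.

    Low entropy means historical fluctuations are consistent; high
    entropy means they are inconsistent (more indeterminate).
    """
    windows = [tuple(flags[i:i + order]) for i in range(len(flags) - order + 1)]
    counts = Counter(windows)
    n = len(windows)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

prices = [100, 101, 100, 102, 101, 103, 102, 104]
flags = fluctuation_series(prices)     # strictly alternating up/down
h = pattern_entropy(flags, order=2)    # only two patterns occur -> 1 bit
```

A perfectly alternating series yields just two order-2 patterns in equal proportion (1 bit), far below the maximum over all nine possible up/down/flat pairs, signalling consistent fluctuation behaviour.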


2020 ◽  
Vol 501 (2) ◽  
pp. 1663-1676
Author(s):  
R Barnett ◽  
S J Warren ◽  
N J G Cross ◽  
D J Mortlock ◽  
X Fan ◽  
...  

ABSTRACT We present the results of a new, deeper, and complete search for high-redshift 6.5 < z < 9.3 quasars over 977 deg2 of the VISTA Kilo-Degree Infrared Galaxy (VIKING) survey. This exploits a new list-driven data set providing photometry in all bands Z, Y, J, H, Ks, for all sources detected by VIKING in J. We use the Bayesian model comparison (BMC) selection method of Mortlock et al., producing a ranked list of just 21 candidates. The sources ranked 1, 2, 3, and 5 are the four known z > 6.5 quasars in this field. Additional observations of the other 17 candidates, primarily DESI Legacy Survey photometry and ESO FORS2 spectroscopy, confirm that none is a quasar. This is the first complete sample from the VIKING survey, and we provide the computed selection function. We include a detailed comparison of the BMC method against two other selection methods: colour cuts and minimum-χ2 SED fitting. We find that: (i) BMC produces eight times fewer false positives than colour cuts, while also reaching 0.3 mag deeper; (ii) the minimum-χ2 SED-fitting method is extremely efficient but reaches 0.7 mag less deep than the BMC method, and selects only one of the four known quasars. We show that BMC candidates, rejected because their photometric SEDs have high χ2 values, include bright examples of galaxies with very strong [O iii] λλ4959,5007 emission in the Y band, identified in fainter surveys by Matsuoka et al. This is a potential contaminant population in Euclid searches for faint z > 7 quasars, not previously accounted for, which requires better characterization.
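The core of a BMC-style selection statistic can be sketched, under simplifying assumptions, as the posterior probability of the quasar model given best-fit chi-squared values for a quasar SED and a contaminating-star SED, weighted by a prior (surface-density) ratio. The Gaussian-likelihood assumption and the numbers below are illustrative only, not the paper's implementation:

```python
import math

def posterior_quasar_prob(chi2_quasar, chi2_star, log_prior_ratio):
    """Posterior probability that a source is a quasar rather than a
    contaminating star.

    Assumes Gaussian photometric errors, so each model's likelihood is
    proportional to exp(-chi2 / 2); `log_prior_ratio` is the log of the
    ratio of expected quasar to star surface densities.
    """
    log_odds = log_prior_ratio + 0.5 * (chi2_star - chi2_quasar)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical candidate: the quasar SED fits much better than any star
# SED, but quasars are assumed vastly rarer on the sky.
p = posterior_quasar_prob(chi2_quasar=5.0, chi2_star=20.0,
                          log_prior_ratio=math.log(1e-4))
```

The key feature, compared with a plain minimum-χ2 cut, is that a rare population must overcome its low prior through a decisively better fit, which is how such a method suppresses false positives.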


2018 ◽  
Vol 265 ◽  
pp. 271-278 ◽  
Author(s):  
Tyler B. Grove ◽  
Beier Yao ◽  
Savanna A. Mueller ◽  
Merranda McLaughlin ◽  
Vicki L. Ellingrod ◽  
...  
