ESTIMATION OF TIME-VARYING COVARIANCE MATRICES FOR LARGE DATASETS

2021 ◽  
pp. 1-35
Author(s):  
Yiannis Dendramis ◽  
Liudas Giraitis ◽  
George Kapetanios

Time variation is a fundamental problem in the statistical and econometric analysis of macroeconomic and financial data. Recently, there has been considerable focus on developing econometric models that allow stochastic structural change in model parameters, and on model estimation by Bayesian or nonparametric kernel methods. Estimating the covariance matrices of large-dimensional panels of such data requires taking into account time variation, possible dependence, and heavy-tailed distributions. In this paper, we introduce a nonparametric version of the regularization techniques for sparse large covariance matrices developed by Bickel and Levina (2008) and others. We focus on the robustness of such a procedure to time variation, dependence, and heavy-tailedness of distributions. The paper includes a set of results on Bernstein-type inequalities for dependent unbounded variables, which are expected to be applicable in econometric analysis beyond the estimation of large covariance matrices. We discuss the utility of the robust thresholding method, comparing it with other estimators in simulations and in an empirical application on the design of minimum-variance portfolios.
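As a minimal, hedged sketch of the regularization idea this abstract builds on: the basic Bickel-Levina hard-thresholding step sets small off-diagonal covariance entries to zero. The threshold `tau` and the toy data below are illustrative choices, and this is the static baseline, not the paper's robust time-varying estimator:

```python
import numpy as np

def threshold_covariance(X, tau):
    """Hard-threshold the sample covariance of X (n observations x p variables).

    Off-diagonal entries smaller than tau in absolute value are set to zero,
    in the spirit of Bickel-Levina universal thresholding. The paper's
    estimator additionally handles time variation, dependence, and heavy tails.
    """
    S = np.cov(X, rowvar=False)      # p x p sample covariance
    mask = np.abs(S) >= tau          # keep only the large entries
    np.fill_diagonal(mask, True)     # never threshold the variances
    return np.where(mask, S, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))    # toy panel: 200 periods, 5 series
S_hat = threshold_covariance(X, tau=0.15)
```

In practice `tau` would be chosen by cross-validation; here it is fixed purely for illustration.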

Axioms ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 38 ◽  
Author(s):  
Mohsen Maleki ◽  
Javier Contreras-Reyes ◽  
Mohammad Mahmoudi

In this paper, we examine the finite mixture (FM) model with a flexible class of two-piece distributions based on scale mixtures of normal (TP-SMN) family components. This family allows the development of a robust estimation of FM models. The TP-SMN is a rich class of distributions that covers symmetric/asymmetric and light/heavy-tailed distributions. It represents an alternative to the well-known scale mixtures of skew normal (SMSN) family studied by Branco and Dey (2001). The TP-SMN also covers the SMN (normal, t, slash, and contaminated normal distributions) as its symmetric members and two-piece versions of them as asymmetric members. A key feature of this study is the use of a suitable hierarchical representation of the family to obtain maximum likelihood estimates of the model parameters via an EM-type algorithm. The performance of the proposed robust model is demonstrated using simulated and real data, and compared to other finite mixtures of SMSN models.
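The EM-type algorithm for the full TP-SMN family is involved; as a hedged sketch, the same E-step/M-step pattern for the simplest symmetric member, a two-component normal mixture, looks like the following. The function name, initialization scheme, and toy data are all illustrative, not from the paper:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def em_two_component(x, n_iter=100):
    """EM for a two-component normal mixture.

    The TP-SMN algorithm layers latent scale and skewness variables
    on top of this basic membership-probability scheme.
    """
    # crude initialization: split the data around its quartiles
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability that each point belongs to each component
        dens = w * np.stack([normal_pdf(x, mu[k], sigma[k]) for k in (0, 1)], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
w, mu, sigma = em_two_component(x)
```

With well-separated components as above, the estimated means converge near -3 and 3.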


Author(s):  
Stefan Thurner ◽  
Rudolf Hanel ◽  
Peter Klimek

Phenomena, systems, and processes are rarely purely deterministic, but contain stochastic, probabilistic, or random components. For that reason, a probabilistic description of most phenomena is necessary. Probability theory provides us with the tools for this task. Here, we provide a crash course on the most important notions of probability and random processes, such as odds, probability, expectation, variance, and so on. We describe the most elementary stochastic event, the trial, and develop the notion of urn models. We discuss basic facts about random variables and the elementary operations that can be performed on them. We learn how to compose simple stochastic processes from elementary stochastic events, and discuss random processes as temporal sequences of trials, such as Bernoulli and Markov processes. We touch upon the basic logic of Bayesian reasoning. We discuss a number of classical distribution functions, including power laws and other fat- or heavy-tailed distributions.
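The Bernoulli and Markov processes mentioned above can be simulated in a few lines; the success probability `p` and the transition matrix `P` below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Bernoulli process: independent trials, each a success with probability p
p = 0.3
bernoulli = rng.random(10_000) < p

# Two-state Markov process: the next state depends only on the current one
P = np.array([[0.9, 0.1],    # transition probabilities out of state 0
              [0.5, 0.5]])   # transition probabilities out of state 1
state = 0
chain = []
for _ in range(10_000):
    state = rng.choice(2, p=P[state])
    chain.append(state)

# Empirical frequencies approach the theoretical values:
# the Bernoulli mean approaches p, and the fraction of time in state 1
# approaches the stationary probability 0.1 / (0.1 + 0.5) = 1/6
print(bernoulli.mean())
print(np.mean(chain))
```

The Markov chain's long-run occupancy of state 1 illustrates the stationary distribution, a notion that follows directly from the transition probabilities.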


Entropy ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 70
Author(s):  
Mei Ling Huang ◽  
Xiang Raney-Yan

The high quantile estimation of heavy-tailed distributions has many important applications. There are theoretical difficulties in studying heavy-tailed distributions since they often have infinite moments. There are also bias issues with the existing methods for confidence intervals (CIs) of high quantiles. This paper proposes a new estimator for high quantiles based on the geometric mean. The new estimator has good asymptotic properties and yields a computational algorithm for estimating confidence intervals of high quantiles. The new estimator avoids these difficulties, improves efficiency, and reduces bias. Comparisons of the efficiency and bias of the new estimator relative to existing estimators are studied. The theoretical results are confirmed through Monte Carlo simulations. Finally, applications to two real-world examples are provided.
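For context on the kind of high-quantile extrapolation the abstract discusses, a classical baseline is the Hill/Weissman estimator. This sketch is that standard baseline, not the geometric-mean estimator proposed in the paper; the choice of `k` (number of upper order statistics) and the Pareto toy data are illustrative:

```python
import numpy as np

def weissman_quantile(x, p, k):
    """Classical Hill/Weissman high-quantile estimate for a heavy-tailed sample.

    p is the (small) exceedance probability; k is the number of upper
    order statistics used to estimate the tail index.
    """
    xs = np.sort(x)
    n = len(xs)
    tail = xs[n - k:]                        # the k largest observations
    x_nk = xs[n - k - 1]                     # the (k+1)-th largest: the threshold
    gamma = np.mean(np.log(tail / x_nk))     # Hill estimate of the tail index
    return x_nk * (k / (n * p)) ** gamma     # Weissman extrapolation beyond the data

rng = np.random.default_rng(7)
x = rng.pareto(2.0, 5000) + 1.0              # Pareto(alpha=2), true tail index 0.5
q = weissman_quantile(x, p=0.001, k=200)     # estimate the 99.9% quantile
```

For this Pareto(2) sample, the true 99.9% quantile is sqrt(1000), roughly 31.6, and the estimate lands in that neighborhood.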


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 56
Author(s):  
Haoyu Niu ◽  
Jiamin Wei ◽  
YangQuan Chen

The Stochastic Configuration Network (SCN) has a powerful capability for regression and classification analysis. Traditionally, it is quite challenging to determine an appropriate architecture for a neural network so that the trained model achieves excellent performance in both learning and generalization. Compared with known randomized learning algorithms for single-hidden-layer feed-forward neural networks, such as Randomized Radial Basis Function (RBF) Networks and the Random Vector Functional-Link (RVFL) network, the SCN randomly assigns the input weights and biases of the hidden nodes under a supervisory mechanism. Since the parameters in the hidden layers are randomly generated from a uniform distribution, there may, hypothetically, exist a more optimal form of randomness. Heavy-tailed distributions have shown optimal randomness for finding targets in an unknown environment. Therefore, in this research, the authors used heavy-tailed distributions to randomly initialize the weights and biases, to see whether the new SCN models can achieve better performance than the original SCN. Heavy-tailed distributions such as the Lévy distribution, Cauchy distribution, and Weibull distribution were used. Since some mixed distributions show heavy-tailed properties, mixed Gaussian and Laplace distributions were also studied in this work. Experimental results showed improved performance for SCN with heavy-tailed distributions. For the regression model, SCN-Lévy, SCN-Mixture, SCN-Cauchy, and SCN-Weibull used fewer hidden nodes to achieve performance similar to the original SCN. For the classification model, SCN-Mixture, SCN-Lévy, and SCN-Cauchy achieved higher test accuracies of 91.5%, 91.7%, and 92.4%, respectively, all higher than the test accuracy of the original SCN.
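The initialization change the abstract describes amounts to swapping the distribution from which hidden-node weights are drawn. A hedged sketch of the draws involved follows; the uniform range [-1, 1] and all distribution parameters are illustrative stand-ins, since the actual SCN chooses its ranges through a supervisory mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
n_weights = 10_000

# Baseline SCN-style draw: bounded, uniform on an illustrative range
w_uniform = rng.uniform(-1, 1, n_weights)

# Heavy-tailed alternatives of the kind studied in the paper
w_cauchy = rng.standard_cauchy(n_weights)      # Cauchy: no finite variance
w_weibull = rng.weibull(0.5, n_weights)        # shape < 1 gives a heavy tail

# Mixture of Gaussian and Laplace draws, as in the SCN-Mixture variant
mix = rng.random(n_weights) < 0.5
w_mixture = np.where(mix,
                     rng.normal(0, 1, n_weights),
                     rng.laplace(0, 1, n_weights))

# Heavy tails produce occasional very large weights, widening the search
print(np.max(np.abs(w_uniform)))   # bounded by the uniform range
print(np.max(np.abs(w_cauchy)))    # typically orders of magnitude larger
```

The occasional extreme draws are the point: they let some hidden nodes probe configurations a bounded uniform draw would never reach.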

