A Robust Test for Monotonicity in Asset Returns

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Cleiton G. Taufemback ◽  
Victor Troster ◽  
Muhammad Shahbaz

Abstract In this paper, we propose a robust test of monotonicity in asset returns that is valid under a general setting. Many theories in economics and finance postulate monotonic relationships between expected asset returns and certain underlying characteristics of an asset, but existing tests in the literature either fail to control the probability of a Type I error or have low power under heavy-tailed distributions of return differentials. We develop a test that allows for dependent data and is robust to conditional heteroskedasticity and heavy-tailed distributions of return differentials. Monte Carlo simulations illustrate that our test statistic has correct empirical size under all data-generating processes considered, with power comparable to that of other tests. Conversely, alternative tests are nonconservative under conditional heteroskedasticity or heavy-tailed distributions of return differentials. We also present an empirical application on the monotonicity of returns on various portfolio sorts that highlights the usefulness of our approach.
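As a schematic illustration only (this is a naive check, not the authors' robust test, and it ignores dependence, conditional heteroskedasticity, and heavy tails entirely), the monotonicity hypothesis on sorted portfolios can be phrased as: every adjacent mean-return differential is positive.

```python
# Illustrative sketch: a naive monotonicity check on portfolio sorts.
# The paper's actual test is robust to dependence, conditional
# heteroskedasticity, and heavy tails; none of that is reproduced here.

def mean(xs):
    return sum(xs) / len(xs)

def is_monotonic_increasing(portfolio_returns):
    """portfolio_returns: one return series per portfolio, ordered by the
    sorting characteristic (e.g. size deciles). Returns True if every
    adjacent mean-return differential is strictly positive."""
    means = [mean(r) for r in portfolio_returns]
    diffs = [b - a for a, b in zip(means, means[1:])]
    return all(d > 0 for d in diffs)

# Toy example: three portfolios with increasing average returns.
returns = [[0.01, 0.02, 0.00], [0.02, 0.03, 0.01], [0.04, 0.03, 0.05]]
```

A formal test would replace the `d > 0` check with a studentized statistic and a resampling scheme that preserves the dependence structure of the data.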

2016 ◽  
Vol 33 (6) ◽  
pp. 1352-1386 ◽  
Author(s):  
Herold Dehling ◽  
Daniel Vogel ◽  
Martin Wendler ◽  
Dominik Wied

For a bivariate time series ((X_i, Y_i))_{i=1,...,n}, we want to detect whether the correlation between X_i and Y_i stays constant for all i = 1,...,n. We propose a nonparametric change-point test statistic based on Kendall’s tau. The asymptotic distribution under the null hypothesis of no change follows from a new U-statistic invariance principle for dependent processes. Assuming a single change-point, we show that its location is consistently estimated. Kendall’s tau possesses high efficiency at the normal distribution, as compared to the normal maximum likelihood estimator, Pearson’s moment correlation. Contrary to Pearson’s correlation coefficient, it shows no loss in efficiency at heavy-tailed distributions, and is therefore particularly suited for financial data, where heavy tails are common. We assume the data ((X_i, Y_i))_{i=1,...,n} to be stationary and P-near epoch dependent on an absolutely regular process. The P-near epoch dependence condition generalizes the usually considered L_p-near epoch dependence, allowing for arbitrarily heavy-tailed data. We investigate the test numerically, compare it to previous proposals, and illustrate its application with two real-life data examples.
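To make the ingredients concrete, here is a toy sketch of Kendall's tau together with a naive scan for the split that maximizes the before/after tau gap. This is an assumption-laden simplification: the paper's statistic and its asymptotics under near-epoch dependence are not reproduced.

```python
# Illustrative sketch: Kendall's tau and a naive change-point scan for a
# shift in correlation. Not the paper's test statistic.

def kendall_tau(x, y):
    """O(n^2) Kendall's tau: (concordant - discordant) / total pairs."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            a = (x[i] - x[j]) * (y[i] - y[j])
            s += (a > 0) - (a < 0)
    return 2.0 * s / (n * (n - 1))

def changepoint_scan(x, y, min_seg=5):
    """Return the split k maximizing |tau(before k) - tau(from k on)|."""
    best_k, best_gap = None, -1.0
    for k in range(min_seg, len(x) - min_seg):
        gap = abs(kendall_tau(x[:k], y[:k]) - kendall_tau(x[k:], y[k:]))
        if gap > best_gap:
            best_k, best_gap = k, gap
    return best_k, best_gap
```

On a series whose dependence flips from perfectly concordant to perfectly discordant, the scan recovers the flip point with a gap of 2, the largest possible.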


Author(s):  
Stefan Thurner ◽  
Rudolf Hanel ◽  
Peter Klimek

Phenomena, systems, and processes are rarely purely deterministic, but contain stochastic, probabilistic, or random components. For that reason, a probabilistic description of most phenomena is necessary. Probability theory provides us with the tools for this task. Here, we provide a crash course on the most important notions of probability and random processes, such as odds, probability, expectation, variance, and so on. We describe the most elementary stochastic event—the trial—and develop the notion of urn models. We discuss basic facts about random variables and the elementary operations that can be performed on them. We learn how to compose simple stochastic processes from elementary stochastic events, and discuss random processes as temporal sequences of trials, such as Bernoulli and Markov processes. We touch upon the basic logic of Bayesian reasoning. We discuss a number of classical distribution functions, including power laws and other fat- or heavy-tailed distributions.
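The two processes named above differ in one essential way: Bernoulli trials are independent, while a Markov chain's next state depends on its current state. A minimal stdlib sketch (transition probabilities here are illustrative choices, not from the text):

```python
# Toy illustration: independent Bernoulli trials versus a two-state
# Markov chain whose next state depends on the current one.
import random

def bernoulli_process(p, n, rng):
    """n independent trials, each a success (1) with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def markov_chain(transition, start, n, rng):
    """transition[s] = probability of moving from state s to state 1.
    With transition = {0: 0.1, 1: 0.9}, states tend to persist."""
    state, path = start, [start]
    for _ in range(n - 1):
        state = 1 if rng.random() < transition[state] else 0
        path.append(state)
    return path

rng = random.Random(0)
coin = bernoulli_process(0.5, 1000, rng)           # memoryless sequence
chain = markov_chain({0: 0.1, 1: 0.9}, 0, 1000, rng)  # persistent states
```

Running both and comparing run lengths makes the dependence visible: the chain produces long streaks of identical states, the coin does not.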


Entropy ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 70
Author(s):  
Mei Ling Huang ◽  
Xiang Raney-Yan

The high quantile estimation of heavy-tailed distributions has many important applications. There are theoretical difficulties in studying heavy-tailed distributions since they often have infinite moments, and there are bias issues with the existing methods for confidence intervals (CIs) of high quantiles. This paper proposes a new estimator for high quantiles based on the geometric mean. The new estimator has good asymptotic properties and yields a computational algorithm for estimating confidence intervals of high quantiles. It avoids the above difficulties, improves efficiency, and reduces bias. Comparisons of the efficiencies and biases of the new estimator relative to existing estimators are studied. The theoretical results are confirmed through Monte Carlo simulations. Finally, applications to two real-world examples are provided.
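The paper's geometric-mean estimator is not reproduced here, but the classical Hill estimator illustrates why geometric means arise naturally in tail estimation: it is the reciprocal of the log of the geometric mean of ratios of the upper order statistics to a threshold order statistic.

```python
# Illustrative sketch: the classical Hill estimator of the tail index.
# It equals k / sum(log ratios), i.e. the reciprocal log-geometric-mean
# of ratios of upper order statistics. This is NOT the paper's new
# geometric-mean-based high-quantile estimator.
import math
import random

def hill_estimator(sample, k):
    """Estimate the tail index alpha from the k upper order statistics."""
    xs = sorted(sample, reverse=True)   # descending order statistics
    threshold = xs[k]                   # the (k+1)-th largest observation
    logs = [math.log(xs[i] / threshold) for i in range(k)]
    return k / sum(logs)

# Pareto sample with tail index 2: X = U**(-1/2) for uniform U.
rng = random.Random(42)
sample = [rng.random() ** (-0.5) for _ in range(20000)]
alpha_hat = hill_estimator(sample, 1000)   # should be near 2
```

A high quantile can then be extrapolated from the estimated tail index, which is exactly where the bias issues mentioned in the abstract arise.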


Entropy ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 56
Author(s):  
Haoyu Niu ◽  
Jiamin Wei ◽  
YangQuan Chen

Stochastic Configuration Networks (SCNs) have a powerful capability for regression and classification analysis. Traditionally, it is quite challenging to determine an appropriate architecture for a neural network so that the trained model achieves excellent performance in both learning and generalization. Compared with known randomized learning algorithms for single-hidden-layer feed-forward neural networks, such as Randomized Radial Basis Function (RBF) Networks and Random Vector Functional-link (RVFL) networks, the SCN randomly assigns the input weights and biases of the hidden nodes under a supervisory mechanism. Since the parameters in the hidden layers are randomly generated from a uniform distribution, the question arises whether some other form of randomness is optimal. Heavy-tailed distributions have been shown to provide optimal randomness when searching for targets in an unknown environment. Therefore, in this research, the authors used heavy-tailed distributions to randomly initialize the weights and biases, to see whether the new SCN models can achieve better performance than the original SCN. Heavy-tailed distributions such as the Lévy, Cauchy, and Weibull distributions were used. Since some mixed distributions show heavy-tailed properties, mixed Gaussian and Laplace distributions were also studied in this research work. Experimental results showed improved performance for SCN with heavy-tailed distributions. For the regression model, SCN-Lévy, SCN-Mixture, SCN-Cauchy, and SCN-Weibull used fewer hidden nodes to achieve performance similar to the original SCN. For the classification model, SCN-Mixture, SCN-Lévy, and SCN-Cauchy achieved higher test accuracies of 91.5%, 91.7%, and 92.4%, respectively, all above the test accuracy of the original SCN.


2021 ◽  
Vol 5 (1) ◽  
pp. 10
Author(s):  
Mark Levene

A bootstrap-based hypothesis test of the goodness-of-fit for the marginal distribution of a time series is presented. Two metrics, the empirical survival Jensen–Shannon divergence (ESJS) and the Kolmogorov–Smirnov two-sample test statistic (KS2), are compared on four data sets—three stablecoin time series and a Bitcoin time series. We demonstrate that, after applying first-order differencing, all the data sets fit heavy-tailed α-stable distributions with 1<α<2 at the 95% confidence level. Moreover, ESJS is more powerful than KS2 on these data sets, since the widths of the derived confidence intervals for KS2 are, proportionately, much larger than those of ESJS.
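Of the two metrics compared, KS2 is the simpler to state: the largest vertical gap between the two empirical CDFs. A self-contained sketch (the ESJS metric and the paper's bootstrap confidence machinery are not reproduced here):

```python
# Illustrative sketch: the two-sample Kolmogorov-Smirnov statistic (KS2),
# i.e. the maximum gap between two empirical CDFs. The ESJS metric and
# the bootstrap test from the paper are not reproduced.

def ks2(sample_a, sample_b):
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_xs, t):
        # Fraction of observations <= t; linear scan is fine for a sketch.
        return sum(1 for v in sorted_xs if v <= t) / len(sorted_xs)

    # The maximum gap is attained at an observed data point.
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in a + b)
```

Identical samples give 0, fully separated samples give 1, and partial overlap gives an intermediate value; the bootstrap then turns this statistic into a goodness-of-fit decision.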

