Bitcoin Returns and the Frequency of Daily Abnormal Returns

Ledger ◽  
2021 ◽  
Vol 6 ◽  
Author(s):  
Guglielmo Maria Caporale ◽  
Alex Plastun ◽  
Viktor Oliinyk

This paper investigates the relationship between Bitcoin returns and the frequency of daily abnormal returns over the period from June 2013 to February 2020 using a number of regression techniques and model specifications, including standard OLS, weighted least squares (WLS), ARMA and ARMAX models, quantile regressions, Logit and Probit regressions, piecewise linear regressions, and non-linear regressions. Both the in-sample and out-of-sample performance of the various models are compared by means of appropriate selection criteria and statistical tests. These suggest that, on the whole, the piecewise linear models are the best, but in terms of forecasting accuracy they are outperformed by a model that combines the top five to produce “consensus” forecasts. The finding that there exist price patterns that can be exploited to predict future price movements and design profitable trading strategies is of interest both to academics (since it represents evidence against the EMH) and to practitioners (who can use this information for their investment decisions).
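A minimal sketch of the combination step described above, assuming the “consensus” forecast is an equal-weighted average of the individual models' forecasts (the abstract does not specify the weighting scheme, and all numbers below are placeholders):

```python
import numpy as np

# Hypothetical out-of-sample forecasts of Bitcoin returns from five
# fitted models (e.g., OLS, WLS, ARMAX, quantile, piecewise linear).
forecasts = {
    "ols":       np.array([0.012, -0.004, 0.007]),
    "wls":       np.array([0.010, -0.002, 0.006]),
    "armax":     np.array([0.015, -0.006, 0.009]),
    "quantile":  np.array([0.011, -0.003, 0.005]),
    "piecewise": np.array([0.013, -0.005, 0.008]),
}

# "Consensus" forecast: equal-weighted average across the five models.
consensus = np.mean(np.vstack(list(forecasts.values())), axis=0)

# Compare forecasting accuracy against realized returns via RMSE.
realized = np.array([0.014, -0.005, 0.006])
rmse = np.sqrt(np.mean((consensus - realized) ** 2))
print(f"consensus forecast: {consensus}, RMSE: {rmse:.4f}")
```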

Author(s):  
Wei Yang ◽  
Ai Han

This paper proposes an interval-based methodology to model and forecast the price range, or range-based volatility process, of financial asset prices. Compared with existing volatility models, the proposed model utilizes more of the information contained in the interval time series than approaches that use the range information only or model the high and low price processes separately. An empirical study of U.S. stock market daily data shows that the proposed interval-based model produces more accurate range forecasts than the classic point-based linear models for the range process, in terms of both in-sample and out-of-sample forecasts. Statistical tests show that the forecasting advantages of the interval-based model are statistically significant in most cases. In addition, stability tests conducted across different sample windows and forecasting periods reveal similar results. This study provides a new interval-based perspective for volatility modeling and forecasting of financial time series data.
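For orientation, here is a sketch of the classic point-based benchmark the abstract refers to: an AR(1) model fitted by OLS to the daily range (high minus low). The interval model's own specification is not given in the abstract, so only the benchmark is shown, on simulated data:

```python
import numpy as np

# Simulated daily high/low prices; range_t = high_t - low_t.
rng = np.random.default_rng(0)
high = 100 + np.cumsum(rng.normal(0, 1, 500)) + rng.uniform(0.5, 2.0, 500)
low = high - rng.uniform(0.5, 2.0, 500)
price_range = high - low

# Point-based AR(1) for the range process, fitted by OLS.
y, x = price_range[1:], price_range[:-1]
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead range forecast from the fitted model.
forecast = beta[0] + beta[1] * price_range[-1]
print(f"AR(1) coefficients: {beta}, next-day range forecast: {forecast:.3f}")
```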


2018 ◽  
Vol 46 (02) ◽  
pp. 150-171 ◽  
Author(s):  
Roberto Rosales ◽  
Isam Atroshi

Statistics, the science of numerical evaluation, helps in determining the real value of a hand surgical intervention. Clinical research in hand surgery cannot improve without considering the application of the most appropriate statistical procedures. The purpose of the present paper is to cover the basics of data analysis using a database of carpal tunnel syndrome (CTS) cases: understanding the data matrix, the generation of variables, descriptive statistics, the most appropriate statistical tests based on how the data were collected, parameter estimation (inferential statistics) with p-values or confidence intervals, and, finally, the important concept of generalized linear models (GLMs) or regression analysis.
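As a rough illustration of the GLM step, the sketch below fits a logistic regression to simulated data; the outcome and predictor names are hypothetical stand-ins, not variables from the paper's CTS database:

```python
import numpy as np
import statsmodels.api as sm

# Simulated CTS-style data: binary outcome (symptom improvement)
# modelled from age and a baseline severity score.
rng = np.random.default_rng(1)
n = 200
age = rng.uniform(30, 70, n)
severity = rng.uniform(1, 5, n)
logit = 4.0 - 0.03 * age - 0.6 * severity
improved = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# GLM with a binomial family = logistic regression.
X = sm.add_constant(np.column_stack([age, severity]))
model = sm.GLM(improved, X, family=sm.families.Binomial()).fit()
print(model.summary())  # coefficients, p-values, confidence intervals
```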


2020 ◽  
Vol 38 (2) ◽  
pp. 311-327
Author(s):  
Luis Lizasoain Hernández

The aim of this paper is to present the statistical criteria and models used in a school effectiveness study carried out in the Basque Country Autonomous Community, using as outcome variables the mathematics, Spanish language and Basque language scores from the Diagnosis Assessments administered over five years. Four school effectiveness criteria are defined: extreme scores, extreme residuals, score growth and residual growth. Multilevel regression techniques were applied using hierarchical linear models (HLM). The results permitted a selection of both high- and low-effectiveness schools based on four different and complementary approaches to school effectiveness (or ineffectiveness).
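A minimal sketch of the residual-based criterion, assuming a two-level random-intercept model of the kind HLM fits (students nested in schools); statsmodels' MixedLM is used as one possible implementation and all data are simulated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated students nested in schools: score = prior score + school effect.
rng = np.random.default_rng(2)
n_schools, n_students = 30, 40
school = np.repeat(np.arange(n_schools), n_students)
school_effect = rng.normal(0, 5, n_schools)[school]
prior = rng.normal(250, 30, n_schools * n_students)
math = 50 + 0.8 * prior + school_effect + rng.normal(0, 10, len(prior))
df = pd.DataFrame({"math": math, "prior": prior, "school": school})

# Random school intercept = school-level residual after adjusting for prior.
m = smf.mixedlm("math ~ prior", df, groups=df["school"]).fit()

# Extreme residuals flag unusually effective or ineffective schools.
school_residuals = {g: re.iloc[0] for g, re in m.random_effects.items()}
print(sorted(school_residuals.items(), key=lambda kv: kv[1])[:3])
```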


Author(s):  
Ni Putu Linsia Dewi ◽  
Ica Rika Candraningrat

A rights issue, or the issuance of pre-emptive rights, grants an issuing company's existing shareholders the right to buy new shares within a predetermined period of time. This study aims to empirically explain the differences in abnormal returns before and after the announcement of a rights issue and to determine the form of capital market efficiency in Indonesia. Data are collected from 27 companies listed on the Indonesia Stock Exchange (IDX) that conducted a rights issue in 2014-2018. The data analysis techniques used are the Kolmogorov-Smirnov normality test and a parametric paired-sample t-test. Hypothesis testing found no differences in abnormal returns before and after the announcement date, indicating that the market does not react to the rights issue event. The statistical tests do, however, show a downward trend in abnormal returns, proxied by the Cumulative Abnormal Return (CAR), implying that the market tends to react negatively to the announcement of a rights issue. Rights issue information causes a new equilibrium price adjustment in the market, making the Indonesian capital market semi-strong-form efficient.
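A sketch of the testing procedure described, assuming one average abnormal return per firm in each window (the event-window lengths are not stated in the abstract, and the data below are simulated for 27 firms):

```python
import numpy as np
from scipy import stats

# Hypothetical average abnormal returns for 27 firms, pre- vs. post-event.
rng = np.random.default_rng(3)
ar_before = rng.normal(0.001, 0.01, 27)
ar_after = ar_before + rng.normal(-0.001, 0.01, 27)

# Normality check (Kolmogorov-Smirnov on standardized differences),
# then the parametric paired-sample t-test.
diff = ar_after - ar_before
ks_stat, ks_p = stats.kstest((diff - diff.mean()) / diff.std(ddof=1), "norm")
t_stat, t_p = stats.ttest_rel(ar_after, ar_before)
print(f"KS p={ks_p:.3f}, paired t p={t_p:.3f}")
```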


2012 ◽  
Vol 11 (7) ◽  
pp. 745
Author(s):  
Heng-Hsing Hsieh ◽  
Kathleen Hodnett ◽  
Paul Van Rensburg

Our earlier study suggests that there exists specific timing for the two prominent investment styles, value and momentum. We extend our prior research to test and evaluate a tactical style allocation (TSA) model based on the weighted least squares (WLS) technique for global equities over the out-of-sample period from 1994 through 2008. Two TSA style-based portfolios are constructed in this research, namely, a portfolio with the risk-free proxy (cash component), the global momentum index and the global value index as its constituents, and a portfolio that is comprised of only the global momentum index and the global value index. The optimized portfolios based on the TSA model outperform the MSCI World Index, the global value index and the global momentum index on a risk-adjusted basis over the examination period. The cash component of the style-based portfolio appears to provide necessary protection during financial market crises. The results of our study support the use of the proposed TSA model to perform active style rotation between value stocks and momentum stocks for global equity portfolios.
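A hedged sketch of a WLS-based timing signal in the spirit of the TSA model: regress the next-period value-minus-momentum return spread on a lagged indicator, weight recent observations more heavily, and tilt toward the style with the higher forecast. The indicator, decay weights and data are all invented placeholders, not the paper's specification:

```python
import numpy as np

# Simulated monthly data: a timing indicator and the value-momentum spread.
rng = np.random.default_rng(4)
T = 120
indicator = rng.normal(0, 1, T)                 # e.g., some macro signal
spread = 0.3 * indicator + rng.normal(0, 1, T)  # value minus momentum

# WLS with exponentially decaying weights (newest observations count most):
# minimizing sum w_i * e_i^2 equals OLS on sqrt(w)-scaled data.
X = np.column_stack([np.ones(T - 1), indicator[:-1]])
y = spread[1:]
w = 0.98 ** np.arange(T - 2, -1, -1)
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)

# Allocate toward the style with the higher forecast spread.
forecast = beta @ np.array([1.0, indicator[-1]])
allocation = "value" if forecast > 0 else "momentum"
print(f"forecast spread: {forecast:.3f} -> tilt toward {allocation}")
```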


Entropy ◽  
2022 ◽  
Vol 24 (1) ◽  
pp. 95
Author(s):  
Pontus Söderbäck ◽  
Jörgen Blomvall ◽  
Martin Singull

Liquid financial markets, such as the options market on the S&P 500 index, create vast amounts of data every day, i.e., so-called intraday data. However, this highly granular data is often reduced to a single observation time when used to estimate financial quantities. This under-utilization of the data may reduce the quality of the estimates. In this paper, we study the impact on estimation quality of using intraday data to estimate dividends. The methodology is based on earlier linear regression (ordinary least squares) estimates, adapted to intraday data. The method is also generalized in two respects. First, the dividends are expressed as present values of future dividends rather than dividend yields. Second, to account for heteroscedasticity, the estimation is formulated as a weighted least squares problem, where the weights are determined from the market data. This method is compared with a traditional method on out-of-sample S&P 500 European options market data. The results show that estimates based on intraday data have, with statistical significance, higher quality than the corresponding single-time estimates. Additionally, the two generalizations of the methodology are shown to improve the estimation quality further.
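One standard regression of this kind (consistent with, but not necessarily identical to, the paper's method) rests on put-call parity for European options, C - P = (S - PV(D)) - K e^{-rT}: regressing C - P on the strike K recovers PV(D) from the intercept. The sketch below adds inverse-variance WLS weights derived from (simulated) bid-ask spreads as one plausible market-data weighting:

```python
import numpy as np

# Simulated put-call parity data across strikes for one expiry.
rng = np.random.default_rng(5)
S, r, T, pv_div = 4000.0, 0.02, 0.5, 30.0
K = np.linspace(3500, 4500, 21)
cp = (S - pv_div) - K * np.exp(-r * T) + rng.normal(0, 0.5, K.size)

# Weights from market data: inverse squared bid-ask spread as a
# quote-quality proxy (an assumption for illustration).
spread = rng.uniform(0.2, 2.0, K.size)
w = 1.0 / spread ** 2

# WLS fit of C - P = a + b*K, with a = S - PV(D) and b = -exp(-rT).
X = np.column_stack([np.ones_like(K), K])
sw = np.sqrt(w)
(a, b), *_ = np.linalg.lstsq(X * sw[:, None], cp * sw, rcond=None)
print(f"PV(dividends) estimate: {S - a:.2f}, implied discount: {-b:.4f}")
```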


2020 ◽  
Author(s):  
Bryan Strange ◽  
Linda Zhang ◽  
Alba Sierra-Marcos ◽  
Eva Alfayate ◽  
Jussi Tohka ◽  
...  

Identifying measures that predict future cognitive impairment in healthy individuals is necessary to inform treatment strategies for candidate dementia-preventive and disease-modifying interventions. Here, we derive such measures by studying converters, who transitioned from cognitively normal at baseline to mild cognitive impairment (MCI), in a longitudinal study of 1213 elderly participants. We first establish reduced grey matter density (GMD) in the left entorhinal cortex (EC) as a biomarker of impending cognitive decline in healthy individuals, employing matched sampling to control for several dementia risk factors, thereby mitigating the potential effects of bias on our statistical tests. Next, we determine the predictive performance of baseline demographic, genetic, neuropsychological and MRI measures by entering these variables into an elastic-net-regularized classifier. Our trained statistical model classified converters and controls with a validation area under the curve (AUC) > 0.9, identifying only delayed verbal memory and left EC GMD as relevant predictors for classification. This performance was maintained on test classification of out-of-sample converters and controls. Our results suggest a parsimonious but powerful predictive model of MCI development in the cognitively healthy elderly.
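A minimal sketch of an elastic-net-regularized classifier of this kind, using scikit-learn as one possible implementation; the simulated features stand in for the demographic, genetic, neuropsychological and MRI measures, with only two carrying true signal (mirroring the sparse solution the paper reports):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated data: 20 candidate predictors, 2 informative.
rng = np.random.default_rng(6)
n, p = 400, 20
X = rng.normal(size=(n, p))
logit = 1.5 * X[:, 0] - 1.2 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Elastic-net logistic regression (L1 + L2 penalty via the saga solver);
# the L1 component drives irrelevant coefficients to exactly zero.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=0.5, max_iter=5000).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("nonzero coefficients:", np.flatnonzero(clf.coef_[0]))
```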


2016 ◽  
Vol 32 (1) ◽  
pp. 123-135 ◽  
Author(s):  
Li Li Eng ◽  
Thanyaluk Vichitsarawong

This is an exploratory study examining the quality, or usefulness, of accounting estimates of companies in China and India over time. Specifically, we examine how well accounting estimates predict future earnings and cash flows during the period 2003-2013. The results for India indicate that the out-of-sample earnings and cash flow predictions are more accurate and more efficient in the more recent period (2010-2013) than in the earlier period (2003-2006). In contrast, the out-of-sample earnings and cash flow predictions for China are generally more biased, less accurate, and less efficient. The results indicate that abnormal returns are earned on hedge portfolios formed on earnings (cash flow) predictions for India in the recent period. In contrast, none of the portfolios for China earn positive returns. The results suggest that accounting estimates in India have in recent years become better predictors of future earnings and cash flows than in the earlier period, whereas accounting estimates in China are not relevant for predicting earnings and cash flows over the sample period.
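For concreteness, bias and accuracy of out-of-sample predictions are typically summarized as the mean signed error and the mean absolute error; a tiny sketch with placeholder numbers (not the paper's data):

```python
import numpy as np

# Placeholder out-of-sample earnings figures and their predictions.
actual = np.array([1.10, 0.95, 1.30, 0.80, 1.05])
predicted = np.array([1.05, 1.00, 1.20, 0.90, 1.00])

errors = predicted - actual
bias = errors.mean()              # systematic over/under-prediction
accuracy = np.abs(errors).mean()  # average magnitude of the miss
print(f"bias: {bias:+.3f}, MAE: {accuracy:.3f}")
```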


2016 ◽  
Vol 2016 ◽  
pp. 1-8 ◽  
Author(s):  
Lorentz Jäntschi ◽  
Donatella Bálint ◽  
Sorana D. Bolboacă

Multiple linear regression analysis is widely used to link an outcome with predictors for a better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients of linear models with two predictors without any restrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as a proof of concept on fourteen sets of compounds by investigating the link between activity/property (as outcome) and structural feature information incorporated in molecular descriptors (as predictors). The results on real data demonstrated that in all investigated cases the power of the error is significantly different from the conventional value of two when the Gauss-Laplace distribution is used to relax the restrictive assumption of normally distributed errors. Therefore, the Gauss-Laplace distribution of the error could not be rejected, while the hypothesis that the power of the error from the Gauss-Laplace distribution is normally distributed also failed to be rejected.
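A sketch of the core idea, assuming the "Gauss-Laplace" family corresponds to the generalized normal distribution (density proportional to exp(-|x/alpha|^beta), scipy's gennorm): the two-predictor model coefficients and the error power beta are estimated jointly by maximum likelihood, instead of fixing beta = 2 as least squares implicitly does. Data below are simulated:

```python
import numpy as np
from scipy import optimize, stats

# Simulated two-predictor data with non-normal (power 1.4) errors.
rng = np.random.default_rng(7)
n = 300
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 - 0.8 * x2 + stats.gennorm.rvs(1.4, size=n, random_state=8)

def nll(params):
    """Negative log-likelihood under generalized-normal errors.

    Scale and power are parametrized on the log scale to keep
    them positive during unconstrained optimization."""
    b0, b1, b2, log_scale, log_beta = params
    resid = y - (b0 + b1 * x1 + b2 * x2)
    return -stats.gennorm.logpdf(resid, np.exp(log_beta),
                                 scale=np.exp(log_scale)).sum()

# Start from the normal case (power = 2) and optimize all parameters.
res = optimize.minimize(nll, x0=[0, 0, 0, 0.0, np.log(2.0)],
                        method="Nelder-Mead", options={"maxiter": 5000})
b0, b1, b2, log_scale, log_beta = res.x
print(f"coefficients: {b0:.3f}, {b1:.3f}, {b2:.3f}; "
      f"estimated error power: {np.exp(log_beta):.2f}")
```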

