Simulation Analysis as a Way to Assess the Performance of Important Unit Root and Change in Persistence Tests

Author(s):  
Raúl O. Fernández ◽  
J. Eduardo Vera-Valdés

This chapter shows how simulation analysis can be used to assess the performance of some of the most popular unit root and change-in-persistence tests. The authors do this by means of Monte Carlo simulations. The findings suggest that these tests perform worse than expected when dealing with some of the processes commonly believed to be present in economic and financial data. The results signal that extreme care should be taken when trying to support a theory with real data: a practitioner applying these tests blindly could draw misleading conclusions almost surely. As an empirical exercise, the authors show that the considered tests find evidence of a unit root in the US house price index. Nonetheless, as the simulation analysis shows, extreme caution should be exercised when interpreting these results.
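A minimal sketch of the kind of Monte Carlo exercise the chapter describes: rejection frequencies of the augmented Dickey-Fuller (ADF) unit root test against near-unit-root AR(1) alternatives. The sample size, replication count, and AR coefficients are illustrative assumptions, not the chapter's actual design.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
T, reps, alpha = 250, 500, 0.05

def simulate_ar1(phi, T, rng):
    """Simulate an AR(1) path y_t = phi * y_{t-1} + e_t with N(0,1) shocks."""
    e = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + e[t]
    return y

for phi in (1.00, 0.99, 0.95):
    rejections = 0
    for _ in range(reps):
        y = simulate_ar1(phi, T, rng)
        pvalue = adfuller(y, regression="c", autolag="AIC")[1]
        rejections += pvalue < alpha
    # phi = 1.00 gauges size; phi < 1 gauges power against near-unit roots
    print(f"phi = {phi:.2f}: rejection rate = {rejections / reps:.3f}")
```

Under this setup, the rejection rate at phi = 0.99 stays close to the nominal size, illustrating the low power against near-unit-root processes that the chapter warns about.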

Empirica ◽  
2019 ◽  
Vol 47 (4) ◽  
pp. 835-861
Author(s):  
Maciej Ryczkowski

I analyse the link between money and credit for twelve industrialized countries over the period 1970 to 2016. The euro area and the Commonwealth countries show rather strong co-movements between money and credit at longer frequencies. Denmark and Switzerland show weak and episodic effects. The Scandinavian countries and the US are somewhere in between. I find strong and significant longer-run co-movements, especially around booming house prices, for all of the sample countries. The analysis suggests that the expansionary policy that cleans up after the burst of a bubble may exacerbate the risk of a new house price boom. The interrelation is hidden in the short run, because the co-movements are then rarely statistically significant. According to the wavelet evidence, the developments of money and credit since the Great Recession, and their decoupling in Japan, suggest that it is more appropriate to examine the two variables separately in some circumstances.
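The paper's evidence rests on wavelet coherence between money and credit. As a crude, self-contained stand-in, the sketch below compares rolling correlations at short and long windows: a pair that co-moves only "at longer frequencies" shows weak short-window but strong long-window correlation. The simulated series and window lengths are assumptions for illustration, not the paper's data or wavelet method.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 600  # monthly observations
trend = np.cumsum(rng.standard_normal(n) * 0.2)    # shared slow component
money = pd.Series(trend + rng.standard_normal(n))  # plus idiosyncratic noise
credit = pd.Series(trend + rng.standard_normal(n))

for window in (12, 120):  # ~1 year vs ~10 years
    corr = money.rolling(window).corr(credit)
    print(f"window = {window:4d} months: mean rolling corr = {corr.mean():.2f}")
```

The long window picks up the shared slow component while the short window is dominated by noise, mimicking the "hidden in the short run" pattern the abstract describes.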


Mathematics ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 114
Author(s):  
Valerii Maltsev ◽  
Michael Pokojovy

The Heath-Jarrow-Morton (HJM) model is a powerful instrument for describing the stochastic evolution of interest rate curves under the no-arbitrage assumption. An important feature of the HJM approach is that the drifts can be expressed as functions of the respective volatilities and the underlying correlation structure. Aimed at researchers and practitioners, this article presents a self-contained but concise review of the abstract HJM framework, founded upon the theory of interest and stochastic partial differential equations in infinite dimensions. To illustrate the predictive power of this theory, we apply it to modeling and forecasting the US Treasury daily yield curve rates. We fit a non-parametric model to real data available from the US Department of the Treasury and illustrate its statistical performance in forecasting future yield curve rates.
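A minimal one-factor HJM sketch under the risk-neutral measure: the forward-curve drift is pinned down by the volatility via the no-arbitrage condition alpha(t,T) = sigma(t,T) * ∫_t^T sigma(t,u) du. The flat initial curve and exponentially damped volatility are assumptions for illustration, not the paper's fitted non-parametric model.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1 / 252              # daily step
n_steps = 252             # one year of trading days
grid = np.arange(0.25, 10.25, 0.25)   # time-to-maturity grid (years)
f = np.full_like(grid, 0.03)          # flat 3% initial forward curve

def sigma(tau):
    """Assumed exponentially damped volatility of the forward rate."""
    return 0.01 * np.exp(-0.5 * tau)

for _ in range(n_steps):
    vol = sigma(grid)
    # HJM drift: sigma times the cumulative integral of sigma over maturity
    drift = vol * np.cumsum(vol) * 0.25
    # one-factor model: a single Brownian shock moves the whole curve;
    # the grid is held in time-to-maturity, Musiela advection term omitted
    f = f + drift * dt + vol * np.sqrt(dt) * rng.standard_normal()

print("simulated 1y-ahead forward curve (first 4 points):", f[:4].round(4))
```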


Author(s):  
Lingtao Kong

The exponential distribution has been widely used in engineering and the social and biological sciences. In this paper, we propose a new goodness-of-fit test for fuzzy exponentiality using the α-pessimistic value. The test statistic is built on Kullback-Leibler information. Using the Monte Carlo method, we obtain empirical critical points of the test statistic at four different significance levels. To evaluate the performance of the proposed test, we compare it with four commonly used tests through simulations. The experiments show that the proposed test has higher power than the other tests in most cases. In particular, for the uniform and linear failure rate alternatives, our method performs best. A real data example illustrates the application of our test.
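A crisp-data sketch of a Kullback-Leibler goodness-of-fit test for exponentiality: Vasicek's spacing-based entropy estimator gives KL(empirical, fitted exponential) ≈ -H_mn + log(mean) + 1, and the critical point is obtained by Monte Carlo under the exponential null. The fuzzy α-pessimistic machinery of the paper is omitted; the choices of m, n, and the replication count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def kl_exp_statistic(x, m):
    """Vasicek entropy estimator plugged into the KL distance to Exp(mean)."""
    x = np.sort(x)
    n = len(x)
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    h = np.mean(np.log(n / (2 * m) * (upper - lower)))
    return -h + np.log(x.mean()) + 1

n, m, reps = 50, 5, 5000
null_stats = [kl_exp_statistic(rng.exponential(1.0, n), m) for _ in range(reps)]
crit = np.quantile(null_stats, 0.95)       # empirical 5% critical point
sample = rng.uniform(0, 2, n)              # a uniform alternative with the same mean
print(f"critical value = {crit:.3f}, statistic = {kl_exp_statistic(sample, m):.3f}")
```

Rejecting when the statistic exceeds the simulated critical point mirrors how the paper's empirical critical points are used.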


2012 ◽  
Vol 53 ◽  
Author(s):  
Gintautas Jakimauskas ◽  
Leonidas Sakalauskas

The efficiency of adding an auxiliary regression variable to the logit model when estimating small probabilities in large populations is considered. We consider two models for the distribution of the unknown probabilities: the probabilities follow a gamma distribution (model (A)), or the logits of the probabilities follow a Gaussian distribution (model (B)). In a modification of model (B), an additional regression variable is used for the Gaussian mean (model (BR)). We selected real data from the Database of Indicators of Statistics Lithuania — working-age persons recognized as disabled for the first time, by administrative territory, year 2010 (number of populations K = 60). Additionally, we used average annual population data by administrative territory. The auxiliary regression variable was based on the number of hospital discharges by administrative territory, year 2010. We obtained initial parameter estimates using simple iterative procedures for models (A), (B), and (BR). At the second stage, we performed various tests using Monte Carlo simulation under models (A), (B), and (BR). The main goal was to select an appropriate model and to propose recommendations for using the gamma and logit (with or without an auxiliary regression variable) models for Bayesian estimation. The results show that the Monte Carlo simulation method enables us to determine which estimation model is preferable.
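A minimal sketch of model (A) from this setup: small probabilities p_k drawn from a gamma distribution, counts y_k ~ Poisson(n_k p_k), and empirical Bayes posterior means (a + y_k) / (b + n_k) compared with raw rates y_k / n_k in a Monte Carlo. Hyperparameters are fitted by moments; models (B) and (BR) would replace the gamma prior with a (regression-shifted) Gaussian on the logits. All constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
K, reps = 60, 500                      # 60 populations, as in the data set
n_k = rng.integers(2_000, 60_000, K)   # population sizes

mse_raw = mse_eb = 0.0
for _ in range(reps):
    p_k = rng.gamma(shape=4.0, scale=0.0005, size=K)   # true small probabilities
    y_k = rng.poisson(n_k * p_k)
    rates = y_k / n_k
    mean, var = rates.mean(), rates.var()
    # moment estimates of the gamma prior (rate b, shape a),
    # subtracting the Poisson part of the rate variance
    b = mean / max(var - mean * np.mean(1.0 / n_k), 1e-12)
    a = mean * b
    eb = (a + y_k) / (b + n_k)                         # gamma-Poisson posterior mean
    mse_raw += np.mean((rates - p_k) ** 2)
    mse_eb += np.mean((eb - p_k) ** 2)

print(f"MSE raw = {mse_raw/reps:.3e}, MSE empirical Bayes = {mse_eb/reps:.3e}")
```

The shrinkage estimator typically beats the raw rates for small probabilities, which is the motivation for comparing such Bayesian models in the first place.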


2021 ◽  
Vol 13 (19) ◽  
pp. 10687
Author(s):  
Tsung-Yin Ou ◽  
Guan-Yu Lin ◽  
Chin-Ying Liu ◽  
Wen-Lung Tsai

The emergence of digital technology has compelled the retail industry to develop innovative and sustainable business models to predict and respond to consumer behavior. However, most enterprises are paralyzed by doubt, lacking frameworks and methods for moving forward. This study establishes a five-step decision-making framework for digital transformation in the retail industry and verifies it using real data from convenience stores in Taiwan. Data from residential-type and cultural-educational-type convenience stores, which together account for 75% of all stores, underwent a one-year simulation analysis under three promotion decision models: the shelf-life extended scrap model (SES), the fixed remaining duration model (FRD), and the dynamic promotion decision model (DPD). The results indicate that the DPD model reduced scrap in residential-type stores by 12.88% and increased profit by 15.43%; in cultural-educational-type stores, it reduced scrap by 10.78% and increased profit by 7.63%. Implementing the DPD model in convenience stores can bring operators additional revenue while addressing the problem of food waste, turning full use of resources and sustainable operation into a concrete, feasible management plan.
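The contrast between the three promotion policies is easiest to see in a toy simulation of a single perishable item. Everything below — prices, costs, the demand model, the markdown uplift, and the shelf life — is a hypothetical operationalization for illustration, not the paper's calibrated models.

```python
import numpy as np

rng = np.random.default_rng(5)
batches, shelf_life, stock0 = 365, 3, 30       # one delivered batch per cycle
price, cost, discount = 1.0, 0.6, 0.7          # discount = fraction of full price
base_demand, uplift = 8, 1.8                   # markdown multiplies demand

def run(policy):
    profit = scrap = 0
    for _ in range(batches):
        stock, age = stock0, 0
        while age < shelf_life and stock > 0:
            markdown = (
                (policy == "FRD" and shelf_life - age <= 1) or        # fixed trigger
                (policy == "DPD" and stock > base_demand * (shelf_life - age))  # dynamic
            )
            p = price * (discount if markdown else 1.0)
            d = rng.poisson(base_demand * (uplift if markdown else 1.0))
            sold = min(stock, d)
            profit += sold * p
            stock -= sold
            age += 1
        scrap += stock                         # unsold units scrapped at expiry (SES baseline)
        profit -= stock0 * cost                # purchase cost of the batch
    return profit, scrap

for policy in ("SES", "FRD", "DPD"):
    profit, scrap = run(policy)
    print(f"{policy}: profit = {profit:8.0f}, scrap = {scrap} units")
```

The dynamic trigger marks items down only when projected leftovers exceed the remaining expected demand, which is the intuition behind the DPD model's lower scrap figures.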


2021 ◽  
Vol 8 ◽  
Author(s):  
Tianshu Gu ◽  
Lishi Wang ◽  
Ning Xie ◽  
Xia Meng ◽  
Zhijun Li ◽  
...  

The complexity of COVID-19 and variations in control measures and containment efforts across countries have made prediction and modeling of the COVID-19 pandemic difficult. We attempted to predict the scale of the latter half of the pandemic based on real data, using the ratio between the early and latter halves from countries where the pandemic was largely over. We collected daily pandemic data from China, South Korea, and Switzerland and computed the ratio of pandemic days before and after the disease apex day of COVID-19. From these ratios, we built linear regression models relating the pandemic's scale before the apex day to its scale after it. We then tested our models using first-wave data from 14 countries in Europe and the US, and subsequently using data from these countries for the entire pandemic up to March 30, 2021. The results indicate that the actual numbers of cases in these countries during the first wave mostly fall within the ranges predicted by the linear regressions, except for Spain and Russia; similarly, the actual deaths in these countries mostly fall within the predicted ranges. Using the accumulated data up to the apex day and the total accumulated data up to March 30, 2021, the case numbers in these countries fall within the predicted ranges, except for Brazil, and the actual numbers of deaths in all the countries are at or below the predicted values. In conclusion, a linear regression model built with real data from countries or regions where the pandemic occurred early can predict the pandemic scale of countries where it occurs later. Such predictions, with a high degree of accuracy, provide valuable information for governments and the public.
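A minimal sketch of the idea: regress a pandemic's full scale on its scale up to the apex day, using completed epidemics as training data, then predict for countries still mid-pandemic. The numbers below are synthetic placeholders, not the study's actual country data.

```python
import numpy as np

rng = np.random.default_rng(6)
# training data: cumulative cases at apex vs final cumulative cases (synthetic)
pre_apex = np.array([40., 55., 80., 120., 150., 210.]) * 1e3
ratio = rng.normal(2.1, 0.15, pre_apex.size)   # latter/early ratio, roughly 2
total = pre_apex * ratio

b, a = np.polyfit(pre_apex, total, 1)          # total ≈ a + b * pre_apex
resid = total - (a + b * pre_apex)
se = resid.std(ddof=2)                         # residual scale, 2 fitted parameters

new_pre = 95e3                                 # a country just past its apex
pred = a + b * new_pre
print(f"predicted total: {pred:,.0f}  (rough 95% band ± {1.96 * se:,.0f})")
```

The prediction band is what the abstract's "predicted ranges" correspond to in this simplified form.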


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Hisham M. Almongy ◽  
Ehab M. Almetwally ◽  
Randa Alharbi ◽  
Dalia Alnagar ◽  
E. H. Hafez ◽  
...  

This paper is concerned with estimating the parameters of the Weibull generalized exponential distribution (WGED) based on adaptive Type-II progressive (ATIIP) censored samples. Maximum likelihood estimation (MLE), maximum product spacing (MPS), and Bayesian estimation based on Markov chain Monte Carlo (MCMC) methods are considered in order to find the best estimation method. A Monte Carlo simulation is used to compare the three estimation methods on ATIIP-censored samples, and bootstrap confidence intervals are also constructed. Real data on single carbon fibers and electrical data are analyzed to show how the schemes work in practice.
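A compact illustration of the two frequentist criteria on a plain two-parameter Weibull with a complete sample; the paper's WGED model under adaptive Type-II progressive censoring adds more bookkeeping but optimizes the same kinds of objectives. Starting values and the sample size are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(7)
x = np.sort(weibull_min.rvs(c=1.5, scale=2.0, size=100, random_state=rng))

def neg_loglik(theta):
    """Negative log-likelihood (MLE objective)."""
    c, scale = np.exp(theta)                  # log-parametrized for positivity
    return -weibull_min.logpdf(x, c=c, scale=scale).sum()

def neg_logspacings(theta):
    """Negative mean log product of spacings (MPS objective)."""
    c, scale = np.exp(theta)
    cdf = np.concatenate(([0.0], weibull_min.cdf(x, c=c, scale=scale), [1.0]))
    return -np.log(np.maximum(np.diff(cdf), 1e-300)).mean()

start = np.log([1.0, 1.0])
mle = np.exp(minimize(neg_loglik, start, method="Nelder-Mead").x)
mps = np.exp(minimize(neg_logspacings, start, method="Nelder-Mead").x)
print(f"MLE (shape, scale) = {mle.round(3)}, MPS (shape, scale) = {mps.round(3)}")
```

MPS maximizes the geometric mean of the CDF spacings rather than the density at the observations, which often behaves better for heavy-tailed or bounded-support models.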


2020 ◽  
Vol 9 (1) ◽  
pp. 47-60
Author(s):  
Samir K. Ashour ◽  
Ahmed A. El-Sheikh ◽  
Ahmed Elshahhat

In this paper, Bayesian and non-Bayesian estimation of a two-parameter Weibull lifetime model in the presence of progressive first-failure censored data with binomial random removals is considered. Based on the s-normal approximation to the asymptotic distribution of the maximum likelihood estimators, two-sided approximate confidence intervals for the unknown parameters are constructed. Using gamma conjugate priors, several Bayes estimates and associated credible intervals are obtained under the squared error loss function. The proposed estimators cannot be expressed in closed form and are evaluated numerically by a suitable iterative procedure. A Bayesian approach is developed using Markov chain Monte Carlo techniques to generate samples from the posterior distributions and, in turn, to compute the Bayes estimates and associated credible intervals. To assess the performance of the proposed estimators, a Monte Carlo simulation study is conducted. Finally, a real data set is discussed for illustration.
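A sketch of how a progressive first-failure censored sample with binomial removals can be generated for such a simulation study: n groups of k Weibull units each, only the group's first failure is observed, and after the i-th failure a binomially distributed number of surviving groups is withdrawn. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n, k, m, p = 30, 5, 10, 0.2     # groups, group size, observed failures, removal prob
shape, scale = 1.5, 2.0

# first-failure time of a k-unit group is the minimum of k iid Weibull lifetimes
group_ff = scale * rng.weibull(shape, size=(n, k)).min(axis=1)
alive = sorted(group_ff)

sample, removals = [], []
for i in range(m - 1):
    sample.append(alive.pop(0))                   # i-th observed first failure
    r = min(rng.binomial(len(alive), p),
            len(alive) - (m - 1 - i))             # keep enough groups to reach m failures
    removals.append(r)
    for _ in range(r):
        alive.pop(rng.integers(len(alive)))       # withdraw a random surviving group
sample.append(alive.pop(0))                       # m-th failure ends the experiment
removals.append(len(alive))                       # all remaining groups censored

print("censored first-failure times:", np.round(sample, 3))
print("removal pattern R_i:", removals)
```

The likelihood and MCMC machinery of the paper would then be built on the pairs of observed times and removal counts this procedure produces.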


2013 ◽  
Vol 753-755 ◽  
pp. 1984-1987
Author(s):  
Guo Dong Zhang ◽  
Jing Xin An ◽  
Zhong Liu

Whether the seeker can capture the intended ship target within a target group is one of the important factors affecting anti-ship missile effectiveness. By establishing target capture probability models under various capture strategies and presenting the Monte Carlo simulation flow, this paper obtains the capture probability under each strategy by statistical methods. The results show that, owing to a lack of comprehensive consideration of the battlefield situation, the existing target capture strategies struggle to accomplish the ship-target capture task efficiently; the results also indicate directions for improving the capture strategies.
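The Monte Carlo flow can be illustrated with a toy model: a target group scattered around an aim point, a seeker footprint of fixed radius, and two simple capture strategies — lock the nearest detected target versus lock the strongest echo. The geometry, error model, and strategies are hypothetical stand-ins, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(9)
reps, footprint = 10_000, 3.0                         # footprint radius in km
ships = np.array([[0., 0.], [2., 1.], [-1.5, 2.5]])   # intended target listed first
rcs = np.array([1.0, 3.0, 2.0])                       # relative radar cross sections

hits = {"nearest": 0, "strongest": 0}
for _ in range(reps):
    aim_error = rng.normal(0, 1.5, 2)                 # navigation/aim error (km)
    dist = np.linalg.norm(ships - aim_error, axis=1)
    visible = dist < footprint                        # ships inside the footprint
    if not visible.any():
        continue                                      # no capture this run
    idx = np.where(visible)[0]
    if idx[np.argmin(dist[idx])] == 0:                # nearest-target strategy
        hits["nearest"] += 1
    if idx[np.argmax(rcs[idx])] == 0:                 # strongest-echo strategy
        hits["strongest"] += 1

for strategy, h in hits.items():
    print(f"{strategy:9s}: P(capture intended target) = {h / reps:.3f}")
```

Comparing the two estimated probabilities shows how a strategy that ignores the battlefield picture (here, locking the strongest echo) can systematically miss the intended target.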

