Concluding Comments

2016 ◽  
Vol 24 (1) ◽  
pp. 83-86 ◽  
Author(s):  
Luke Keele ◽  
Suzanna Linn ◽  
Clayton McLaughlin Webb

This issue began as an exchange between Grant and Lebo (2016) and ourselves (Keele, Linn, and Webb 2016) about the utility of the general error correction model (GECM) in political science. The exchange evolved into a debate about Grant and Lebo's proposed alternative to the GECM and the utility of fractional integration methods (FIM). Esarey (2016) and Helgason (2016) weigh in on this part of the debate. Freeman (2016) offers his views on the exchange as well. In the end, the issue leaves readers with a lot to consider. In his comment, Freeman (2016) argues that the exchange has produced little significant progress because of the contributors' failures to consider a wide array of topics not directly related to the GECM or FIM. We are less pessimistic. In what follows, we distill what we believe are the most important elements of the exchange: the importance of balance, the costs and benefits of FIM, and the vagaries of pre-testing.

2016 ◽  
Vol 24 (1) ◽  
pp. 59-68 ◽  
Author(s):  
Agnar Freyr Helgason

Grant and Lebo (2016) and Keele, Linn, and Webb (2016) provide diverging recommendations to analysts working with short time series that are potentially fractionally integrated. While Grant and Lebo are quite positive about the prospects of fractionally differencing such data, Keele, Linn, and Webb argue that estimates of fractional integration will be highly uncertain in short time series. In this study, I simulate fractionally integrated data and compare estimates from the general error correction model (GECM), which disregards fractional integration, to models using fractional integration methods over thirty-two simulation conditions. I find that estimates of short-run effects are similar across the two models, but that models using fractionally differenced data produce superior predictions of long-run effects at all sample sizes when no short-run dynamics are included. When short-run dynamics are included, the GECM outperforms the alternative model, but only in time series with fewer than 250 observations.
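
For readers who want to reproduce the flavor of such experiments, here is a minimal sketch (not Helgason's replication code) of simulating a fractionally integrated series y_t = (1 - L)^(-d) e_t via a truncated infinite moving-average expansion; the burn-in length and d = 0.4 are assumptions:

```python
# Simulate an I(d) series by applying the MA(infinity) weights of
# (1 - L)^(-d) to white noise; truncation error is absorbed by a burn-in.
import numpy as np

def simulate_fi(n, d, burn=500, seed=0):
    """Return n observations of a fractionally integrated series, -0.5 < d < 0.5."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n + burn)
    # MA weights follow the recursion psi_k = psi_{k-1} * (k - 1 + d) / k
    psi = np.empty(n + burn)
    psi[0] = 1.0
    for k in range(1, n + burn):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    # y_t = sum_{k=0}^{t} psi_k * eps_{t-k}
    y = np.convolve(eps, psi)[: n + burn]
    return y[burn:]  # drop the burn-in so start-up truncation matters less

y = simulate_fi(250, d=0.4)  # one short series near the 250-observation threshold
```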


2016 ◽  
Vol 24 (1) ◽  
pp. 3-30 ◽  
Author(s):  
Taylor Grant ◽  
Matthew J. Lebo

While traditionally considered for non-stationary and cointegrated data, De Boef and Keele suggest applying a General Error Correction Model (GECM) to stationary data with or without cointegration. The GECM has since become extremely popular in political science, but practitioners have confused essential points. For one, the model is treated as perfectly flexible when, in fact, the opposite is true. Time series of various orders of integration (stationary, non-stationary, explosive, near- and fractionally integrated) should not be analyzed together, but researchers consistently make this mistake. That is, without equation balance the model is misspecified, and hypothesis tests and long-run multipliers are unreliable. Another problem is that the error correction term's sampling distribution moves dramatically depending upon the order of integration, sample size, number of covariates, and the boundedness of Y_t. This means that practitioners are likely to overstate evidence of error correction, especially when using a traditional t-test. We evaluate common GECM practices with six types of data, 746 simulations, and five paper replications.
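
For reference, the bivariate GECM at issue in the symposium is commonly written as below; the notation is illustrative (cf. De Boef and Keele 2008) rather than quoted from the paper. The "traditional t-test" the authors caution against is the test on the error correction coefficient:

```latex
% Bivariate GECM; the long-run multiplier (LRM) combines two estimates.
\Delta Y_t = \alpha_0 + \alpha_1^{*} Y_{t-1}
           + \beta_0 \Delta X_t + \beta_1 X_{t-1} + \varepsilon_t,
\qquad \mathrm{LRM} = -\,\frac{\beta_1}{\alpha_1^{*}}
```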


2016 ◽  
Vol 24 (1) ◽  
pp. 69-82 ◽  
Author(s):  
Matthew J. Lebo ◽  
Taylor Grant

The papers in this symposium agree on several points. In this article, we sort through some remaining areas of disagreement and discuss some of the practical issues of time series modeling that we think deserve further explanation. In particular, we make five points: (1) clarifying our stance on the general error correction model in light of the comments in this issue; (2) clarifying equation balance and discussing how bounded series affect our thinking about stationarity, balance, and modeling choices; (3) answering lingering questions about our Monte Carlo simulations and exploring potential problems in the inferences drawn from long-run multipliers; (4) reviewing and defending fractional integration methods in light of the questions raised in this symposium and elsewhere; and (5) providing a short practical guide to estimating a multivariate autoregressive fractionally integrated moving average (ARFIMA) model with or without an error correction term.
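
Point (5) concerns ARFIMA estimation in practice. As a rough illustration only, not the authors' guide, a common first step is estimating the integration order d by the Geweke-Porter-Hudak (GPH) log-periodogram regression; the bandwidth m = n^0.5 below is a conventional assumption:

```python
# GPH estimator: regress the log-periodogram on log(4 sin^2(lambda_j / 2))
# at the first m Fourier frequencies; d is minus the OLS slope.
import numpy as np

def gph_d(y, power=0.5):
    y = np.asarray(y, dtype=float)
    n = len(y)
    m = int(n ** power)                     # bandwidth: frequencies used
    j = np.arange(1, m + 1)
    lam = 2.0 * np.pi * j / n               # Fourier frequencies
    fft = np.fft.fft(y - y.mean())
    I = np.abs(fft[1 : m + 1]) ** 2 / (2.0 * np.pi * n)  # periodogram ordinates
    x = np.log(4.0 * np.sin(lam / 2.0) ** 2)
    slope = np.polyfit(x, np.log(I), 1)[0]  # OLS slope of the regression
    return -slope
```

With an estimate of d in hand, one fractionally differences the series before modeling the remaining short-run (ARMA) dynamics, with or without an error correction term.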


2016 ◽  
Vol 3 (2) ◽  
pp. 205316801664334 ◽  
Author(s):  
Peter K. Enns ◽  
Nathan J. Kelly ◽  
Takaaki Masaki ◽  
Patrick C. Wohlfarth

2017 ◽  
Vol 4 (4) ◽  
pp. 205316801773223 ◽  
Author(s):  
Peter K. Enns ◽  
Nathan J. Kelly ◽  
Takaaki Masaki ◽  
Patrick C. Wohlfarth

In a recent Research and Politics article, we showed that for many types of time series data, concerns about spurious relationships can be overcome by following standard procedures associated with cointegration tests and the general error correction model (GECM). Matthew Lebo and Patrick Kraft (LK) incorrectly argue that our recommended approach will lead researchers to identify false (i.e., spurious) relationships. In this article, we show how LK’s response is incorrect or misleading in multiple ways. Most importantly, when we correct their simulations, their results reinforce our previous findings, highlighting the utility of the GECM when estimated and interpreted correctly.
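
The two-step practice under debate, a cointegration pre-test followed by a GECM estimated by OLS, can be sketched generically. This is a minimal illustration using statsmodels, not the authors' own procedure; the variable layout is an assumption:

```python
# Engle-Granger cointegration pre-test, then Delta y_t regressed on
# y_{t-1}, Delta x_t, and x_{t-1} (the bivariate GECM).
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

def gecm(y, x):
    """y, x: 1-d numpy arrays of equal length."""
    _, pvalue, _ = coint(y, x)              # H0: no cointegration
    dy, dx = np.diff(y), np.diff(x)
    X = sm.add_constant(np.column_stack([y[:-1], dx, x[:-1]]))
    fit = sm.OLS(dy, X).fit()
    alpha_star = fit.params[1]              # error correction coefficient
    lrm = -fit.params[3] / alpha_star       # long-run multiplier
    return pvalue, fit, lrm
```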


2016 ◽  
Vol 24 (1) ◽  
pp. 42-49 ◽  
Author(s):  
Justin Esarey

Two contributions in this issue, by Grant and Lebo and by Keele, Linn, and Webb, recommend using an ARFIMA model to diagnose the presence of and estimate the degree of fractional integration, then either (i) fractionally differencing the data before analysis or, (ii) for cointegrated variables, estimating a fractional error correction model. But Keele, Linn, and Webb also present evidence that ARFIMA models yield misleading indicators of the presence and degree of fractional integration in a series with fewer than 1000 observations. In a simulation study, I find evidence that the simple autoregressive distributed lag (ADL) model or equivalent error correction model (ECM) can, without first testing or correcting for fractional integration, provide a useful estimate of the immediate and long-run effects of weakly exogenous variables in fractionally integrated (but stationary) data.
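
The ADL/ECM equivalence Esarey relies on can be made explicit: subtracting y_{t-1} from the ADL(1,1) and rearranging yields the ECM form, and the long-run multiplier is the same object either way (notation illustrative):

```latex
y_t = \alpha_0 + \rho\, y_{t-1} + \beta_0 x_t + \beta_1 x_{t-1} + \varepsilon_t
\;\Longleftrightarrow\;
\Delta y_t = \alpha_0 + (\rho - 1)\, y_{t-1} + \beta_0 \Delta x_t
           + (\beta_0 + \beta_1)\, x_{t-1} + \varepsilon_t,
\qquad \mathrm{LRM} = \frac{\beta_0 + \beta_1}{1 - \rho}
```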


2016 ◽  
Vol 24 (1) ◽  
pp. 1-2 ◽  
Author(s):  
Janet Box-Steffensmeier ◽  
Agnar Freyr Helgason

In recent years, political science has seen a boom in the use of sophisticated methodological tools for time series analysis. One such tool is the general error correction model (GECM), originally introduced to political scientists in the pages of this journal over 20 years ago (Durr 1992; Ostrom and Smith 1992) and re-introduced by De Boef and Keele (2008), who advocate its use for a wider set of time series data than previously considered appropriate. Their article has proven quite influential, with numerous papers justifying their methodological choices with reference to De Boef and Keele's contribution.

Grant and Lebo (2016) take issue with the increasing use of the GECM in political science and argue that the methodology is widely misused and abused by practitioners. Given the recent surge of research conducted using error correction methods, there is every reason to take their suggestions seriously and provide a fuller discussion of the points they raise in their paper. The present symposium serves such a role. It consists of Grant and Lebo's critique, a detailed response by Keele, Linn, and Webb (2016b), and shorter comments by Esarey (2016), Freeman (2016), and Helgason (2016). Finally, Lebo and Grant (2016) and Keele, Linn, and Webb (2016a) reflect on the contributions made in the symposium, as well as discuss outstanding issues.


2017 ◽  
Vol 4 (2) ◽  
pp. 205316801771305 ◽  
Author(s):  
Matthew J. Lebo ◽  
Patrick W. Kraft

Enns et al. respond to recent work by Grant and Lebo and Lebo and Grant that raises a number of concerns with political scientists’ use of the general error correction model (GECM). While agreeing with the particular rules one should apply when using unit root data in the GECM, Enns et al. still advocate procedures that will lead researchers astray. Most especially, they fail to recognize the difficulty in interpreting the GECM’s “error correction coefficient.” Without being certain of the univariate properties of one’s data it is extremely difficult (or perhaps impossible) to know whether or not cointegration exists and error correction is occurring. We demonstrate the crucial differences for the GECM between having evidence of a unit root (from Dickey–Fuller tests) versus actually having a unit root. Looking at simulations and two applied examples we show how overblown findings of error correction await the uncareful researcher.
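
The distinction between evidence of a unit root and actually having one is easy to see in simulation. Below is a minimal sketch; the AR(1) coefficient of 0.95 and the sample size are assumptions, not the authors' design:

```python
# A stationary but highly persistent AR(1) often fails to reject the
# augmented Dickey-Fuller unit-root null at T = 100, illustrating why
# Dickey-Fuller "evidence" of a unit root is not proof of one.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 100
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.95 * y[t - 1] + rng.standard_normal()   # rho = 0.95 < 1

stat, pvalue, *_ = adfuller(y)
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```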


Author(s):  
Suryo Refli Ranto

This study aims to empirically test the short-run and long-run effects of inflation, the money supply (JUB), the exchange rate, the Bank Indonesia interest rate, the world oil price (WTI), and net exports on the Jakarta Composite Index (IHSG), using an Error Correction Model (ECM) estimated in EViews 6.0. Over the 2000-2012 observation period, the macroeconomic variables were related to movements of the IHSG on the Indonesia Stock Exchange (BEI). The ECM results show that inflation, the exchange rate, and the world oil price have significant short-run effects on the IHSG, while in the long run the variables that significantly affect the IHSG are the consumer price index (IHK), the exchange rate, net exports, and the world oil price.
Keywords: IHSG, IHK, JUB, exchange rate, Bank Indonesia interest rate (rSBI), world oil price (WTI), net exports, Error Correction Model (ECM)


Author(s):  
Onome Christopher Edo ◽  
Anthony Okafor ◽  
Akhigbodemhe Emmanuel Justice

Objective – The purpose of this study is to investigate the effect of corporate taxes on the flow of Foreign Direct Investment (FDI) in Nigeria between 1983 and 2017.
Methodology/Technique – This study adopts an ex-post facto research design. Secondary data were sourced from the World Bank Development Indicators, the Central Bank of Nigeria database, and the Federal Inland Revenue database. The data were analyzed using the Error Correction Model (ECM).
Findings – The coefficient of determination (R2) shows that approximately 77% of systematic changes in FDI are attributed to the combined effect of all of the explanatory variables used in this study. Specifically, the study concludes that Company Income Tax, Value Added Tax, and Custom and Excise Duties have a significant but negative relationship with FDI, while Tertiary Education Tax has a positive association with FDI. Further, Exchange Rate has a negative but significant relationship with FDI, Inflation has an insignificant but positive association with FDI, and GDP Growth Rate and Trade Openness demonstrate a positive and significant association with FDI.
Novelty – The findings of this study are distinguishable from those of previous studies, uncovering new evidence that higher education tax rates influence FDI and emerging evidence on the effect of non-tax variables on FDI inflows.
Type of Paper: Empirical.
JEL Classification: E22, F21, H2, P33.
Keywords: Corporate Taxes; Foreign Direct Investment; Error Correction Model; Nigeria; Non-Tax Variables.
Reference to this paper should be made as follows: Edo, O.C.; Okafor, A.; Justice, A.E. 2020. Corporate Taxes and Foreign Direct Investment: An Impact Analysis, Acc. Fin. Review 5 (2): 28-43. https://doi.org/10.35609/afr.2020.5.2(1)

