data revisions
Recently Published Documents


TOTAL DOCUMENTS

96
(FIVE YEARS 12)

H-INDEX

15
(FIVE YEARS 1)

Econometrics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 2
Author(s):  
Jennifer L. Castle ◽  
Jurgen A. Doornik ◽  
David F. Hendry

Through its emissions of greenhouse gases, economic activity is a source of climate change, which affects pandemics that can in turn harm economies. Across the three highly interacting disciplines in our title, time series are observed at vastly different frequencies: very low frequency at 1000-year intervals for paleoclimate, through annual, monthly and intra-daily for current climate; weekly and daily for pandemic data; annual, quarterly and monthly for economic data; and seconds or nano-seconds in finance. Nevertheless, there are important commonalities across economic, climate and pandemic time series. First, time series in all three disciplines are subject to non-stationarities from evolving stochastic trends and sudden distributional shifts, as well as data revisions and changes to data measurement systems. Next, all three have imperfect and incomplete knowledge of their data generating processes, partly from changing human behaviour, so must search for reasonable empirical modelling approximations. Finally, all three need forecasts of likely future outcomes to plan and adapt as events unfold, albeit over very different horizons. We consider how these features shape the formulation and selection of forecasting models to tackle their common data features yet distinct problems.
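The sudden distributional shifts mentioned in this abstract are why forecast-device choice matters. A minimal numpy sketch (simulated data, not from the paper) illustrating why a random-walk forecast is more robust to a location shift than a full-sample-mean forecast:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated series: mean 0 for 100 periods, then a location shift to mean 5.
y = np.concatenate([rng.normal(0.0, 1.0, 100),
                    rng.normal(5.0, 1.0, 100)])

# One-step-ahead forecasts from period 101 onward.
# "Mean" model: full-sample mean re-estimated each period (adapts slowly).
# "Random-walk" model: last observation (robust to location shifts).
errs_mean, errs_rw = [], []
for t in range(101, len(y)):
    errs_mean.append(y[t] - y[:t].mean())
    errs_rw.append(y[t] - y[t - 1])

rmse_mean = np.sqrt(np.mean(np.square(errs_mean)))
rmse_rw = np.sqrt(np.mean(np.square(errs_rw)))
print(rmse_mean, rmse_rw)  # the random-walk forecast has the smaller RMSE
```

After the break, the in-sample mean is dragged toward the old regime for many periods, while the last observation tracks the new level almost immediately.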


2021 ◽  
Vol 66 (2) ◽  
pp. 7-24
Author(s):  
Paulina Ziembińska

The aim of the study is a quantitative analysis of data revisions based on a new real-time macroeconomic dataset for Poland, constructed from the Statistical Bulletin (Biuletyn Statystyczny) published by Statistics Poland and covering the period from as early as 1995 until 2017. The Polish data confirm a number of hypotheses concerning the impact of data revisions on the modelling process. Procedures assessing the properties of time series can yield widely discrepant results, depending on the extent to which the underlying data have been revised. A comparison of ARIMA models fitted to series of initial and final data shows that the fitted models are similar for the majority of variables. Where the form of the model is identical for both series, the coefficients retain their scale and sign. Most differences between coefficients result from a different structure of the fitted model, which alters the autoregressive structure and can have a considerable impact on ex ante inference. A forecasting experiment confirmed these observations. For a large number of variables, the total impact of revisions on the forecasting process exceeds 10%. Extreme cases, where the impact exceeds 100%, or where revisions change the sign of the forecast, are also relatively frequent. Taking these results into account could significantly improve the quality of forecasters' predictions. The forecast horizon has a minor impact on these conclusions. The article continues the author's work from 2017.
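The comparison of models fitted to initial versus final data can be illustrated with a minimal numpy sketch (simulated data, not the paper's dataset): if first-release figures equal final figures plus measurement noise that later revisions remove, an autoregressive coefficient estimated on initial data is attenuated relative to one estimated on final data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical AR(1) process standing in for the final (fully revised) data.
n, phi = 200, 0.7
final = np.zeros(n)
for t in range(1, n):
    final[t] = phi * final[t - 1] + rng.normal()

# First-release data: final values contaminated by measurement noise
# that subsequent revision rounds remove.
initial = final + rng.normal(0.0, 0.8, n)

def ar1_coef(y):
    """OLS estimate of the AR(1) coefficient (no intercept)."""
    x, z = y[:-1], y[1:]
    return float(np.dot(x, z) / np.dot(x, x))

phi_final = ar1_coef(final)
phi_initial = ar1_coef(initial)
print(phi_final, phi_initial)  # measurement noise attenuates the estimate
```

This is the classical errors-in-variables effect: noise in the lagged regressor biases the autoregressive coefficient toward zero, which changes the implied forecast dynamics even when the model form is identical.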


2021 ◽  
pp. 100620
Author(s):  
Daniel Borup ◽  
Erik Christian Montes Schütte
Keyword(s):  

Econometrics ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 41
Author(s):  
Eric Hillebrand ◽  
Søren Johansen ◽  
Torben Schmith

We study the stability of estimated linear statistical relations of global mean temperature and global mean sea level with regard to data revisions. Using four different model specifications proposed in the literature, we compare coefficient estimates and long-term sea level projections using two different vintages of each of the annual time series, covering the periods 1880–2001 and 1880–2013. We find that temperature and sea level updates and revisions have a substantial influence both on the magnitude of the estimated coefficients of influence (differences of up to 50%) and therefore on long-term projections of sea level rise following the RCP4.5 and RCP6 scenarios (differences of up to 40 cm by the year 2100). This shows that in order to replicate earlier results that informed the scientific discussion and motivated policy recommendations, it is crucial to have access to and to work with the data vintages used at the time.
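The vintage sensitivity the authors report can be sketched in a few lines of numpy (hypothetical numbers, not the paper's data): re-estimating a linear sea-level-on-temperature relation on a revised vintage shifts both the coefficient and the long-run projection built from it:

```python
import numpy as np

# Hypothetical annual global-mean temperature anomalies (deg C) and
# sea-level anomalies (mm); vintage B revises the same years upward.
years = np.arange(1950, 2001)
temp = 0.01 * (years - 1950) + 0.1 * np.sin(years / 3.0)
sea_a = 2.0 * temp + 1.0          # vintage A sea-level series (illustrative)
sea_b = 2.4 * temp + 1.0          # vintage B after revisions

def fit_and_project(temp, sea, temp_2100):
    """OLS slope/intercept of sea level on temperature, projected forward."""
    slope, intercept = np.polyfit(temp, sea, 1)
    return slope, intercept + slope * temp_2100

slope_a, proj_a = fit_and_project(temp, sea_a, temp_2100=2.0)
slope_b, proj_b = fit_and_project(temp, sea_b, temp_2100=2.0)
print(slope_a, slope_b, proj_b - proj_a)  # revisions move the projection
```

Because the projection extrapolates the estimated slope far outside the sample, even modest coefficient differences between vintages are amplified at the 2100 horizon, which is the mechanism behind the up-to-40 cm discrepancies reported above.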


2020 ◽  
Vol 5 (2) ◽  
pp. 242-254
Author(s):  
Karimatus Saidah ◽  
Rima Trianingsih

This study aims to (1) describe how Using language learning is carried out at SDN 1 Sumberbaru, (2) explore the Using curriculum in the elementary school, and (3) assess students' understanding of Using learning, particularly in the fifth grade. The research uses a qualitative approach with an ethnographic case-study design. Data were collected through observation, interviews and documentation of Using learning outcomes, and analysed through data collection, data reduction, data revision and conclusion drawing. The results show that Using learning at SDN 1 Sumberbaru takes place every other week, alternating with Javanese. Teachers use lecture and assignment methods, and lessons follow a guidebook provided by the school. The Using subject matter covers reading, literature and grammar that promote Using culture, such as arts, signature dishes and poetry. Interviews indicate that students can communicate actively in Using, but know little about its literature and grammar. Documentation of midterm scores shows an average of 39 against a minimum performance criterion of 75. One reason is that the teachers are not native Using speakers, so they are still learning and developing suitable methods to teach the language; another is that Using grammar and literature remain unfamiliar to the students.


Author(s):  
Michael P. Clements ◽  
Ana Beatriz Galvão

At a given point in time, a forecaster will have access to data on macroeconomic variables that have been subject to different numbers of rounds of revisions, leading to varying degrees of data maturity. Observations referring to the very recent past will be first-release data, or data which has as yet been revised only a few times. Observations referring to a decade ago will typically have been subject to many rounds of revisions. How should the forecaster use the data to generate forecasts of the future? The conventional approach would be to estimate the forecasting model using the latest vintage of data available at that time, implicitly ignoring the differences in data maturity across observations. The conventional approach for real-time forecasting treats the data as given, that is, it ignores the fact that it will be revised. In some cases, the costs of this approach are point predictions and assessments of forecasting uncertainty that are less accurate than approaches to forecasting that explicitly allow for data revisions. There are several ways to “allow for data revisions,” including modeling the data revisions explicitly, an agnostic or reduced-form approach, and using only largely unrevised data. The choice of method partly depends on whether the aim is to forecast an earlier release or the fully revised values.
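The mix of data maturities the authors describe is often organised as a real-time data matrix (rows = observation period, columns = publication vintage). A minimal sketch with hypothetical numbers, contrasting the conventional latest-vintage series with the first-release diagonal:

```python
import numpy as np

# Hypothetical real-time data matrix: entry [t, v] is the estimate of
# period t as published in vintage v (NaN if not yet released).
# 4 periods, 4 vintages; each revision round nudges the value upward.
rt = np.array([
    [1.0,    1.1,    1.15,   1.15],
    [np.nan, 2.0,    2.2,    2.25],
    [np.nan, np.nan, 3.0,    3.1],
    [np.nan, np.nan, np.nan, 4.0],
])

# Conventional approach: the latest vintage (last column), which mixes
# heavily revised early observations with an unrevised final observation.
latest_vintage = rt[:, -1]

# First releases: for each period, the earliest non-NaN entry.
first_release = np.array([row[~np.isnan(row)][0] for row in rt])

print(latest_vintage)  # mixed-maturity series used by the conventional approach
print(first_release)   # uniform-maturity series of initial announcements
```

Estimating a model on `latest_vintage` but forecasting what will first be published for the next period is exactly the mismatch the abstract warns about; using `first_release` throughout is one of the reduced-form alternatives.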


2019 ◽  
Vol 101 (1) ◽  
Author(s):  
Alexander Bick ◽  
Bettina Brüggemann ◽  
Nicola Fuchs-Schündeln
Keyword(s):  
