Time series analysis of the Labour Force Survey longitudinal data sets

2007 ◽  
Vol 1 (1) ◽  
pp. 48-53
Author(s):  
Catherine Barham ◽  
Nasima Begum
2005 ◽  
Vol 37 (2) ◽  
pp. 367-384
Author(s):  
William D. Walsh

This paper analyzes the cyclical behavior of labour force participation rates, adds a marital status dimension to the customary age categories generally used, and includes separate measures of the additional worker effect and of the discouraged worker effect.


2004 ◽  
Vol 380 (3) ◽  
pp. 493-501 ◽  
Author(s):  
Christian Temme ◽  
Ralf Ebinghaus ◽  
Jürgen W. Einax ◽  
Alexandra Steffen ◽  
William H. Schroeder

Author(s):  
M.N. Fel’ker ◽  
V.V. Chesnov

A time series is a set of data collected at successive points in time; the sampling interval may differ depending on the task. Time series are used to support decision making: analysing a series yields a result that shapes the decision. Time series analysis dates back to antiquity. Ancient Babylonian astronomers, studying the positions of the stars, discovered the periodicity of eclipses and so could predict their future occurrence; similar analyses later led to the creation of various calendars, such as harvest calendars. In addition to natural phenomena, social and economic systems were later added to the scope of such analysis and forecasting. Aim. To search for classification patterns of time series that make it possible to determine whether the ARIMA model can be applied for their short-term (3-step) forecast. Materials and methods. Special software implementing ARIMA and all the required services was developed. We examined 59 annual data sets of short length, each with fewer than 20 values. The data were processed using the Python libraries Statsmodels and Pandas. The Dickey–Fuller test was used to assess the stationarity of each series, since stationarity allows for better forecasting. The Akaike information criterion was used to select the best model. Recommendations for a reasonable choice of parameters for tuning ARIMA models are obtained, and the dependence of these settings on the category of the annual data set is shown. Conclusion. After processing the data, four categories (patterns) of annual data sets were identified, and parameter ranges for tuning ARIMA models were selected for each category. The suggested ranges make it possible to determine starting parameters for exploring similar data sets. Recommendations are given for improving the quality of the post-forecast and the forecast obtained with the ARIMA model by adjusting its settings.
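
The abstract names the tools but gives no code; a minimal sketch of the workflow it describes (Dickey–Fuller stationarity check, AIC-based order selection, three-step ARIMA forecast) using Statsmodels and Pandas might look like the following. The series values and the (p, d, q) search ranges here are invented placeholders, not the paper's data or its recommended parameter ranges.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import adfuller

# Placeholder short annual series (fewer than 20 values), indexed by year.
y = pd.Series(
    [101, 104, 108, 107, 111, 115, 118, 117, 121, 125, 128, 127, 131, 135],
    index=pd.period_range("2008", periods=14, freq="Y"),
)

# Dickey-Fuller test: a small p-value suggests the series is stationary.
adf_stat, p_value, *_ = adfuller(y, autolag="AIC")
print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.3f}")

# Grid-search (p, d, q) orders and keep the model with the lowest AIC.
best_aic, best_order, best_fit = float("inf"), None, None
for p in range(0, 3):
    for d in range(0, 2):
        for q in range(0, 3):
            try:
                fit = ARIMA(y, order=(p, d, q)).fit()
            except Exception:
                continue  # some orders fail to converge on very short series
            if fit.aic < best_aic:
                best_aic, best_order, best_fit = fit.aic, (p, d, q), fit

print(f"Best order by AIC: {best_order} (AIC = {best_aic:.2f})")

# Short-term forecast: the next 3 annual values.
print(best_fit.forecast(steps=3))
```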


2013 ◽  
Vol 10 (10) ◽  
pp. 12793-12827 ◽  
Author(s):  
W. Gossel ◽  
R. Laehne

Abstract. Time series analysis methods are compared based on four geoscientific datasets. New methods such as wavelet analysis, the short-time Fourier transform (STFT) and period scanning bridge the gap between high-resolution analysis of periodicities and non-equidistant data sets. The sample studies include not only time series but also spatial data. The application of variograms as an addition to or instead of autocorrelation opens new research possibilities for storage parameters.
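
The abstract only lists the methods it compares; as one self-contained illustration of the variogram idea it mentions, an experimental (temporal) variogram can stand in for the autocorrelation function when samples are non-equidistant. The sketch below is a generic implementation on invented data, not the authors' code; the bin width and the synthetic series are assumptions.

```python
import numpy as np

def experimental_variogram(t, z, lag_width, n_bins):
    """Empirical semivariogram gamma(h) for possibly non-equidistant samples."""
    t, z = np.asarray(t, float), np.asarray(z, float)
    i, j = np.triu_indices(len(t), k=1)          # all sample pairs
    lags = np.abs(t[i] - t[j])                   # time separation of each pair
    sqdiff = (z[i] - z[j]) ** 2                  # squared value difference
    centers, gamma = [], []
    for b in range(n_bins):
        lo, hi = b * lag_width, (b + 1) * lag_width
        mask = (lags >= lo) & (lags < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            gamma.append(0.5 * sqdiff[mask].mean())  # semivariance in this lag bin
    return np.array(centers), np.array(gamma)

# Irregularly sampled synthetic series (placeholder data).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, size=200))
z = np.sin(2 * np.pi * t / 25.0) + 0.3 * rng.standard_normal(t.size)

h, gamma = experimental_variogram(t, z, lag_width=2.0, n_bins=25)
print(np.column_stack([h, gamma])[:5])
```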


2018 ◽  
Vol 19 (3) ◽  
pp. 391
Author(s):  
Eniuce Menezes de Souza ◽  
Vinícius Basseto Félix

The estimation of the correlation between independent data sets using classical estimators, such as the Pearson coefficient, is well established in the literature. However, such estimators are inadequate for analyzing the correlation among dependent data. There are several types of dependence, the most common being serial (temporal) and spatial dependence, which are inherent to data sets from several fields. Using a bivariate time-series analysis, the relation between two series can be assessed. Further, as one time series may be related to another with a time offset (either to the past or to the future), it is essential to also consider lagged correlations. The cross-correlation function (CCF), which assumes that each series has a normal distribution and is not autocorrelated, is frequently used. However, even when a time series is normally distributed, autocorrelation is still inherent to one or both time series, compromising the estimates obtained using the CCF and their interpretations. To address this issue, analysis using the wavelet cross-correlation (WCC) has been proposed. WCC is based on the non-decimated wavelet transform (NDWT), which is translation invariant and decomposes dependent data into multiple scales, each representing the behavior of a different frequency band. To demonstrate the applicability of this method, we analyze simulated and real time series from different stochastic processes. The results demonstrated that analyses based on the CCF can be misleading; however, WCC can be used to correctly identify the correlation on each scale. Furthermore, the confidence interval (CI) for the results of the WCC analysis was estimated to determine the statistical significance.
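
The paper's WCC estimator and its confidence intervals are not reproduced here; the sketch below only illustrates the underlying idea on invented data: compute the classical CCF (statsmodels), then correlate the detail coefficients of a non-decimated (stationary) wavelet transform (PyWavelets) scale by scale. The wavelet choice, decomposition level, and simulated AR(1) series are assumptions for demonstration.

```python
import numpy as np
import pywt
from statsmodels.tsa.stattools import ccf

rng = np.random.default_rng(1)
n = 256                                   # power of two so pywt.swt accepts it

# Two autocorrelated AR(1) series sharing a common oscillation (placeholder data).
common = np.sin(2 * np.pi * np.arange(n) / 32)

def ar1(noise, phi=0.8):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + noise[t]
    return x

x = common + ar1(rng.standard_normal(n))
y = common + ar1(rng.standard_normal(n))

# Classical cross-correlation function (ignores the autocorrelation in x and y).
print("CCF at lags 0..4:", np.round(ccf(x, y, adjusted=False)[:5], 2))

# Scale-by-scale correlation from a non-decimated (stationary) wavelet transform.
level = 4
coeffs_x = pywt.swt(x, "db4", level=level)   # [(cA_level, cD_level), ..., (cA1, cD1)]
coeffs_y = pywt.swt(y, "db4", level=level)
for (_, dx), (_, dy) in zip(coeffs_x, coeffs_y):
    r = np.corrcoef(dx, dy)[0, 1]
    print(f"detail-scale correlation: {r:.2f}")
```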


2013 ◽  
Vol 6 (12) ◽  
pp. 3539-3561 ◽  
Author(s):  
R. P. Damadeo ◽  
J. M. Zawodny ◽  
L. W. Thomason ◽  
N. Iyer

Abstract. This paper details the SAGE (Stratospheric Aerosol and Gas Experiment) version 7.0 algorithm and how it is applied to SAGE II. Changes made between the previous (v6.2) and current (v7.0) versions are described and their impacts on the data products explained for both coincident event comparisons and time-series analysis. Users of the data will notice a general improvement in all of the SAGE II data products, which are now in better agreement with more modern data sets (e.g., SAGE III) and more robust for use with trend studies.

