Structural time series: computational efficiency in estimating economic parameters in industry

2022 ◽  
Author(s):  
Jose Augusto Fiorucci ◽  
Marinho Gomes Andrade ◽  
Diego Nascimento ◽  
Letícia Ferreira ◽  
Alessandro Leite ◽  
...  

A growing field concerns automated time series analysis, though this is complicated by the dependence between observed and hidden dimensions often present in these data. In this report, the problem is motivated by a Brazilian financial company interested in unraveling the relational structure explaining the Japanese CPI ex-Fresh Food & Energy across 157 economic exogenous variables, with very limited data. The problem becomes more complex when considering that each variable can enter the model with lags of 0 to 8 periods, as well as an additional restriction of admitting only a positive relationship. This report discusses three possible treatments involving structural time series models; the most relevant approach found in this study is a Dynamic Regression Model combined with a stepwise algorithm, which allows the most relevant variables, as well as their respective lags, to be found and inserted into the model at low computational cost.
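The stepwise search over (variable, lag) pairs described above can be sketched roughly as follows. The function name, the greedy sum-of-squares criterion, and the crude positivity screen are illustrative assumptions of ours, not the authors' implementation:

```python
import numpy as np

def stepwise_lag_selection(y, X, max_lag=8, max_vars=5):
    """Greedy forward selection over (variable, lag) pairs.

    At each step, add the lagged exogenous column that most reduces the
    residual sum of squares, accepting it only if its fitted coefficient
    is positive (a rough stand-in for the positivity restriction).
    """
    n = len(y)
    selected = []                     # chosen (var_index, lag) pairs
    design = [np.ones(n - max_lag)]   # intercept column
    yy = y[max_lag:]                  # target aligned to the longest lag
    best_sse = np.sum((yy - yy.mean()) ** 2)
    for _ in range(max_vars):
        best = None
        for j in range(X.shape[1]):
            for lag in range(max_lag + 1):
                if (j, lag) in selected:
                    continue
                col = X[max_lag - lag : n - lag, j]   # X_{t-lag} aligned to yy
                A = np.column_stack(design + [col])
                coef, *_ = np.linalg.lstsq(A, yy, rcond=None)
                if coef[-1] <= 0:                     # positivity screen
                    continue
                sse = np.sum((yy - A @ coef) ** 2)
                if sse < best_sse - 1e-12 and (best is None or sse < best[0]):
                    best = (sse, j, lag, col)
        if best is None:
            break
        best_sse, j, lag, col = best
        selected.append((j, lag))
        design.append(col)
    return selected
```

A full treatment would use an information criterion and a properly constrained fit, but the greedy loop conveys why the search stays cheap: each step costs one small least-squares solve per candidate column rather than a combinatorial search over all lag subsets.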

2016 ◽  
Vol 78 (4-4) ◽  
Author(s):  
Noor Wahida Md Junus ◽  
Mohd Tahir Ismail ◽  
Zainudin Arsad

Road accidents were the fifth leading cause of death in Malaysia in 2008, as reported by the Department of Statistics. Their causes and trends should be investigated to prevent recurrence in the future. The purpose of this study is to identify the pattern of occurrence of road accidents and subsequently investigate climate and festival effects on road accidents in Penang based on structural time series analysis. Structural time series analysis offers the possibility of discovering the stochastic behaviour of road accidents. Climate, festival, and intervention effects were incorporated to investigate their influence on the occurrence of road accidents. The study found that road accidents in Penang can be represented by a stochastic level with a fixed seasonal component and were influenced by climate and intervention effects. The study could be enhanced by applying the model to other states with other relevant variables, such as economic factors and school holiday effects.
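The "stochastic level" component of such a structural model is the local-level model. A minimal Kalman-filter sketch, assuming known variances and omitting the seasonal, climate, and intervention terms the study uses, is:

```python
import numpy as np

def local_level_filter(y, sigma_eps2, sigma_eta2):
    """Kalman filter for the local-level ('stochastic level') model:
        y_t = mu_t + eps_t,        eps_t ~ N(0, sigma_eps2)
        mu_{t+1} = mu_t + eta_t,   eta_t ~ N(0, sigma_eta2)
    Returns the filtered level estimates mu_t | y_1..y_t."""
    n = len(y)
    a = y[0]                       # initial level guess
    p = sigma_eps2 + sigma_eta2    # initial state variance
    levels = np.empty(n)
    for t in range(n):
        v = y[t] - a               # one-step prediction error
        f = p + sigma_eps2         # prediction-error variance
        k = p / f                  # Kalman gain
        a = a + k * v              # filtered level
        levels[t] = a
        p = p * (1 - k) + sigma_eta2   # propagate state variance
    return levels
```

A fixed seasonal component would add a deterministic periodic term to the observation equation; the filtering recursion itself is unchanged apart from a larger state vector.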


2017 ◽  
Author(s):  
Abdullah Al-Awadhi ◽  
Ahmad Bash ◽  
Fouad Jamaani

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 645
Author(s):  
Muhammad Farooq ◽  
Sehrish Sarfraz ◽  
Christophe Chesneau ◽  
Mahmood Ul Hassan ◽  
Muhammad Ali Raza ◽  
...  

Expectiles have gained considerable attention in recent years due to wide applications in many areas. In this study, the k-nearest neighbours approach, together with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. Firstly, the effect of various distance measures on ex-kNN in terms of test error and computational time is evaluated. It is found that the Canberra, Lorentzian, and Soergel distance measures lead to minimum test error, whereas Euclidean, Canberra, and Average of (L1,L∞) lead to a low computational cost. Secondly, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles, using nine real-life examples. Depending on the nature of the data, ex-kNN showed two to ten times better performance than er-boost and performance comparable to ex-svm regarding test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
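The combination of nearest neighbours with the asymmetric least squares loss can be sketched as follows. The fixed-point iteration for a sample expectile is standard; the `ex_knn_predict` wrapper with Euclidean distance is a hypothetical illustration of the idea, not the proposed package:

```python
import numpy as np

def sample_expectile(values, tau=0.5, tol=1e-9, max_iter=100):
    """tau-expectile of a sample via the asymmetric-least-squares
    fixed-point iteration: m = sum(w_i * y_i) / sum(w_i), where
    w_i = tau if y_i > m else (1 - tau)."""
    m = values.mean()
    for _ in range(max_iter):
        w = np.where(values > m, tau, 1.0 - tau)
        m_new = np.sum(w * values) / np.sum(w)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

def ex_knn_predict(X, y, x0, k=5, tau=0.5):
    """Sketch of the ex-kNN idea: find the k nearest neighbours of x0
    (Euclidean here; Canberra etc. would swap the distance), then return
    the tau-expectile of their responses."""
    d = np.linalg.norm(X - x0, axis=1)
    idx = np.argsort(d)[:k]
    return sample_expectile(y[idx], tau)
```

For tau = 0.5 the expectile reduces to the mean, so the sketch degenerates to ordinary kNN regression; the asymmetry only matters for tau away from 0.5.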


2021 ◽  
Vol 5 (1) ◽  
pp. 51
Author(s):  
Enriqueta Vercher ◽  
Abel Rubio ◽  
José D. Bermúdez

We present a new forecasting scheme based on the credibility distribution of fuzzy events. This approach allows us to build prediction intervals using the first differences of the time series data. Additionally, the credibility expected value enables us to estimate the k-step-ahead pointwise forecasts. We analyze the coverage of the prediction intervals and the accuracy of pointwise forecasts using different credibility approaches based on the upper differences. The comparative results were obtained working with yearly time series from the M4 Competition. The performance and computational cost of our proposal, compared with automatic forecasting procedures, are presented.
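As a loose illustration of building interval forecasts from first differences (using plain empirical quantiles, not the credibility distribution the authors propose), one might write:

```python
import numpy as np

def diff_interval_forecast(y, h=1, alpha=0.2):
    """Sketch: the h-step prediction interval extends the last
    observation by h times the empirical quantiles of the differenced
    series; the point forecast uses the mean difference."""
    d = np.diff(y)
    lo, hi = np.quantile(d, [alpha / 2, 1 - alpha / 2])
    point = y[-1] + h * d.mean()
    return point, (y[-1] + h * lo, y[-1] + h * hi)
```

The credibility-based scheme replaces these empirical quantiles with quantities derived from the credibility distribution of fuzzy events, which is what controls the coverage properties reported on the M4 yearly series.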


2021 ◽  
Vol 7 (6) ◽  
pp. 99
Author(s):  
Daniela di Serafino ◽  
Germana Landi ◽  
Marco Viola

We are interested in the restoration of noisy and blurry images where the texture mainly follows a single direction (i.e., directional images). Problems of this type arise, for example, in microscopy or computed tomography for carbon or glass fibres. In order to deal with these problems, the Directional Total Generalized Variation (DTGV) was developed by Kongskov et al. in 2017 and 2019, in the case of impulse and Gaussian noise. In this article we focus on images corrupted by Poisson noise, extending the DTGV regularization to image restoration models where the data fitting term is the generalized Kullback–Leibler divergence. We also propose a technique for the identification of the main texture direction, which improves upon the techniques used in the aforementioned work about DTGV. We solve the problem by an ADMM algorithm with proven convergence and subproblems that can be solved exactly at a low computational cost. Numerical results on both phantom and real images demonstrate the effectiveness of our approach.
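The generalized Kullback–Leibler data-fitting term mentioned above has a standard closed form; a small numerical sketch (the eps guard for zero pixels is an assumption of ours, added for stability):

```python
import numpy as np

def generalized_kl(b, ax, eps=1e-12):
    """Generalized Kullback-Leibler divergence used as the data-fitting
    term under Poisson noise: sum(b * log(b / (Ax)) - b + Ax),
    with the convention 0 * log(0) = 0."""
    b = np.asarray(b, dtype=float)
    ax = np.asarray(ax, dtype=float)
    log_term = np.where(b > 0, b * np.log((b + eps) / (ax + eps)), 0.0)
    return np.sum(log_term - b + ax)
```

The divergence is zero when the blurred estimate Ax matches the observed counts b exactly and positive otherwise, which is why it replaces the least-squares term when the noise is Poisson rather than Gaussian.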


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. IM1-IM9 ◽  
Author(s):  
Nathan Leon Foks ◽  
Richard Krahenbuhl ◽  
Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further the compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, as most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; however, the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using only 1%–5% of the data.
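The adaptive sampling idea, dense coverage near anomalies and sparse coverage in quiet regions, can be sketched in one dimension as follows; the rolling-standard-deviation ranking and the `keep_frac` parameter are hypothetical choices of ours, not the published criterion:

```python
import numpy as np

def adaptive_downsample(data, window=5, keep_frac=0.05):
    """Sketch of data-domain compression: rank samples by local
    variability (rolling standard deviation), keep the most anomalous
    fraction, and add a coarse regular grid for background coverage."""
    n = len(data)
    pad = window // 2
    padded = np.pad(data, pad, mode='edge')
    local_std = np.array([padded[i:i + window].std() for i in range(n)])
    n_keep = max(1, int(keep_frac * n))
    anomalous = set(np.argsort(local_std)[-n_keep:])   # high-signal points
    coarse = set(range(0, n, max(1, n // n_keep)))     # quiet-region grid
    return np.array(sorted(anomalous | coarse))
```

The returned index set plays the role of the compressed data vector: smooth regions contribute only the coarse grid, while every sample near an anomaly survives, which is the property the inversion results depend on.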

