Prediction intervals in the ARFIMA model using bootstrap

2018 · Vol 1 (2)
Author(s): Glaura C. Franco, Gustavo C. Lana, Valderio A. Reisen

This paper presents a bootstrap resampling scheme to build prediction intervals for future values in autoregressive fractionally integrated moving average (ARFIMA) models. Standard techniques to calculate forecast intervals rely on the assumption of normality of the data and do not take into account the uncertainty associated with parameter estimation. Bootstrap procedures, as nonparametric methods, can overcome these difficulties. In this paper, two bootstrap prediction intervals are proposed based on the nonparametric bootstrap of the residuals of the ARFIMA model. The first is the well-known percentile bootstrap (Thombs and Schucany, 1990; Pascual et al., 2004), never used for ARFIMA models to the knowledge of the authors. For the second approach, the intervals are calculated using the quantiles of the empirical distribution of the bootstrap prediction errors (Masarotto, 1990; Bisaglia and Grigoletto, 2001). The intervals are compared, through a Monte Carlo experiment, to the asymptotic interval, under Gaussian and non-Gaussian error distributions. The results show that the bootstrap intervals present coverage rates closer to the assumed nominal level when compared to the asymptotic standard method. An application to real temperature data from New York City is also presented to illustrate the procedures.
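
A minimal numpy sketch of the percentile-bootstrap idea, in the spirit of Pascual et al. (2004): the model is re-estimated on each bootstrap series so that parameter uncertainty enters the forecast distribution. An AR(1) fitted by least squares stands in for a full ARFIMA fit, and the sample size, horizon, and number of replicates are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series as a stand-in for an ARFIMA fit (illustrative only).
n, phi = 300, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Fit AR(1) by least squares and collect centred residuals.
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
resid = x[1:] - phi_hat * x[:-1]
resid = resid - resid.mean()

# Percentile bootstrap: re-estimate the parameter on each resampled series so
# that parameter uncertainty enters the forecast distribution, then forecast
# from the last observed value with resampled residuals.
B, h = 999, 5
fcst = np.empty((B, h))
for b in range(B):
    e_star = rng.choice(resid, size=n)
    x_star = np.zeros(n)
    for t in range(1, n):
        x_star[t] = phi_hat * x_star[t - 1] + e_star[t]
    phi_b = np.dot(x_star[1:], x_star[:-1]) / np.dot(x_star[:-1], x_star[:-1])
    path = x[-1]
    for k in range(h):
        path = phi_b * path + rng.choice(resid)
        fcst[b, k] = path

# 95% bootstrap prediction intervals for horizons 1..h.
lo, hi = np.percentile(fcst, [2.5, 97.5], axis=0)
print(np.column_stack([lo, hi]))
```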

2020
Author(s): Eduardo Atem De Carvalho, Rogerio Atem De Carvalho

BACKGROUND Since the beginning of the COVID-19 pandemic, researchers and health authorities have sought to identify the different parameters that govern its infection and death cycles, in order to make better decisions. In particular, a series of reproduction number estimation models have been presented, with varying practical results. OBJECTIVE This article aims to present an effective and efficient model for estimating the reproduction number and to discuss the impact of underreporting (sub-notification) on these calculations. METHODS The concept of the Moving Average Method with Initial value (MAMI) is introduced, and a model for Rt, the time-dependent reproduction number, is derived from experimental data. The models are applied to real data and their performance is presented. RESULTS Analyses of Rt and underreporting effects for Germany, Italy, Sweden, the United Kingdom, South Korea, and the State of New York are presented to demonstrate the performance of the methods introduced here. CONCLUSIONS We show that, with relatively simple mathematical tools, it is possible to obtain reliable values for time-dependent, incubation-period-independent reproduction numbers (Rt). We also demonstrate that the impact of underreporting is relatively low once the initial phase of the epidemic cycle has passed.
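
The paper's exact MAMI formula is not reproduced in this abstract, so the sketch below only illustrates the general shape of the approach: smooth daily new cases with a moving average whose warm-up is seeded by an initial value, then form a ratio-based Rt proxy across an assumed serial interval. The window length, serial interval, and case counts are all hypothetical.

```python
import numpy as np

def moving_average_with_init(series, window=7, init=None):
    """Trailing moving average whose warm-up region uses a supplied initial
    value (an illustrative MAMI-like smoother, not the paper's definition)."""
    series = np.asarray(series, dtype=float)
    out = np.empty_like(series)
    if init is None:
        init = series[0]
    for t in range(len(series)):
        if t + 1 < window:
            out[t] = init  # warm-up: fall back to the initial value
        else:
            out[t] = series[t - window + 1 : t + 1].mean()
    return out

def crude_rt(new_cases, serial_interval=4, window=7):
    """Ratio-based Rt proxy: smoothed cases today vs. one serial interval ago."""
    s = moving_average_with_init(new_cases, window)
    rt = np.full(len(s), np.nan)
    rt[serial_interval:] = s[serial_interval:] / np.maximum(s[:-serial_interval], 1e-9)
    return rt

# Toy daily case counts (fabricated for illustration only).
cases = np.array([10, 12, 15, 20, 28, 35, 50, 64, 80, 95, 110, 120, 125, 122])
print(np.round(crude_rt(cases), 2))
```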


2019 · Vol 35 (6) · pp. 1234-1270
Author(s): Sébastien Fries, Jean-Michel Zakoian

Noncausal autoregressive models with heavy-tailed errors generate locally explosive processes and therefore provide a convenient framework for modelling bubbles in economic and financial time series. We investigate the probability properties of mixed causal-noncausal autoregressive processes, assuming the errors follow a stable non-Gaussian distribution. Extending the study of the noncausal AR(1) model by Gouriéroux and Zakoian (2017), we show that the conditional distribution in direct time is lighter-tailed than the error distribution, and we emphasize the presence of ARCH effects in a causal representation of the process. Under the assumption that the errors belong to the domain of attraction of a stable distribution, we show that a causal AR representation with non-i.i.d. errors can be consistently estimated by classical least squares. We derive a portmanteau test to check the validity of the estimated AR representation and propose a method based on extreme residuals clustering to determine whether the AR generating process is causal, noncausal, or mixed. An empirical study on simulated and real data illustrates the potential usefulness of the results.
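
A noncausal AR(1) with α-stable errors is easy to simulate from its forward-looking moving-average form, x_t = Σ_k ρ^k ε_{t+k}; the sketch below does this with scipy's levy_stable and illustrative parameter values (ρ, α, and the truncation length are our choices, not the paper's).

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)

# Noncausal AR(1): x_t = rho * x_{t+1} + eps_t, with alpha-stable errors.
# Simulated via the forward-looking MA(inf) form x_t = sum_k rho^k eps_{t+k},
# truncated after `pad` future terms.
n, pad, rho, alpha = 500, 200, 0.7, 1.5
eps = levy_stable.rvs(alpha, beta=0.0, size=n + pad, random_state=rng)

weights = rho ** np.arange(pad)
x = np.array([weights @ eps[t : t + pad] for t in range(n)])

# Heavy tails plus local explosions ("bubbles") are visible in the path.
print(x.min(), x.max())
```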


Sensors · 2020 · Vol 20 (11) · pp. 3270
Author(s): Baris Satar, Gokhan Soysal, Xue Jiang, Murat Efe, Thiagalingam Kirubarajan

Conventional methods such as matched filtering and the fractional lower-order statistics cross-ambiguity function, as well as recent methods such as compressed sensing and track-before-detect, are used for target detection by passive radars. Target detection using these algorithms usually assumes that the background noise is Gaussian. However, non-Gaussian impulsive noise is inherent in real-world radar problems. In this paper, a new optimization-based algorithm that uses weighted ℓ1 and ℓ2 norms is proposed as an alternative to the existing algorithms, whose performance degrades in the presence of impulsive noise. To determine the weights of these norms, the parameter that quantifies the impulsiveness level of the noise is estimated. The aim of the proposed algorithm is to increase the target detection performance of universal mobile telecommunications system (UMTS) based passive radars by facilitating higher resolution with better suppression of the sidelobes in both range and Doppler. Results obtained from both simulated data with an α-stable distribution and real data recorded by a UMTS-based passive radar platform are presented to demonstrate the superiority of the proposed algorithm. The results show that the proposed algorithm provides more robust and accurate detection performance for noise models with different impulsiveness levels compared to the conventional methods.
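
The paper's exact objective and weight rule are not given in the abstract; the sketch below shows one plausible reading, a proximal-gradient (ISTA-style) solver for a least-squares problem penalized by weighted ℓ1 and ℓ2 terms, with a hypothetical rule that shifts weight toward the ℓ1 term as the estimated impulsiveness grows. The scene, dictionary, and noise model are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

def soft(x, t):
    """Soft-thresholding operator (the proximal map of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def weighted_l1_l2(A, y, w1, w2, iters=500):
    """Proximal gradient (ISTA-style) for
    min_s 0.5*||y - A s||^2 + w1*||s||_1 + 0.5*w2*||s||^2."""
    L = np.linalg.norm(A, 2) ** 2 + w2      # Lipschitz constant of the smooth part
    s = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ s - y) + w2 * s   # gradient of the smooth terms
        s = soft(s - grad / L, w1 / L)      # prox step handles the l1 term
    return s

# Toy scene: a sparse reflectivity vector observed through a random dictionary.
m, n = 80, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
s_true = np.zeros(n)
s_true[[20, 75, 140]] = [3.0, -2.0, 4.0]
y = A @ s_true + 0.1 * rng.standard_t(df=2, size=m)  # heavy-tailed noise stand-in

# Hypothetical weight rule: the more impulsive the noise, the more weight on l1.
impulsiveness = 0.8                                   # assumed estimate in [0, 1]
w1, w2 = 0.1 * impulsiveness, 0.1 * (1.0 - impulsiveness)
s_hat = weighted_l1_l2(A, y, w1, w2)
print(sorted(np.argsort(np.abs(s_hat))[-3:]))         # indices of the 3 largest peaks
```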


2018 · Vol 30 (5) · pp. 589-599
Author(s): Fevzi Yasin Kababulut, Damla Kuntalp, Olcay Akay, Timur Düzenli

Intelligent traffic systems attempt to solve the problem of traffic congestion, one of the most important environmental and economic issues of urban life. In this study, we approach this problem via prediction of traffic status using past average traveler speed (ATS). Five different algorithms are proposed for predicting the traffic status. They are applied to real data provided by the Traffic Control Center of the Istanbul Metropolitan Municipality. Algorithm 1 predicts future ATS on a highway section based on past speed information obtained from the same road section. The other proposed algorithms, Algorithms 2 through 5, predict the traffic status as fluent, moderately congested, or congested, again using past traffic state information for the same road segment. Here, traffic states are assigned according to predetermined intervals of ATS values. In the proposed algorithms, ATS values belonging to the past five consecutive 10-minute time intervals are used as input data. Performances of the proposed algorithms are evaluated in terms of root mean square error (RMSE), sample accuracy, balanced accuracy, and processing time. Although the proposed algorithms are relatively simple and require only past speed values, they provide fairly reliable results with noticeably low prediction errors.
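
As a concrete illustration, the sketch below predicts the next 10-minute ATS from the past five intervals with a recency-weighted average and maps speeds to the three traffic states via thresholds. Both the weighting and the threshold values are hypothetical stand-ins; the paper's Algorithm 1 and its predetermined ATS intervals are not reproduced in the abstract.

```python
import numpy as np

# Hypothetical speed thresholds (km/h) separating the three traffic states;
# the paper's actual ATS intervals are not reproduced here.
CONGESTED_MAX, MODERATE_MAX = 20.0, 50.0

def predict_next_ats(past_ats):
    """Predict the next 10-minute ATS from the past five intervals using a
    recency-weighted average (an illustrative stand-in for Algorithm 1)."""
    w = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # heavier weight on recent intervals
    return float(np.dot(w, past_ats) / w.sum())

def traffic_status(ats):
    """Map a speed to the fluent / moderately congested / congested states."""
    if ats <= CONGESTED_MAX:
        return "congested"
    if ats <= MODERATE_MAX:
        return "moderately congested"
    return "fluent"

past = [62.0, 55.0, 48.0, 40.0, 33.0]  # five consecutive 10-minute ATS values
nxt = predict_next_ats(past)
print(round(nxt, 1), traffic_status(nxt))
```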


Entropy · 2018 · Vol 21 (1) · pp. 22
Author(s): Jordi Belda, Luis Vergara, Gonzalo Safont, Addisson Salazar

Conventional partial correlation coefficients (PCC) were extended to the non-Gaussian case, in particular to independent component analysis (ICA) models of the observed multivariate samples. Thus, the usual methods that define the pairwise connections of a graph from the precision matrix were correspondingly extended. The basic concept involved replacing the implicit linear estimation of conventional PCC with a nonlinear estimation (the conditional mean) assuming ICA. In this way, the correlation between a given pair of nodes that is induced by the remaining nodes is better eliminated, and hence the specific connectivity weights can be better estimated. Some synthetic and real data examples illustrate the approach in a graph signal processing context.
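
For reference, the conventional Gaussian baseline that the paper extends is straightforward: partial correlations are read off the normalized precision (inverse covariance) matrix. A minimal sketch on a toy three-node chain:

```python
import numpy as np

def partial_correlations(X):
    """Conventional PCC: partial correlations from the precision matrix.
    Rows of X are samples, columns are graph nodes."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)   # standard formula: -p_ij / sqrt(p_ii p_jj)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

rng = np.random.default_rng(3)
# Toy 3-node chain x -> y -> z: x and z are correlated only through y.
x = rng.standard_normal(2000)
y = 0.8 * x + 0.6 * rng.standard_normal(2000)
z = 0.8 * y + 0.6 * rng.standard_normal(2000)
P = partial_correlations(np.c_[x, y, z])
print(np.round(P, 2))   # the (x, z) entry is near zero once y is conditioned on
```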


2020 · Vol 66 (7) · pp. 2929-2950
Author(s): Clifford Stein, Van-Anh Truong, Xinshang Wang

We study a fundamental model of resource allocation in which a finite number of resources must be assigned in an online manner to a heterogeneous stream of customers. The customers arrive randomly over time according to known stochastic processes. Each customer requires a specific amount of capacity and has a specific preference for each of the resources, with some resources being feasible for the customer and some not. The system must find a feasible assignment of each customer to a resource or must reject the customer. The aim is to maximize the total expected capacity utilization of the resources over the horizon. This model has applications in services, freight transportation, and online advertising. We present online algorithms with bounded competitive ratios relative to an optimal offline algorithm that knows all stochastic information. Our algorithms perform extremely well compared with common heuristics, as demonstrated on a real data set from a large hospital system in New York City. This paper was accepted by Yinyu Ye, optimization.
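
To make the setup concrete, the sketch below implements a naive online rule: assign each arriving customer to the feasible resource with the most remaining capacity, or reject. This is only a baseline heuristic for the problem the paper formalizes, not the authors' competitive algorithm.

```python
from typing import Dict, Optional, Set

def assign(capacity: Dict[str, float], feasible: Set[str], demand: float) -> Optional[str]:
    """Greedy online assignment: pick the feasible resource with the most
    remaining capacity; return None to reject the customer."""
    options = [r for r in feasible if capacity.get(r, 0.0) >= demand]
    if not options:
        return None
    best = max(options, key=lambda r: capacity[r])
    capacity[best] -= demand
    return best

capacity = {"r1": 10.0, "r2": 6.0}
# Each arrival is (feasible resource set, required capacity).
stream = [({"r1"}, 4.0), ({"r1", "r2"}, 5.0), ({"r2"}, 3.0)]
for feas, dem in stream:
    print(assign(capacity, feas, dem), capacity)
```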


1990 · Vol 330 (1257) · pp. 235-251

Over the years, there has been much discussion about the relative importance of environmental and biological factors in regulating natural populations. Often it is thought that environmental factors are associated with stochastic fluctuations in population density, and biological ones with deterministic regulation. We revisit these ideas in the light of recent work on chaos and nonlinear systems. We show that completely deterministic regulatory factors can lead to apparently random fluctuations in population density, and we then develop a new method (that can be applied to limited data sets) to make practical distinctions between apparently noisy dynamics produced by low-dimensional chaos and population variation that in fact derives from random (high-dimensional) noise, such as environmental stochasticity or sampling error. To show its practical use, the method is first applied to models where the dynamics are known. We then apply the method to several sets of real data, including newly analysed data on the incidence of measles in the United Kingdom. Here the additional problems of secular trends and spatial effects are explored. In particular, we find that on a city-by-city scale measles exhibits low-dimensional chaos (as has previously been found for measles in New York City), whereas on a larger, country-wide scale the dynamics appear as a noisy two-year cycle. In addition to shedding light on the basic dynamics of some nonlinear biological systems, this work dramatizes how the scale on which data are collected and analysed can affect the conclusions drawn.
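
The claim that purely deterministic regulation can look random is classically illustrated by the logistic map; the sketch below (our choice of example, not necessarily the authors' model) generates a chaotic trajectory whose lag-1 autocorrelation is close to that of white noise.

```python
import numpy as np

# Logistic map: a fully deterministic population model whose trajectory in
# the chaotic regime (r = 3.9) is statistically hard to tell from noise.
r, n = 3.9, 200
x = np.empty(n)
x[0] = 0.2
for t in range(n - 1):
    x[t + 1] = r * x[t] * (1.0 - x[t])

# One crude diagnostic: the lag-1 autocorrelation is near zero,
# as it would be for a purely random series.
xc = x - x.mean()
acf1 = np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)
print(round(acf1, 3))
```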


2020
Author(s): Christoph Käding, Jakob Runge

Unveiling causal structures, i.e., distinguishing cause from effect, from observational data plays a key role in climate science as well as in other fields like medicine or economics. Hence, a number of approaches have been developed to address this problem. Recent decades have seen methods like Granger causality or causal network learning algorithms, which are, however, not generally applicable in every scenario. Given two variables X and Y, it is still a challenging problem to decide whether X causes Y or Y causes X. Recently, there has been progress in the framework of structural causal models, which enable the discovery of causal relationships by making use of restricted functional dependencies (e.g., only linear) and noise models (e.g., only non-Gaussian noise). However, each of these methods comes with its own requirements and constraints. Since the corresponding conditions are usually unknown in real scenarios, it is hard to choose the right method for a given application.

The goal of this work is to evaluate and compare a number of state-of-the-art techniques in a joint benchmark. To do so, we employ synthetic data, where we can control the dataset conditions precisely and hence can reason in detail about the resulting performance of the individual methods given their underlying assumptions. Further, we utilize real-world data to shed light on their capabilities in actual applications in a comparative manner. We concentrate on the case of two univariate variables due to the large number of possible application scenarios. A thorough study comparing even the latest developments is, to the best of our knowledge, not yet available in the literature.
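
As one concrete example of the structural-causal-model family being benchmarked, the sketch below implements a crude bivariate direction test in the LiNGAM spirit: regress each variable on the other and prefer the direction whose residuals look more independent of the regressor. The dependence proxy (correlation of squares) is a deliberately simple stand-in for the kernel independence tests used in practice.

```python
import numpy as np

def dependence(u, v):
    """Crude dependence proxy: |corr(u^2, v^2)|. Near zero for independent
    variables; a rough stand-in for tests such as HSIC."""
    return abs(np.corrcoef(u ** 2, v ** 2)[0, 1])

def causal_direction(x, y):
    """Fit linear models in both directions; under a linear model with
    non-Gaussian noise, residuals are independent of the regressor only
    in the true causal direction (the LiNGAM idea)."""
    x = x - x.mean()
    y = y - y.mean()
    res_xy = y - (x @ y / (x @ x)) * x    # residual of y ~ x
    res_yx = x - (y @ x / (y @ y)) * y    # residual of x ~ y
    s_xy, s_yx = dependence(x, res_xy), dependence(y, res_yx)
    return ("x->y" if s_xy < s_yx else "y->x"), round(s_xy, 3), round(s_yx, 3)

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 5000)                    # non-Gaussian cause
y = 0.8 * x + 0.3 * rng.uniform(-1, 1, 5000)    # linear effect, non-Gaussian noise
print(causal_direction(x, y))
```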


2021
Author(s): Juan Ruiz, Guo-Yuan Lien, Keiichi Kondo, Shigenori Otsuka, Takemasa Miyoshi

Abstract. Non-Gaussian forecast error is a challenge for ensemble-based data assimilation (DA), particularly for strongly nonlinear convective dynamics. In this study, we investigate the degree of non-Gaussianity of forecast error distributions at 1-km resolution using a 1000-member ensemble Kalman filter, and how it is affected by the DA update frequency and the number of assimilated observations. Regional numerical weather prediction experiments are performed with the SCALE (Scalable Computing for Advanced Library and Environment) model and the LETKF (Local Ensemble Transform Kalman Filter), assimilating phased-array radar observations every 30 seconds. The results show that non-Gaussianity develops rapidly within convective clouds and is sensitive to the DA frequency and the number of assimilated observations. The non-Gaussianity is reduced by up to 40 % when the assimilation window is shortened from 5 minutes to 30 seconds, particularly for vertical velocity and radar reflectivity.
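
The abstract does not state how non-Gaussianity is measured, so the sketch below uses one common choice as an assumed stand-in: the KL divergence between an ensemble's histogram and a Gaussian fitted to its mean and variance, evaluated per grid point over the 1000 members.

```python
import numpy as np
from scipy.stats import norm

def non_gaussianity(ensemble, bins=20):
    """KL divergence between the ensemble histogram and a fitted Gaussian.
    One common scalar measure of non-Gaussianity for the forecast values at
    a grid point; the paper's exact metric is not reproduced here."""
    hist, edges = np.histogram(ensemble, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    p = hist * width                                  # empirical bin probabilities
    q = norm.pdf(centers, ensemble.mean(), ensemble.std()) * width
    mask = (p > 0) & (q > 0)
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

rng = np.random.default_rng(5)
gauss = rng.standard_normal(1000)    # 1000-member Gaussian ensemble
skewed = rng.gamma(2.0, size=1000)   # skewed, convection-like ensemble
print(round(non_gaussianity(gauss), 3), round(non_gaussianity(skewed), 3))
```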

