Changepoint Analysis of Binary and Ordinal Probit Models: An Application to Bank Rate Policy Under the Interwar Gold Standard

2011 ◽  
Vol 19 (2) ◽  
pp. 188-204 ◽  
Author(s):  
Jong Hee Park

In this paper, I introduce changepoint models for binary and ordered time series data based on Chib's hidden Markov model. The extension of the changepoint model to a binary probit model is straightforward in a Bayesian setting. However, detecting parameter breaks from ordered regression models is difficult because ordered time series data often have clustering along the break points. To address this issue, I propose an estimation method that uses the linear regression likelihood function for the sampling of hidden states of the ordinal probit changepoint model. The marginal likelihood method is used to detect the number of hidden regimes. I evaluate the performance of the introduced methods using simulated data and apply the ordinal probit changepoint model to the study of Eichengreen, Watson, and Grossman on violations of the “rules of the game” of the gold standard by the Bank of England during the interwar period.
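The core idea of likelihood-based break detection can be illustrated outside the full Bayesian machinery. The sketch below is not Chib's hidden Markov sampler or the marginal likelihood method from the abstract; it is a minimal frequentist analogue that locates a single break in a binary series by maximizing the two-regime Bernoulli likelihood (the simulated break at t = 100 and all parameter values are illustrative):

```python
import numpy as np

def bernoulli_loglik(y, p):
    # Clip to avoid log(0) for all-zero or all-one segments.
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def single_changepoint(y):
    """Return the split index maximizing the two-regime Bernoulli likelihood."""
    n = len(y)
    best_tau, best_ll = None, -np.inf
    for tau in range(2, n - 1):
        ll = (bernoulli_loglik(y[:tau], y[:tau].mean())
              + bernoulli_loglik(y[tau:], y[tau:].mean()))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

rng = np.random.default_rng(0)
# Success probability jumps from 0.2 to 0.8 at t = 100.
y = np.concatenate([rng.binomial(1, 0.2, 100), rng.binomial(1, 0.8, 100)])
print(single_changepoint(y))  # should land near the true break at 100
```

Extending this to multiple regimes and to the latent-utility representation of the probit model is where the hidden-Markov and data-augmentation machinery of the paper comes in.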

2018 ◽  
Vol 7 (2) ◽  
pp. 139-150 ◽  
Author(s):  
Adekunlé Akim Salami ◽  
Ayité Sénah Akoda Ajavon ◽  
Mawugno Koffi Kodjo ◽  
Seydou Ouedraogo ◽  
Koffi-Sa Bédja

In this article, we introduce a new approach based on the graphical method (GPM), maximum likelihood method (MLM), energy pattern factor method (EPFM), empirical method of Justus (EMJ), empirical method of Lysen (EML) and moment method (MOM), using the even or odd classes of the wind speed distribution histogram with a 1 m/s bin size to estimate the Weibull parameters. The new approach is compared on the basis of the resulting mean wind speed and its standard deviation using seven reliable statistical indicators (RPE, RMSE, MAPE, MABE, R², RRMSE and IA). The results indicate that the new approach is adequate for estimating Weibull parameters and can outperform GPM, MLM, EPFM, EMJ, EML and MOM applied to the complete wind speed time series collected for one period. The study also found a linear relationship between the Weibull parameters K and C estimated by MLM, EPFM, EMJ, EML and MOM using odd or even class wind speed time series and those obtained by applying these methods to the all-class (both even and odd bins) wind speed time series. Another interesting feature of this approach is the reduction in data size, which eventually leads to reduced processing time.

Article History: Received February 16th 2018; Received in revised form May 5th 2018; Accepted May 27th 2018; Available online.

How to Cite This Article: Salami, A.A., Ajavon, A.S.A., Kodjo, M.K., Ouedraogo, S. and Bédja, K. (2018) The Use of Odd and Even Class Wind Speed Time Series of Distribution Histogram to Estimate Weibull Parameters. Int. Journal of Renewable Energy Development 7(2), 139-150. https://doi.org/10.14710/ijred.7.2.139-150
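As one concrete instance of the estimators listed above, the empirical method of Justus (EMJ) recovers the Weibull shape k and scale c from just the sample mean and standard deviation of the wind speeds. A minimal sketch, with simulated data whose true parameters are purely illustrative:

```python
import numpy as np
from scipy.special import gamma

def weibull_justus(v):
    """Empirical method of Justus (EMJ): estimate Weibull shape k and scale c."""
    mean, std = v.mean(), v.std(ddof=1)
    k = (std / mean) ** -1.086          # shape, from the coefficient of variation
    c = mean / gamma(1 + 1 / k)         # scale, from the Weibull mean formula
    return k, c

rng = np.random.default_rng(1)
v = 6.0 * rng.weibull(2.0, 10_000)      # synthetic wind speeds: true k = 2, c = 6 m/s
k, c = weibull_justus(v)
print(k, c)
```

Applying such an estimator separately to the even-bin and odd-bin subsets of the histogram, as the article proposes, halves the data handled per fit.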


2019 ◽  
pp. 019251211988473
Author(s):  
Seung-Whan Choi ◽  
Henry Noll

In this study, we argue that ethnic inclusiveness is an important democratic norm that fosters interstate peace. When two states are socialized into the notion of ethnic tolerance, they acquire the ability to reach cooperative arrangements in time of crisis. Based on cross-national time-series data analysis covering the period 1950–2001, we illustrate how two states that are inclusive of their politically relevant ethnic groups are less likely to experience interstate disputes than states that remain exclusive. This finding was robust, regardless of sample size, intensity of the dispute, model specification, or estimation method. Therefore, we believe in the existence of ethnic peace: ethnic inclusiveness represents an unambiguous force for democratic peace.


Author(s):  
Mihai Dupac ◽  
Dan B. Marghitu ◽  
David G. Beale

Abstract In this paper, a nonlinear dynamics analysis of simulated data was carried out to study the time evolution of an electromagnetically levitated flexible droplet. The main goals of this work are to study the behavior of the levitated droplet and to investigate its stability. Quantities characterizing time series data, such as the attractor dimension and the largest Lyapunov exponent, were computed.
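The largest Lyapunov exponent named above can be estimated from a scalar time series by tracking how initially close embedded states diverge, in the spirit of Rosenstein's method. This compact sketch makes several simplifying choices (fixed embedding, short fit window) and is only an illustration, verified here on the logistic map, not the droplet data:

```python
import numpy as np

def largest_lyapunov(x, dim=2, tau=1, theiler=5, fit_len=8):
    """Rough Rosenstein-style estimate of the largest Lyapunov exponent."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    i_idx, j_idx = np.indices(d.shape)
    d[np.abs(i_idx - j_idx) < theiler] = np.inf   # exclude temporal neighbours
    nn = d.argmin(axis=1)
    div = []
    for k in range(fit_len):
        dists = [np.linalg.norm(emb[i + k] - emb[nn[i] + k])
                 for i in range(n) if i + k < n and nn[i] + k < n]
        dists = [v for v in dists if v > 0]
        div.append(np.mean(np.log(dists)))
    # Slope of mean log-divergence versus time approximates the exponent.
    return np.polyfit(np.arange(fit_len), div, 1)[0]

# Logistic map at r = 4: known exponent ln 2 ≈ 0.693 per step.
x = np.empty(1000)
x[0] = 0.3
for t in range(999):
    x[t + 1] = 4 * x[t] * (1 - x[t])
lam = largest_lyapunov(x)
print(lam)
```

A positive estimated exponent is the usual operational signature of chaotic, rather than merely noisy, dynamics.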


Stats ◽  
2019 ◽  
Vol 2 (1) ◽  
pp. 55-69 ◽  
Author(s):  
Gen Sakoda ◽  
Hideki Takayasu ◽  
Misako Takayasu

We propose a parameter estimation method for non-stationary Poisson time series with abnormal fluctuation scaling, known as Taylor’s law. By introducing the effect of Taylor’s fluctuation scaling into the State Space Model with the Particle Filter, the underlying Poisson parameter’s time evolution is estimated correctly from non-stationary time series data with abnormally large fluctuations. We also developed a discontinuity detection method that enables tracking the Poisson parameter even for time series with sudden discontinuous jumps. As an example application of this general method, we analyzed Point-of-Sales data from convenience stores to estimate the change in the probability of purchase of commodities under a fluctuating number of potential customers. The effectiveness of our method for Poisson time series with non-stationarity, large discontinuities and Taylor’s fluctuation scaling is verified on artificial and actual time series.
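The particle-filter component of this setup can be illustrated with a plain bootstrap filter tracking a latent Poisson intensity that follows a log-scale random walk. This sketch omits the Taylor's-law variance correction and the discontinuity detector described above; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def particle_filter_poisson(y, n_particles=2000, sigma=0.05):
    """Bootstrap particle filter: latent log-intensity follows a random walk."""
    log_lam = np.full(n_particles, np.log(max(y[0], 1.0)))
    est = []
    for obs in y:
        log_lam = log_lam + rng.normal(0, sigma, n_particles)  # propagate
        lam = np.exp(log_lam)
        logw = obs * log_lam - lam          # Poisson log-likelihood (up to const)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est.append(np.sum(w * lam))          # weighted posterior-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)        # resample
        log_lam = log_lam[idx]
    return np.array(est)

# Intensity jumps from 20 to 40 halfway through the series.
true_lam = np.concatenate([np.full(100, 20.0), np.full(100, 40.0)])
y = rng.poisson(true_lam)
est = particle_filter_poisson(y)
print(est[90], est[-1])
```

With only the random-walk proposal, the filter lags behind a sudden jump for a few dozen steps, which is exactly the failure mode the paper's discontinuity detection method addresses.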


2021 ◽  
Author(s):  
Smita Deb ◽  
Sahil Sidheekh ◽  
Christopher F. Clements ◽  
Narayanan C. Krishnan ◽  
Partha S. Dutta

Abstract

1. Sudden transitions from one stable state to a contrasting state occur in complex systems ranging from the collapse of ecological populations to climatic change, and much recent work has sought to develop methods to predict these unexpected transitions from signals in time series data. However, previously developed methods vary widely in their reliability and fail to classify whether an approaching collapse might be catastrophic (and hard to reverse) or non-catastrophic (easier to reverse), with significant implications for how such systems are managed.

2. Here we develop a novel detection method, using simulated outcomes from a range of simple mathematical models with varying nonlinearity to train a deep neural network to detect critical transitions - the Early Warning Signal Network (EWSNet).

3. We demonstrate that this neural network (EWSNet), trained on simulated data with minimal assumptions about the underlying structure of the system, can predict with high reliability observed real-world transitions in ecological and climatological data. Importantly, our model appears to capture latent properties in time series missed by previous warning-signal approaches, allowing us not only to detect that a transition is approaching but, critically, whether the collapse will be catastrophic or non-catastrophic.

4. The EWSNet can flag a critical transition with unprecedented accuracy, overcoming some of the major limitations of traditional methods based on phenomena such as Critical Slowing Down. These novel properties mean EWSNet has the potential to serve as a universal indicator of transitions across a broad spectrum of complex systems, without requiring information on the structure of the system being monitored. Our work highlights the practicality of deep learning for addressing further questions pertaining to ecosystem collapse and has much broader management implications.
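For contrast with EWSNet, the traditional Critical Slowing Down indicators mentioned above amount to watching rolling variance and lag-1 autocorrelation rise as a system loses resilience. A minimal sketch on a synthetic AR(1) series whose stability erodes over time (all parameters are illustrative, and this is the baseline the paper improves on, not EWSNet itself):

```python
import numpy as np

def csd_indicators(x, window=50):
    """Rolling variance and lag-1 autocorrelation: classic CSD warning signals."""
    var, ac1 = [], []
    for t in range(window, len(x)):
        w = x[t - window : t]
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# AR(1) whose coefficient ramps toward 1 mimics an approaching transition.
rng = np.random.default_rng(3)
phi = np.linspace(0.2, 0.97, 600)
x = np.zeros(600)
for t in range(1, 600):
    x[t] = phi[t] * x[t - 1] + rng.normal()
var, ac1 = csd_indicators(x)
print(var[-1] > var[0], ac1[-1] > ac1[0])
```

Both indicators trend upward here, but as the abstract notes, such signals say nothing about whether the coming transition is catastrophic or reversible.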


2016 ◽  
Vol 12 (3) ◽  
pp. 292-311 ◽  
Author(s):  
Shuhei Yamamoto ◽  
Kei Wakabayashi ◽  
Noriko Kando ◽  
Tetsuji Satoh

Purpose Many Twitter users post tweets that are related to their particular interests. Users can also collect information by following other users. One approach clarifies user interests by tagging labels based on the users. A user tagging method is important for discovering candidate users with similar interests. This paper aims to propose a new user tagging method that uses the time series of users’ tweet counts. Design/methodology/approach Our hypothesis focuses on the relationship between a user’s interests and the posting times of tweets: because users have interests, they will post more tweets when related events occur than at other times. The authors assume that hashtags are labels tagged to users and observe their occurrence counts in each timestamp. The authors extract burst timestamps using Kleinberg’s burst enumeration algorithm and estimate the burst levels. The authors treat the burst levels as term frequencies in documents and calculate scores using typical methods such as cosine similarity, Naïve Bayes, and term frequency–inverse document frequency (TF-IDF). Findings Experimental evaluations demonstrate the high efficiency of the tagging method. Naïve Bayes and cosine similarity are particularly suitable for the user tagging and tag score calculation tasks, respectively. Users whose hashtags were appropriately estimated by our methods tended to have a higher maximum number of tweets than other users. Originality/value Many approaches estimate user interest based on the terms in tweets and apply graph structures such as follower networks. The authors propose a new estimation method that uses the time series of tweet counts. The merits of estimating user interest from these time series are that the method does not depend on language and can decrease calculation costs compared with the above-mentioned approaches, because the number of features is smaller.
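The scoring step described above, treating burst levels as term frequencies, can be sketched with plain TF-IDF plus cosine similarity. The sketch skips Kleinberg's burst enumeration itself and starts from a hypothetical burst-level matrix; every number below is made up for illustration:

```python
import numpy as np

def tfidf(burst_levels):
    """burst_levels: users x timestamps matrix of burst levels, used as term counts."""
    tf = burst_levels / np.maximum(burst_levels.sum(axis=1, keepdims=True), 1)
    df = (burst_levels > 0).sum(axis=0)                 # timestamps' document frequency
    idf = np.log(burst_levels.shape[0] / np.maximum(df, 1))
    return tf * idf

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical burst levels for three users over six timestamps.
B = np.array([[0, 3, 0, 2, 0, 0],
              [0, 2, 0, 3, 0, 0],
              [1, 0, 2, 0, 0, 1]], dtype=float)
W = tfidf(B)
# Users 0 and 1 burst at the same timestamps, so they score as similar.
print(cosine(W[0], W[1]) > cosine(W[0], W[2]))
```

Because the features are per-timestamp counts rather than words, the same pipeline works unchanged across languages, which is the cost and portability advantage the authors highlight.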


Author(s):  
Hamda B. Ajmal ◽  
Michael G. Madden

Abstract

Over a decade ago, Lèbre (2009) proposed an inference method, G1DBN, to learn the structure of gene regulatory networks (GRNs) from high-dimensional, sparse time-series gene expression data. The approach is based on the concept of low-order conditional independence graphs, which it extends to dynamic Bayesian networks (DBNs). Lèbre presented results to demonstrate that the method yields better structural accuracy than the related Lasso and Shrinkage methods, particularly where the data is sparse, that is, where the number of time measurements $n$ is much smaller than the number of genes $p$. This paper challenges these claims using a careful experimental analysis, showing that the GRNs reverse-engineered from time-series data using the G1DBN approach are less accurate than claimed by Lèbre (2009). We also show that the Lasso method yields higher structural accuracy for graphs learned from the simulated data than the G1DBN method, particularly when the data is sparse ($n \ll p$). The Lasso method is also better than G1DBN at identifying the transcription factors (TFs) involved in the cell cycle of Saccharomyces cerevisiae.
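The Lasso approach compared above can be sketched per target gene: regress each gene's expression at time t+1 on all genes' expression at time t with an L1 penalty, and read regulatory edges off the nonzero coefficients. The sketch uses a hand-rolled coordinate-descent Lasso on toy data; the network, sample sizes and penalty value are all illustrative:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent Lasso (columns of X assumed standardized)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]       # partial residual
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).sum() / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0) / z  # soft threshold
    return beta

# Toy dynamic network: gene 0 at time t+1 is driven only by genes 1 and 2 at time t.
rng = np.random.default_rng(4)
E = rng.normal(size=(50, 5))                                    # expression at t (n = 50 < realistic p)
y = 0.8 * E[:, 1] - 0.6 * E[:, 2] + 0.1 * rng.normal(size=50)   # gene 0 at t+1
beta = lasso_cd((E - E.mean(0)) / E.std(0), y - y.mean(), lam=0.1)
nz = set(np.nonzero(np.abs(beta) > 0.05)[0].tolist())
print(nz)
```

The L1 penalty zeroes out the spurious parents, which is the sparsity behavior underlying the paper's structural-accuracy comparison against G1DBN.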


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Taylor Chomiak ◽  
Neilen P. Rasiah ◽  
Leonardo A. Molina ◽  
Bin Hu ◽  
Jaideep S. Bains ◽  
...  

Abstract

Here we introduce Local Topological Recurrence Analysis (LoTRA), a simple computational approach for analyzing time-series data. Its versatility is elucidated using simulated data, Parkinsonian gait, and in vivo brain dynamics. We also show that this algorithm can be used to build a remarkably simple machine-learning model capable of outperforming deep-learning models in detecting Parkinson’s disease from a single digital handwriting test.


Information ◽  
2021 ◽  
Vol 12 (9) ◽  
pp. 341
Author(s):  
Hyuk-Rok Kwon ◽  
Pan-Koo Kim

With the expansion of advanced metering infrastructure (AMI) installations, various additional services using AMI data have emerged. However, some data are lost in the communication process of data collection; hence, the missing data must be estimated. To estimate the missing values in the time-series data generated by smart meters, we investigated four methods, ranging from a conventional method to an estimation method applying long short-term memory (LSTM), which exhibits excellent performance in the time-series field, and we provide performance comparison data. Furthermore, because the task is to estimate values missing from the middle of a series rather than ordinary time-series forecasting, naive estimation may yield an accumulated power usage over the missing interval that exceeds the real accumulated usage recorded after the interval ends. Therefore, this study proposes a hybrid method that combines the advantages of the linear interpolation method and an LSTM-based estimation and compensation method, rather than the conventional methods adopted in the time-series field. The performance of the proposed method is more stable and better than that of the other methods.
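The compensation idea in the hybrid method, keeping filled-in values consistent with the usage actually accumulated over the gap, can be sketched independently of the LSTM: take any per-interval estimates and rescale them so they sum to the gap total known from the cumulative meter readings. All numbers below are illustrative:

```python
import numpy as np

def fill_gap(per_interval_est, total_known):
    """Scale per-interval usage estimates so they sum to the known gap total."""
    est = np.asarray(per_interval_est, dtype=float)
    s = est.sum()
    if s > 0:
        return est * (total_known / s)
    # Degenerate case: no shape information, spread the total evenly.
    return np.full_like(est, total_known / len(est))

# A 4-interval gap in cumulative meter readings; 120 kWh was used in total.
est = [10.0, 14.0, 9.0, 7.0]       # a model's raw per-interval estimates (sum 40)
filled = fill_gap(est, 120.0)
print(filled, filled.sum())
```

The model supplies the shape of consumption within the gap, while the cumulative readings pin down the total, preventing the over-accumulation error described above.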

