Optimal Neighborhood Selection for AR-ARCH Random Fields with Application to Mortality

Stats ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 26-51
Author(s):  
Paul Doukhan ◽  
Joseph Rynkiewicz ◽  
Yahia Salhi

This article proposes an optimal and robust methodology for model selection. The model of interest is a parsimonious alternative framework, introduced recently in the literature, for modeling the stochastic dynamics of mortality improvement rates. The approach models mortality improvements using a random field specification with a given causal structure instead of the commonly used factor-based decomposition framework. It captures some well-documented stylized facts of mortality behavior, including dependencies among adjacent cohorts, cohort effects, cross-generation correlations, and the conditional heteroskedasticity of mortality. This class of models generalizes the now widely used AR-ARCH models for univariate processes. Although the framework is general, only a simple variant, the three-level memory model, has been investigated and illustrated, and it is not clear which parameterization is best for specific mortality applications. In this paper, we investigate the optimal model choice and parameter selection among candidate models. More formally, we propose a methodology, well suited to such a random field, that selects the best model in the sense that the model is not only correct but also the most economical among all the correct models. Formally, we show that a criterion based on a penalization of the log-likelihood, e.g., the Bayesian Information Criterion, is consistent. Finally, we assess the methodology on Monte Carlo experiments as well as real-world datasets.
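The penalized-likelihood criterion can be illustrated in a much simpler setting than the paper's random field. The sketch below is a toy, not the authors' estimator: it fits univariate AR(p) candidates by OLS and picks the order minimizing the Gaussian BIC, -2 log L + k log n, on a simulated AR(1) series (all parameter values are hypothetical).

```python
import numpy as np

def fit_ar_ols(x, p):
    """Fit an AR(p) model by OLS; return residual variance and parameter count."""
    n = len(x)
    X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])  # lagged regressors
    X = np.column_stack([np.ones(n - p), X])                          # intercept
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)          # ML estimate of innovation variance
    return sigma2, p + 2                      # p AR coefficients + intercept + variance

def bic(x, p):
    """Gaussian BIC = -2 log L + k log n for an AR(p) fit."""
    sigma2, k = fit_ar_ols(x, p)
    m = len(x) - p
    loglik = -0.5 * m * (np.log(2 * np.pi * sigma2) + 1)  # concentrated log-likelihood
    return -2 * loglik + k * np.log(m)

rng = np.random.default_rng(0)
# Simulate an AR(1) series; BIC should prefer a small, correct order.
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

best_p = min(range(1, 6), key=lambda p: bic(x, p))
```

Consistency here means that, as the sample grows, the BIC-minimizing order settles on the true (most economical correct) model, which is the property the paper establishes for its random-field criterion.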

2019 ◽  
Vol 73 (10) ◽  
pp. 971-974 ◽  
Author(s):  
Lynda Fenton ◽  
Grant MA Wyper ◽  
Gerry McCartney ◽  
Jon Minton

Background: Gains in life expectancies have stalled in Scotland, as in several other countries, since around 2012. The relationship between stalling mortality improvements and socioeconomic inequalities in health is unclear.
Methods: We calculate the difference, as percentage change, in all-cause, all-age, age-standardised mortality rates (ASMR) between 2006 and 2011 (period 1) and between 2012 and 2017 (period 2), for Scotland overall, by sex, and by Scottish Index of Multiple Deprivation (SIMD) quintile. Linear regression is used to summarise the relationship between SIMD quintile and mortality rate change in each period.
Results: Between 2006 and 2011, the overall ASMR fell by 10.6% (138/100 000), by 10.1% in women, and 11.8% in men, but between 2012 and 2017 the overall ASMR fell by only 2.6% (30/100 000), by 3.5% in women, and by 2.0% in men. Within the most deprived quintile, the overall ASMR fell by 8.6% (143/100 000) from 2006 to 2011 (7.2% in women; 9.8% in men), but rose by 1.5% (21/100 000) from 2012 to 2017 (0.7% in women; 2.1% in men). The socioeconomic gradient in ASMR improvement more than quadrupled, from 0.4% per quintile in period 1 to 1.7% per quintile in period 2.
Conclusion: From 2012 to 2017, socioeconomic gradients in mortality improvement in Scotland were markedly steeper than over the preceding 6 years. As a result, there has not only been a slowdown in overall reductions in mortality, but a widening of socioeconomic mortality inequalities.
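The headline gradient is simply the slope of a least-squares fit of ASMR improvement on SIMD quintile. A minimal sketch, using hypothetical per-quintile values chosen to be consistent with the reported figures for 2012-2017 (most deprived quintile worsened by 1.5%; gradient 1.7% per quintile); the intermediate quintile values are illustrative, not from the paper.

```python
import numpy as np

# Quintile 1 = most deprived. Improvements (%) in ASMR over 2012-2017;
# values are hypothetical but match the reported endpoints and gradient.
quintile = np.array([1, 2, 3, 4, 5])
improvement = np.array([-1.5, 0.2, 1.9, 3.6, 5.3])

# The socioeconomic gradient is the slope of a linear regression of
# improvement on quintile.
slope, intercept = np.polyfit(quintile, improvement, 1)
```

With these inputs the fitted slope is 1.7% per quintile, matching the period-2 gradient the paper reports.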


Molecules ◽  
2018 ◽  
Vol 23 (7) ◽  
pp. 1729
Author(s):  
Yinghan Hong ◽  
Zhifeng Hao ◽  
Guizhen Mai ◽  
Han Huang ◽  
Arun Kumar Sangaiah

Exploring and detecting causal relations among variables has shown huge practical value in recent years, with numerous opportunities for scientific discovery, and is commonly seen as a core task of data science. Among possible causal discovery methods, constraint-based causal discovery can recover causal structures from passive observational data in general cases and has shown extensive promise in numerous real-world applications. However, when the graph is sufficiently large, it does not work well. To alleviate this problem, an improved causal structure learning algorithm combining K2 with brain storm optimization (K2-BSO) is presented in this paper. Here, BSO is used to search the optimal topological order of the nodes instead of the graph space. This paper assumes that the dataset is generated according to a causal diagram in which each variable is generated from its parents via a causal mechanism. We design an elaborate distance function for the clustering step in BSO according to the mechanism of K2. The graph space is therefore reduced to a smaller topological order space, and the order space can be further reduced by an efficient clustering method. Experimental results on various real-world datasets show that our method outperforms traditional search-and-score methods and state-of-the-art genetic-algorithm-based methods.
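The K2 component can be made concrete with the standard Cooper-Herskovits K2 score, which a K2-style search evaluates for each node along a candidate topological order. The sketch below is a generic illustration of that score, not the paper's BSO search or distance function; the demo data are synthetic.

```python
import numpy as np
from math import lgamma
from itertools import product

def k2_log_score(data, child, parents, arity):
    """Log K2 score (Cooper & Herskovits) of one node given a candidate parent set.

    data: (n_samples, n_vars) integer array; arity[i]: number of states of variable i.
    Score per parent configuration j: log[(r-1)! / (N_j + r - 1)!] + sum_k log(N_jk!),
    written with log-gamma for numerical stability.
    """
    r = arity[child]
    score = 0.0
    parent_states = [range(arity[p]) for p in parents]
    for config in product(*parent_states):  # one empty config if parents == []
        mask = np.ones(len(data), dtype=bool)
        for p, s in zip(parents, config):
            mask &= data[:, p] == s
        n_j = int(mask.sum())
        score += lgamma(r) - lgamma(n_j + r)
        for k in range(r):
            n_jk = int((data[mask, child] == k).sum())
            score += lgamma(n_jk + 1)
    return score

# Synthetic check: X0 strongly causes X1, so scoring X1 with parent {X0}
# should beat scoring it with no parents.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 500)
x1 = np.where(rng.random(500) < 0.1, 1 - x0, x0)  # X1 = X0 with 10% noise
data = np.column_stack([x0, x1])
s_with = k2_log_score(data, child=1, parents=[0], arity=[2, 2])
s_without = k2_log_score(data, child=1, parents=[], arity=[2, 2])
```

A search over topological orders, as in K2-BSO, only ever needs such per-node scores: for each node it greedily adds the best-scoring parents from among its predecessors in the order.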


1997 ◽  
Vol 9 (8) ◽  
pp. 1627-1660 ◽  
Author(s):  
Song Chun Zhu ◽  
Ying Nian Wu ◽  
David Mumford

This article proposes a general theory and methodology, called the minimax entropy principle, for building statistical models for images (or signals) in a variety of applications. This principle consists of two parts. The first is the maximum entropy principle for feature binding (or fusion): for a given set of observed feature statistics, a distribution can be built to bind these feature statistics together by maximizing the entropy over all distributions that reproduce them. The second part is the minimum entropy principle for feature selection: among all plausible sets of feature statistics, we choose the set whose maximum entropy distribution has the minimum entropy. Computational and inferential issues in both parts are addressed; in particular, a feature pursuit procedure is proposed for approximately selecting the optimal set of features. The minimax entropy principle is then corrected by considering the sample variation in the observed feature statistics, and an information criterion for feature pursuit is derived. The minimax entropy principle is applied to texture modeling, where a novel Markov random field (MRF) model, called FRAME (filter, random field, and minimax entropy), is derived, and encouraging results are obtained in experiments on a variety of texture images. The relationship between our theory and the mechanisms of neural computation is also discussed.


Author(s):  
Yusuke Iwasawa ◽  
Kei Akuzawa ◽  
Yutaka Matsuo

Adversarial invariance induction (AII) is a generic and powerful framework for enforcing invariance to nuisance attributes in neural network representations. However, its optimization is often unstable, and little is known about its practical behavior. This paper presents an analysis of the reasons for the optimization difficulties and provides a better optimization procedure by rethinking AII from a divergence minimization perspective. Interestingly, this perspective indicates a cause of the optimization difficulties: AII does not ensure proper divergence minimization, which is a requirement of the invariant representations. We then propose a simple variant of AII, called invariance induction by discriminator matching, which takes into account the divergence minimization interpretation of the invariant representations. Our method consistently achieves near-optimal invariance in toy datasets with various configurations in which the original AII is catastrophically unstable. Extensive experiments on four real-world datasets also support the superior performance of the proposed method, leading to improved user anonymization and domain generalization.


2020 ◽  
Author(s):  
Pedro V. B. Jeronymo ◽  
Carlos D. Maciel

Faster feature selection algorithms become a necessity as Big Data dictates the zeitgeist. An important class of feature selectors are Markov blanket (MB) learning algorithms: causal discovery algorithms that learn the local causal structure of a target variable. A common assumption in their theoretical basis, yet one often violated in practice, is causal sufficiency: the requirement that all common causes of the measured variables in the dataset are also in the dataset. Recently, Yu et al. (2018) proposed the M3B algorithm, the first to directly learn the MB without demanding causal sufficiency. The main drawback of M3B is that it is time inefficient, being intractable for high-dimensional inputs. In this paper, we derive the Fast Markov Blanket Discovery Algorithm (FMMB). Empirical results comparing FMMB to M3B on the structural learning task show that FMMB outperforms M3B in terms of time efficiency while preserving structural accuracy. Five real-world datasets were used to contrast both algorithms as feature selectors; applying NB and SVM classifiers, FMMB achieved a competitive outcome. This method mitigates the curse of dimensionality and inspires the development of local-to-global algorithms.


2011 ◽  
Vol 07 (02) ◽  
pp. 347-361 ◽  
Author(s):  
MARINHO G. ANDRADE ◽  
SANDRA C. OLIVEIRA

The purpose of this study is to address the inference problem for the parameters of autoregressive conditional heteroscedasticity (ARCH) models. Specifically, we present a comparison of two approaches for ARCH models, Bayesian and maximum likelihood (ML), and the specific mathematical and algorithmic formulations of each. In the ML estimation, we obtain confidence intervals by using the bootstrap simulation technique. In the Bayesian estimation, we present a reparametrization of the model that allows us to apply normal prior densities to the transformed parameters. The posterior estimates are obtained using Markov chain Monte Carlo (MCMC) methods. The methodology is exemplified with two Brazilian financial time series: the Bovespa Stock Index (IBovespa) and the Telebrás series. The order of each ARCH model is selected using the Bayesian Information Criterion (BIC).
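The ML side of such a comparison can be sketched compactly. The code below is an illustrative toy, not the authors' implementation: it simulates an ARCH(1) series with hypothetical parameters and maximizes the conditional Gaussian likelihood numerically.

```python
import numpy as np
from scipy.optimize import minimize

def arch1_neg_loglik(theta, y):
    """Negative conditional Gaussian log-likelihood of an ARCH(1) model:
    y_t = e_t * sqrt(h_t),  h_t = a0 + a1 * y_{t-1}^2, e_t ~ N(0, 1)."""
    a0, a1 = theta
    h = a0 + a1 * y[:-1] ** 2
    return 0.5 * np.sum(np.log(2 * np.pi * h) + y[1:] ** 2 / h)

rng = np.random.default_rng(1)
# Simulate an ARCH(1) series with (hypothetical) a0 = 0.5, a1 = 0.4.
n, a0_true, a1_true = 2000, 0.5, 0.4
y = np.zeros(n)
for t in range(1, n):
    y[t] = rng.standard_normal() * np.sqrt(a0_true + a1_true * y[t - 1] ** 2)

# Constrained ML: a0 > 0 and 0 <= a1 < 1 keep the conditional variance valid.
res = minimize(arch1_neg_loglik, x0=[0.1, 0.1], args=(y,),
               bounds=[(1e-6, None), (1e-6, 1 - 1e-6)], method="L-BFGS-B")
a0_hat, a1_hat = res.x
```

The paper's two extensions would sit on top of this step: bootstrap resampling of fitted residuals to build ML confidence intervals, and, on the Bayesian side, a reparametrization with normal priors sampled by MCMC.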


2019 ◽  
Author(s):  
Lynda Fenton ◽  
Grant Wyper ◽  
Gerry McCartney ◽  
Jon Minton

Structured Abstract
Background: Gains in life expectancies have stalled in Scotland, as in several other countries, since around 2012. The relationship between stalling mortality improvements and socioeconomic inequalities in health is unclear.
Methods: We calculate the percentage improvement in age-standardised mortality rates (ASMR) in Scotland overall, by sex, and by Scottish Index of Multiple Deprivation (SIMD) quintile and gender, for two periods: 2006-2011 and 2012-2017. We then calculate the socioeconomic gradient in improvements for both periods.
Results: Between 2006 and 2011, ASMRs fell by 10.6% (10.1% in females; 11.8% in males), but between 2012 and 2017 ASMRs only fell by 2.6% (3.5% in females; 2.0% in males). The socioeconomic gradient in ASMR improvement more than quadrupled, from 0.4% per quintile in 2006-2011 (0.7% in females; 0.6% in males) to 1.7% (2.0% in females; 1.4% in males). Within the most deprived quintile, ASMRs fell in the 2006-2011 period (8.6% overall; 7.2% in females; 9.8% in males), but rose in the 2012-2017 period (by 1.5% overall; 0.7% in females; 2.1% in males).
Conclusion: As mortality improvements in Scotland stalled in 2012-2017, socioeconomic gradients in mortality became steeper, with increased mortality rates over this period in the most socioeconomically deprived fifth of the population.
What we already know:
- Improvements in mortality rates slowed markedly around 2012 in Scotland and a number of other high-income countries.
- Scotland has large socioeconomic health inequalities, and the absolute gap in premature mortality between most and least deprived has increased since 2013.
- The relationship between stalling mortality improvements and socioeconomic inequalities in health is unclear.
What this study adds:
- Stalling in mortality improvement has occurred across the whole population of Scotland, but is most acute in the most socioeconomically deprived areas.
- Mortality improvements went into reverse (i.e. deteriorated) in the most deprived fifth of areas between 2012 and 2017.
- Research to further characterise and explain recent aggregate trends should incorporate consideration of the importance of socioeconomic inequalities within proposed explanations.


2018 ◽  
Vol 22 (5) ◽  
Author(s):  
Thomas Chuffart ◽  
Emmanuel Flachaire ◽  
Anne Péguin-Feissolle

In this article, a misspecification test in conditional volatility and GARCH-type models is presented. We propose a Lagrange Multiplier type test based on a Taylor expansion to distinguish between (G)ARCH models and unknown GARCH-type models. This new test can be seen as a general misspecification test of a large set of GARCH-type univariate models. It focuses on the short-term component of the volatility. We investigate the size and the power of this test through Monte Carlo experiments and we compare it to two other standard Lagrange Multiplier tests, which are more restrictive. We show the usefulness of our test with an illustrative empirical example based on daily exchange rate returns.
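For context on what an LM test of this family looks like, the classic (and more restrictive) relative is Engle's LM test for ARCH effects: regress squared residuals on their own lags and use LM = nR², asymptotically chi-squared with q degrees of freedom under the null of no ARCH. The sketch below implements that textbook test on simulated data; it is illustrative only, not the authors' Taylor-expansion test.

```python
import numpy as np

def arch_lm_test(resid, q=4):
    """Engle's LM test for ARCH effects: regress e_t^2 on q of its own lags;
    under H0 (no ARCH), LM = n * R^2 ~ chi-squared(q) asymptotically."""
    e2 = resid ** 2
    n = len(e2) - q
    X = np.column_stack([np.ones(n)] +
                        [e2[q - k - 1:len(e2) - k - 1] for k in range(q)])
    y = e2[q:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = np.sum((y - X @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return n * (1 - ss_res / ss_tot)

rng = np.random.default_rng(2)
iid = rng.standard_normal(1000)   # no ARCH: LM should look like chi2(4) draws
arch = np.zeros(1000)             # strong ARCH(1): LM should be very large
for t in range(1, 1000):
    arch[t] = rng.standard_normal() * np.sqrt(0.2 + 0.7 * arch[t - 1] ** 2)

lm_iid = arch_lm_test(iid)
lm_arch = arch_lm_test(arch)
```

The 5% critical value of chi-squared with 4 degrees of freedom is about 9.49, so the test rejects the no-ARCH null for the second series but (typically) not the first.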


Author(s):  
Fei Yi ◽  
Zhiwen Yu ◽  
Fuzhen Zhuang ◽  
Bin Guo

Crime prediction has always been a crucial issue for public safety, and recent works have shown the effectiveness of taking spatial correlation, such as region similarity or interaction, into account for fine-grained crime modeling. In our work, we seek to reveal the relationships across regions for crime prediction using a Continuous Conditional Random Field (CCRF). However, a conventional CCRF becomes impractical when facing a dense graph that considers all relationships between regions. To deal with this, we propose a Neural Network based CCRF (NN-CCRF) model that formulates the CCRF as an end-to-end neural network framework, which reduces the complexity of model training and improves overall performance. We integrate the CCRF with neural networks by introducing a Long Short-Term Memory (LSTM) component to learn the non-linear mapping from inputs to outputs of each region, and a modified Stacked Denoising AutoEncoder (SDAE) component for modeling pairwise interactions between regions. Experiments conducted on two different real-world datasets demonstrate the superiority of our proposed model over state-of-the-art methods.


Author(s):  
Wei Wenjuan ◽  
Feng Lu ◽  
Liu Chunchen

Prescriptive pricing is one of the most advanced pricing techniques; it derives the optimal price strategy to maximize future profit/revenue by carrying out a two-stage process: demand modeling and price optimization. Demand modeling tries to reveal price-demand laws by discovering causal relationships among demands, prices, and objective factors, and is the foundation of price optimization. Existing methods use either regression or causal learning to uncover price-demand relations, but suffer from pain points in either accuracy/efficiency or mixed-data-type processing, all of which are actual requirements in practical pricing scenarios. This paper proposes a novel demand modeling technique for practical usage. Concretely, we propose a new locally consistent information criterion named MIC, and derive MIC-based inference algorithms for an accurate recovery of causal structure on a mixed factor space. Experiments on simulated/real datasets show the superiority of our new approach in both price-demand law recovery and demand forecasting, as well as promising performance in supporting optimal pricing.

