On the Consistent Path Problem

2020 ◽ Vol 68 (6) ◽ pp. 1913-1931
Author(s): Leonardo Lozano, David Bergman, J. Cole Smith

This paper studies a novel decomposition scheme that uses decision diagrams to model elements of a problem for which typical linear relaxations fail to provide sufficiently tight bounds. Given a collection of decision diagrams, each representing a portion of the problem, together with linear inequalities modeling the remaining portions, how can one efficiently optimize over such a representation? We model this as a consistent path problem, in which a path must be identified in each diagram such that all paths agree on the value assignments to shared variables. We establish complexity results and propose a branch-and-cut framework for solving the decomposition. Through applications to binary cubic optimization and a variant of the market split problem, we show that the decomposition approach yields significant improvements over standard linear models.
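For intuition, here is a minimal sketch of the consistent path problem itself, solved by brute-force enumeration rather than the paper's branch-and-cut. The two toy "diagrams" (reduced here to an accept-check plus a path weight over their variables) and all weights are illustrative assumptions, not instances from the paper.

```python
# Brute-force the consistent path problem on two tiny diagrams over
# binary variables x1..x3; the diagrams share the variable x2.
from itertools import product

def diagram_a(x1, x2):
    """Paths of diagram A: feasible iff x1 + x2 <= 1, weight 3*x1 + 2*x2."""
    return x1 + x2 <= 1, 3 * x1 + 2 * x2

def diagram_b(x2, x3):
    """Paths of diagram B: feasible iff x2 == x3, weight 4*x2 + x3."""
    return x2 == x3, 4 * x2 + x3

best = None
for x1, x2, x3 in product((0, 1), repeat=3):
    ok_a, w_a = diagram_a(x1, x2)        # a path in A fixes x1, x2
    ok_b, w_b = diagram_b(x2, x3)        # a path in B fixes x2, x3
    if ok_a and ok_b:                    # both paths exist and agree on x2
        if best is None or w_a + w_b > best[0]:
            best = (w_a + w_b, (x1, x2, x3))

print(best)  # best consistent value and assignment: (7, (0, 1, 1))
```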

2007 ◽ Vol 8 (5) ◽ pp. 449-464
Author(s): C. H. Son, T. A. Shethaji, C. J. Rutland, H. Barths, A. Lippert, ...

Three non-linear k-ε models were implemented in the multi-dimensional computational fluid dynamics code GMTEC in order to compare them with existing linear k-ε models, including renormalization group (RNG) variants. The primary focus of the present study is to evaluate the potential of these non-linear models in engineering applications such as the internal combustion engine. The square duct flow and the backward-facing step flow were chosen as two simple test cases for which experimental data are available for comparison. Successful simulations of these cases were followed by simulations of an engine-type intake flow to evaluate the performance of the non-linear models against experimental data, the standard linear k-ε model, and two RNG variants. All the non-linear models are found to improve on the standard linear model, but mostly in simple flows. For more complex flows, such as the engine-type case, only the cubic non-linear models appear to offer a modest improvement in the mean flow, with no improvement in the root-mean-square values. These improvements are overshadowed by the stiffness of the cubic models and their requirement for smaller time steps. The contributions of each non-linear term to the Reynolds stress tensor are analysed in detail in order to identify the different characteristics of the different non-linear models for engine intake flows.
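For reference, non-linear k-ε closures extend the linear Boussinesq stress-strain relation with terms that are quadratic and cubic in the mean strain and rotation rates. The schematic form below uses generic coefficients c_1..c_3; the actual coefficients and cubic terms differ between the specific models compared in the paper.

```latex
% Schematic non-linear eddy-viscosity closure (generic coefficients;
% S_{ij} = mean strain rate, \Omega_{ij} = mean rotation rate).
\begin{aligned}
\overline{u_i' u_j'} ={}& \tfrac{2}{3}k\,\delta_{ij} - 2\nu_t S_{ij}
   && \text{(linear, Boussinesq)} \\
 &+ c_1 \frac{\nu_t k}{\varepsilon}\Bigl(S_{ik}S_{kj} - \tfrac{1}{3}S_{kl}S_{kl}\,\delta_{ij}\Bigr)
  + c_2 \frac{\nu_t k}{\varepsilon}\bigl(\Omega_{ik}S_{kj} + \Omega_{jk}S_{ki}\bigr)
   && \text{(quadratic)} \\
 &+ c_3 \frac{\nu_t k}{\varepsilon}\Bigl(\Omega_{ik}\Omega_{jk} - \tfrac{1}{3}\Omega_{kl}\Omega_{kl}\,\delta_{ij}\Bigr)
  + \mathcal{O}\!\bigl(S^3, \Omega^3\bigr)
   && \text{(quadratic/cubic)}
\end{aligned}
```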


2013 ◽ Vol 22 (10) ◽ pp. 1340025
Author(s): Teng Wang, Lei Zhao, Zi-Yi Hu, Zheng Xie, Xin-An Wang

In this paper, a novel decomposition approach and a multiplier-free VLSI implementation of the chroma interpolator with extensive hardware reuse for H.264 encoders are proposed. First, the characteristics of chroma interpolation are analyzed to obtain an optimized decomposition scheme, with which the interpolation can be realized using arithmetic elements (AEs) composed solely of adders. Four types of AEs are developed, and a pipelined hardware design is proposed that performs the chroma interpolation with extensive hardware reuse. The proposed design was prototyped on a Xilinx Virtex-6 XC6VLX240T FPGA at a clock frequency of up to 245 MHz. It was also synthesized in SMIC 130 nm CMOS technology at a clock frequency of 200 MHz, which can support real-time HDTV applications with less hardware cost and lower power consumption.
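For orientation, H.264 chroma interpolation is the standard's eighth-sample bilinear filter. The sketch below shows how its small constant multiplies (weights 0..8) can be decomposed into shifts and adds, in the spirit of the adder-only AEs; the paper's exact AE structure and pipelining are not reproduced here.

```python
# H.264 chroma interpolation with the constant multiplies decomposed
# into shifts and adds (the shift/add expansions are one illustrative
# choice among several).

def mul_small(v, c):
    """Multiply v by a constant c in 0..8 using only shifts and adds."""
    table = {
        0: lambda v: 0,
        1: lambda v: v,
        2: lambda v: v << 1,
        3: lambda v: (v << 1) + v,
        4: lambda v: v << 2,
        5: lambda v: (v << 2) + v,
        6: lambda v: (v << 2) + (v << 1),
        7: lambda v: (v << 3) - v,
        8: lambda v: v << 3,
    }
    return table[c](v)

def chroma_interp(a, b, c, d, x_frac, y_frac):
    """Standard chroma prediction: ((8-dx)(8-dy)A + dx(8-dy)B +
    (8-dx)dy C + dx*dy*D + 32) >> 6, with dx, dy in 0..7."""
    acc  = mul_small(mul_small(a, 8 - x_frac), 8 - y_frac)
    acc += mul_small(mul_small(b, x_frac),     8 - y_frac)
    acc += mul_small(mul_small(c, 8 - x_frac), y_frac)
    acc += mul_small(mul_small(d, x_frac),     y_frac)
    return (acc + 32) >> 6

print(chroma_interp(100, 120, 110, 130, 3, 5))  # one interpolated sample: 114
```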


Author(s): Hao Zhang, Shuigeng Zhou, Chuanxu Yan, Jihong Guan, Xin Wang

This paper addresses two important issues in causality inference. One is how to reduce redundant conditional independence (CI) tests, which heavily impact the efficiency and accuracy of existing constraint-based methods. The other is how to construct the true causal graph from a set of Markov equivalence classes returned by these methods. For the first issue, we design a recursive decomposition approach in which the original data (a set of variables) is first decomposed into three small subsets, each of which is then recursively decomposed into three smaller subsets, until no subset can be decomposed further. Redundant CI tests can thus be reduced by inferring causality from these subsets. The advantage of this decomposition scheme is twofold: 1) it requires only low-order CI tests, and 2) it does not violate d-separation. The complete causal structure can therefore be reconstructed by merging all the partial results from the subsets. For the second issue, we employ a regression-based conditional independence test to check CIs in linear non-Gaussian additive noise cases, which can identify more causal directions via x − E(x|Z) ⊥ Z (or y − E(y|Z) ⊥ Z). Causal direction learning is therefore no longer limited by the number of returned V-structures and the consistent propagation. Extensive experiments show that the proposed method not only substantially reduces redundant CI tests but also effectively distinguishes the equivalence classes, and is thus superior to state-of-the-art constraint-based methods in causality inference.
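A minimal sketch of the regression-based CI idea in the linear non-Gaussian case: regress x on Z, then check whether the residual x − E(x|Z) is independent of Z. Independence is proxied here by correlating squared values, a crude stand-in for the stronger tests (e.g. kernel-based ones) such methods normally use; all data and thresholds are illustrative.

```python
import numpy as np

def residual(x, Z):
    """OLS residual x - E(x|Z) under a linear model (intercept added)."""
    Z1 = np.column_stack([Z, np.ones(len(Z))])
    beta, *_ = np.linalg.lstsq(Z1, x, rcond=None)
    return x - Z1 @ beta

def dependence(r, z):
    """Crude dependence score: |corr(r^2, z^2)| is ~0 under independence."""
    return abs(np.corrcoef(r ** 2, z ** 2)[0, 1])

# Toy data: z -> x with uniform (non-Gaussian) noise.
rng = np.random.default_rng(0)
z = rng.uniform(-2, 2, 5000)
x = 1.5 * z + rng.uniform(-1, 1, 5000)

print(dependence(residual(x, z[:, None]), z))  # small: x - E(x|z) indep. of z
print(dependence(residual(z, x[:, None]), x))  # larger: wrong direction
# The asymmetry orients the edge z -> x, beyond what V-structures alone give.
```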


1998 ◽ Vol 14 (6) ◽ pp. 701-743
Author(s): Frank Kleibergen, Herman K. van Dijk

Diffuse priors lead to pathological posterior behavior when used in Bayesian analyses of simultaneous equation models (SEMs). This results from the local nonidentification of certain parameters in SEMs. When this a priori known feature is not captured appropriately, it leads to an a posteriori favoring of certain specific parameter values that is a consequence not of strong data information but of local nonidentification. We show that a proper consistent Bayesian analysis of an SEM has to explicitly consider the reduced form of the SEM as a standard linear model on which nonlinear (reduced rank) restrictions are imposed, which result from a singular value decomposition. The priors/posteriors of the parameters of the SEM are therefore proportional to the priors/posteriors of the parameters of the linear model under the condition that the restrictions hold. This leads to a framework for constructing priors and posteriors for the parameters of SEMs. The framework is used to construct priors and posteriors for one-, two-, and three-structural-equation SEMs. These examples, together with a theorem showing that the reduced forms of SEMs accord with sets of reduced rank restrictions on standard linear models, show how Bayesian analyses of generally specified SEMs can be conducted.
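Schematically (with notation assumed here rather than taken from the paper), the construction treats the reduced form as a linear model whose coefficient matrix is rank-restricted through its singular value decomposition:

```latex
% Reduced form of the SEM as a standard linear model (notation assumed):
Y = X\Pi + V, \qquad
\Pi = U S V^{\prime}, \quad S = \operatorname{diag}(s_1,\dots,s_k),
\qquad s_{r+1} = \dots = s_k = 0,
% so the SEM priors/posteriors are the linear-model priors/posteriors
% conditioned on the trailing singular values being zero.
```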


Author(s): Félix Balazard

Genome-wide association studies (GWAS) have uncovered thousands of associations between genetic variants and diseases. Using the same datasets, prediction of disease risk can be attempted. Phase information is an important element of biological structure that has seldom been used in this setting. We propose a multi-step machine learning method that aims to exploit this information. Our method captures local interactions in short haplotypes and combines the results linearly. We show that it outperforms standard linear models on some GWAS datasets. However, a variation of our method that does not use phase information obtains similar performance. Regarding the missing heritability problem, we remark that interactions in short haplotypes contribute to additive heritability. Source code is available on GitHub at https://github.com/FelBalazard/Prediction-with-Haplotypes.
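A minimal sketch of the two-step scheme described above: fit local models on short haplotype windows, then combine their predictions linearly. The window size, models, and toy data are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_snps, window = 400, 60, 10
X = rng.integers(0, 2, size=(n, n_snps))        # phased haplotype alleles
y = (X[:, 3] & X[:, 7] | X[:, 42]).astype(int)  # toy local-interaction signal

# Step 1: one local model per haplotype window captures interactions
# between nearby variants.
windows = [slice(i, i + window) for i in range(0, n_snps, window)]
local = [RandomForestClassifier(n_estimators=50, random_state=0)
         .fit(X[:, w], y) for w in windows]

# Step 2: combine the local predicted risks with a linear model.
Z = np.column_stack([m.predict_proba(X[:, w])[:, 1]
                     for m, w in zip(local, windows)])
combiner = LogisticRegression().fit(Z, y)
# Training accuracy only (optimistic); a real evaluation would use
# held-out folds at both steps.
print("train accuracy:", combiner.score(Z, y))
```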


Author(s): Mahdi Ahmadi, Mohammad Haeri

In this paper, in order to control a nonlinear dynamic system via a multi-model controller, we propose a systematic approach to determine the nominal local linear models. These models are selected from the local model bank, resulting in a reduced set of nominal models that provides enough information to design a multi-model controller. To determine the initial local model bank, the gap metric is used so that the distance between two successive local models is smaller than a threshold value. Then, a systematic approach that aims to obtain a reduced nominal model bank is developed. Based on this approach, a binary gap matrix is first defined by combining the gap metric with stability information. Several rows of this matrix are then selected such that their sum is a vector with no zero entries, i.e., every local model is covered by at least one selected nominal model (see the sketch below). The proposed approach, together with a designed robust controller, is validated on a pH neutralization process, chosen for its highly nonlinear behavior.
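A hedged sketch of that selection step: build a binary matrix from pairwise gap metrics plus a stability check, then pick rows whose sum covers every column. The gap values, threshold, and greedy cover heuristic are illustrative assumptions; the paper's exact selection rule may differ.

```python
import numpy as np

def select_nominal(gap_matrix, stable, threshold=0.3):
    """gap_matrix[i, j]: gap metric between local models i and j.
    stable[i, j]: True if a controller for model i stabilizes model j."""
    B = (gap_matrix <= threshold) & stable          # binary gap matrix
    chosen, uncovered = [], set(range(B.shape[1]))
    while uncovered:                                # greedy set cover
        i = max(range(B.shape[0]),                  # row covering most
                key=lambda r: len(uncovered & set(np.flatnonzero(B[r]))))
        covered = uncovered & set(np.flatnonzero(B[i]))
        if not covered:
            raise ValueError("some local models cannot be covered")
        chosen.append(i)
        uncovered -= covered
    # The selected rows sum to a vector with no zero entries:
    # every local model is covered by some nominal model.
    return chosen

# Toy example with 4 local models (symmetric gap values, all stable).
g = np.array([[0, .1, .5, .6],
              [.1, 0, .2, .7],
              [.5, .2, 0, .25],
              [.6, .7, .25, 0]])
s = np.ones((4, 4), dtype=bool)
print(select_nominal(g, s))      # a small cover, e.g. [1, 2]
```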


2020 ◽ Vol 4 (Supplement_1) ◽ pp. 923-923
Author(s): Spencer Farrell, Arnold Mitnitski, Kenneth Rockwood, Andrew Rutenberg

We have built a computational model of individual aging trajectories of health and survival, containing physical, functional, and biological variables, conditioned on demographic, lifestyle, and medical background information. We combine techniques of modern machine learning with a network approach, in which the health variables are coupled by an interaction network within a stochastic dynamical system. The resulting model is scalable to large longitudinal data sets, is predictive of individual high-dimensional health trajectories and survival, and infers an interpretable network of interactions between the health variables. The interaction network lets us identify which interactions between variables the model uses, demonstrating that realistic physiological connections are inferred. We use English Longitudinal Study of Ageing (ELSA) data to train our model and show that it performs better than standard linear models for health outcomes and survival, while also revealing the relevant interactions. Our model can be used to generate synthetic individuals that age realistically from baseline input data, as well as to probe future aging outcomes given an arbitrary initial health state.
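A minimal sketch of the general ingredient described above, a network-coupled stochastic dynamical system over health variables; the interaction weights, relaxation dynamics, and noise scale are illustrative assumptions, not the parameterization learned from ELSA.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vars, T, dt = 5, 100, 0.1
W = rng.normal(0, 0.2, (n_vars, n_vars))   # interaction network (assumed)
np.fill_diagonal(W, 0)
baseline = np.zeros(n_vars)

h = rng.normal(0, 0.1, n_vars)             # initial health state
traj = [h.copy()]
for _ in range(T):
    # Each variable relaxes toward baseline, is pushed by its network
    # neighbours, and receives stochastic noise.
    drift = -(h - baseline) + W @ h
    h = h + drift * dt + rng.normal(0, np.sqrt(dt) * 0.05, n_vars)
    traj.append(h.copy())

traj = np.array(traj)                      # (T+1, n_vars) health trajectory
print(traj.shape, traj[-1])
```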


Author(s): Ali Zribi, Mohamed Chtourou, Mohamed Djemal

This paper proposes a novel gap metric based fuzzy decomposition approach that yields a reduced model bank providing enough information to design controllers. The approach first requires determining the model bank: the number of initial models is obtained via the fuzzy c-means (FCM) algorithm. Then, a gap metric based method that aims to obtain a reduced model bank is developed. Based on the reduced bank of linear models, a set of linear controllers is designed and combined into a global controller for setpoint tracking control.
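A minimal numpy sketch of the FCM step; the cluster count, fuzzifier m, and one-dimensional toy data are illustrative assumptions, and real use would cluster the scheduling variables of the plant.

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns cluster centers and memberships U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy scheduling variable with three operating regions.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(mu, 0.3, (50, 1)) for mu in (0.0, 2.0, 4.0)])
centers, U = fcm(X, c=3)
print(np.sort(centers.ravel()))    # roughly the three operating points
```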

