Solving #SAT and Bayesian Inference with Backtracking Search

2009 ◽  
Vol 34 ◽  
pp. 391-442 ◽  
Author(s):  
F. Bacchus ◽  
S. Dalmao ◽  
T. Pitassi

Inference in Bayes Nets (BAYES) is an important problem with numerous applications in probabilistic reasoning. Counting the number of satisfying assignments of a propositional formula (#SAT) is a closely related problem of fundamental theoretical importance. Both of these problems, and others, are members of the class of sum-of-products (SUMPROD) problems. In this paper we show that standard backtracking search, when augmented with a simple memoization scheme (caching), can solve any sum-of-products problem with time complexity that is at least as good as that of any other state-of-the-art exact algorithm, and that it can also achieve the best known time-space tradeoff. Furthermore, backtracking's ability to utilize more flexible variable orderings allows us to prove that it can achieve an exponential speedup over other standard algorithms for SUMPROD on some instances. The ideas presented here have been utilized in a number of solvers that have been applied to various types of sum-of-products problems. These systems exploit the fact that backtracking can naturally take advantage of more of the problem's structure to achieve improved performance on a range of problem instances. Empirical evidence of this performance gain has appeared in published works describing these solvers, and we provide references to these works.
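The core mechanism can be illustrated with a short sketch. The following is a minimal #SAT model counter (an illustration of the general technique, not the authors' solver): backtracking search that branches on a variable and memoizes the count of each residual subformula, so that identical subproblems reached along different search paths are counted only once.

```python
from functools import lru_cache

def count_models(clauses, variables):
    """#SAT by backtracking with caching.
    clauses: iterable of clauses, each a set of signed ints (3 means x3, -3 means NOT x3).
    variables: set of all variable indices in the formula."""

    @lru_cache(maxsize=None)
    def count(residual, free_vars):
        if frozenset() in residual:       # an empty clause: contradiction on this branch
            return 0
        if not residual:                  # all clauses satisfied: remaining vars are free
            return 2 ** len(free_vars)
        # Static choice of branching variable; real solvers use dynamic ordering
        # heuristics, which is exactly the flexibility the paper exploits.
        var = abs(next(iter(next(iter(residual)))))
        total = 0
        for literal in (var, -var):
            reduced = frozenset(
                frozenset(l for l in c if l != -literal)   # remove the falsified literal
                for c in residual if literal not in c      # drop satisfied clauses
            )
            total += count(reduced, free_vars - {var})
        return total

    return count(frozenset(frozenset(c) for c in clauses), frozenset(variables))

# (x1 OR x2) AND (NOT x1 OR x3) over {x1, x2, x3} has 4 satisfying assignments.
print(count_models([{1, 2}, {-1, 3}], {1, 2, 3}))   # -> 4
```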

2018 ◽  
Author(s):  
John-William Sidhom ◽  
Drew Pardoll ◽  
Alexander Baras

Motivation: The immune system has the potential to present a wide variety of peptides to itself as a means of surveillance for pathogenic invaders. This surveillance allows the immune system to detect peptides derived from bacterial, viral, and even oncologic sources. However, given the breadth of the epitope repertoire, investigators studying immune responses to these epitopes have relied on in-silico prediction algorithms to narrow down the list of candidate epitopes, and current methods still leave considerable room for improvement.
Results: We present Allele-Integrated MHC (AI-MHC), a deep learning architecture with improved performance over the current state-of-the-art algorithms in human Class I and Class II MHC binding prediction. Our architecture utilizes a convolutional neural network that improves prediction accuracy by 1) allowing one neural network to be trained on all peptides for all alleles of a given class of MHC molecules by making the allele an input to the net, and 2) introducing a global max pooling operation with an optimized kernel size that allows the architecture to achieve translational invariance in MHC-peptide binding analysis, making it suitable for sequence analytics where a frame of interest needs to be learned in a longer, variable-length sequence. We assess AI-MHC against internal independent test sets and compare it against all algorithms in the IEDB automated server benchmarks, demonstrating that our algorithm achieves state-of-the-art performance for both Class I and Class II prediction.
Availability and Implementation: AI-MHC can be used via web interface at baras.pathology.jhu.edu/[email protected]
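As a concrete (and assumed) reading of the two architectural ideas, the sketch below shows a small PyTorch model in which the allele is an embedded input alongside the peptide and a global max pool over the sequence axis provides translational invariance. It is not the authors' released implementation, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class AlleleIntegratedBinder(nn.Module):
    def __init__(self, n_alleles, n_amino_acids=21, embed_dim=16, n_filters=64, kernel_size=9):
        super().__init__()
        self.pep_embed = nn.Embedding(n_amino_acids, embed_dim, padding_idx=0)
        self.allele_embed = nn.Embedding(n_alleles, embed_dim)   # allele as a network input
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size, padding=kernel_size // 2)
        self.head = nn.Sequential(nn.Linear(n_filters + embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, peptide_tokens, allele_ids):
        # peptide_tokens: (batch, seq_len) integer-encoded residues, 0 = padding
        x = self.pep_embed(peptide_tokens).transpose(1, 2)      # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))                            # (batch, n_filters, seq_len)
        x = x.max(dim=2).values                                 # global max pool over the sequence
        a = self.allele_embed(allele_ids)                       # (batch, embed_dim)
        return torch.sigmoid(self.head(torch.cat([x, a], dim=1)))   # binding probability

# Example: score a batch of two 9-mers against two alleles (toy indices).
model = AlleleIntegratedBinder(n_alleles=100)
peptides = torch.randint(1, 21, (2, 9))
alleles = torch.tensor([3, 42])
print(model(peptides, alleles).shape)   # torch.Size([2, 1])
```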


2017 ◽  
Vol 108 (1) ◽  
pp. 307-318 ◽  
Author(s):  
Eleftherios Avramidis

A deeper analysis of Comparative Quality Estimation is presented by extending the state-of-the-art methods with adequacy and grammatical features from other Quality Estimation tasks. The previously used linear method, unable to cope with the augmented features, is replaced with a boosting classifier assisted by feature selection. The resulting methods show improved performance for 6 language pairs when applied to the output of MT systems developed over 7 years, and the improved models compete better with reference-aware metrics. Notable conclusions are reached by examining the contribution of the features to the models, as it is possible to identify common MT errors that are captured by the features. Many grammatical/fluency features contribute strongly, a few adequacy features contribute somewhat, whereas source complexity features are of no use. The importance of many fluency and adequacy features is language-specific.
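The modelling change described, replacing a linear model with a boosting classifier plus feature selection, can be sketched as follows; the pipeline, feature counts, and toy data are assumptions for illustration, not the paper's exact experimental setup.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 80))        # 80 candidate features per translation pair (toy data)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # "which output is better"

model = make_pipeline(
    SelectKBest(f_classif, k=20),     # keep the 20 most informative features
    GradientBoostingClassifier(n_estimators=200, learning_rate=0.05),
)
model.fit(X[:400], y[:400])
print("pairwise accuracy:", model.score(X[400:], y[400:]))

# Feature contributions can then be inspected via the booster's importances,
# mirroring the paper's analysis of which feature groups actually help.
selected = model.named_steps["selectkbest"].get_support(indices=True)
importances = model.named_steps["gradientboostingclassifier"].feature_importances_
print(sorted(zip(importances, selected), reverse=True)[:5])
```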


2022 ◽  
Vol 40 (2) ◽  
pp. 1-24
Author(s):  
Franco Maria Nardini ◽  
Roberto Trani ◽  
Rossano Venturini

Modern search services often provide multiple options to rank the search results, e.g., sort "by relevance", "by price", or "by discount" in e-commerce. While the traditional rank by relevance effectively places the relevant results in the top positions of the results list, the rank by attribute could place many marginally relevant results at the head of the results list, leading to a poor user experience. In the past, this issue has been addressed by investigating the relevance-aware filtering problem, which asks to select the subset of results maximizing the relevance of the attribute-sorted list. Recently, an exact algorithm has been proposed to solve this problem optimally. However, the high computational cost of the algorithm makes it impractical for the Web search scenario, which is characterized by huge lists of results and strict time constraints. For this reason, the problem is often solved using efficient yet inaccurate heuristic algorithms. In this article, we first prove performance bounds for the existing heuristics. We then propose two efficient and effective algorithms to solve the relevance-aware filtering problem. First, we propose OPT-Filtering, a novel exact algorithm that is faster than the existing state-of-the-art optimal algorithm. Second, we propose an approximate and even more efficient algorithm, ϵ-Filtering, which, given an allowed approximation error ϵ, finds a (1-ϵ)-optimal filtering, i.e., the relevance of its solution is at least (1-ϵ) times the optimum. We conduct a comprehensive evaluation of the two proposed algorithms against state-of-the-art competitors on two real-world public datasets. Experimental results show that OPT-Filtering achieves a significant speedup of up to two orders of magnitude with respect to the existing optimal solution, while ϵ-Filtering further improves this result by trading effectiveness for efficiency. In particular, the experiments show that ϵ-Filtering can achieve quasi-optimal solutions while being faster than all state-of-the-art competitors in most of the tested configurations.
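To make the problem concrete, the sketch below illustrates relevance-aware filtering on toy data, assuming a DCG-style list relevance measure and a naive threshold heuristic; the paper's exact objective and its OPT-Filtering and ϵ-Filtering algorithms are not reproduced here.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    return sum(rel / math.log2(pos + 2) for pos, rel in enumerate(relevances))

def filter_by_threshold(results, threshold):
    """Naive heuristic: keep only results whose relevance meets a threshold,
    then sort the survivors by the user-chosen attribute (here: price)."""
    kept = [r for r in results if r["relevance"] >= threshold]
    return sorted(kept, key=lambda r: r["price"])

# Toy catalogue: (relevance grade, price).
results = [{"relevance": rel, "price": price}
           for rel, price in [(3, 90), (0, 10), (2, 40), (1, 15), (3, 55)]]

full_list = sorted(results, key=lambda r: r["price"])
filtered = filter_by_threshold(results, threshold=2)
print(dcg([r["relevance"] for r in full_list]))   # lower: irrelevant cheap items sit on top
print(dcg([r["relevance"] for r in filtered]))    # higher after filtering
```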


Author(s):  
Vaishali S. Tidake ◽  
Shirish S. Sane

Feature similarity is typically used when the nearest neighbors of an example are to be explored. Examples in multi-label datasets are associated with multiple labels; hence, the use of label dissimilarity alongside feature similarity may reveal better neighbors. Information extracted from such neighbors is exploited by the devised MLFLD and MLFLD-MAXP algorithms. Among the three distance metrics used to compute label dissimilarity, Hamming distance showed the largest performance improvement and is hence used for further evaluation. The performance of the implemented algorithms is compared with the state-of-the-art MLkNN algorithm; they show an improvement for some datasets only. This chapter also introduces the parameters MLE and skew, which, along with an outlier parameter, help to analyze the multi-label and imbalanced nature of datasets. Investigation of the datasets with respect to these parameters, together with experimentation, revealed the need for data preprocessing to remove outliers. Doing so improved the performance of the implemented algorithms on all measures, and the effectiveness is empirically validated.
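The neighbor-search idea can be sketched as follows; this is one plausible reading of combining feature distance with label Hamming dissimilarity, with the weighting parameter and toy data being assumptions rather than the chapter's exact MLFLD formulation.

```python
import numpy as np

def combined_neighbors(X, Y, index, k=3, alpha=0.5):
    """Return indices of the k nearest neighbors of example `index`.
    X: (n, d) feature matrix; Y: (n, q) binary label matrix;
    alpha weights feature distance against label Hamming dissimilarity."""
    feat_d = np.linalg.norm(X - X[index], axis=1)
    feat_d = feat_d / (feat_d.max() or 1.0)        # normalize feature distance to [0, 1]
    label_d = np.mean(Y != Y[index], axis=1)       # Hamming dissimilarity, already in [0, 1]
    combined = alpha * feat_d + (1 - alpha) * label_d
    combined[index] = np.inf                       # exclude the example itself
    return np.argsort(combined)[:k]

# Toy multi-label data: 5 examples, 4 features, 3 labels.
X = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [5, 5, 0, 0], [0, 0, 1, 0], [5, 4, 0, 1]], float)
Y = np.array([[1, 0, 1], [1, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 0]])
print(combined_neighbors(X, Y, index=0))
```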


1971 ◽  
Vol 5 (2) ◽  
pp. 72-82
Author(s):  
Walter F. Weiker

In a previous article I sought to appraise the field of Turkish studies, for the most part among western (predominantly American) scholars (MESA Bulletin, Vol. 3, No. 3, October 15, 1969). To fill out the picture, it is appropriate to also view the state of social research among the rapidly growing body of Turkish teachers and researchers. This article is not, however, a direct parallel to others in the MESA “State of the Art” series, in that it is not basically bibliographical. Such a review would require far more time, space, and knowledge in depth of several other social science disciplines than is currently available to me, because despite the remarks made below about problems of definition, the quantity and technical sophistication of work by Turkish researchers is quite large and is growing rapidly. Furthermore, since most of the research referred to below is in Turkish, the number of persons to whom a bibliographic review might be useful is quite limited. Instead, I think it would be more interesting to MESA members and other American social scientists to examine the characteristics and problems of what is probably one of the most vigorous social science communities in the “developing” countries, with a view (among other things) to helping facilitate increased cooperation between Turkish and American scholars in our common endeavors of advancing the state of knowledge.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Jorge A. Soria-Alcaraz ◽  
Gabriela Ochoa ◽  
Andres Espinal ◽  
Marco A. Sotelo-Figueroa ◽  
Manuel Ornelas-Rodriguez ◽  
...  

Selection hyper-heuristics are generic search tools that dynamically choose, from a given pool, the most promising operator (low-level heuristic) to apply at each iteration of the search process. The performance of these methods depends on the quality of the heuristic pool. Two types of heuristics can be part of the pool: diversification heuristics, which help to escape from local optima, and intensification heuristics, which effectively exploit promising regions in the vicinity of good solutions. An effective search strategy needs a balance between these two types. However, it is not straightforward to categorize an operator as an intensification or a diversification heuristic in complex domains. Therefore, we propose an automated methodology for this classification. This brings methodological rigor to the configuration of an iterated local search hyper-heuristic featuring diversification and intensification stages. The methodology considers the empirical ranking of the heuristics based on an estimation of their capacity to either diversify or intensify the search. We incorporate the proposed approach into a state-of-the-art hyper-heuristic solving two domains: course timetabling and vehicle routing. Our results indicate improved performance, including new best-known solutions for the course timetabling problem.
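A rough sketch of such an empirical classification protocol is given below; the sampling scheme, the gain/move statistics, and the decision rule are assumptions for illustration, not the paper's exact methodology.

```python
import random

def classify_operators(operators, sample_solutions, fitness, distance, trials=200):
    """Label each operator as intensification or diversification from empirical statistics."""
    labels = {}
    for name, op in operators.items():
        gains, moves = [], []
        for _ in range(trials):
            s = random.choice(sample_solutions)
            s2 = op(s)
            gains.append(fitness(s) - fitness(s2))   # positive = improvement (minimization)
            moves.append(distance(s, s2))            # how far the operator moved
        avg_gain, avg_move = sum(gains) / trials, sum(moves) / trials
        # Operators whose improvement is large relative to how far they move intensify;
        # operators that mostly move far without improving diversify.
        labels[name] = "intensification" if avg_gain > 0.1 * avg_move else "diversification"
        print(f"{name}: gain={avg_gain:.2f} move={avg_move:.2f} -> {labels[name]}")
    return labels

# Toy domain: bit strings, fitness = Hamming distance to a hidden target (to minimize).
target = [1, 0, 1, 1, 0, 1, 0, 0]
fitness = lambda s: sum(a != b for a, b in zip(s, target))
distance = lambda a, b: sum(x != y for x, y in zip(a, b))

def local_flip(s):       # flip one bit and keep the better string: an intensifier
    i = random.randrange(len(s))
    s2 = s[:i] + [1 - s[i]] + s[i + 1:]
    return min(s, s2, key=fitness)

def random_restart(s):   # jump to a uniformly random string: a diversifier
    return [random.randint(0, 1) for _ in s]

solutions = [[random.randint(0, 1) for _ in range(len(target))] for _ in range(30)]
classify_operators({"local_flip": local_flip, "random_restart": random_restart},
                   solutions, fitness, distance)
```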


Author(s):  
Youngmin Ro ◽  
Jongwon Choi ◽  
Dae Ung Jo ◽  
Byeongho Heo ◽  
Jongin Lim ◽  
...  

In the person re-identification (ReID) task, because of the shortage of training data, it is common to fine-tune a classification network pre-trained on a large dataset. However, it is relatively difficult to sufficiently fine-tune the low-level layers of the network due to the gradient vanishing problem. In this work, we propose a novel fine-tuning strategy that allows the low-level layers to be sufficiently trained by rolling back the weights of the high-level layers to their initial pre-trained weights. Our strategy alleviates the problem of gradient vanishing in low-level layers and robustly trains the low-level layers to fit the ReID dataset, thereby increasing the performance of ReID tasks. The improved performance of the proposed strategy is validated via several experiments. Furthermore, without any add-ons such as pose estimation or segmentation, our strategy exhibits state-of-the-art performance using only a vanilla deep convolutional neural network architecture.
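The rollback schedule can be sketched in a few lines of PyTorch; the backbone, the choice of "high-level" block, and the training loop below are assumptions for illustration, not the authors' released code.

```python
import copy
import torch
import torchvision

# ImageNet-pretrained backbone with a new ReID classification head
# (751 identities follows the Market-1501 convention; purely illustrative here).
model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 751)

# Snapshot the pre-trained weights of the high-level block before any fine-tuning.
pretrained_high = copy.deepcopy(model.layer4.state_dict())

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def train_for(steps):
    # Placeholder loop: random tensors stand in for the ReID training set.
    for _ in range(steps):
        images = torch.randn(4, 3, 224, 224)
        labels = torch.randint(0, 751, (4,))
        loss = torch.nn.functional.cross_entropy(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

train_for(steps=10)                             # stage 1: ordinary fine-tuning of all layers
model.layer4.load_state_dict(pretrained_high)   # roll the high-level layers back
train_for(steps=10)                             # stage 2: low-level layers keep adapting
```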


Author(s):  
Nishil Talati ◽  
Heonjae Ha ◽  
Ben Perach ◽  
Ronny Ronen ◽  
Shahar Kvatinsky

While DRAM cannot easily scale below a 20nm technology node, RRAM suffers far less from scalability issues. Moreover, RRAM's resistivity enables its use for processing-in-memory (PIM), potentially alleviating the von Neumann bottleneck. Unfortunately, because of technological idiosyncrasies, existing DRAM-centric memory controllers cannot exploit the full potential of RRAM. In this paper, we present the design of a memory controller called CONCEPT. The controller is optimized to exploit the unique properties of RRAM to enhance its performance and energy efficiency, as well as to exploit RRAM's PIM capability. We show that with CONCEPT, RRAM can achieve DRAM-like performance and energy efficiency on SPEC CPU 2006 benchmarks. Furthermore, using RRAM's PIM capabilities, we show a 5X performance gain on a data-intensive in-memory database workload compared to a state-of-the-art CPU-memory computing model.


Author(s):  
Elias B. Khalil ◽  
Bistra Dilkina ◽  
George L. Nemhauser ◽  
Shabbir Ahmed ◽  
Yufen Shao

"Primal heuristics" are a key contributor to the improved performance of exact branch-and-bound solvers for combinatorial optimization and integer programming. Perhaps the most crucial question concerning primal heuristics is at which nodes they should be run; the typical answer is hard-coded rules or fixed solver parameters tuned offline by trial and error. Alternatively, a heuristic should be run when it is most likely to succeed, based on the problem instance's characteristics, the state of the search, etc. In this work, we study the problem of deciding at which node a heuristic should be run, such that the overall (primal) performance of the solver is optimized. To our knowledge, this is the first attempt at formalizing and systematically addressing this problem. Central to our approach is the use of Machine Learning (ML) for predicting whether a heuristic will succeed at a given node. We give a theoretical framework for analyzing this decision-making process in a simplified setting, propose an ML approach for modeling heuristic success likelihood, and design practical rules that leverage the ML models to dynamically decide whether to run a heuristic at each node of the search tree. Experimentally, our approach improves the primal performance of a state-of-the-art Mixed Integer Programming solver by up to 6% on a set of benchmark instances, and by up to 60% on a family of hard Independent Set instances.
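The decision rule can be sketched as follows; the node features, the logistic model, and the probability threshold are illustrative assumptions, not the paper's exact formulation or any particular solver's API.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Offline phase: node features logged from previous solver runs, with a binary
# label recording whether the heuristic found an improving solution at that node.
rng = np.random.default_rng(0)
node_features = rng.normal(size=(2000, 6))   # e.g. depth, gap, fractionality, ... (toy data)
heuristic_succeeded = (node_features[:, 1] - node_features[:, 4]
                       + rng.normal(scale=0.5, size=2000) > 0.5).astype(int)
success_model = LogisticRegression(max_iter=1000).fit(node_features, heuristic_succeeded)

def maybe_run_heuristic(node_feature_vector, run_heuristic, threshold=0.6):
    """Online phase: call the (expensive) heuristic only if predicted worthwhile."""
    prob = success_model.predict_proba(node_feature_vector.reshape(1, -1))[0, 1]
    return run_heuristic() if prob >= threshold else None

# Example call with a dummy heuristic standing in for, e.g., diving or rounding at a node.
print(maybe_run_heuristic(rng.normal(size=6), run_heuristic=lambda: "incumbent found"))
```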


OCL ◽  
2022 ◽  
Vol 29 ◽  
pp. 6
Author(s):  
Patrick Carré

In a context where the search for naturalness, the need to reduce the carbon footprint, and the development of a decentralized crushing sector are intensifying, mechanical extraction is a technology that is regaining major importance for the industry. The performance of this technique remains far below what is desirable, while the understanding of the main phenomena involved in screw presses remains insufficient. This article, after a brief presentation of the state of the art of the discipline, presents a new model centered on the notions of pressure generation and plasticity. According to this approach, plasticity can account for parameters such as the water and oil content of oilseeds, their temperature, and their possible dehulling. Plasticity in turn would explain both the compressibility of the cake and its ability to resist the thrust of the screws, and consequently to generate pressure or to creep or flow backward depending on the geometry of the screw and the cage. The model must also incorporate the notions of compression velocity, friction, the complexity of the interactions between these parameters, and the impact of the succession of screw segments and cone rings. It has been built on observation and experience and gives an understanding of the need to work simultaneously on the conditioning and the geometry of the presses to achieve improved performance in terms of energy, efficiency, and reduction of the temperatures experienced by the proteins and oils.

