Analysis of Portfolio-Style Parallel SAT Solving on Current Multi-Core Architectures

10.29007/73n4 ◽  
2018 ◽  
Author(s):  
Martin Aigner ◽  
Armin Biere ◽  
Christoph Kirsch ◽  
Aina Niemetz ◽  
Mathias Preiner

Effectively parallelizing SAT solving is an open and important issue. The current state of the art is based on parallel portfolios. This technique relies on running multiple solvers on the same instance in parallel. As soon as one instance finishes, the entire run stops. Several successful systems even use plain parallel portfolio (PPP), where the individual solvers do not exchange any information. This paper contains a thorough experimental evaluation which shows that PPP can improve wall-clock running time because memory access is still local, or rather, the memory system can hide the latency of memory access. In particular, there does not seem to be as much cache congestion as one might imagine. We also present some limits on the scalability of PPP. Thus this paper gives one argument why PPP solvers are a good fit for today's multi-core architectures.
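To make the PPP setup concrete, here is a minimal Python sketch of the idea; `solve_with_config` is a hypothetical stand-in for an actual SAT solver call and the configurations are purely illustrative, so this is a sketch of the technique rather than the code evaluated in the paper.

```python
# Minimal plain parallel portfolio (PPP) sketch: run several independently
# configured solver processes on the same instance and stop the whole run as
# soon as one of them finishes. `solve_with_config` is a hypothetical stand-in
# for an actual SAT solver invocation; the configurations are illustrative.
import multiprocessing as mp
import random
import time

def solve_with_config(args):
    cnf_path, config = args
    # A real portfolio would call a SAT solver library or binary here with
    # the given configuration; we only simulate differing runtimes.
    time.sleep(random.random())
    return config, "SAT"

def plain_parallel_portfolio(cnf_path, configs):
    with mp.Pool(processes=len(configs)) as pool:
        # imap_unordered yields results in completion order; returning on the
        # first one ends the `with` block, which terminates the other workers.
        for config, result in pool.imap_unordered(
                solve_with_config, [(cnf_path, c) for c in configs]):
            return config, result

if __name__ == "__main__":
    configs = [{"restarts": "luby"}, {"restarts": "geometric"},
               {"phase": "negative"}, {"phase": "positive"}]
    print(plain_parallel_portfolio("instance.cnf", configs))
```

Because the workers share no state, each solver keeps its memory accesses local, which is exactly the property the evaluation above credits for the wall-clock speedups.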

2021 ◽  
Vol 15 (6) ◽  
pp. 1-27
Author(s):  
Marco Bressan ◽  
Stefano Leucci ◽  
Alessandro Panconesi

We address the problem of computing the distribution of induced connected subgraphs, aka graphlets or motifs, in large graphs. The current state-of-the-art algorithms estimate the motif counts via uniform sampling by leveraging the color coding technique by Alon, Yuster, and Zwick. In this work, we extend the applicability of this approach by introducing a set of algorithmic optimizations and techniques that reduce the running time and space usage of color coding and improve the accuracy of the counts. To this end, we first show how to optimize color coding to efficiently build a compact table of a representative subsample of all graphlets in the input graph. For 8-node motifs, we can build such a table in one hour for a graph with 65M nodes and 1.8B edges, which is times larger than the state of the art. We then introduce a novel adaptive sampling scheme that breaks the “additive error barrier” of uniform sampling, guaranteeing multiplicative approximations instead of just additive ones. This allows us to count not only the most frequent motifs, but also extremely rare ones. For instance, on one graph we accurately count nearly 10,000 distinct 8-node motifs whose relative frequency is so small that uniform sampling would literally take centuries to find them. Our results show that color coding is still the most promising approach to scalable motif counting.
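To illustrate the underlying idea, the following toy Python sketch shows the unbiased-estimation core of color coding on a tiny graph, using brute-force enumeration rather than the optimized table construction or adaptive sampling developed in the paper; the example graph and trial count are arbitrary.

```python
# Toy illustration of the unbiased-estimation core of color coding (brute
# force on a tiny graph, for exposition only): color nodes uniformly at random
# with k colors, count connected induced k-node subgraphs whose colors are all
# distinct ("colorful"), and rescale by k^k / k! to obtain an unbiased
# estimate of the total number of connected k-node subgraphs.
import itertools
import math
import random

def connected(nodes, adj):
    """Check that the subgraph induced on `nodes` is connected (DFS)."""
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u] & nodes:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == nodes

def color_coding_estimate(adj, k, trials=50):
    n = len(adj)
    estimates = []
    for _ in range(trials):
        color = [random.randrange(k) for _ in range(n)]
        colorful = 0
        for nodes in itertools.combinations(range(n), k):
            if connected(nodes, adj) and len({color[v] for v in nodes}) == k:
                colorful += 1
        # Any fixed k-node subgraph becomes colorful with probability k!/k^k,
        # so rescaling makes each trial an unbiased estimator of the count.
        estimates.append(colorful * k ** k / math.factorial(k))
    return sum(estimates) / trials

if __name__ == "__main__":
    # Tiny example graph given as adjacency sets over nodes 0..4.
    adj = {0: {1, 2}, 1: {0, 2, 4}, 2: {0, 1, 3}, 3: {2, 4}, 4: {1, 3}}
    print(color_coding_estimate(adj, k=3))
```

The payoff of the coloring step, which this brute-force sketch does not show, is that colorful subgraphs can be counted with efficient dynamic programming instead of explicit enumeration.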


2018 ◽  
Vol 27 (07) ◽  
pp. 1860013 ◽  
Author(s):  
Swair Shah ◽  
Baokun He ◽  
Crystal Maung ◽  
Haim Schweitzer

Principal Component Analysis (PCA) is a classical dimensionality reduction technique that computes a low rank representation of the data. Recent studies have shown how to compute this low rank representation from most of the data, excluding a small amount of outlier data. We show how to convert this problem into graph search, and describe an algorithm that solves this problem optimally by applying a variant of the A* algorithm to search for the outliers. The results obtained by our algorithm are optimal in terms of accuracy, and are shown to be more accurate than results obtained by the current state-of-the-art algorithms, which are shown not to be optimal. This comes at the cost of running time, which is typically slower than the current state of the art. We also describe a related variant of the A* algorithm that runs much faster than the optimal variant and produces a solution that is guaranteed to be near-optimal. This variant is shown experimentally to be more accurate than the current state of the art and has a comparable running time.
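As a rough illustration of the graph-search framing only, the following hypothetical Python sketch searches over subsets of excluded points best-first, scoring each state by the rank-r reconstruction error of the remaining data; the cost function, expansion strategy, and node budget here are illustrative assumptions and are not the authors' A* variant or their admissible heuristic.

```python
# Hypothetical sketch of outlier removal for PCA framed as a search over
# subsets of excluded points: each state is a frozenset of removed indices,
# its cost is the rank-r reconstruction error on the remaining points, and
# states are expanded best-first under a node budget. This is NOT the authors'
# A* variant or heuristic; it only illustrates the graph-search framing.
import heapq
import itertools
import numpy as np

def reconstruction_error(X, removed, r):
    """Sum of squared residuals of the best rank-r fit to the kept rows."""
    keep = [i for i in range(len(X)) if i not in removed]
    Xk = X[keep] - X[keep].mean(axis=0)
    s = np.linalg.svd(Xk, compute_uv=False)
    return float((s[r:] ** 2).sum())       # energy not captured by top r directions

def search_outliers(X, m, r, budget=2000):
    n = len(X)
    tie = itertools.count()                 # tiebreaker so the heap never compares sets
    start = frozenset()
    heap = [(reconstruction_error(X, start, r), next(tie), start)]
    seen = {start}
    best_err, best_removed = np.inf, start
    while heap and budget > 0:
        budget -= 1
        err, _, removed = heapq.heappop(heap)
        if len(removed) == m:                # complete state: m outliers selected
            if err < best_err:
                best_err, best_removed = err, removed
            continue
        for i in range(n):
            child = removed | {i}
            if i not in removed and child not in seen:
                seen.add(child)
                heapq.heappush(
                    heap, (reconstruction_error(X, child, r), next(tie), child))
    return best_removed, best_err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 5))
    X[:3] += 10.0                            # plant three obvious outliers
    outliers, err = search_outliers(X, m=3, r=2)
    print(sorted(outliers), err)
```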


1978 ◽  
Vol 3 (3) ◽  
pp. 148-159 ◽  
Author(s):  
Howard S. Adelman

Presented are (1) a brief synthesis of several key conceptual and methodological concerns and some ethical perspectives related to the identification of psycho-educational problems and (2) conclusions regarding the current state of the art. The conceptual discussion focuses on differentiating prediction from identification and screening from diagnosis; three models used in developing assessment procedures also are presented. Methodologically, the minimal requirements for satisfactory research are described and current problems are highlighted. Three ethical perspectives are discussed: cost-benefit for the individual, the models, motives, and goals underlying practices, and cost-benefit for the culture. The current state of the art is seen as not supporting the efficacy of the widespread use of currently available procedures for mass screening. Given this point and the methodological and ethical concerns discussed, it is suggested that policy makers reallocate limited resources away from mass identification and toward health maintenance and other approaches to prevention and early-age intervention.


Author(s):  
A. H. Fink

This paper pertains specifically to refinery fluid catalytic cracking (FCC) and associated power-recovery concepts. The several systems described go beyond basic onsite FCC practices previously used. However, no special technical development or prototypes would be required to engineer practical and successful installations. All component equipment and apparatus reflect the current state of the art, requiring only explicit economic justification. The individual systems, as presented, are solely conceptual, but sufficient detail is provided to confirm their technical feasibility. Application economics will depend on geographic location, site conditions and the specific process installation.


2020 ◽  
Vol 2020 (3) ◽  
pp. 42-61
Author(s):  
Hayim Shaul ◽  
Dan Feldman ◽  
Daniela Rus

The k-nearest neighbors (kNN) classifier predicts a class of a query, q, by taking the majority class of its k neighbors in an existing (already classified) database, S. In secure kNN, q and S are owned by two different parties and q is classified without sharing data. In this work we present a classifier based on kNN that is more efficient to implement with homomorphic encryption (HE). The efficiency of our classifier comes from a relaxation we make to consider κ nearest neighbors for κ ≈ k, with probability that increases as the statistical distance between a Gaussian and the distribution of the distances from q to S decreases. We call our classifier k-ish Nearest Neighbors (k-ish NN). For the implementation we introduce a double-blinded coin-toss where the bias and output of the toss are encrypted. We use it to approximate the average and variance of the distances from q to S in a scalable circuit whose depth is independent of |S|. We believe these to be of independent interest. We implemented our classifier in an open source library based on HElib and tested it on a breast tumor database. Our classifier has accuracy and running time comparable to current state-of-the-art (non-HE) MPC solutions, which have better running time but worse communication complexity. It also has communication complexity similar to a naive HE implementation, which has worse running time.
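As a plaintext illustration of the k-ish relaxation, leaving out the homomorphic machinery entirely, the following Python sketch estimates the mean and variance of the distances from q to S, assumes they are roughly Gaussian, and keeps every point below a distance threshold under which about k points are expected; the specific threshold formula and example data are illustrative assumptions, not the paper's construction.

```python
# Plaintext sketch of a "k-ish" nearest-neighbor classifier: rather than
# sorting to find exactly the k closest points, estimate the mean/variance of
# the query-to-database distances, assume they are roughly Gaussian, and keep
# every point below the distance threshold that such a Gaussian would place
# about k points under. The HE machinery of the paper is omitted entirely.
import numpy as np
from scipy.stats import norm
from collections import Counter

def k_ish_nn_classify(S, labels, q, k):
    d = np.linalg.norm(S - q, axis=1)             # distances from q to all of S
    mu, sigma = d.mean(), d.std()
    # Threshold t such that N(mu, sigma) has mass k/|S| below t, so roughly
    # k points are expected to fall under it (illustrative choice).
    t = norm.ppf(k / len(S), loc=mu, scale=sigma)
    chosen = labels[d <= t]
    if len(chosen) == 0:                           # fall back to the single nearest
        chosen = labels[[d.argmin()]]
    return Counter(chosen.tolist()).most_common(1)[0][0]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    S = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
    labels = np.array([0] * 50 + [1] * 50)
    print(k_ish_nn_classify(S, labels, q=np.array([3.5, 3.5]), k=7))
```

The point of the relaxation is that computing a mean, a variance, and a threshold comparison maps to shallow arithmetic circuits, whereas an exact top-k selection does not.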


Author(s):  
Johan K. Westin ◽  
Jayanta S. Kapat ◽  
Louis C. Chow

Current state-of-the-art thermoregulatory models do not predict body temperatures with the accuracies that are required for the development of automatic cooling control in liquid cooling garment (LCG) systems. Automatic cooling control would be beneficial in a variety of space, aviation, military, and industrial environments for optimizing cooling efficiency, for making LCGs as portable and practical as possible, for alleviating the individual from manual cooling control, and for improving thermal comfort and cognitive performance. In this paper, we adopt the Fiala thermoregulatory model, which has previously demonstrated state-of-the-art predictive abilities in air environments, for use in LCG environments. We compare the model’s tissue temperature predictions with analytical solutions to the bioheat equation, and with experimental data for a 700 W rectangular type activity schedule. The thermoregulatory model predicts rectal temperature, mean skin temperature, and body heat storage (BHS) with mean absolute errors of 0.13°C, 0.95°C, and 11.9 W·hr, respectively. Even though these accuracies are within state-of-the-art variations, the model does not satisfy the target BHS accuracy of ±6.5 W·hr. We identify model deficiencies, which will be addressed in future studies in order to achieve the strict BHS accuracy that is needed for automatic cooling control development.
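For context, multi-node thermoregulatory models of the Fiala type build on the Pennes bioheat equation; its standard form is reproduced below as background, not necessarily the exact formulation used in this adaptation.

```latex
% Standard Pennes bioheat equation (background; not necessarily the exact
% formulation used in the LCG adaptation described above)
\rho c \,\frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \, \nabla T \right)
  + q_m
  + \rho_b c_b \,\omega_b \left( T_{ar} - T \right)
```

Here ρ, c, and k are the tissue density, specific heat, and thermal conductivity, q_m is the metabolic heat generation rate, ω_b the blood perfusion rate, ρ_b and c_b the density and specific heat of blood, and T_ar the arterial blood temperature.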


Water Policy ◽  
2013 ◽  
Vol 15 (S2) ◽  
pp. 1-14 ◽  
Author(s):  
Uta Wehn de Montalvo ◽  
Guy Alaerts

Water management is particularly dependent on strong capacity, a solid knowledge base and awareness at all levels, including those of the individual, the organization, the sector institutions and the ‘enabling environment’. Yet getting all levels to operate in a coherent manner is challenging, and requires vision and leadership. This special issue seeks to further the understanding of leadership in knowledge and capacity development in the water sector, but its theoretical and methodological insights will be of interest beyond that arena. This paper presents an introduction to the special issue, which resulted from selected papers presented at the 5th Delft Symposium on Water Sector Capacity Development held in Delft, The Netherlands. Collectively, the contributions examine knowledge and capacity development in both the water services and water resources sub-sectors. To stay closely linked to current local realities, the papers draw on both academic analyses based on empirical research and practitioners' accounts based on their professional experience. Together, the papers in this special issue and the insights from the recent Symposium summarized in this editorial introduction present an overview of the current state of the art in knowledge and capacity development in the water sector. The paper raises salient policy implications and outlines a research agenda for knowledge and capacity development in the water sector and beyond.


2021 ◽  
Vol 11 (20) ◽  
pp. 9495
Author(s):  
Tadeusz Tomczak

The performance of lattice–Boltzmann solver implementations depends mainly on memory access patterns. Achieving high performance therefore requires complex code that carefully handles data placement and the ordering of memory transactions. In this work, we analyse the performance of an implementation based on a new approach called the data-oriented language, which allows the combination of complex memory access patterns with simple source code. As a use case, we present and provide the source code of a solver for the D2Q9 lattice and show its performance on a GTX Titan Xp GPU for dense and sparse geometries of up to 4096² nodes. The obtained results are promising: around 1000 lines of code allowed us to achieve performance in the range of 0.6 to 0.7 of the maximum theoretical memory bandwidth (over 2.5 and 5.0 GLUPS for double and single precision, respectively) for meshes larger than 1024² nodes, which is close to the current state of the art. However, we also observed relatively high and sometimes difficult-to-predict overheads, especially for sparse data structures. Additional issues were the rather long compilation time, which extended the duration of short simulations, and the lack of access to low-level optimisation mechanisms.
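As a rough consistency check on these figures (assuming a two-lattice, read-plus-write traffic model and the GTX Titan Xp's nominal memory bandwidth of roughly 548 GB/s, neither of which is stated above), the bandwidth-bound ceilings work out as follows:

```latex
% Bandwidth-bound ceiling for D2Q9, assuming two lattice copies (read + write)
% per node update and ~548 GB/s nominal bandwidth (assumed, not stated above)
\text{bytes/node (double)} \approx 2 \times 9 \times 8 = 144, \qquad
\text{bytes/node (single)} \approx 2 \times 9 \times 4 = 72
\\[4pt]
\text{GLUPS}^{\max}_{\text{double}} \approx \frac{548\,\text{GB/s}}{144\,\text{B}} \approx 3.8, \qquad
\text{GLUPS}^{\max}_{\text{single}} \approx \frac{548\,\text{GB/s}}{72\,\text{B}} \approx 7.6
```

Taking 0.6 to 0.7 of these ceilings gives roughly 2.3 to 2.7 GLUPS in double precision and 4.6 to 5.3 GLUPS in single precision, consistent with the values reported above.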


1995 ◽  
Vol 38 (5) ◽  
pp. 1126-1142 ◽  
Author(s):  
Jeffrey W. Gilger

This paper is an introduction to behavioral genetics for researchers and practitioners in language development and disorders. The specific aims are to illustrate some essential concepts and to show how behavioral genetic research can be applied to the language sciences. Past genetic research on language-related traits has tended to focus on simple etiology (i.e., the heritability or familiality of language skills). The current state of the art, however, suggests that great promise lies in addressing more complex questions through behavioral genetic paradigms. In terms of future goals it is suggested that: (a) more behavioral genetic work of all types should be done, including replications and expansions of preliminary studies already in print; (b) work should focus on fine-grained, theory-based phenotypes with research designs that can address complex questions in language development; and (c) work in this area should utilize a variety of samples and methods (e.g., twin and family samples, heritability and segregation analyses, linkage and association tests, etc.).


1976 ◽  
Vol 21 (7) ◽  
pp. 497-498
Author(s):  
STANLEY GRAND
