enumeration algorithm
Recently Published Documents

TOTAL DOCUMENTS: 158 (five years: 31)
H-INDEX: 17 (five years: 2)

Author(s): Andreas Wiemers, Johannes Mittmann

Abstract: Recent publications consider side-channel attacks against the key schedule of the Data Encryption Standard (DES). These publications identify a leakage model depending on the XOR of register values in the DES key schedule. Building on this leakage model, we first revisit a discrete model which assumes that the Hamming distances between subsequent round keys leak without error. We analyze this model formally and provide theoretical explanations for observations made in previous works. Next we examine a continuous model which considers more points of interest and also takes noise into account. The model gives rise to an evaluation function for key candidates and an associated notion of key ranking. We develop an algorithm for enumerating key candidates up to a desired rank which is based on the Fincke–Pohst lattice point enumeration algorithm. We derive information-theoretic bounds and estimates for the remaining entropy and compare them with our experimental results. We apply our attack to side-channel measurements of a security controller. Using our enumeration algorithm we are able to significantly improve the results reported previously for the same measurement data.
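The lattice enumeration the attack builds on can be illustrated with a minimal sketch of Fincke–Pohst-style enumeration (an illustrative re-implementation, not the authors' code; the basis `B` and the `radius` bound are placeholder inputs):

```python
import numpy as np

def fincke_pohst(B, radius):
    """Enumerate all integer vectors x with ||B @ x|| <= radius.

    Minimal Fincke-Pohst sketch: Cholesky-factor the Gram matrix
    B^T B = R^T R and recurse over coordinates from last to first,
    pruning any partial assignment whose accumulated squared length
    already exceeds the bound.
    """
    B = np.asarray(B, dtype=float)
    n = B.shape[1]
    R = np.linalg.cholesky(B.T @ B).T  # upper triangular factor
    results = []

    def recurse(level, x, partial_sq):
        if level < 0:
            results.append(x.copy())
            return
        # Center of the admissible interval for x[level], given the
        # coordinates x[level+1:] fixed so far.
        center = -(R[level, level + 1:] @ x[level + 1:]) / R[level, level]
        slack = np.sqrt(max(radius**2 - partial_sq, 0.0)) / abs(R[level, level])
        for xi in range(int(np.ceil(center - slack)), int(np.floor(center + slack)) + 1):
            x[level] = xi
            recurse(level - 1, x, partial_sq + (R[level, level] * (xi - center)) ** 2)
        x[level] = 0

    recurse(n - 1, np.zeros(n, dtype=int), 0.0)
    return results

# Example: the nine integer points within distance 1.5 of the origin.
print(len(fincke_pohst(np.eye(2), 1.5)))  # 9
```

In the attack setting, the lattice would encode the evaluation function over key candidates so that enumerating short vectors corresponds to enumerating candidates up to the desired rank.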


2021 · Vol. 55 (5) · pp. 1136-1150
Author(s): Giovanni Righini

The single source Weber problem with limited distances (SSWPLD) is a continuous optimization problem in location theory. The SSWPLD algorithms proposed so far are based on the enumeration of all regions of $\mathbb{R}^2$ defined by a given set of n intersecting circumferences. Early algorithms require [Formula: see text] time for the enumeration, but they were recently shown to be incorrect in the case of degenerate intersections, that is, when three or more circumferences pass through the same intersection point. This problem was fixed by a modified enumeration algorithm with complexity [Formula: see text], based on the construction of neighborhoods of degenerate intersection points. In this paper, it is shown that the complexity of correctly dealing with degenerate intersections can be reduced to [Formula: see text], so that existing enumeration algorithms can be fixed without increasing their [Formula: see text] time complexity, which is due to some preliminary computations unrelated to intersection degeneracy. Furthermore, a new algorithm for enumerating all regions to solve the SSWPLD is described; its worst-case time complexity is [Formula: see text]. The new algorithm also guarantees that the regions are enumerated only once.
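The degeneracy issue is concrete: region boundaries computed from pairwise circle intersections break down when three or more circumferences meet in one point. A toy sketch of collecting pairwise intersection points and flagging such degenerate points (not the paper's enumeration algorithm; `tol` and the test circles are hypothetical):

```python
import math
from collections import defaultdict

def circle_intersections(circles, tol=1e-9):
    """Collect pairwise intersection points of circles, grouping
    (near-)coincident points so degenerate intersections -- three or
    more circumferences through one point -- become visible.
    `circles` is a list of (cx, cy, r) tuples."""
    points = defaultdict(set)  # rounded point -> circles through it
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            (x1, y1, r1), (x2, y2, r2) = circles[i], circles[j]
            d = math.hypot(x2 - x1, y2 - y1)
            if d < tol or d > r1 + r2 + tol or d < abs(r1 - r2) - tol:
                continue  # concentric, disjoint, or nested circles
            a = (d * d + r1 * r1 - r2 * r2) / (2 * d)  # chord offset
            h = math.sqrt(max(r1 * r1 - a * a, 0.0))   # half-chord
            mx = x1 + a * (x2 - x1) / d
            my = y1 + a * (y2 - y1) / d
            for sign in ((+1, -1) if h > tol else (+1,)):
                px = mx + sign * h * (y2 - y1) / d
                py = my - sign * h * (x2 - x1) / d
                key = (round(px / tol) * tol, round(py / tol) * tol)
                points[key].update((i, j))
    return points

# Three unit circles through the origin: a degenerate intersection.
pts = circle_intersections([(1, 0, 1), (-1, 0, 1), (0, 1, 1)])
print({p: c for p, c in pts.items() if len(c) >= 3})  # {(0.0, 0.0): {0, 1, 2}}
```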


2021 · Vol. 15 (5) · pp. 1-26
Author(s): Kai Liu, Hongbo Liu, Tomas E. Ward, Hua Wang, Yu Yang, ...

Detecting self-organized coalitions from functional networks is one of the most important ways to uncover functional mechanisms in the brain. Detecting them raises well-known technical challenges in terms of scale imbalance, outliers, and hard examples. In this article, we propose a novel self-adaptive skeleton approach that detects coalitions through an approximation method based on probabilistic mixture models. The nodes in the networks are characterized in terms of robust k-order complete subgraphs (k-cliques) as essential substructures. The k-clique enumeration algorithm quickly enumerates all k-cliques of a given network in parallel. Then the cliques of each order k, from max-clique down to min-clique, are hierarchically embedded into a probabilistic mixture model, self-adapting through the different orders k to the corresponding structure density of coalitions in the brain functional networks. All the cliques are merged and evolved into robust skeletons that sustain each unbalanced coalition by eliminating outliers and separating overlaps. We call this the k-CLIque Merging Evolution (CLIME) algorithm. The experimental results illustrate that the proposed approaches are robust to density variation and coalition mixture, and enable the effective detection of coalitions from real brain functional networks. There are potential cognitive functional relations between the regions of interest in the coalitions revealed by our methods, which suggests the approach can be usefully applied in neuroscientific studies.
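As a point of reference, here is a serial sketch of the k-clique enumeration step (the paper's algorithm runs in parallel; the toy graph below is hypothetical):

```python
def enumerate_k_cliques(adj, k):
    """Yield every k-clique of an undirected graph as a tuple.

    `adj` maps each node to the set of its neighbours. Cliques grow
    one node at a time, only by nodes larger than all current members,
    so each clique is produced exactly once.
    """
    def grow(clique, candidates):
        if len(clique) == k:
            yield tuple(clique)
            return
        for v in sorted(candidates):
            # Only v's neighbours above v remain viable extensions.
            yield from grow(clique + [v], {u for u in candidates & adj[v] if u > v})

    for v in sorted(adj):
        yield from grow([v], {u for u in adj[v] if u > v})

# Two triangles sharing the edge (2, 3):
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
print(list(enumerate_k_cliques(adj, 3)))  # [(1, 2, 3), (2, 3, 4)]
```

Each enumerated clique would then be fed into the mixture model as a candidate substructure.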


2021 · Vol. 2021 · pp. 1-6
Author(s): Kerang Cao, Xin Chen, Kwang-nam Choi, Yage Liang, Qian Miao, ...

In this note, we revisit two types of scheduling problems with weighted early/late work criteria and a common due date. For the parallel identical machines environment, we present a dynamic programming approach running in pseudopolynomial time, which places the considered problem in the class of binary NP-hard problems. We also propose an enumeration algorithm for comparison. For two-machine flow shop systems, we revisit a previous dynamic programming method and, with a more precise analysis, improve its practical performance. For each model, we verify our studies through computational experiments that show the advantages of the respective techniques.
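For intuition, a brute-force enumeration baseline of the kind used for comparison on the parallel-machine model might look as follows (a sketch under assumed semantics: weighted early work counts the part of each job processed before the common due date; the instance is hypothetical):

```python
from itertools import permutations, product

def weighted_early_work(sequence, due_date):
    """Weighted early work of one machine's job sequence: each
    (processing_time, weight) job contributes its weight times the
    portion of it processed before the common due date."""
    t, total = 0, 0
    for p, w in sequence:
        total += w * max(0, min(p, due_date - t))
        t += p
    return total

def best_schedule(jobs, due_date):
    """Enumerate every assignment of jobs to two parallel identical
    machines and every order on each machine, keeping the schedule
    that maximises total weighted early work. Exponential, hence only
    a correctness baseline for tiny instances."""
    best = 0
    for mask in product((0, 1), repeat=len(jobs)):
        groups = ([], [])
        for job, m in zip(jobs, mask):
            groups[m].append(job)
        for s0 in permutations(groups[0]):
            for s1 in permutations(groups[1]):
                best = max(best, weighted_early_work(s0, due_date)
                                 + weighted_early_work(s1, due_date))
    return best

# Hypothetical instance: (processing time, weight) pairs.
print(best_schedule([(3, 2), (4, 1), (2, 5), (3, 3)], due_date=5))
```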


Author(s): José André Brito, Leonardo de Lima, Pedro Henrique González, Breno Oliveira, Nelson Maculan

The problem of finding an optimal sample stratification has been extensively studied in the literature. In this paper, we propose a heuristic optimization method for solving the univariate optimum stratification problem, aiming to minimize the sample size for a given precision level. The method is based on the variable neighborhood search metaheuristic combined with an exact method. Numerical experiments were performed on a dataset of 24 instances, and the results of the proposed algorithm were compared with two well-known methods from the literature; our method outperformed them in 94% of the considered cases. In addition, we developed an enumeration algorithm to find the global optimal solution for some populations and scenarios, which enabled us to validate our metaheuristic method. Furthermore, we found that our metaheuristic obtained the global optimal solution in the vast majority of these cases.
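The validation step lends itself to a compact sketch: for small populations, every placement of stratum boundaries can be enumerated and priced with the classical Neyman-allocation sample-size formula n = (Σ_h W_h S_h)² / (V + (1/N) Σ_h W_h S_h²). This is an illustrative baseline, not the authors' implementation; the precision target and data below are placeholders:

```python
import math
from itertools import combinations
from statistics import stdev

def required_sample_size(sorted_pop, cuts, target_var):
    """Sample size needed under Neyman allocation for the strata
    induced on `sorted_pop` by boundary positions `cuts`, using
    n = (sum W_h S_h)^2 / (V + (1/N) sum W_h S_h^2)."""
    N = len(sorted_pop)
    bounds = [0, *cuts, N]
    num = corr = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        stratum = sorted_pop[lo:hi]
        if len(stratum) < 2:
            return math.inf  # disallow degenerate strata
        W, S = len(stratum) / N, stdev(stratum)
        num += W * S
        corr += W * S * S
    return num * num / (target_var + corr / N)

def enumerate_stratifications(pop, n_strata, target_var):
    """Try every boundary placement and return the global optimum --
    the check used to validate a heuristic. Feasible only for small N."""
    pop = sorted(pop)
    best_n, best_cuts = math.inf, None
    for cuts in combinations(range(1, len(pop)), n_strata - 1):
        n_req = required_sample_size(pop, cuts, target_var)
        if n_req < best_n:
            best_n, best_cuts = n_req, cuts
    return best_n, best_cuts

# Toy skewed population, 3 strata, target variance 0.5 for the mean.
print(enumerate_stratifications([1, 2, 2, 3, 5, 8, 13, 40, 90], 3, 0.5))
```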


2021 · Vol. ahead-of-print
Author(s): Rohit R. Ghadge, Prakash S.

Purpose: This paper focuses on calculating the number of layers of composite laminate, made of graphite/epoxy (AS4/3501-6), required to carry the applied load in many industrial applications. Weight minimization through variation of the mechanical properties is possible by using different combinations of fiber angle, number of plies, and stacking sequence.
Design/methodology/approach: Aerospace industry experts have put forth many research studies to improve the performance of aircraft wings under weight constraints. The orthotropic nature of laminated composites and their ability to be tailored to the various performance requirements of the aerospace industry make them the most suitable material. This creates the need for the most appropriate optimization technique for selecting parameter sets and material configurations.
Findings: In this work, an exhaustive enumeration algorithm is applied to minimize the weight of a fiber-laminated composite beam subjected to two different loading conditions, computing all possible stacking sequences and material properties using classical laminate theory. This combinatorial optimization technique enumerates all possible solutions and therefore guarantees a global optimum. Stacking sequences are filtered through the Tsai-Wu failure criterion.
Originality/value: The optimization framework yields eight different combinations of stacking sequences and a 24-ply symmetric layup. This 24-ply layup, weighing 0.468 kg, was validated using a finite element solver for the given boundary conditions, and the interlaminar stresses at the top and bottom of the optimized ply layup were validated with Autodesk's Helius composites solver.
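The enumeration itself is simple to sketch. The fragment below generates every candidate symmetric layup and applies a stand-in design-rule filter; the paper's actual screening uses classical laminate theory with the Tsai-Wu criterion, which is omitted here, and the angle set is an assumption:

```python
from collections import Counter
from itertools import product

ANGLES = (0, 45, -45, 90)  # assumed candidate ply orientations

def symmetric_layups(half_plies):
    """Yield candidate symmetric stacking sequences: choose the top
    half freely and mirror it, so a 24-ply symmetric laminate is
    determined by its 12-ply half-layup. Exhaustive enumeration
    guarantees the global optimum lies in the candidate set."""
    for half in product(ANGLES, repeat=half_plies):
        yield half + tuple(reversed(half))

def is_balanced(layup):
    """Stand-in filter (a common design rule: every +45 ply matched by
    a -45 ply) in place of the Tsai-Wu failure screening, which needs
    a full classical-laminate-theory stress evaluation."""
    counts = Counter(layup)
    return counts[45] == counts[-45]

# 4**12 = 16,777,216 half-layups for a 24-ply symmetric laminate;
# large but finite, which is what makes the global-optimum claim possible.
candidates = (layup for layup in symmetric_layups(12) if is_balanced(layup))
print(next(candidates))  # first surviving 24-ply layup
```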


2021 · Vol. 6 (1) · pp. 86-101
Author(s): Hai Lan, Zhifeng Bao, Yuwei Peng

Abstract: The query optimizer is at the heart of any database system, and the cost-based optimizer studied in this paper is adopted by almost all current database systems. A cost-based optimizer uses a plan enumeration algorithm to generate (sub)plans, applies a cost model to obtain the cost of each plan, and selects the plan with the lowest cost. In the cost model, cardinality, the number of tuples flowing through an operator, plays a crucial role. Because of inaccuracy in cardinality estimation, errors in the cost model, and the huge plan space, the optimizer cannot find the optimal execution plan for a complex query in reasonable time. In this paper, we first study in depth the causes behind these limitations. Next, we review the techniques used to improve the quality of the three key components of a cost-based optimizer: cardinality estimation, the cost model, and plan enumeration. We also provide our insights on future directions for each of these aspects.
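A textbook illustration of the plan enumeration component is Selinger-style dynamic programming over subsets of relations; the cost functions below are stand-ins for a real cost model fed by cardinality estimates:

```python
from itertools import combinations

def dp_join_enumeration(relations, base_cost, join_cost):
    """Build the cheapest plan for every subset of relations from the
    cheapest plans of its two-way splits (Selinger-style DP). Returns
    (cost, nested-pair plan) for the full set."""
    best = {frozenset([r]): (base_cost[r], r) for r in relations}
    for size in range(2, len(relations) + 1):
        for subset in map(frozenset, combinations(relations, size)):
            for k in range(1, size):
                for left in map(frozenset, combinations(subset, k)):
                    right = subset - left
                    cost = (best[left][0] + best[right][0]
                            + join_cost(left, right))
                    if subset not in best or cost < best[subset][0]:
                        best[subset] = (cost, (best[left][1], best[right][1]))
    return best[frozenset(relations)]

# Toy cost model: base cost = estimated cardinality; each join costs
# the estimated size of its output under a fixed selectivity.
card = {"A": 100, "B": 10, "C": 1000}
SEL = 0.1  # assumed per-join selectivity

def est(s):
    """Estimated cardinality of the join of subset s."""
    out = 1.0
    for r in s:
        out *= card[r]
    return out * SEL ** (len(s) - 1)

print(dp_join_enumeration(list(card), card, lambda l, r: est(l | r)))
```

With these numbers the DP joins the two small relations first. Even this toy version shows why cardinality estimation matters: the DP is exact with respect to the cost model, so a wrong estimate steers it to a plan that is optimal only on paper.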

