search space
Recently Published Documents

TOTAL DOCUMENTS: 2993 (FIVE YEARS: 1107)
H-INDEX: 50 (FIVE YEARS: 9)

2022 ◽  
Vol 2 (1) ◽  
pp. 1-29
Author(s):  
Sukrit Mittal ◽  
Dhish Kumar Saxena ◽  
Kalyanmoy Deb ◽  
Erik D. Goodman

Learning effective problem information from the already-explored search space during an optimization run, and utilizing it to improve the convergence of subsequent solutions, are important research directions in Evolutionary Multi-objective Optimization (EMO). In this article, a machine learning (ML)-assisted approach is proposed that: (a) maps solutions from earlier generations of an EMO run to the current non-dominated solutions in the decision space; (b) learns the salient patterns in this mapping using an ML method, here an artificial neural network (ANN); and (c) uses the learned ML model to advance a subset of the subsequent offspring solutions in an adaptive manner. This multi-pronged approach, quite different from popular surrogate-modeling methods, leads to what is referred to here as the Innovized Progress (IP) operator. On several test and engineering problems involving two and three objectives, with and without constraints, it is shown that an EMO algorithm assisted by the IP operator converges faster than its base version without the operator. The results are encouraging, pave a new path for improving the performance of EMO algorithms, and motivate further exploration on more challenging problems.
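As a rough illustration of steps (b) and (c), the following Python sketch trains a small ANN on (earlier-generation solution → non-dominated solution) pairs and uses it to nudge offspring forward in decision space. This is a minimal sketch, not the authors' implementation; the pairing strategy, network size, and blending parameter `alpha` are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def learn_progress_model(archive_X, nondom_Y):
    """Fit an ANN mapping earlier-generation solutions (archive_X) to
    their paired current non-dominated solutions (nondom_Y), both given
    as (n_samples, n_vars) arrays in decision space."""
    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000)
    model.fit(archive_X, nondom_Y)
    return model

def advance_offspring(model, offspring, alpha=0.5, bounds=(0.0, 1.0)):
    """Blend each offspring with the model's 'progressed' prediction;
    alpha controls how far toward the prediction the offspring moves."""
    predicted = model.predict(offspring)
    advanced = (1.0 - alpha) * offspring + alpha * predicted
    return np.clip(advanced, bounds[0], bounds[1])
```

In an actual EMO loop, the model would be retrained periodically and applied only to an adaptively chosen fraction of offspring, as the abstract describes.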


Author(s):  
Tatsuya Hiraoka ◽  
Sho Takase ◽  
Kei Uchiumi ◽  
Atsushi Keyaki ◽  
Naoaki Okazaki

We propose a method that attends to high-order relations among latent states, improving on conventional HMMs, which consider only the most recent latent state because they assume the Markov property. To capture these high-order relations, we apply an RNN to each sequence of latent states, since an RNN can represent the information of an arbitrary-length sequence in its cell, a fixed-size vector. However, the simplest approach, which feeds every latent sequence explicitly to the RNN, is intractable due to the combinatorial explosion of the search space of latent states. Thus, we modify the RNN to represent the history of latent states from the beginning of the sequence to the current state with a fixed number of RNN cells, one for each possible state. We conduct experiments on unsupervised POS tagging and synthetic datasets. Experimental results show that the proposed method achieves better performance than previous methods. In addition, the results on the synthetic dataset indicate that the proposed method can indeed capture high-order relations.
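The central trick, summarizing the combinatorial set of latent histories in a fixed number of cells, can be caricatured in NumPy as below. This is a speculative toy: the tanh cell, the belief-weighted mixing rule, and all dimensions are illustrative assumptions rather than the paper's exact recurrence.

```python
import numpy as np

K, H = 5, 8                               # latent states, hidden size
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(H, H))    # recurrence weights
E = rng.normal(scale=0.1, size=(K, H))    # per-state input embeddings
h = np.zeros((K, H))                      # one RNN cell per possible state

def step(h, posterior):
    """Advance all K cells by one time step. `posterior` (length K) is
    the current belief over latent states; each cell mixes the previous
    hidden states weighted by that belief, so arbitrarily long latent
    histories are summarized in K fixed-size vectors."""
    mixed = posterior @ h                 # expected previous history, (H,)
    return np.tanh(mixed @ W.T + E)       # new hidden vector per state, (K, H)

# At each observation, update the belief (e.g., from forward probabilities)
# and then advance the cells: h = step(h, posterior)
```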


2022 ◽  
Vol 13 (1) ◽  
pp. 1-22
Author(s):  
M. Saqib Nawaz ◽  
Philippe Fournier-Viger ◽  
Unil Yun ◽  
Youxi Wu ◽  
Wei Song

High utility itemset mining (HUIM) is the task of finding all sets of items, purchased together, that generate a high profit in a transaction database. In the past, several algorithms have been developed to mine high utility itemsets (HUIs). However, most of them cannot properly handle the exponential search space as the size of the database and the total number of items increase. Recently, evolutionary and heuristic algorithms have been designed to mine HUIs, providing considerable performance improvements, but they can still have long runtimes and some may miss many HUIs. To address this problem, this article proposes two algorithms for HUIM based on Hill Climbing (HUIM-HC) and Simulated Annealing (HUIM-SA). Both algorithms transform the input database into a bitmap for efficient utility computation and search space pruning. To improve population diversity, HUIs discovered by evolution are used as target values for the next population, instead of carrying the current optimal values over into the next population. Through experiments on real-life datasets, it was found that the proposed algorithms are faster than state-of-the-art heuristic and evolutionary HUIM algorithms, that HUIM-SA discovers similar HUIs, and that HUIM-SA evolves linearly with the number of iterations.
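A minimal Python sketch of the bitmap idea, plus a single-flip simulated-annealing move, follows; the utility model and the neighborhood are simplified assumptions, not the exact HUIM-SA design.

```python
import math
import random

def build_bitmaps(transactions, n_items):
    """Item -> integer bitmap whose set bits mark containing transactions."""
    bitmaps = [0] * n_items
    for tid, items in enumerate(transactions):
        for item in items:
            bitmaps[item] |= 1 << tid
    return bitmaps

def utility(itemset, bitmaps, tx_utils):
    """Sum the itemset's utility over all transactions containing it;
    tx_utils[tid][item] is that item's utility in transaction tid."""
    mask = ~0
    for item in itemset:                      # AND-ing bitmaps prunes the
        mask &= bitmaps[item]                 # search to containing tids
    total, tid = 0, 0
    while mask:
        if mask & 1:
            total += sum(tx_utils[tid][i] for i in itemset)
        mask >>= 1
        tid += 1
    return total

def sa_mine(n_items, bitmaps, tx_utils, iters=5000, temp=1.0, cool=0.999):
    cur = {random.randrange(n_items)}
    cur_u = utility(cur, bitmaps, tx_utils)
    best, best_u = set(cur), cur_u
    for _ in range(iters):
        cand = set(cur)
        cand ^= {random.randrange(n_items)}   # flip one random item
        if cand:
            cand_u = utility(cand, bitmaps, tx_utils)
            if cand_u >= cur_u or \
                    random.random() < math.exp((cand_u - cur_u) / temp):
                cur, cur_u = cand, cand_u
                if cur_u > best_u:
                    best, best_u = set(cur), cur_u
        temp *= cool                          # geometric cooling schedule
    return best, best_u
```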


2022 ◽  
Vol 19 (1) ◽  
pp. 1-26
Author(s):  
Dennis Rieber ◽  
Axel Acosta ◽  
Holger Fröning

The success of Deep Artificial Neural Networks (DNNs) in many domains created a rich body of research concerned with hardware accelerators for compute-intensive DNN operators. However, implementing such operators efficiently with complex hardware intrinsics such as matrix multiply is a task that has not yet been gracefully automated. Solving this task often requires joint program and data layout transformations. First solutions to this problem have been proposed, such as TVM, UNIT, and ISAMIR, which work on a loop-level representation of operators and specify data layout and possible program transformations before the embedding into the operator is performed. This top-down approach creates a tension between exploration range and search-space complexity, especially when also exploring data layout transformations such as im2col, channel packing, or padding. In this work, we propose a new approach to this problem: a bottom-up method that allows the joint transformation of both computation and data layout based on the found embedding. By formulating the embedding as a constraint satisfaction problem over the scalar dataflow, every possible embedding solution is contained in the search space. Adding further constraints and optimization targets to the solver generates the subset of preferable solutions. An evaluation using the VTA hardware accelerator with the Baidu DeepBench inference benchmark shows that our approach can automatically generate code competitive with reference implementations. Further, we show that dynamically determining the data layout based on the intrinsic and the workload is beneficial for hardware utilization and performance. In cases where the reference implementation has low hardware utilization due to its fixed deployment strategy, we achieve a geometric-mean speedup of up to 2.813×, while individual operators improve by as much as 170×.
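To make the constraint-satisfaction idea concrete, here is a deliberately tiny Z3 sketch that embeds the index space of a 2×2 matrix multiply into a hypothetical 2×2 matrix-multiply intrinsic. The intrinsic shape, variable names, and constraints are illustrative assumptions; the paper's formulation over the full scalar dataflow is far richer.

```python
from z3 import Int, Solver, Distinct, And, sat

s = Solver()
# row[i] / col[j]: which intrinsic lane each operator index maps to
row = [Int(f"row_{i}") for i in range(2)]
col = [Int(f"col_{j}") for j in range(2)]
for v in row + col:
    s.add(And(v >= 0, v <= 1))               # lanes of the 2x2 intrinsic
s.add(Distinct(*row), Distinct(*col))        # bijective index mapping

# Further constraints (scalar dataflow equality, layout preferences) and
# optimization targets would narrow this down to preferable embeddings.
if s.check() == sat:
    m = s.model()
    print({str(v): m[v] for v in row + col})
```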


2022 ◽  
Vol 19 (1) ◽  
pp. 1-25
Author(s):  
Hongzhi Liu ◽  
Jie Luo ◽  
Ying Li ◽  
Zhonghai Wu

Pass selection and phase ordering are two critical compiler auto-tuning problems. Traditional heuristic methods cannot effectively address these NP-hard problems, especially given the increasing number of compiler passes and the diversity of hardware architectures. Recent research efforts have attempted to address them through machine learning. However, the large search space of candidate pass sequences, the many redundant and irrelevant features, and the lack of training program instances make it difficult to learn models well. Several methods have tried to use expert knowledge to simplify the problems, such as using only the compiler passes or subsequences in the standard levels (e.g., -O1, -O2, and -O3) provided by compiler designers; however, these methods ignore useful compiler passes that are not contained in the standard levels. Principal component analysis (PCA) and exploratory factor analysis (EFA) have been utilized to reduce the redundancy of feature data, but these unsupervised methods retain information irrelevant to the performance of compilation optimization, which may mislead the subsequent model learning. To solve these problems, we propose a compiler pass selection and phase ordering approach called Iterative Compilation based on Metric learning and Collaborative filtering (ICMC). First, we propose a data-driven method to construct pass subsequences according to the observed collaborative interactions and dependencies among passes on a given program set; this lets us make use of all available compiler passes while pruning the search space. Then, a supervised metric learning method is utilized to retain the feature information useful for compilation optimization while removing both the irrelevant and the redundant information. Based on the learned similarity metric, a neighborhood-based collaborative filtering method is employed to iteratively recommend a few superior compiler passes for each target program. Last, an iterative data enhancement method is designed to alleviate the shortage of training program instances and to enhance the performance of iterative pass recommendation. Experimental results using the LLVM compiler on all 32 cBench programs show that: (1) ICMC significantly outperforms several state-of-the-art compiler phase ordering methods; (2) it performs as well as or better than the standard level -O3 on all test programs; and (3) it reaches an average speedup of 1.20× (up to 1.46×) over the standard level -O3.
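The neighborhood-based recommendation step might look roughly like the Python below; the fixed matrix `M` stands in for the learned metric, and the feature and speedup tables are hypothetical.

```python
import numpy as np

def recommend_passes(target_feat, train_feats, speedups, M, k=3, top_n=5):
    """train_feats: (n_programs, d) program features; speedups:
    (n_programs, n_passes) observed speedup of each candidate pass
    (sub)sequence on each training program; M: (d, d) PSD matrix of a
    learned Mahalanobis-style metric."""
    diffs = train_feats - target_feat
    dists = np.sqrt(np.einsum("ni,ij,nj->n", diffs, M, diffs))
    nearest = np.argsort(dists)[:k]           # k most similar programs
    weights = 1.0 / (dists[nearest] + 1e-9)   # closer neighbors weigh more
    scores = weights @ speedups[nearest]      # weighted vote per pass
    return np.argsort(scores)[::-1][:top_n]   # indices of recommended passes
```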


2022 ◽  
Vol 19 (1) ◽  
pp. 1-21
Author(s):  
Daeyeal Lee ◽  
Bill Lin ◽  
Chung-Kuan Cheng

SMART NoCs achieve ultra-low latency by enabling single-cycle multi-hop transmission via bypass channels. However, contention along bypass channels can seriously degrade the performance of SMART NoCs by breaking the bypass paths. Therefore, contention-free task mapping and scheduling are essential for optimal system performance. In this article, we propose an SMT (Satisfiability Modulo Theories)-based framework to find optimal contention-free task mappings with minimum application schedule lengths on 2D/3D SMART NoCs with mixed dimension-order routing. On top of SMT's fast reasoning capability for conditional constraints, we develop efficient search-space reduction techniques to achieve practical scalability. Experiments demonstrate that our SMT framework achieves 10× higher scalability than ILP (Integer Linear Programming), with average runtimes for finding optimum solutions that are 931.1× faster (ranging from 2.2× to 1532.1×) on 2D and 1237.1× faster (ranging from 4× to 4373.8×) on 3D SMART NoCs. Our 2D and 3D extensions of the SMT framework with mixed dimension-order routing also maintain this improved scalability with the extended and diversified routing paths, resulting in reduced application schedule lengths across various application benchmarks.
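A toy Z3 version of the placement part of such a formulation is given below; the mesh size and task graph are assumed, and the contention and scheduling constraints central to the paper are omitted for brevity.

```python
from z3 import Int, Optimize, And, Distinct, If, Sum, sat

N = 2                                    # N x N mesh
edges = [(0, 1), (1, 2), (2, 3)]         # assumed task communication graph
opt = Optimize()
x = [Int(f"x_{t}") for t in range(4)]    # mesh coordinates per task
y = [Int(f"y_{t}") for t in range(4)]
for t in range(4):
    opt.add(And(x[t] >= 0, x[t] < N, y[t] >= 0, y[t] < N))
opt.add(Distinct(*[x[t] * N + y[t] for t in range(4)]))  # one task per node

def dist(a, b):                          # |a - b| as a Z3 expression
    return If(a >= b, a - b, b - a)

# Minimize total XY-routing hop count over all task-graph edges.
hops = Sum([dist(x[s], x[d]) + dist(y[s], y[d]) for s, d in edges])
opt.minimize(hops)
if opt.check() == sat:
    print(opt.model())
```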


Mathematics ◽  
2022 ◽  
Vol 10 (2) ◽  
pp. 274
Author(s):  
Álvaro Gómez-Rubio ◽  
Ricardo Soto ◽  
Broderick Crawford ◽  
Adrián Jaramillo ◽  
David Mancilla ◽  
...  

In the world of optimization, especially concerning metaheuristics, solving complex problems involving big data and constrained instances can be difficult. This is mainly due to the difficulty of implementing efficient solutions that can solve, in adequate time, the complex optimization problems that arise across different industries. Big data has demonstrated its efficiency in addressing various concerns in information management. In this paper, an approach based on multiprocessing is proposed wherein clustering and parallelism are used together to improve the search process of metaheuristics when solving large instances of complex optimization problems, incorporating collaborative elements that enhance the quality of the solutions. The proposal applies machine learning algorithms to improve the segmentation of the search space. In particular, two different clustering methods from machine learning are implemented on bio-inspired algorithms to intelligently initialize their solution populations and thus organize the search from the outset. The results show that this approach is competitive with other techniques in solving a large set of instances of a well-known NP-hard problem, without adding excessive complexity to the metaheuristic algorithms.
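A minimal sketch of the clustering-based initialization, assuming uniform random sampling and scikit-learn's KMeans (both illustrative choices, not necessarily the paper's):

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_populations(n_samples, n_vars, n_clusters, bounds=(0.0, 1.0)):
    """Sample candidate solutions, cluster them, and return one
    sub-population per cluster so parallel metaheuristic workers start
    their search in distinct regions of the search space."""
    rng = np.random.default_rng(42)
    candidates = rng.uniform(bounds[0], bounds[1], (n_samples, n_vars))
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(candidates)
    return [candidates[labels == c] for c in range(n_clusters)]

# Each sub-population would seed one parallel solver instance.
pops = clustered_populations(n_samples=500, n_vars=10, n_clusters=4)
```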


2022 ◽  
Author(s):  
Thomson Mtonga ◽  
Keren K. Kaberere ◽  
George Kimani Irungu

The installation of shunt capacitors in radial distribution systems reduces branch power flows, branch currents, branch power losses, and voltage drops, which in turn improves voltage profiles and voltage stability margins. However, to attain these benefits efficiently, the shunt capacitors must be installed optimally; that is, optimally sized shunt capacitors need to be installed at the optimum buses of the electrical system. This article proposes a novel approach for optimizing the placement and sizing of shunt capacitors in radial distribution systems, with a focus on minimizing the cost of active power losses together with the capacitors' purchase, installation, operation, and maintenance costs. To reduce the search space, and hence the computation time, the proposed approach starts the search process by arranging the buses of the radial distribution system under consideration in pairs. Thereafter, these pairs influence each other to determine the optimum total number of buses to be compensated. The proposed approach was tested on the 34- and 85-bus radial distribution systems; when the simulation results were compared with those obtained by other approaches, the developed approach proved to be the better option because it yielded the least cost.
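The objective being minimized can be sketched as follows; the cost coefficients and the user-supplied power-flow routine are assumptions for illustration, not the paper's exact figures.

```python
K_P = 168.0      # $/kW-year, assumed cost of active power losses
K_C = 4.9        # $/kvar, assumed capacitor purchase cost
K_INST = 1000.0  # $ per installed bank, assumed installation cost
K_OM = 300.0     # $/bank-year, assumed operation and maintenance cost

def total_cost(placement, run_power_flow):
    """placement: dict {bus: kvar}; run_power_flow returns the feeder's
    total active power loss (kW) with those capacitors applied."""
    loss_kw = run_power_flow(placement)
    cap_cost = sum(K_C * kvar + K_INST + K_OM for kvar in placement.values())
    return K_P * loss_kw + cap_cost
```

The pair-based search then evaluates this cost while the bus pairs jointly determine how many buses to compensate.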



