Search Spaces
Recently Published Documents

TOTAL DOCUMENTS: 314 (FIVE YEARS: 86)
H-INDEX: 22 (FIVE YEARS: 4)

Author(s):  
Nesma Youssef ◽  
Hatem Abdulkader ◽  
Amira Abdelwahab

Sequential rule mining is one of the most common data mining techniques. It aims to discover desired rules in large sequence databases, extracting the essential information that supports knowledge acquisition from large search spaces and selecting interesting rules. The key challenge is to avoid wasting time, which is particularly difficult in large sequence databases. This paper studies mining rules from two representations of sequential patterns to obtain compact databases without affecting the final result. In addition, it executes a parallel approach that utilizes a multi-core processor architecture for mining non-redundant sequential rules, and it applies pruning techniques to enhance the efficiency of the generated rules. The proposed algorithm was evaluated by comparing it with another non-redundant sequential rule algorithm, Non-Redundant with Dynamic Bit Vector (NRD-DBV). Both algorithms were run on four real datasets with different characteristics. Our experiments show the performance of the proposed algorithm in terms of execution time and computational cost. It achieves the highest efficiency, especially for large datasets with low minimum support values, taking approximately half the time consumed by the compared algorithm.
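The two measures that sequential rule miners prune on, support and confidence of a rule X → Y (all items of X occurring before all items of Y in a sequence), can be sketched as follows. The toy database and helper functions are illustrative assumptions, not the paper's implementation.

```python
def rule_support(db, antecedent, consequent):
    """Count sequences in which every antecedent item occurs before every consequent item."""
    count = 0
    for seq in db:
        first = {}
        for i, item in enumerate(seq):
            first.setdefault(item, i)          # first occurrence of each item
        if not all(a in first for a in antecedent):
            continue
        last_a = max(first[a] for a in antecedent)
        tail = seq[last_a + 1:]                # items after the last antecedent item
        if all(c in tail for c in consequent):
            count += 1
    return count

def confidence(db, antecedent, consequent):
    """confidence(X -> Y) = sup(X -> Y) / sup(X)."""
    ante = sum(1 for seq in db if all(a in seq for a in antecedent))
    return rule_support(db, antecedent, consequent) / ante if ante else 0.0

db = [
    ["a", "b", "c"],
    ["a", "c"],
    ["b", "a", "c"],
    ["a", "b"],
]
print(confidence(db, ["a"], ["c"]))  # 0.75: "a" precedes "c" in 3 of the 4 sequences with "a"
```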


2021 ◽  
Vol 1 (4) ◽  
pp. 1-43
Author(s):  
Benjamin Doerr ◽  
Frank Neumann

The theory of evolutionary computation for discrete search spaces has made significant progress since the early 2010s. This survey summarizes some of the most important recent results in this research area. It discusses fine-grained models for the runtime analysis of evolutionary algorithms, highlights recent theoretical insights on parameter tuning and parameter control, and summarizes the latest advances for stochastic and dynamic problems. We review how evolutionary algorithms optimize submodular functions, and we give an overview of the large body of recent results on estimation-of-distribution algorithms. Finally, we present the state of the art of drift analysis, one of the most powerful analysis techniques developed in this field.
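A canonical object of the runtime analyses surveyed here is the (1+1) EA on OneMax, whose expected optimization time is Θ(n log n). A minimal sketch (not code from the survey) that counts generations until the optimum is hit:

```python
# Toy (1+1) EA on OneMax with standard bit mutation (rate 1/n).
import random

def one_max(x):
    return sum(x)

def one_plus_one_ea(n, seed=0):
    """Return the number of generations until the all-ones string is found."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    gens = 0
    while one_max(x) < n:
        # flip each bit independently with probability 1/n
        y = [bit ^ 1 if rng.random() < 1.0 / n else bit for bit in x]
        if one_max(y) >= one_max(x):           # elitist acceptance
            x = y
        gens += 1
    return gens

# Drift/runtime theory predicts Theta(n log n) expected generations.
print(one_plus_one_ea(30))
```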


Mathematics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 47
Author(s):  
Ajitha K. B. Shenoy ◽  
Smitha N. Pai

The structural properties of the search graph play an important role in the success of local search-based metaheuristic algorithms; magnification is one such property. This study establishes the relationship between the magnification of a search graph and the mixing time of the Markov chain (MC) induced by local search-based metaheuristics on that search space. The result shows that the mixing time of the ergodic reversible Markov chain induced by local search-based metaheuristics is inversely proportional to the magnification. It is therefore desirable to use a search space with large magnification for the optimization problem at hand rather than an arbitrary search space, since local search-based metaheuristics are likely to perform well there. Using these relations, this work shows that the MC induced by the Metropolis Algorithm (MA) mixes rapidly if the search graph has large magnification; that is, for any combinatorial optimization problem, the Markov chains associated with the MA mix rapidly, i.e., in polynomial time, if the underlying search graph has large magnification. The usefulness of the obtained results is illustrated using the 0/1-Knapsack Problem, a well-studied, NP-complete combinatorial optimization problem. Using the theoretical results obtained, this work shows that the Markov chains associated with local search-based metaheuristics such as random walk and the MA for the 0/1-Knapsack Problem mix rapidly.
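The Metropolis Algorithm analyzed above can be sketched for the 0/1-Knapsack Problem with a single-bit-flip neighborhood, i.e., the moves that define the search graph. The instance data and temperature are illustrative assumptions, not values from the paper.

```python
# Minimal Metropolis Algorithm (MA) on a 0/1-knapsack instance; infeasible states get fitness 0.
import math
import random

def metropolis_knapsack(values, weights, capacity, temperature=2.0, steps=5000, seed=1):
    rng = random.Random(seed)
    n = len(values)

    def fitness(x):
        w = sum(wi for wi, xi in zip(weights, x) if xi)
        return sum(vi for vi, xi in zip(values, x) if xi) if w <= capacity else 0

    x = [0] * n                       # start from the empty (always feasible) knapsack
    best = fitness(x)
    for _ in range(steps):
        i = rng.randrange(n)          # a neighbor in the search graph: flip one bit
        y = list(x)
        y[i] ^= 1
        delta = fitness(y) - fitness(x)
        # accept improvements always, worsenings with probability exp(delta / T)
        if delta >= 0 or rng.random() < math.exp(delta / temperature):
            x = y
        best = max(best, fitness(x))
    return best

values = [10, 5, 15, 7, 6]
weights = [2, 3, 5, 7, 1]
print(metropolis_knapsack(values, weights, capacity=10))
```

The optimum of this toy instance is 31 (items with values 10, 15 and 6, total weight 8); the chain typically reaches it well within the step budget.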


Author(s):  
Bingqian Lu ◽  
Jianyi Yang ◽  
Weiwen Jiang ◽  
Yiyu Shi ◽  
Shaolei Ren

Convolutional neural networks (CNNs) are used in numerous real-world applications such as vision-based autonomous driving and video content analysis. To run CNN inference on various target devices, hardware-aware neural architecture search (NAS) is crucial. A key requirement of efficient hardware-aware NAS is the fast evaluation of inference latencies in order to rank different architectures. While building a latency predictor for each target device is common in the state of the art, it is a very time-consuming process that lacks scalability in the presence of extremely diverse devices. In this work, we address the scalability challenge by exploiting latency monotonicity: the architecture latency rankings on different devices are often correlated. When strong latency monotonicity exists, we can re-use architectures searched for one proxy device on new target devices without losing optimality. In the absence of strong latency monotonicity, we propose an efficient proxy adaptation technique to significantly boost it. Finally, we validate our approach with devices of different platforms on multiple mainstream search spaces, including MobileNet-V2, MobileNet-V3, NAS-Bench-201, ProxylessNAS and FBNet. Our results highlight that, by using just one proxy device, we can find almost the same Pareto-optimal architectures as existing per-device NAS, while avoiding the prohibitive cost of building a latency predictor for each device.
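Latency monotonicity is typically quantified with the Spearman rank correlation between per-device latencies of the same architectures. A self-contained sketch (the latency numbers are made up for illustration; values are assumed distinct, so ties are not handled):

```python
# Spearman rank correlation between architecture latencies on two devices.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(a, b):
    """Pearson correlation of the rank vectors of a and b."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)

proxy_latency  = [12.1, 8.4, 15.0, 9.9, 20.3]    # ms per architecture on the proxy device
target_latency = [30.2, 21.5, 37.8, 25.0, 51.1]  # ms on a target device
print(spearman(proxy_latency, target_latency))   # 1.0: the ranking transfers perfectly
```

A correlation near 1.0 means the proxy's Pareto-optimal architectures can be reused on the target device directly.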


2021 ◽  
Vol 11 (23) ◽  
pp. 11517
Author(s):  
Fu-I Chou ◽  
Tian-Hsiang Huang ◽  
Po-Yuan Yang ◽  
Chin-Hsuan Lin ◽  
Tzu-Chao Lin ◽  
...  

This study proposes an improved fractional-order particle swarm optimizer that overcomes shortcomings of traditional swarm algorithms, such as low search accuracy in high-dimensional spaces, entrapment in local minima, and non-robust results. Our controllable fractional-order particle swarm optimizer can explore search spaces in detail to obtain high-resolution solutions. Moreover, the proposed algorithm has memory: position updates take into account particle positions from previous generations, making the algorithm conservative when updating positions and its results robust. To verify the algorithm's effectiveness, 11 test functions are used to compare the average value, overall best value, and standard deviation of the controllable fractional-order particle swarm optimizer against those of the controllable particle swarm optimizer; experimental results show that the former is more stable, and the solution positions it finds are more reliable. The improved method proposed herein is therefore effective. Moreover, this research describes a heart disease prediction application that uses the proposed optimizer to tune XGBoost hyperparameters with custom target values. The resulting prediction model is verified to be effective and reliable, which demonstrates the controllability of our proposed fractional-order particle swarm optimizer.
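The memory property comes from replacing the integer-order velocity term with a fractional derivative. A generic fractional-order PSO velocity update (Grünwald-Letnikov expansion truncated to four terms), sketched from the general fractional-order PSO literature; the paper's "controllable" variant differs in its details, and all parameter values here are assumptions:

```python
# One-dimensional fractional-order PSO velocity update with four memory terms.
import random

def frac_velocity(alpha, v_hist, x, pbest, gbest, c1=1.5, c2=1.5, rng=random):
    """v_hist holds [v(t), v(t-1), v(t-2), v(t-3)], newest first; alpha is the fractional order."""
    a = alpha
    coeffs = [a,
              0.5 * a * (1 - a),
              (1 / 6) * a * (1 - a) * (2 - a),
              (1 / 24) * a * (1 - a) * (2 - a) * (3 - a)]
    memory = sum(c * v for c, v in zip(coeffs, v_hist))   # weighted past velocities
    return (memory
            + c1 * rng.random() * (pbest - x)             # cognitive pull
            + c2 * rng.random() * (gbest - x))            # social pull

rng = random.Random(0)
v = frac_velocity(0.6, [0.2, 0.1, 0.05, 0.0], x=1.0, pbest=0.5, gbest=0.0, rng=rng)
print(v)
```

With alpha = 1 the memory coefficients collapse to [1, 0, 0, 0] and the update reduces to the classical PSO velocity rule.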


Author(s):  
Shengran Hu ◽  
Ran Cheng ◽  
Cheng He ◽  
Zhichao Lu ◽  
Jing Wang ◽  
...  

For the goal of automated design of high-performance deep convolutional neural networks (CNNs), neural architecture search (NAS) methodology is becoming increasingly important in both academia and industry. Due to the costly stochastic gradient descent training of CNNs for performance evaluation, most existing NAS methods are computationally expensive for real-world deployments. To address this issue, we first introduce a new performance estimation metric, named random-weight evaluation (RWE), to quantify the quality of CNNs in a cost-efficient manner. Instead of fully training the entire CNN, RWE trains only its last layer and leaves the remaining layers with randomly initialized weights, which reduces a single network evaluation to seconds. Second, a complexity metric is adopted for multi-objective NAS to balance model size and performance. Overall, our proposed method obtains a set of efficient models with state-of-the-art performance in two real-world search spaces. The results obtained on the CIFAR-10 dataset are then transferred to the ImageNet dataset to validate the practicality of the proposed algorithm. Moreover, ablation studies on the NAS-Bench-301 dataset reveal the effectiveness of the proposed RWE in estimating performance compared to existing methods.
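The RWE idea of freezing random features and training only the final linear layer can be sketched without a deep learning framework. This is a toy stand-in (a random linear map plus ReLU as the frozen "backbone", a perceptron as the last layer, made-up data), not the paper's CNN setup:

```python
# Random-weight evaluation sketch: frozen random features + trained linear head.
import random

def rwe_score(samples, labels, n_features=16, epochs=20, seed=0):
    rng = random.Random(seed)
    d = len(samples[0])
    # frozen, randomly initialized "backbone": random linear map followed by ReLU
    W = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n_features)]
    feats = [[max(0.0, sum(w * x for w, x in zip(row, s))) for row in W] for s in samples]
    # train only the last layer (a perceptron) on the frozen features
    w_out, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):            # y in {-1, +1}
            pred = 1 if sum(wi * fi for wi, fi in zip(w_out, f)) + b > 0 else -1
            if pred != y:
                w_out = [wi + 0.1 * y * fi for wi, fi in zip(w_out, f)]
                b += 0.1 * y
    correct = sum(1 for f, y in zip(feats, labels)
                  if (1 if sum(wi * fi for wi, fi in zip(w_out, f)) + b > 0 else -1) == y)
    return correct / len(samples)                  # cheap proxy for architecture quality

samples = [[1.0, 0.2], [0.9, -0.5], [-1.1, 0.3], [-0.8, -0.7]]
labels = [1, 1, -1, -1]                            # label = sign of the first coordinate
print(rwe_score(samples, labels))
```

In the NAS setting this score is computed per candidate architecture and used only for ranking, which is why skipping full training is acceptable.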


2021 ◽  
Author(s):  
Tom Altenburg ◽  
Thilo Muth ◽  
Bernhard Y. Renard

Mass spectrometry-based proteomics makes it possible to study all proteins of a sample at the molecular level. The ever-increasing complexity and amount of proteomics MS data require powerful yet efficient computational and statistical analysis. In particular, recent bottom-up MS-based proteomics studies may consider a diverse pool of post-translational modifications, employ large databases (as in metaproteomics or proteogenomics), contain multiple protein isoforms, include unspecific cleavage sites, or combinations thereof, and thus face a computationally challenging situation for protein identification. To cope with the resulting large search spaces, we present a deep learning approach that jointly embeds MS/MS spectra and peptides into the same vector space, such that embeddings can be compared easily and interchangeably using Euclidean distances. In contrast to existing spectrum embedding techniques, ours are learned jointly with their respective peptides and thus remain meaningful. By visualizing the learned manifold of both spectrum and peptide embeddings against their physicochemical properties, our approach becomes easily interpretable. At the same time, our joint embeddings blur the lines between spectra and protein sequences, providing a powerful framework for peptide identification. In particular, we build an open search that can query tens of thousands of spectra against millions of peptides within seconds. yHydra achieves identification rates comparable to MSFragger. Thanks to the open search, a delta mass is assigned to each identification, which allows post-translational modifications to be characterized without restriction. Meaningful joint embeddings enable faster open searches and generally make downstream analysis efficient and convenient, for example for integration with other omics types.

Availability: upon request ([email protected])
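Once spectra and peptides live in the same vector space, identification reduces to a Euclidean nearest-neighbor query. The embeddings and peptide names below are made-up placeholders, not yHydra's learned vectors:

```python
# Nearest-neighbor peptide identification over joint spectrum/peptide embeddings.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(spectrum_emb, peptide_embs):
    """Return the peptide whose embedding is closest to the spectrum embedding."""
    return min(peptide_embs, key=lambda pep: euclidean(spectrum_emb, peptide_embs[pep]))

peptide_embs = {
    "PEPTIDE":  [0.9, 0.1, 0.0],
    "PROTEINS": [0.1, 0.8, 0.3],
    "SEQUENCE": [0.2, 0.2, 0.9],
}
spectrum = [0.85, 0.15, 0.05]
print(identify(spectrum, peptide_embs))  # PEPTIDE
```

At yHydra's scale this linear scan would be replaced by an approximate nearest-neighbor index, which is what makes open searches over millions of peptides fast.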


2021 ◽  
Author(s):  
Manuel Fritz ◽  
Michael Behringer ◽  
Dennis Tschechlov ◽  
Holger Schwarz

Clustering is a fundamental primitive in many applications. To achieve valuable results in exploratory clustering analyses, the parameters of the clustering algorithm have to be set appropriately, which is a major pitfall. We observe multiple challenges for large-scale exploration processes. On the one hand, they require specific methods to efficiently explore large parameter search spaces. On the other hand, they often exhibit long runtimes, in particular when large datasets are analyzed using clustering algorithms with super-polynomial runtimes, which repeatedly need to be executed within exploratory clustering analyses. We address these challenges as follows: First, we present LOG-Means and show that it provides estimates for the number of clusters in sublinear time with respect to the defined search space, i.e., it provably requires fewer executions of a clustering algorithm than existing methods. Second, we demonstrate how to exploit fundamental characteristics of exploratory clustering analyses in order to significantly accelerate the (repetitive) execution of clustering algorithms on large datasets. Third, we show how both challenges can be tackled at the same time. To the best of our knowledge, this is the first work to address these challenges simultaneously. In our comprehensive evaluation, we show that our proposed methods significantly outperform state-of-the-art methods, especially supporting novice analysts in large-scale exploratory clustering analyses.
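The sublinear idea behind LOG-Means, heavily simplified: evaluate the clustering error only at exponentially spaced values of k and refine around the largest error ratio (the "elbow"), instead of running the clustering algorithm once per candidate k. Here `error(k)` is a synthetic stand-in for an actual k-means run, and the refinement rule is a simplification of the published method:

```python
# Elbow search over exponentially spaced k: sublinearly many clustering runs.
import math

def log_means_sketch(error, k_max):
    evaluated = {}
    def e(k):
        if k not in evaluated:
            evaluated[k] = error(k)            # one clustering run per distinct k
        return evaluated[k]
    ks = [2 ** i for i in range(int(math.log2(k_max)) + 1)]   # 1, 2, 4, ..., k_max
    while True:
        # refine the interval with the largest error ratio between neighboring candidates
        i = max(range(len(ks) - 1), key=lambda j: e(ks[j]) / e(ks[j + 1]))
        lo, hi = ks[i], ks[i + 1]
        if hi - lo <= 1:
            return hi, len(evaluated)          # estimated k and number of clustering runs
        mid = (lo + hi) // 2
        ks = ks[:i + 1] + [mid] + ks[i + 1:]

# synthetic error curve with a clear elbow at k = 6
def error(k):
    return 500 - 50 * k if k <= 6 else 200 - k

k, runs = log_means_sketch(error, k_max=64)
print(k, runs)  # finds k = 6 with far fewer than 64 clustering runs
```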


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7948
Author(s):  
Ya-Ju Yu ◽  
Yu-Hsiang Huang ◽  
Yuan-Yao Shih

Before a user equipment (UE) can send data on the narrowband physical uplink shared channel (NPUSCH), it must periodically monitor a search space in the narrowband physical downlink control channel (NPDCCH) to decode a downlink control indicator (DCI) in narrowband Internet of Things (NB-IoT). This monitoring period, called the NPDCCH period in NB-IoT, can be flexibly adjusted for UEs with different channel qualities. However, because low-cost NB-IoT UEs operate in half-duplex mode, they cannot monitor search spaces in NPDCCHs and transmit data on the NPUSCH simultaneously. Thus, as we observe, a percentage of uplink subframes is wasted while UEs monitor search spaces in NPDCCHs, and the wasted percentage grows as the monitoring period shortens. To address this issue, we formulate the cross-cycled resource allocation problem, which reduces the consumed subframes while satisfying the uplink data requirement of each UE. We then propose a cross-cycled uplink resource allocation algorithm that efficiently uses the originally unusable NPUSCH subframes to increase resource utilization. Compared with two baseline resource allocation algorithms, the simulation results confirm our motivation of using cross-cycled radio resources to achieve massive connections over NB-IoT, especially for UEs with high channel qualities. The results also showcase the efficiency of the proposed algorithm, which can be flexibly applied to different NPDCCH periods.
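The observation above is simple arithmetic: a half-duplex UE loses the uplink subframes that overlap its NPDCCH monitoring window, so a shorter NPDCCH period wastes a larger fraction of subframes. The numbers below are illustrative assumptions, not values from the 3GPP specification:

```python
# Fraction of uplink subframes lost to NPDCCH monitoring per NPDCCH period.

def wasted_fraction(npdcch_period, search_space_len):
    """Both arguments in subframes; the search space is monitored once per period."""
    return search_space_len / npdcch_period

for period in (256, 512, 1024):
    print(period, wasted_fraction(period, search_space_len=64))
# 256 -> 0.25, 512 -> 0.125, 1024 -> 0.0625: shorter periods waste more
```

Cross-cycled allocation recovers part of this loss by scheduling other UEs' uplink transmissions into subframes a monitoring UE cannot use.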


Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3011
Author(s):  
Drishti Yadav

This paper introduces a novel population-based, bio-inspired meta-heuristic optimization algorithm called the Blood Coagulation Algorithm (BCA). BCA draws inspiration from the process of blood coagulation in the human body. The underlying concepts behind the proposed algorithm are the cooperative behavior of thrombocytes and their intelligent strategy of clot formation; these behaviors are modeled to drive both intensification and diversification in a given search space. A comparison with various state-of-the-art meta-heuristic algorithms over a test suite of 23 renowned benchmark functions demonstrates the efficiency of BCA. An extensive investigation is conducted to analyze the performance, convergence behavior and computational complexity of BCA. The comparative study and statistical test analysis show that BCA offers very competitive and statistically significant results compared to other eminent meta-heuristic algorithms. Experimental results also show the consistent performance of BCA in high-dimensional search spaces. Furthermore, we demonstrate the applicability of BCA to real-world applications by solving several real-life engineering problems.
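The intensification/diversification loop common to such population-based metaheuristics can be sketched on the sphere function; this is a generic template with assumed parameters, not BCA's actual clot-formation equations:

```python
# Generic population-based metaheuristic: pull toward the best solution (intensification)
# plus shrinking random perturbation (diversification), with greedy replacement.
import random

def minimize(f, dim, pop_size=20, iters=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    for t in range(iters):
        step = 1.0 - t / iters                   # large steps early, small steps late
        for i, x in enumerate(pop):
            cand = [xi + 0.5 * (bi - xi) + rng.gauss(0, step)
                    for xi, bi in zip(x, best)]
            if f(cand) < f(x):                   # keep the candidate only if it improves
                pop[i] = cand
        best = min(pop + [best], key=f)
    return best

sphere = lambda x: sum(v * v for v in x)
sol = minimize(sphere, dim=5)
print(sphere(sol))  # should be close to 0
```

The shrinking step size is what trades diversification for intensification over the run; BCA realizes this trade-off through its thrombocyte-cooperation model instead.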

