Online Bridged Pruning for Real-Time Search with Arbitrary Lookaheads

Author(s):  
Carlos Hernandez ◽  
Adi Botea ◽  
Jorge A. Baier ◽  
Vadim Bulitko

Real-time search algorithms are relevant to time-sensitive decision-making domains such as video games and robotics. In such settings, the agent is required to decide on each action under a constant time bound, regardless of the search space size. Despite recent progress, these algorithms can still produce poor-quality solutions, mainly due to state re-visitation. Different techniques have been developed to reduce such re-visitation, with state pruning showing promise. In this paper, we propose a novel pruning approach applicable to a wide class of real-time search algorithms. Given a local search space of arbitrary size, our technique aggressively prunes away all states in its interior, possibly adding new edges to maintain the connectivity of the search space frontier. An experimental evaluation shows that our pruning often improves the performance of a base real-time search algorithm by over an order of magnitude, allowing our implemented system to outperform the state-of-the-art real-time search algorithms used in the evaluation.
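
As an illustration of the kind of operation described here, the following minimal sketch prunes the interior of a local search space and adds bridging edges between the remaining neighbours so that connectivity through the pruned region is preserved. The graph encoding and the bridging rule are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch only: prune the interior of a local search space and add
# "bridge" edges so connectivity through the pruned region is preserved.
# The dict-of-sets graph encoding and bridging rule are assumptions.

def bridge_prune(adj, local_space, frontier):
    """adj: undirected graph as {state: set(neighbours)};
    local_space: states of the local search space; frontier: its boundary."""
    interior = local_space - frontier
    # Connect every pair of non-interior neighbours of an interior state,
    # so paths that used to run through the interior still exist.
    for s in interior:
        outside = [n for n in adj[s] if n not in interior]
        for i in range(len(outside)):
            for j in range(i + 1, len(outside)):
                adj[outside[i]].add(outside[j])
                adj[outside[j]].add(outside[i])
    # Remove the interior states and all edges pointing at them.
    for s in interior:
        del adj[s]
    for s in adj:
        adj[s] -= interior
    return adj

# Tiny example: chain a-b-c-d-e, local space {b, c, d}, frontier {b, d}.
g = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c', 'e'}, 'e': {'d'}}
bridge_prune(g, {'b', 'c', 'd'}, {'b', 'd'})
print(g)  # 'c' is gone; the new edge b-d keeps the frontier connected
```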

2012 ◽  
Vol 43 ◽  
pp. 523-570 ◽  
Author(s):  
C. Hernandez ◽  
J. A. Baier

Heuristics used for solving hard real-time search problems often contain depressions: bounded regions of the search space in which the heuristic function is inaccurate compared to the actual cost to reach a solution. Early real-time search algorithms, like LRTA*, easily become trapped in those regions, since the heuristic values of their states may need to be updated multiple times, which results in costly solutions. State-of-the-art real-time search algorithms, like LSS-LRTA* or LRTA*(k), improve LRTA*'s mechanism for updating the heuristic, resulting in improved performance. Those algorithms, however, do not guide search towards avoiding depressed regions. This paper presents depression avoidance, a simple real-time search principle that guides search away from states that have been marked as part of a heuristic depression. We propose two ways in which depression avoidance can be implemented: mark-and-avoid and move-to-border. We implement these strategies on top of LSS-LRTA* and RTAA*, producing four new real-time heuristic search algorithms: aLSS-LRTA*, daLSS-LRTA*, aRTAA*, and daRTAA*. When the objective is to find a single solution by running the real-time search algorithm once, we show that daLSS-LRTA* and daRTAA* outperform their predecessors, sometimes by an order of magnitude. Of the four new algorithms, daRTAA* produces the best solutions given a fixed deadline on the average time allowed per planning episode. We prove that all our algorithms have good theoretical properties: in finite search spaces, they find a solution if one exists, and converge to an optimal solution after a number of trials.
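
A rough sketch of the mark-and-avoid idea, reduced to a single-step LRTA*-style agent (the actual algorithms operate on LSS-LRTA* and RTAA* local search spaces), might look as follows; the function names and the one-step lookahead are hypothetical simplifications.

```python
# Sketch of "mark-and-avoid": mark a state as depressed whenever its heuristic
# value is raised during learning, then prefer unmarked successors when moving.

def step(state, succ, cost, h, depressed):
    """succ(s) -> iterable of successors; cost(s, t) -> edge cost;
    h: dict of heuristic values (entries for all relevant states);
    depressed: set of states already marked as part of a depression."""
    # Learning: raise h(state) to the best one-step lookahead value.
    best = min(cost(state, t) + h[t] for t in succ(state))
    if best > h[state]:
        h[state] = best
        depressed.add(state)          # h was raised -> mark as depressed
    # Movement: prefer successors that are not marked as depressed.
    candidates = [t for t in succ(state) if t not in depressed] or list(succ(state))
    return min(candidates, key=lambda t: cost(state, t) + h[t])
```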


Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 407 ◽  
Author(s):  
Dominik Weikert ◽  
Sebastian Mai ◽  
Sanaz Mostaghim

In this article, we present a new algorithm called Particle Swarm Contour Search (PSCS), a Particle Swarm Optimisation-inspired algorithm to find object contours in 2D environments. Currently, most contour-finding algorithms are based on image processing and require a complete overview of the search space in which the contour is to be found. For real-world applications, however, such complete knowledge of the search space may not always be feasible to obtain. The proposed algorithm removes this requirement and relies only on the local information of the particles to accurately identify a contour. Particles search for the contour of an object and then traverse along it, using their knowledge of positions inside and outside the object. Our experiments show that the proposed PSCS algorithm can deliver results comparable to the state of the art.
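
The local, inside/outside style of contour localisation can be illustrated with the following toy sketch: a single "particle" refines a contour point by bisecting between a position it knows is inside the object and one it knows is outside. This is not the PSCS algorithm itself, only a minimal illustration of searching with local membership queries.

```python
# Illustrative sketch: locate a contour point using only local inside/outside
# queries, by bisecting between a known-inside and a known-outside position.
import numpy as np

def refine_contour_point(inside_pt, outside_pt, is_inside, tol=1e-3):
    """is_inside(p) -> bool is the only information available to the particle."""
    a, b = np.asarray(inside_pt, float), np.asarray(outside_pt, float)
    while np.linalg.norm(b - a) > tol:
        mid = (a + b) / 2.0
        if is_inside(mid):
            a = mid          # midpoint is inside: move the inside anchor
        else:
            b = mid          # midpoint is outside: move the outside anchor
    return (a + b) / 2.0     # approximate point on the contour

# Usage: contour of a unit circle centred at the origin.
print(refine_contour_point([0, 0], [2, 0], lambda p: p[0]**2 + p[1]**2 < 1.0))
```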


2020 ◽  
Vol 34 (06) ◽  
pp. 9827-9834
Author(s):  
Maximilian Fickert ◽  
Tianyi Gu ◽  
Leonhard Staut ◽  
Wheeler Ruml ◽  
Joerg Hoffmann ◽  
...  

Suboptimal heuristic search algorithms can benefit from reasoning about heuristic error, especially in a real-time setting where there is not enough time to search all the way to a goal. However, current reasoning methods implicitly or explicitly incorporate assumptions about the cost-to-go function. We consider a recent real-time search algorithm, called Nancy, that manipulates explicit beliefs about the cost-to-go. The original presentation of Nancy assumed that these beliefs are Gaussian, with parameters following a certain form. In this paper, we explore how to replace these assumptions with actual data. We develop a data-driven variant of Nancy, DDNancy, that bases its beliefs on heuristic performance statistics from the same domain. We extend Nancy and DDNancy with the notion of persistence and prove their completeness. Experimental results show that DDNancy can perform well in domains in which the original assumption-based Nancy performs poorly.
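
The data-driven belief idea can be sketched roughly as follows: gather (h, h*) pairs from solved instances of the same domain offline, then use the empirical distribution of true costs-to-go observed for a given heuristic value as the belief. The bucketing scheme and function names below are illustrative assumptions, not the DDNancy implementation.

```python
# Sketch: empirical cost-to-go beliefs built from domain statistics.
from collections import defaultdict

def build_beliefs(samples, bucket=1.0):
    """samples: iterable of (h_value, true_cost_to_go) pairs from training data."""
    beliefs = defaultdict(list)
    for h, h_star in samples:
        beliefs[round(h / bucket)].append(h_star)
    return beliefs

def belief_for(beliefs, h, bucket=1.0):
    """Return the empirical cost-to-go distribution believed for heuristic h."""
    return beliefs.get(round(h / bucket), [h])   # fall back to h itself

data = [(2.0, 3.0), (2.1, 4.0), (5.0, 5.0), (5.2, 8.0)]
b = build_beliefs(data)
print(belief_for(b, 2.05))   # -> [3.0, 4.0]
```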


2018 ◽  
Author(s):  
Hao Chi ◽  
Chao Liu ◽  
Hao Yang ◽  
Wen-Feng Zeng ◽  
Long Wu ◽  
...  

Shotgun proteomics has grown rapidly in recent decades, but a large fraction of tandem mass spectrometry (MS/MS) data in shotgun proteomics is not successfully identified. We have developed a novel database search algorithm, Open-pFind, to efficiently identify peptides even in an ultra-large search space that takes into account unexpected modifications, amino acid mutations, semi- or non-specific digestion, and co-eluting peptides. Tested on two metabolically labeled MS/MS datasets, Open-pFind reported 50.5‒117.0% more peptide-spectrum matches (PSMs) than the seven other advanced algorithms. More importantly, the Open-pFind results were more credible as judged by verification experiments using stable isotopic labeling. Tested on four additional large-scale datasets, 70‒85% of the spectra were confidently identified, and high-quality spectra were nearly completely interpreted by Open-pFind. Further, Open-pFind was over 40 times faster than the other three open search algorithms and 2‒3 times faster than three restricted search algorithms. Re-analysis of an entire human proteome dataset consisting of ∼25 million spectra using Open-pFind identified a total of 14,064 proteins encoded by 12,723 genes, by requiring at least two uniquely identified peptides. In these search results, Open-pFind also excelled in an independent test for false positives based on the presence or absence of olfactory receptors. Thus, a practical use of the open search strategy has been realized by Open-pFind for the truly global-scale proteomics experiments of today and the future.
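
For readers unfamiliar with the open search strategy mentioned above, the following toy sketch contrasts it with a restricted search: instead of requiring the precursor mass to match a candidate peptide within a narrow tolerance, an open search also accepts candidates whose mass difference may correspond to an unexpected modification. The masses, thresholds, and candidate list are illustrative only; this is not Open-pFind's engine.

```python
# Toy sketch of the general open-search idea (wide precursor mass window).

def open_search(precursor_mass, candidates, tol=0.02, max_mod=500.0):
    """candidates: list of (peptide_sequence, theoretical_mass) pairs."""
    hits = []
    for seq, mass in candidates:
        delta = precursor_mass - mass
        if abs(delta) <= tol:
            hits.append((seq, 0.0))            # unmodified match
        elif abs(delta) <= max_mod:
            hits.append((seq, delta))          # putative modification mass
    return hits

peptides = [("PEPTIDE", 799.36), ("SAMPLER", 803.42)]   # illustrative masses
print(open_search(815.33, peptides))  # PEPTIDE with ~+15.97 Da mass shift
```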


Author(s):  
Jielu Yan ◽  
MingLiang Zhou ◽  
Jinli Pan ◽  
Meng Yin ◽  
Bin Fang

3D human pose estimation refers to estimating the 3D articulated structure of a person from an image or a video. The technology has massive potential because it can enable tracking people and analyzing motion in real time. Recently, much research has been conducted to optimize human pose estimation, but few works have focused on reviewing 3D human pose estimation. In this paper, we offer a comprehensive survey of state-of-the-art methods for 3D human pose estimation, covering pose estimation solutions, implementations on images or videos that contain different numbers of people, and advanced 3D human pose estimation techniques. Furthermore, the different kinds of algorithms are subdivided into sub-categories and compared in light of their different methodologies. To the best of our knowledge, this is the first such comprehensive survey of the recent progress in 3D human pose estimation, and we hope it will facilitate the completion, refinement and application of 3D human pose estimation.


2018 ◽  
Vol 16 (1) ◽  
pp. 1582-1606 ◽  
Author(s):  
Shahram Golzari ◽  
Mohammad Nourmohammadi Zardehsavar ◽  
Amin Mousavi ◽  
Mahmoud Reza Saybani ◽  
Abdullah Khalili ◽  
...  

The Gravitational Search Algorithm (GSA) is a metaheuristic for solving unimodal problems. In this paper, a K-means based GSA (KGSA) for multimodal optimization is proposed. This algorithm incorporates K-means and a new elitism strategy called "loop in loop" into the GSA. First, in KGSA, the members of the initial population are clustered by K-means. Afterwards, a new population is created and divided into different niches (or clusters) to expand the search space. The "loop in loop" technique guides the members of each niche in the optimum direction of their cluster, so that lighter members move faster towards that optimum than heavier members. For evaluation, KGSA is benchmarked on well-known functions and compared with several state-of-the-art algorithms. Experiments show that KGSA provides better results than the other algorithms in finding the local and global optima of constrained and unconstrained multimodal functions.
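
A highly simplified sketch of the niching step described above, in which the population is clustered with K-means and lighter members of each niche take larger steps towards the niche's best member, could look like the following. It is an illustrative reading, not the authors' KGSA code; the step rule and mass normalisation are assumptions.

```python
# Sketch: K-means niching plus mass-weighted movement towards each niche leader.
import numpy as np
from sklearn.cluster import KMeans

def niche_step(population, fitness, k=3, rng=None):
    """population: (n, d) array; fitness: (n,) array (lower is better)."""
    rng = rng or np.random.default_rng(0)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(population)
    # Mass grows with fitness quality; lighter members move faster.
    worst, best = fitness.max(), fitness.min()
    mass = (worst - fitness) / (worst - best + 1e-12)
    new_pop = population.copy()
    for c in range(k):
        idx = np.where(labels == c)[0]
        leader = idx[np.argmin(fitness[idx])]        # best member of the niche
        step = (1.0 - mass[idx, None]) * rng.random((len(idx), 1))
        new_pop[idx] += step * (population[leader] - population[idx])
    return new_pop
```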


2020 ◽  
Vol 34 (05) ◽  
pp. 9242-9249
Author(s):  
Yujing Wang ◽  
Yaming Yang ◽  
Yiren Chen ◽  
Jing Bai ◽  
Ce Zhang ◽  
...  

Learning text representations is crucial for text classification and other language-related tasks. There is a diverse set of text representation networks in the literature, and how to find the optimal one is a non-trivial problem. Recently, the emerging Neural Architecture Search (NAS) techniques have demonstrated good potential to solve the problem. Nevertheless, most existing works on NAS focus on the search algorithms and pay little attention to the search space. In this paper, we argue that the search space is also an important human prior for the success of NAS in different applications. Thus, we propose a novel search space tailored for text representation. Through automatic search, the discovered network architecture outperforms state-of-the-art models on various public datasets for text classification and natural language inference tasks. Furthermore, some of the design principles found by the automatic search agree well with human intuition.
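
To make the role of the search space as a human prior concrete, here is a generic toy sketch; the candidate operations listed are hypothetical and unrelated to the paper's actual space. Whatever the search algorithm does, it can only return architectures expressible in the hand-designed candidate set, which is why the space itself encodes prior knowledge.

```python
# Generic illustration of a NAS search space as a hand-designed candidate set.
import random

SEARCH_SPACE = {                       # hypothetical text-representation ops
    "layer_op": ["self_attention", "conv1d_k3", "conv1d_k5", "gru", "identity"],
    "aggregation": ["max_pool", "mean_pool", "cls_token"],
    "num_layers": [2, 4, 6],
}

def sample_architecture(space, rng=random.Random(0)):
    """Sample one architecture; any search algorithm is confined to this space."""
    n = rng.choice(space["num_layers"])
    return {
        "layers": [rng.choice(space["layer_op"]) for _ in range(n)],
        "aggregation": rng.choice(space["aggregation"]),
    }

print(sample_architecture(SEARCH_SPACE))
```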


Algorithms ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 230
Author(s):  
Majid Almarashi ◽  
Wael Deabes ◽  
Hesham H. Amin ◽  
Abdel-Rahman Hedar

Simulated annealing is a well-known search algorithm with a long history of success in many search problems. However, the random walk of simulated annealing does not benefit from a memory of visited states, causing excessive random search with no record of diversification. Unlike memory-based search algorithms such as tabu search, the search in simulated annealing depends on the choice of the initial temperature to explore the search space, which gives little indication of how much exploration has been carried out. This lack of an exploratory view can affect the quality of the solutions found, since the nature of the search in simulated annealing is mainly local. In this work, a two-phase methodology using automatic diversification and intensification based on memory and sensing tools is proposed. The proposed method is called Simulated Annealing with Exploratory Sensing. The computational experiments show the efficiency of the proposed method in ensuring good exploration while finding good solutions within a similar number of iterations.
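
As a rough illustration of combining simulated annealing with a memory-based sensing mechanism, consider the generic sketch below. It is not the proposed Simulated Annealing with Exploratory Sensing method; the revisit threshold and random-restart diversification rule are assumptions.

```python
# Sketch: simulated annealing with a visited-state memory used for "sensing".
import math, random

def sa_with_memory(f, neighbour, random_state, x0, t0=1.0, cooling=0.995,
                   iters=10_000, revisit_limit=50):
    """f: objective to minimise; neighbour(x) -> nearby state;
    random_state() -> fresh random state; states are assumed hashable."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    visited, revisits, t = {x}, 0, t0
    for _ in range(iters):
        y = neighbour(x)
        revisits = revisits + 1 if y in visited else 0
        visited.add(y)
        if revisits > revisit_limit:          # sensing: search is going in circles
            x = random_state()                # diversify: jump somewhere new
            fx, revisits = f(x), 0
            continue
        fy = f(y)
        if fy < fx or random.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy                     # usual annealing acceptance rule
        if fx < fbest:
            best, fbest = x, fx
        t *= cooling
    return best, fbest
```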


2022 ◽  
Vol 73 ◽  
Author(s):  
Maximilian Fickert ◽  
Jörg Hoffmann

In classical AI planning, heuristic functions typically base their estimates on a relaxation of the input task. Such relaxations can be more or less precise, and many heuristic functions have a refinement procedure that can be iteratively applied until the desired degree of precision is reached. Traditionally, such refinement is performed offline to instantiate the heuristic for the search. However, a natural idea is to perform such refinement online instead, in situations where the heuristic is not sufficiently accurate. We introduce several online-refinement search algorithms, based on hill-climbing and greedy best-first search. Our hill-climbing algorithms perform a bounded lookahead, proceeding to a state with lower heuristic value than the root state of the lookahead if such a state exists, or refining the heuristic otherwise to remove such a local minimum from the search space surface. These algorithms are complete if the refinement procedure satisfies a suitable convergence property. We transfer the idea of bounded lookaheads to greedy best-first search with a lightweight lookahead after each expansion, serving both to boost search progress and to detect when the heuristic is inaccurate, identifying an opportunity for online refinement. We evaluate our algorithms with the partial delete relaxation heuristic hCFF, which can be refined by treating additional conjunctions of facts as atomic, and whose refinement operation satisfies the convergence property required for completeness. On both the IPC domains and the recently published Autoscale benchmarks, our online-refinement search algorithms significantly outperform state-of-the-art satisficing planners, and are competitive even with complex portfolios.
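
The bounded-lookahead hill-climbing loop described above can be sketched schematically as follows. The real algorithms search with hCFF and a concrete refinement operation; here `refine` is assumed to return a strictly more informed heuristic, and the agent simply jumps to the improving state rather than executing a path, both of which are simplifying assumptions.

```python
# Schematic sketch of online-refinement hill-climbing with bounded lookahead.
from collections import deque

def online_refinement_hc(start, succ, h, is_goal, refine, bound=5):
    """h: callable heuristic; refine(h, state) -> more informed heuristic."""
    state = start
    while not is_goal(state):
        # Bounded breadth-first lookahead from the current state.
        frontier, seen, better = deque([(state, 0)]), {state}, None
        while frontier:
            s, d = frontier.popleft()
            if s != state and h(s) < h(state):
                better = s                    # found a state below the root's h
                break
            if d < bound:
                for t in succ(s):
                    if t not in seen:
                        seen.add(t)
                        frontier.append((t, d + 1))
        if better is not None:
            state = better                    # escape towards lower h
        else:
            h = refine(h, state)              # local minimum: refine the heuristic
    return state
```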


2011 ◽  
Vol 8 (2) ◽  
pp. 625-629
Author(s):  
Baghdad Science Journal

There are many methods for searching a large amount of data to find one particular piece of information, such as finding the name of a person in a mobile phone's contact list. Certain methods of organizing data make the search process more efficient; the objective of these methods is to find the element with the least cost (least time). The binary search algorithm is faster than sequential and other commonly used search algorithms. This research develops the binary search algorithm by using a new structure called Triple, in which data are represented as triples, each consisting of three locations (1-Top, 2-Left, and 3-Right). The binary search algorithm divides the search interval in half, so the maximum number of comparisons (the average-case complexity of the search) is O(log2 n) when searching a list of N elements. In this research, the number of comparisons is reduced by using the Triple structure, so that the maximum number of comparisons becomes O(log2(n)/3 + 1) when searching for a key in a list of N elements.
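
One plausible reading of the Triple structure and its effect on the comparison count is sketched below: the sorted data is packed into nodes of three values, the binary search runs over the n/3 nodes using their Top values, and the matching node is resolved with a couple of extra comparisons. This is an illustrative interpretation, not the paper's implementation.

```python
# Sketch: binary search over triples (Left, Top, Right) of a sorted list.
from bisect import bisect_left

def pack_triples(sorted_values):
    """Group a sorted list into (left, top, right) triples; top is the middle."""
    triples = []
    for i in range(0, len(sorted_values), 3):
        chunk = sorted_values[i:i + 3]
        while len(chunk) < 3:
            chunk.append(chunk[-1])          # pad the last node if needed
        triples.append((chunk[0], chunk[1], chunk[2]))
    return triples

def triple_search(triples, key):
    """Binary search over triples by their Top value, then check the node."""
    tops = [t[1] for t in triples]
    i = min(bisect_left(tops, key), len(triples) - 1)
    for j in (i - 1, i, i + 1):              # key may fall in a neighbouring node
        if 0 <= j < len(triples) and key in triples[j]:
            return j
    return None

data = sorted([7, 1, 9, 4, 12, 20, 15, 3, 8])
t = pack_triples(data)
print(triple_search(t, 9))   # index of the triple containing 9
```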

