Path-Cost Bounds for Parameterized Centralized Variants of A* for Static and Certain Environments

2016 ◽  
Vol 25 (04) ◽  
pp. 1650028
Author(s):  
Amol D. Mali ◽  
Minh Tang

A* search and its variants have been used in various fields for solving problems with large search spaces in which state transitions occur through the application of operators. The key values in A* search are g(n) and h(n), where g(n) is the cost of the path from the root (or start) node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal. In this paper, we report on a space of variants of A* based on the following ideas: (i) using weighting functions for g(n) and h(n), (ii) evaluating different nodes with different heuristics, (iii) evaluating nodes with computationally cheap heuristics and re-evaluating some nodes with computationally expensive heuristics, and (iv) changing the size of the set of nodes from which the node to be expanded next is selected. We report on the bounds on the costs of solutions found by these variants of A*. We also report on the bounds for meta-variants of A* that invoke these variants sequentially. We show how the results can be used to obtain more flexible search control without increasing the bound on the cost of the solution found by a variant or a meta-variant.
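To make idea (i) concrete, the minimal Python sketch below selects nodes by the key f(n) = w_g·g(n) + w_h·h(n). The graph, heuristic values and weights are invented for illustration; the paper is concerned with the cost bounds such weightings induce, not with this particular implementation.

```python
import heapq

def weighted_a_star(graph, h, start, goal, w_g=1.0, w_h=1.5):
    """graph: {node: [(neighbor, edge_cost), ...]}; h: dict of heuristic estimates."""
    open_heap = [(w_h * h[start], 0.0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0.0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                f2 = w_g * g2 + w_h * h[nbr]               # idea (i): weighted selection key
                heapq.heappush(open_heap, (f2, g2, nbr, path + [nbr]))
    return None, float("inf")

# Hypothetical 4-node graph; with w_g = 1 and an admissible h, the classic weighted-A*
# result bounds the returned cost by w_h times the optimal cost.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 4, "B": 1, "G": 0}
print(weighted_a_star(graph, h, "S", "G"))
```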

2002 ◽  
Vol 16 (12) ◽  
pp. 877-879 ◽  
Author(s):  
John K Marshall

The Canadian Coordinating Office for Health Technology Assessment (CCOHTA) published an economic analysis, using a Markov model, of infliximab therapy for Crohn’s disease that is refractory to other treatments. This was the first fully published economic analysis addressing this treatment option. Health state transitions were based on data from Olmsted County, Minnesota; health-state resource profiles were created using expert opinion; and a number of assumptions were made when designing the model. The analysis was rigorous, the best available efficacy and safety data were used, state-of-the-art sensitivity analyses were undertaken and an ‘acceptability curve’ was constructed. The model found that infliximab was effective in increasing quality-adjusted life years when offered in a variety of protocols, but it was associated with high incremental cost-utility ratios compared with usual care. The results should be interpreted, however, in view of a number of limitations. The time horizon for the analysis was short (one year), because of a lack of longer-term efficacy data, and might have led to an underestimation of the benefits from averting surgery. Because the analysis was performed from the perspective of a Canadian provincial ministry of health, only direct medical costs were considered. Patients with active Crohn’s disease are likely to incur significant indirect costs, which could be mitigated by this medication. The analysis should be updated as new data become available. Moreover, small changes in the cost of the medication could make the treatment cost effective, according to this model. Economic analyses, such as the one undertaken by the CCOHTA, cannot by themselves solve dilemmas in the allocation of limited health care resources, and other considerations must be included when formulating policy. This is especially important for patients with severe Crohn’s disease, who have significant disability and for whom few therapeutic options exist.
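As a rough illustration of the modeling machinery reviewed here (not the CCOHTA model itself), the toy Python sketch below runs a Markov cohort over hypothetical health states, accumulates quality-adjusted life years and costs per cycle, and reports an incremental cost-utility ratio. Every state, transition probability, cost and utility in it is invented.

```python
import numpy as np

states = ["remission", "active", "surgery"]
P = np.array([            # hypothetical 3-month transition matrix (rows sum to 1)
    [0.85, 0.13, 0.02],
    [0.25, 0.65, 0.10],
    [0.40, 0.50, 0.10],
])
utility = np.array([0.88, 0.60, 0.50])     # QALY weight per state (hypothetical)
cost = np.array([500.0, 2500.0, 12000.0])  # cost per cycle (hypothetical)

def run_cohort(P, start, cycles, cycle_len_years=0.25):
    """Advance the cohort distribution and accumulate QALYs and costs."""
    dist, qalys, total_cost = np.array(start, float), 0.0, 0.0
    for _ in range(cycles):
        dist = dist @ P
        qalys += cycle_len_years * float(dist @ utility)
        total_cost += float(dist @ cost)
    return qalys, total_cost

# One-year horizon (4 quarterly cycles): usual care vs. a strategy that shifts more
# patients from "active" into "remission" (a purely hypothetical treatment effect).
q_usual, c_usual = run_cohort(P, [0, 1, 0], cycles=4)
P_tx = P.copy(); P_tx[1] = [0.45, 0.47, 0.08]
q_tx, c_tx = run_cohort(P_tx, [0, 1, 0], cycles=4)
c_tx += 4 * 4000.0                         # hypothetical drug cost per cycle
icur = (c_tx - c_usual) / (q_tx - q_usual) # incremental cost-utility ratio
print(f"dQALY={q_tx - q_usual:.3f}, dCost={c_tx - c_usual:.0f}, ICUR={icur:,.0f} $/QALY")
```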


Author(s):  
Kazuhiro Ogata

The paper describes how to formally specify three path finding algorithms in Maude, a rewriting logic-based programming/specification language, and how to model check whether they enjoy desired properties with the Maude LTL model checker. The three algorithms are Dijkstra's Shortest Path Finding Algorithm (DA), the A* Algorithm and the LPA* Algorithm. One desired property is that the algorithms always find the shortest path. To this end, we use a path finding algorithm (BFS) based on breadth-first search. BFS finds all paths from a start node to a goal node, and the set of all shortest paths is extracted. We check whether the path found by each algorithm is included in the set of all shortest paths. A* is an extension of DA in that, for each node n, an estimate h(n) of the distance from n to the goal node is used, and LPA* is an incremental version of A*. It is known that if h is admissible, A* always finds the shortest path. We have found a possibly relaxed sufficient condition: there exists a shortest path such that, for each node n on the path except the start node, h(n) plus the cost from the start node to n is less than the cost of any non-shortest path from the start to the goal. We informally justify the relaxed condition. For LPA*, if the relaxed condition holds in each updated version of the graph concerned, including the initial graph, the shortest path is constructed. Based on the three case studies for DA, A* and LPA*, we summarize the formal specification and model checking techniques used, as a generic approach to formal specification and model checking of path finding algorithms.
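The checking idea can be mirrored outside Maude. The Python sketch below, with an invented graph and heuristic, enumerates all start-to-goal paths, extracts the set of shortest ones, verifies that an A*-style search returns a member of that set, and tests the relaxed sufficient condition stated above; the paper performs the corresponding checks as Maude specifications with the LTL model checker.

```python
import heapq
from itertools import count

# Hypothetical graph and heuristic; "s" is the start node and "g" the goal node.
graph = {"s": [("a", 1), ("b", 3)], "a": [("g", 2)], "b": [("g", 1)]}
h     = {"s": 2, "a": 2, "b": 1, "g": 0}

def all_paths(node, goal, cost=0, path=None):
    """Enumerate (cost, path) for every simple path from `node` to `goal`."""
    path = path or (node,)
    if node == goal:
        yield cost, list(path)
        return
    for nbr, c in graph.get(node, []):
        if nbr not in path:
            yield from all_paths(nbr, goal, cost + c, path + (nbr,))

def a_star(start, goal):
    tie = count()
    heap, closed = [(h[start], 0, next(tie), start, [start])], set()
    while heap:
        _, g, _, node, path = heapq.heappop(heap)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for nbr, c in graph.get(node, []):
            heapq.heappush(heap, (g + c + h[nbr], g + c, next(tie), nbr, path + [nbr]))

paths = list(all_paths("s", "g"))
opt = min(c for c, _ in paths)
shortest_set = {tuple(p) for c, p in paths if c == opt}

cost_found, path_found = a_star("s", "g")
print("A* path is one of the shortest paths:", tuple(path_found) in shortest_set)

def relaxed_condition():
    """Some shortest path exists on which, for every node n except the start,
    (cost from start to n along the path) + h(n) < cost of every non-shortest path."""
    bound = min((c for c, _ in paths if c > opt), default=float("inf"))
    for c, p in paths:
        if c != opt:
            continue
        g_along, ok = 0, True
        for u, v in zip(p, p[1:]):
            g_along += dict(graph[u])[v]
            ok &= (g_along + h[v]) < bound
        if ok:
            return True
    return False

print("Relaxed sufficient condition holds:", relaxed_condition())
```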


1986 ◽  
Vol 108 (3) ◽  
pp. 336-339 ◽  
Author(s):  
J. P. Karidis ◽  
S. R. Turns

An algorithm is presented for the efficient constrained or unconstrained minimization of computationally expensive objective functions. The method proceeds by creating and numerically optimizing a sequence of surrogate functions that are chosen to approximate the behavior of the unknown objective function in parameter space. The Recursive Surrogate Optimization (RSO) technique is intended for design applications where the computational cost required to evaluate the objective function greatly exceeds both the cost of evaluating any domain constraints present and the cost associated with one iteration of a typical optimization routine. Efficient optimization is achieved by reducing the number of times that the objective function must be evaluated, at the expense of additional complexity and computational cost associated with the optimization procedure itself. Comparisons of the RSO performance on eight widely used test problems to published performance data for other efficient techniques demonstrate the utility of the method.
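The following sketch shows the general surrogate idea under stated assumptions: a one-dimensional stand-in objective, a polynomial surrogate, and grid minimization of the surrogate. It is not the RSO algorithm itself, which differs in how the surrogates are constructed and recursed.

```python
import numpy as np

def expensive_objective(x):                       # stand-in for a costly simulation
    return (x - 1.3) ** 2 + 0.1 * np.sin(5.0 * x)

def minimize_by_surrogate(f, lo=-2.0, hi=3.0, n_init=4, iters=6):
    """Fit a cheap surrogate to sampled values of f, minimize the surrogate,
    evaluate the true objective only at the surrogate's minimizer, and refit."""
    xs = list(np.linspace(lo, hi, n_init))
    ys = [f(x) for x in xs]                       # the only expensive evaluations so far
    for _ in range(iters):
        coeffs = np.polyfit(xs, ys, deg=min(3, len(xs) - 1))
        grid = np.linspace(lo, hi, 1001)          # cheap numerical optimization of the surrogate
        x_new = float(grid[np.argmin(np.polyval(coeffs, grid))])
        xs.append(x_new)
        ys.append(f(x_new))                       # one true evaluation per iteration
    best = int(np.argmin(ys))
    return xs[best], ys[best], len(ys)

x_opt, f_opt, n_eval = minimize_by_surrogate(expensive_objective)
print(f"x*~{x_opt:.3f}, f*~{f_opt:.4f} after only {n_eval} expensive evaluations")
```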


2010 ◽  
Vol 159 ◽  
pp. 412-419
Author(s):  
Li Du ◽  
Xiao Jing Wang ◽  
Li Lin ◽  
Han Yuan Zhang

This paper investigates the defects, in terms of blocking probability and the cost of establishing a light-tree, of the multicast routing algorithm based on the Virtual Source (VS) under the sparse-splitting capability constraint. It proposes a new multicast routing algorithm, N_VSBA (New Virtual Source Based Algorithm). By introducing the minimum interference routing algorithm into the path calculation among the VS nodes, the new algorithm avoids the blocking caused by establishing the light-tree over a pre-computed path when two VS nodes are concentrated. It also addresses the VS-based algorithm's inability to exploit the TaC (Tap and Continue) capability of indecomposable (non-splitting) optical nodes by introducing the MPH (Minimum Path-cost Heuristic) algorithm into the process of adding such nodes to the light-tree. Simulation results show that the new algorithm achieves improvements of varying degrees in blocking probability, average number of wavelength links and average maximum number of wavelengths.
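For reference, the sketch below implements a generic MPH-style tree construction on an abstract weighted graph: destinations are attached one at a time along the cheapest path to the current tree. Wavelength assignment, splitting capability and the TaC constraints handled by N_VSBA are deliberately omitted, and the topology is invented.

```python
import heapq

def dijkstra(graph, sources):
    """Multi-source Dijkstra: shortest distance and predecessor from the nearest source."""
    dist = {s: 0 for s in sources}
    prev = {}
    heap = [(0, s) for s in sources]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, prev

def mph_tree(graph, source, destinations):
    """Grow a multicast tree by repeatedly attaching the cheapest remaining destination."""
    tree_nodes, tree_edges = {source}, set()
    remaining = set(destinations)
    while remaining:
        dist, prev = dijkstra(graph, tree_nodes)
        nxt = min(remaining, key=lambda d: dist.get(d, float("inf")))
        node = nxt
        while node not in tree_nodes:              # splice the cheapest path into the tree
            tree_edges.add((prev[node], node))
            tree_nodes.add(node)
            node = prev[node]
        remaining.discard(nxt)
    return tree_edges

# Toy topology given as adjacency lists (illustrative only).
g = {"s": [("a", 1), ("b", 4)], "a": [("s", 1), ("b", 1), ("d1", 3)],
     "b": [("s", 4), ("a", 1), ("d2", 1)], "d1": [("a", 3)], "d2": [("b", 1)]}
print(sorted(mph_tree(g, "s", ["d1", "d2"])))
```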


2014 ◽  
Vol 02 (01) ◽  
pp. 19-38 ◽  
Author(s):  
Matthew S. Cons ◽  
Tal Shima ◽  
Carmel Domshlak

This paper investigates the problem in which a fixed-wing unmanned aerial vehicle is required to find the shortest flyable path traversing multiple targets. The unmanned aerial vehicle is modeled as a Dubins vehicle: a vehicle with a minimum turn radius and the inability to go backward. This problem is called the Dubins traveling salesman problem, an extension of the well-known traveling salesman problem. We propose and compare different algorithms that integrate the task-planning and motion-planning aspects of the problem, rather than treating the two separately. The use of an upper bound, computed from kinematically feasible paths, for setting costs in the search algorithm is investigated. The proposed integrated algorithms are compared to hierarchical algorithms that solve the search aspect first and the motion-planning aspect second. Monte Carlo simulations are performed for a range of vehicle turn radii. The simulation results show the viability of the integrated approach and that using two plausible kinematically feasible paths as an upper bound on the cost-so-far in the search algorithm generally improves performance in terms of shortest path cost and search complexity.
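The sketch below shows only the search skeleton of such an integrated approach: a best-first search over partial visiting orders whose per-leg cost function is pluggable. Plain Euclidean distance stands in for the Dubins (kinematically feasible) leg cost, so the turn-radius geometry and the paper's specific upper bound are not reproduced; the targets and start position are invented.

```python
import heapq, math
from itertools import count

def euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tour_search(start, targets, leg_cost=euclid):
    """Best-first search over partial visiting orders; returns (cost, order of target indices)."""
    tick = count()
    heap = [(0.0, next(tick), start, tuple())]
    while heap:
        g, _, pos, visited = heapq.heappop(heap)
        if len(visited) == len(targets):
            return g, visited              # first completed tour popped is the cheapest
        for i, t in enumerate(targets):
            if i in visited:
                continue
            heapq.heappush(heap, (g + leg_cost(pos, t), next(tick), t, visited + (i,)))
    return math.inf, tuple()

# Invented planar targets; swapping `euclid` for a Dubins path length (or an upper bound
# built from two candidate kinematically feasible paths) would give an integrated variant.
targets = [(4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
print(tour_search((0.0, 0.0), targets))
```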


Author(s):  
Byunggook Na ◽  
Jisoo Mok ◽  
Hyeokjun Choe ◽  
Sungroh Yoon

Despite the increasing interest in neural architecture search (NAS), the significant computational cost of NAS is a hindrance to researchers. Hence, we propose to reduce the cost of NAS using proxy data, i.e., a representative subset of the target data, without sacrificing search performance. Even though data selection has been used across various fields, our evaluation of existing selection methods for the NAS algorithms offered by NAS-Bench-1shot1 reveals that they are not always appropriate for NAS and that a new selection method is necessary. By analyzing proxy data constructed with various selection methods through data entropy, we propose a novel proxy data selection method tailored for NAS. To demonstrate its effectiveness empirically, we conduct thorough experiments across diverse datasets, search spaces, and NAS algorithms. NAS algorithms with the proposed selection discover architectures that are competitive with those obtained using the entire dataset, while the search cost is significantly reduced: executing DARTS with the proposed selection requires only 40 minutes on CIFAR-10 and 7.5 hours on ImageNet with a single GPU. Additionally, when the architecture searched on ImageNet with the proposed selection is inversely transferred to CIFAR-10, it yields a state-of-the-art test error of 2.4%. Our code is available at https://github.com/nabk89/NAS-with-Proxy-data.
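A generic flavor of entropy-based proxy selection can be sketched as follows: score each example by the entropy of a probe model's predictive distribution and keep a stratified subset across entropy bins. The probe outputs, ratio and stratification rule below are placeholders; the paper's tailored selection method is defined differently.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_entropy(probs, eps=1e-12):
    """Entropy of each row of a (num_examples, num_classes) probability matrix."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_proxy(probs, ratio=0.1, n_bins=10):
    """Pick roughly `ratio` of the data, stratified over entropy bins so that
    low- and high-entropy examples are both represented."""
    ent = predictive_entropy(probs)
    edges = np.quantile(ent, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(ent, edges)
    per_bin = max(1, int(ratio * len(probs) / n_bins))
    chosen = []
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        if len(idx):
            chosen.extend(rng.choice(idx, size=min(per_bin, len(idx)), replace=False))
    return np.sort(np.array(chosen))

# Fake softmax outputs for 1000 examples and 10 classes stand in for a probe model.
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
proxy_idx = select_proxy(probs, ratio=0.1)
print(f"selected {len(proxy_idx)} of {len(probs)} examples as proxy data")
```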


2013 ◽  
Vol 91 (9) ◽  
pp. 682-688
Author(s):  
Jack C. Straton

A computer program to calculate the amplitude for general state-to-state transitions in hydrogen from the analytical result has the potential for significant numerical round-off errors whenever the sum of the angular momenta of the two states is greater than four and the projectile’s impact parameter is less than the ratio of its energy to its velocity. This arises from high-order cancellation of terms in the MacDonald functions in that analytical result, whose polynomial portions have been found to obey a new multiplication theorem. The cost of correcting this instability is the replacement of a finite series of MacDonald functions by an infinite series via the multiplication theorem for the full MacDonald function.
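The numerical phenomenon can be illustrated independently of the amplitude formula. In the hypothetical example below, a 10th-order alternating binomial combination of MacDonald functions K_1 at closely spaced arguments has individual terms of order ten but a true value of order 1e-17, so double-precision summation returns round-off noise while 50-digit arithmetic does not. This is only an illustration of high-order cancellation; it is not the paper's expression or its multiplication-theorem remedy.

```python
from math import comb
from mpmath import mp, mpf, besselk, fsum
from scipy.special import kv          # MacDonald function K_v(x) in double precision

mp.dps = 50                           # 50-digit reference arithmetic
N, x0, h = 10, mpf(2), mpf("0.01")

# N-th order alternating binomial sum (a finite difference) of K_1 at closely spaced
# arguments: terms are roughly O(10), while the true value is roughly O(1e-17).
double_val = sum((-1) ** k * comb(N, k) * kv(1, float(x0 + k * h)) for k in range(N + 1))
reference = fsum((-1) ** k * comb(N, k) * besselk(1, x0 + k * h) for k in range(N + 1))

print("double precision :", double_val)              # dominated by round-off noise
print("50-digit value   :", mp.nstr(reference, 15))
```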


Author(s):  
D. Frommholz

Abstract. This paper describes an efficient implementation of the semi-global matching (SGM) algorithm on multi-core processors that allows a nearly arbitrary number of path directions for the cost aggregation stage. The scanlines for each orientation are discretized iteratively once, and the regular substructures of the obtained template are reused and shifted to concurrently sum up the path cost in at most two sweeps per direction over the disparity space image. Since path overlaps do not occur at any time, no expensive thread synchronization is needed. To further reduce the runtime for high counts of path directions, pixel-wise disparity gating is applied, and both the cost function and the disparity loop of SGM are optimized using current single instruction multiple data (SIMD) intrinsics for two major CPU architectures. Performance evaluation of the proposed implementation on synthetic ground truth reveals a reduced height error when the number of aggregation directions is significantly increased or when the paths start with an angular offset. The overall runtime shows a speedup that is nearly linear in the number of available processors.
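For orientation, the quantity being aggregated is the standard SGM path cost L_r(p,d) = C(p,d) + min(L_r(p-r,d), L_r(p-r,d-1)+P1, L_r(p-r,d+1)+P1, min_k L_r(p-r,k)+P2) - min_k L_r(p-r,k). The plain single-direction NumPy reference below makes this recurrence concrete; the paper's contribution (concurrent multi-direction aggregation without thread synchronization and SIMD-optimized inner loops) is not reflected in this sketch, and the penalties and cost volume are invented.

```python
import numpy as np

def aggregate_row_left_to_right(cost_row, P1=10.0, P2=120.0):
    """cost_row: (width, disparities) matching-cost slice; returns aggregated L of same shape."""
    W, D = cost_row.shape
    L = np.empty_like(cost_row, dtype=np.float64)
    L[0] = cost_row[0]
    for x in range(1, W):
        prev = L[x - 1]
        min_prev = prev.min()
        shifted_minus = np.concatenate(([np.inf], prev[:-1]))   # L(p-r, d-1)
        shifted_plus = np.concatenate((prev[1:], [np.inf]))     # L(p-r, d+1)
        best = np.minimum.reduce([prev,
                                  shifted_minus + P1,
                                  shifted_plus + P1,
                                  np.full(D, min_prev + P2)])
        L[x] = cost_row[x] + best - min_prev
    return L

# Tiny synthetic example: an 8-pixel row with 5 disparity hypotheses.
rng = np.random.default_rng(1)
cost = rng.uniform(0, 50, size=(8, 5))
print(aggregate_row_left_to_right(cost).round(1))
```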


Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 70
Author(s):  
Sayed A. Mohsin ◽  
Ahmed Younes ◽  
Saad M. Darwish

A distributed database model can be effectively optimized through query optimization. In such a model, the optimizer attempts to identify the most efficient join order, which minimizes the overall cost of the query plan. Successful query processing largely relies on the methodology implemented by the query optimizer. Many studies note that query processing is an NP-hard problem, especially as queries become larger. For large queries, heuristic methods cannot cover the entire search space and may fall into a local minimum. This paper examines how a quantum-inspired ant colony algorithm, a hybrid of probabilistic algorithms, can be devised to reduce the cost of query joins in distributed databases. Quantum computing has the ability to diversify and expand, and thus to cover large query search spaces. This enables the selection of the best trails, which speeds up convergence and helps avoid falling into a local optimum. With such a strategy, the algorithm aims to identify an optimal join order that reduces the total execution time. Experimental results show that the proposed quantum-inspired ant colony offers faster convergence and better outcomes when compared with the classic model.
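The toy sketch below conveys the flavor of a quantum-inspired ant colony applied to left-deep join ordering: each (position, relation) pair carries a qubit-like angle whose sin² serves as the selection weight, and the angles are rotated toward the best order found so far. The cost model, cardinalities and selectivities are all invented, and the paper's algorithm and distributed cost model are considerably richer.

```python
import math, random

random.seed(7)
card = {"R1": 1000, "R2": 50_000, "R3": 200, "R4": 10_000}          # hypothetical sizes
sel = {frozenset(p): s for p, s in [(("R1", "R2"), 0.001), (("R1", "R3"), 0.01),
                                    (("R2", "R4"), 0.0005), (("R3", "R4"), 0.05),
                                    (("R1", "R4"), 0.02), (("R2", "R3"), 0.002)]}

def join_cost(order):
    """Sum of intermediate-result cardinalities for a left-deep join tree (toy model)."""
    size, joined, total = card[order[0]], {order[0]}, 0.0
    for r in order[1:]:
        s = min(sel[frozenset({a, r})] for a in joined)              # crude selectivity choice
        size = size * card[r] * s
        joined.add(r)
        total += size
    return total

relations = list(card)
theta = {(i, r): math.pi / 4 for i in range(len(relations)) for r in relations}

def build_tour():
    """One ant builds an order; sin^2(theta) acts as the qubit-like selection weight."""
    remaining, order = set(relations), []
    for i in range(len(relations)):
        candidates = sorted(remaining)
        weights = [math.sin(theta[(i, r)]) ** 2 for r in candidates]
        choice = random.choices(candidates, weights=weights)[0]
        order.append(choice)
        remaining.remove(choice)
    return order

best, best_cost = None, math.inf
for _ in range(200):                                                 # ants x iterations
    tour = build_tour()
    c = join_cost(tour)
    if c < best_cost:
        best, best_cost = tour, c
    for i, r in enumerate(best):                                     # rotate angles toward best
        theta[(i, r)] = min(theta[(i, r)] + 0.02, math.pi / 2 - 0.05)
print("best order:", best, "estimated cost:", round(best_cost))
```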


Author(s):  
James F. Mancuso

IBM PC compatible computers are widely used in microscopy for applications ranging from control to image acquisition and analysis. The choice of IBM-PC based systems over competing computer platforms can be based on technical merit alone or on a number of factors relating to economics, availability of peripherals, management dictum, or simple personal preference. The IBM-PC got a strong “head start” by first dominating clerical, document-processing and financial applications. The use of these computers spilled into the laboratory, where the DOS-based IBM-PC replaced mini-computers. Compared to minicomputers, the PC provided a more cost-effective platform for applications in numerical analysis, engineering and design, instrument control, image acquisition and image processing. In addition, the sitewide use of a common PC platform could reduce the cost of training and support services relative to cases where many different computer platforms were used. This could be especially true for microscopists who must use computers in both the laboratory and the office.

