The Number of Moves of the Largest Disc in Shortest Paths on Hanoi Graphs

10.37236/4252 ◽  
2014 ◽  
Vol 21 (4) ◽  
Author(s):  
Simon Aumann ◽  
Katharina A.M. Götz ◽  
Andreas M. Hinz ◽  
Ciril Petr

In contrast to the widespread interest in the Frame-Stewart conjecture (FSC) about the optimal number of moves in the classical Tower of Hanoi task with more than three pegs, this is the first study of shortest paths in Hanoi graphs $H_p^n$ in a more general setting. Here $p$ stands for the number of pegs and $n$ for the number of discs in the Tower of Hanoi interpretation of these graphs. The analysis depends crucially on the number of largest disc moves (LDMs). The patterns of these LDMs are coded as binary strings of length $p-1$ assigned to each pair of starting and goal states individually. The problem is approached both analytically and numerically. The main theoretical achievement is the existence, at least for all $n\geqslant p(p-2)$, of optimal paths where $p-1$ LDMs are necessary. Numerical results, obtained by an algorithm based on a modified breadth-first search making use of symmetries of the graphs, lead to a couple of conjectures about cases not covered by our established results. These, in turn, may shed some light on the notoriously open FSC.
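As a rough illustration of the setting (not the symmetry-exploiting search actually used in the paper), the sketch below runs a plain breadth-first search on $H_p^n$, encoding a state as a tuple that records the peg of each disc; the function names are ours.

```python
# A minimal sketch, assuming states of H_p^n are tuples s where s[i] is the peg
# of disc i (disc 0 the smallest, disc n-1 the largest). Not the paper's code.
from collections import deque

def neighbours(state, p):
    """All states reachable from `state` by one legal move."""
    n = len(state)
    for d in range(n):                                # candidate disc to move
        src = state[d]
        # disc d may move only if no smaller disc sits on its source peg
        if any(state[i] == src for i in range(d)):
            continue
        for dst in range(p):
            if dst == src:
                continue
            # ...and the target peg must carry no smaller disc either
            if any(state[i] == dst for i in range(d)):
                continue
            yield state[:d] + (dst,) + state[d + 1:]

def shortest_path_length(start, goal, p):
    """Hop distance between two states of H_p^n by plain BFS."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            return dist[s]
        for t in neighbours(s, p):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return None

# Example: 3 discs on 4 pegs, perfect state on peg 0 to perfect state on peg 3.
print(shortest_path_length((0, 0, 0), (3, 3, 3), p=4))
```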

Author(s):  
Mark Newman

This chapter introduces some of the fundamental concepts of numerical network calculations. The chapter starts with a discussion of basic concepts of computational complexity and data structures for storing network data, then progresses to the description and analysis of algorithms for a range of network calculations: breadth-first search and its use for calculating shortest paths, shortest distances, components, closeness, and betweenness; Dijkstra's algorithm for shortest paths and distances on weighted networks; and the augmenting path algorithm for calculating maximum flows, minimum cut sets, and independent paths in networks.
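As a hedged illustration of one routine the chapter analyses, the sketch below implements Dijkstra's algorithm for single-source shortest distances on a small weighted network stored as a plain adjacency dictionary; the data structure and names are illustrative, not the book's code.

```python
# Dijkstra's algorithm on a weighted network given as {node: {neighbour: weight}}.
import heapq

def dijkstra(adjacency, source):
    """Return a dict of shortest distances from `source` to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                                  # stale heap entry
        for v, w in adjacency.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Toy weighted network: shortest distances from node "a".
network = {
    "a": {"b": 1.0, "c": 4.0},
    "b": {"c": 2.0, "d": 5.0},
    "c": {"d": 1.0},
    "d": {},
}
print(dijkstra(network, "a"))   # -> distances {'a': 0, 'b': 1.0, 'c': 3.0, 'd': 4.0}
```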


2016 ◽  
Vol 45 (2) ◽  
pp. 233-252
Author(s):  
Pepijn Viaene ◽  
Alain De Wulf ◽  
Philippe De Maeyer

Landmarks are ideal wayfinding tools to guide a person from A to B, as they allow fast reasoning and efficient communication. However, very few path-finding algorithms start from the availability of landmarks to generate a path. In this paper, which focuses on indoor wayfinding, a landmark-based path-finding algorithm is presented in which the endpoint partition is proposed as the spatial model of the environment. In this model, the indoor environment is divided into convex sub-shapes, called e-spaces, that are stable with respect to the visual information provided by a person’s surroundings (e.g. walls, landmarks). The algorithm itself implements a breadth-first search on a graph in which mutually visible e-spaces suited for wayfinding are connected. The results of a case study, in which the calculated paths were compared with their corresponding shortest paths, show that the proposed algorithm is a valuable alternative to Dijkstra’s shortest path algorithm. It is able to calculate a path with a minimal number of actions linked to landmarks, while the increase in path length is comparable to that observed with other path algorithms that adhere to natural wayfinding behaviour. However, the practicability of the proposed algorithm is highly dependent on the availability of landmarks and on the spatial configuration of the building.
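A minimal sketch of the underlying search, under the assumption that the environment has already been reduced to an undirected graph connecting mutually visible e-spaces (the graph, labels, and function names below are illustrative, not the paper's data model):

```python
# Breadth-first search over a visibility graph of e-spaces, returning a route
# with the fewest landmark-anchored hops. Purely illustrative.
from collections import deque

def landmark_route(visibility_graph, start, goal):
    """Return a list of e-spaces from start to goal, or None if unreachable."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in visibility_graph.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

# Toy example: e-spaces labelled by the landmark visible from them.
graph = {
    "entrance": ["atrium"],
    "atrium": ["entrance", "staircase", "cafeteria"],
    "staircase": ["atrium", "office"],
    "cafeteria": ["atrium"],
    "office": ["staircase"],
}
print(landmark_route(graph, "entrance", "office"))
```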


Author(s):  
Paolo Giulio Franciosa ◽  
Daniele Frigioni ◽  
Roberto Giaccio

2019 ◽  
Vol 3 (3) ◽  
pp. 48 ◽  
Author(s):  
Sam Ganzfried ◽  
Farzana Yusuf

In many settings, people must give numerical scores to entities from a small discrete set—for instance, rating physical attractiveness from 1–5 on dating sites, or papers from 1–10 for conference reviewing. We study the problem of understanding when using a different number of options is optimal. We consider the case when scores are uniform random and Gaussian. We study computationally when using 2, 3, 4, 5, and 10 options out of a total of 100 is optimal in these models (though our theoretical analysis is for a more general setting with k choices from n total options as well as a continuous underlying space). One may expect that using more options would always improve performance in this model, but we show that this is not necessarily the case, and that using fewer choices—even just two—can surprisingly be optimal in certain situations. While in theory for this setting it would be optimal to use all 100 options, in practice, this is prohibitive, and it is preferable to utilize a smaller number of options due to humans’ limited computational resources. Our results could have many potential applications, as settings requiring entities to be ranked by humans are ubiquitous. There could also be applications to other fields such as signal or image processing where input values from a large set must be mapped to output values in a smaller set.
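As a rough Monte Carlo sketch of the kind of discretisation being studied, the snippet below compares k-point scales for uniform scores using a mean-squared-error criterion; both the loss function and the uniform-only setting are our own assumptions, not the optimality measure analysed in the paper. Under this naive criterion more options always help, which is precisely the intuition the paper shows can fail under its model.

```python
# Hypothetical illustration: report a continuous score on a k-point scale and
# measure the average squared error against the bucket midpoint.
import random

def discretisation_error(k, draws=100_000):
    """Average squared error when a uniform [0, 1) score is reported on k options."""
    total = 0.0
    for _ in range(draws):
        x = random.random()
        bucket = min(int(x * k), k - 1)       # which of the k options is chosen
        midpoint = (bucket + 0.5) / k         # value the report is taken to mean
        total += (x - midpoint) ** 2
    return total / draws

for k in (2, 3, 4, 5, 10):
    print(k, round(discretisation_error(k), 5))
```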


2014 ◽  
Vol 2014 ◽  
pp. 1-6 ◽  
Author(s):  
Frosso S. Makri ◽  
Zaharias M. Psillakis

The expected number of 0-1 strings of a limited length is a potentially useful index of the behavior of stochastic processes describing the occurrence of critical events (e.g., records, extremes, and exceedances). Such model sequences might be derived from a Hoppe-Pólya or a Pólya-Eggenberger urn model, interpreting the drawings of white balls as occurrences of critical events. Numerical results, concerning the average number of constrained-length interruptions of records as well as the average separation of subsequent exceedances, further illustrate these urn models.
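For concreteness, a small simulation sketch of a Pólya-Eggenberger urn generating a 0-1 sequence is given below; the statistic counted (pairs of events separated by at most k non-events) is only an assumed stand-in for the constrained-length strings studied in the paper.

```python
# Illustrative simulation only, not the paper's analysis.
import random

def polya_eggenberger_sequence(white, black, reinforcement, length):
    """Draw `length` balls, adding `reinforcement` balls of the drawn colour each time."""
    seq = []
    for _ in range(length):
        is_white = random.random() < white / (white + black)
        seq.append(1 if is_white else 0)       # 1 = white ball = critical event
        if is_white:
            white += reinforcement
        else:
            black += reinforcement
    return seq

def count_short_gaps(seq, k):
    """Count pairs of successive 1s separated by at most k zeros."""
    count, gap, seen_one = 0, 0, False
    for bit in seq:
        if bit == 1:
            if seen_one and gap <= k:
                count += 1
            seen_one, gap = True, 0
        else:
            gap += 1
    return count

trials = 10_000
avg = sum(count_short_gaps(polya_eggenberger_sequence(1, 1, 1, 50), 2)
          for _ in range(trials)) / trials
print(f"average number of short-gap event pairs: {avg:.3f}")
```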


2015 ◽  
Vol 25 (3) ◽  
pp. 577-596 ◽  
Author(s):  
Pedro A. Góngora ◽  
David A. Rosenblueth

Consider games where players wish to minimize the cost to reach some state. A subgame-perfect Nash equilibrium can be regarded as a collection of optimal paths on such games. Similarly, the well-known state-labeling algorithm used in model checking can be viewed as computing optimal paths on a Kripke structure, where each path has a minimum number of transitions. We exploit these similarities in a common generalization of extensive games and Kripke structures that we name “graph games”. By extending the Bellman-Ford algorithm for computing shortest paths, we obtain a model-checking algorithm for graph games with respect to formulas in an appropriate logic. Hence, when given a certain formula, our model-checking algorithm computes the subgame-perfect Nash equilibrium (as opposed to simply determining whether or not a given collection of paths is a Nash equilibrium). Next, we develop a symbolic version of our model checker, allowing us to handle larger graph games. We illustrate our formalism on the critical-path method as well as games with perfect information. Finally, we report on the execution time of benchmarks of an implementation of our algorithms.
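For reference, the classical Bellman-Ford relaxation that the paper generalises looks as follows in its textbook single-source form; this is not the authors' model-checking extension.

```python
# Textbook Bellman-Ford: shortest costs from a source over a list of weighted edges.
def bellman_ford(nodes, edges, source):
    """edges is a list of (u, v, cost) triples; returns shortest costs from source."""
    INF = float("inf")
    dist = {v: INF for v in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):           # at most |V|-1 rounds of relaxation
        changed = False
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
                changed = True
        if not changed:
            break
    # one further pass would detect negative cycles; omitted here for brevity
    return dist

nodes = ["s", "a", "b", "t"]
edges = [("s", "a", 2), ("s", "b", 5), ("a", "b", 1), ("a", "t", 7), ("b", "t", 3)]
print(bellman_ford(nodes, edges, "s"))   # -> {'s': 0, 'a': 2, 'b': 3, 't': 6}
```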


This paper is about implementing the Pac-Man game with AI. Pac-Man is a very challenging video game that can be useful in conducting AI (Artificial Intelligence) research. The reason we have implemented various AI algorithms for the Pac-Man game is that it helps us to study AI by using visualizations, through which we can understand AI more effectively. The main aim is to build an intelligent Pac-Man agent which finds optimal paths through the maze to reach a particular goal, such as a particular food position or escaping from ghosts. For that, we have implemented AI search algorithms such as depth-first search, breadth-first search, A* search, and uniform cost search. We have also implemented multi-agents such as the Reflex agent, Minimax agent, and Alpha-beta agent. Through these multi-agent algorithms, we can make Pac-Man react to its environmental conditions and escape from ghosts to get a high score. We have also done the visualization part of the above AI algorithms, by which anyone can learn and understand AI algorithms easily. For visualization of the algorithms, we have used the Python libraries matplotlib and NetworkX.
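As one concrete example of the listed search algorithms, the sketch below runs A* with a Manhattan-distance heuristic on a small grid maze; the grid encoding and function names are illustrative and not tied to any particular Pac-Man framework.

```python
# A* on a grid maze: '#' marks a wall, moves are 4-directional with unit cost.
import heapq

def a_star(grid, start, goal):
    """Return the length of a shortest path from start to goal, or None."""
    def heuristic(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    best = {start: 0}
    frontier = [(heuristic(start), 0, start)]
    while frontier:
        _, cost, cell = heapq.heappop(frontier)
        if cell == goal:
            return cost
        if cost > best.get(cell, float("inf")):
            continue                                  # stale entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                if cost + 1 < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = cost + 1
                    heapq.heappush(frontier,
                                   (cost + 1 + heuristic((nr, nc)), cost + 1, (nr, nc)))
    return None

maze = ["....#",
        ".##.#",
        ".#..#",
        "...#."]
print(a_star(maze, (0, 0), (2, 3)))   # shortest number of moves to the food cell
```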


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Xiangdong Yin ◽  
Jie Yang

The connecting of things to the Internet makes it possible for smart things to access all kinds of Web services. However, smart things are energy-limited, and a suitable selection of Web services will consume fewer resources. In this paper, we study the problem of selecting Web services from a candidate set. We formulate this selection of Web services for smart things as a single-source many-target shortest path problem. We design algorithms based on the Dijkstra and breadth-first search algorithms, propose an efficient pruning algorithm for breadth-first search, and analyze their performance in terms of the number of iterations and I/O cost. Our empirical evaluation on real-life graphs shows that our pruning algorithm is more efficient than the breadth-first search algorithm.
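A hedged sketch of the early-termination idea behind such a search: a breadth-first traversal from the smart thing's node that stops once every candidate service node has been reached. The paper's pruning algorithm is more elaborate; the graph and names below are illustrative only.

```python
# Single-source many-target BFS with early stopping once all targets are found.
from collections import deque

def distances_to_targets(graph, source, targets):
    """Return {target: hop distance} for all reachable targets, stopping early."""
    remaining = set(targets)
    dist = {source: 0}
    found = {}
    if source in remaining:
        found[source] = 0
        remaining.discard(source)
    queue = deque([source])
    while queue and remaining:                 # stop as soon as all targets are found
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                if v in remaining:
                    found[v] = dist[v]
                    remaining.discard(v)
                queue.append(v)
    return found

graph = {
    "thing": ["g1", "g2"],
    "g1": ["s1", "g3"],
    "g2": ["g3"],
    "g3": ["s2"],
    "s1": [], "s2": [],
}
print(distances_to_targets(graph, "thing", {"s1", "s2"}))   # -> {'s1': 2, 's2': 3}
```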

