Empirical study proves that breadth-first search uses memory more effectively than depth-first search in frontier boundary cyclic graphs

Author(s):  
Al Refai Mohammed N. ◽  
Jamhawi Zeyad

The memory consumption of the open and closed lists in graph search algorithms affects the ability to find a solution. Using a frontier boundary reduces the memory used by the closed list and allows larger graphs to be expanded. The blind algorithms depth-first frontier search and breadth-first frontier search were used to compare memory usage on sliding-tile puzzles as an example of a cyclic graph. This paper aims to prove that breadth-first frontier search uses less memory than depth-first frontier search when both open and closed lists are maintained in a cyclic graph. The number of levels and the node count at each level of the sliding-tile puzzle change when the search starts from a different empty-tile location. Finally, the unorganized spiral path of depth-first search appears clearly as it moves through the graph to find the goal.
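As a rough sketch of the frontier idea on the 8-puzzle (the state encoding and function names below are mine, not the authors' implementation): because the sliding-tile state graph is undirected, a successor of a level-d node can only lie at level d-1, d, or d+1, so keeping just the previous and the current BFS level is enough to prune every duplicate and no full closed list is ever stored.

```python
def neighbors(state):
    """Successor states of an 8-puzzle position.

    `state` is a tuple of 9 ints read row by row, with 0 as the blank.
    """
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)


def frontier_bfs(start, goal):
    """Breadth-first frontier search: only the previous and current levels
    are kept for duplicate detection, never a full closed list.
    Returns the depth at which the goal is found, or None."""
    previous, current = set(), {start}
    depth = 0
    while current:
        if goal in current:
            return depth
        nxt = set()
        for state in current:
            for succ in neighbors(state):
                # In an undirected state graph a successor can only be one
                # level behind, on the same level, or one level ahead, so
                # these two sets catch every duplicate.
                if succ not in previous and succ not in current:
                    nxt.add(succ)
        previous, current = current, nxt
        depth += 1
    return None


if __name__ == "__main__":
    goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
    start = (1, 2, 3, 4, 5, 6, 7, 0, 8)   # one move away from the goal
    print(frontier_bfs(start, goal))       # -> 1
```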

2021 ◽  
Author(s):  
Matjaž Krnc ◽  
Nevena Pivač

Graph searching is one of the simplest and most widely used tools in graph algorithms. Every graph search method is defined using some particular selection rule, and the analysis of the corresponding vertex orderings can aid greatly in devising algorithms, writing proofs of correctness, or recognition of various graph families. We study graphs where the sets of vertex orderings produced by two different search methods coincide. We characterise such graph families for ten pairs from the best-known set of graph searches: Breadth First Search (BFS), Depth First Search (DFS), Lexicographic Breadth First Search (LexBFS), Lexicographic Depth First Search (LexDFS), and Maximal Neighborhood Search (MNS).
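To make the question concrete, the two sets of orderings can be enumerated by brute force on a small graph, branching over every tie-breaking choice; the code below is an illustrative sketch of mine, not taken from the paper.

```python
from itertools import permutations


def bfs_orderings(adj):
    """All vertex orderings producible by BFS under some tie-breaking."""
    n = len(adj)
    results = set()

    def step(queue, seen, order):
        if len(order) == n:
            results.add(tuple(order))
            return
        if not queue:                        # disconnected graph: restart
            for v in adj:
                if v not in seen:
                    step([v], seen | {v}, order)
            return
        v, rest = queue[0], queue[1:]
        new = [u for u in adj[v] if u not in seen]
        for perm in set(permutations(new)):  # branch over every enqueue order
            step(rest + list(perm), seen | set(perm), order + [v])

    for v in adj:                            # branch over the start vertex
        step([v], {v}, [])
    return results


def dfs_orderings(adj):
    """All vertex orderings producible by DFS under some tie-breaking."""
    n = len(adj)
    results = set()

    def step(stack, seen, order):
        if len(order) == n:
            results.add(tuple(order))
            return
        # backtrack while the deepest vertex has no unvisited neighbour
        while stack and all(u in seen for u in adj[stack[-1]]):
            stack = stack[:-1]
        if not stack:                        # disconnected graph: restart
            for v in adj:
                if v not in seen:
                    step([v], seen | {v}, order + [v])
            return
        v = stack[-1]
        for u in adj[v]:                     # branch over the next neighbour
            if u not in seen:
                step(stack + [u], seen | {u}, order + [u])

    for v in adj:                            # branch over the start vertex
        step([v], {v}, [v])
    return results


if __name__ == "__main__":
    # On the path a-b-c-d the two sets differ: (b, c, a, d) is a BFS
    # ordering but not a DFS ordering.
    path4 = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    print(bfs_orderings(path4) == dfs_orderings(path4))   # False
```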


Author(s):  
Tudor Bălănescu ◽  
Radu Nicolescu ◽  
Huiling Wu

In this paper, the authors propose a new approach to fully asynchronous P systems, and a matching complexity measure, both inspired by the field of distributed algorithms. The authors validate the proposed approach by implementing several well-known distributed depth-first search (DFS) and breadth-first search (BFS) algorithms. Empirical results show that the proposed P algorithms have shorter descriptions and achieve performance comparable to the corresponding distributed algorithms.
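The P-system encoding is not reproduced here, but the kind of distributed BFS these algorithms implement can be simulated with synchronous message rounds; the sketch below rests on my own simplified view of the cells and channels, not the authors' formalism.

```python
def distributed_bfs(adj, root):
    """Simulate a synchronous, message-passing BFS.

    In each round every newly reached node sends its level to its
    neighbours, and a node adopts the first level it hears about.
    Returns the BFS level of every node reachable from `root`.
    """
    level = {root: 0}
    frontier = [root]                 # nodes that send messages this round
    while frontier:
        inbox = {}                    # messages delivered at the round's end
        for v in frontier:
            for u in adj[v]:
                inbox.setdefault(u, level[v] + 1)
        frontier = [u for u in inbox if u not in level]
        for u in frontier:
            level[u] = inbox[u]
    return level


if __name__ == "__main__":
    ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(distributed_bfs(ring, 0))   # {0: 0, 1: 1, 3: 1, 2: 2}
```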


2020 ◽  
Author(s):  
Jordan M. Eizenga ◽  
Adam M. Novak ◽  
Emily Kobayashi ◽  
Flavia Villani ◽  
Cecilia Cisar ◽  
...  

Abstract
Motivation: Pangenomics is a growing field within computational genomics. Many pangenomic analyses use bidirected sequence graphs as their core data model. However, implementing and correctly using this data model can be difficult, and the scale of pangenomic data sets can be challenging to work with. These challenges have impeded progress in this field.
Results: Here we present a stack of two C++ libraries, libbdsg and libhandlegraph, which use a simple, field-proven interface designed to expose the elementary features of these graphs while preventing common graph-manipulation mistakes. The libraries also provide a Python binding. Using a diverse collection of pangenome graphs, we demonstrate that these tools allow efficient construction and manipulation of large genome graphs with dense variation. For instance, speed and memory usage are up to an order of magnitude better than the prior graph implementation in the vg toolkit, which has now transitioned to using libbdsg's implementations.
Availability: libhandlegraph and libbdsg are available under an MIT License from https://github.com/vgteam/libhandlegraph and https://github.com/vgteam/libbdsg.
Contact: [email protected]
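The data model itself is easy to picture: nodes carry sequences, a handle is a node plus an orientation, and edges join handles. The sketch below is a simplified Python illustration of that bidirected model, loosely echoing common handle-graph operations; it is not the actual libbdsg or libhandlegraph API, and all class and method names are chosen for the example.

```python
from dataclasses import dataclass, field

_COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")


@dataclass(frozen=True)
class Handle:
    """A node identifier together with an orientation."""
    node_id: int
    is_reverse: bool = False

    def flip(self):
        return Handle(self.node_id, not self.is_reverse)


@dataclass
class ToyBidirectedGraph:
    """A toy bidirected sequence graph, illustrating the data model only."""
    sequences: dict = field(default_factory=dict)   # node_id -> forward sequence
    edges: set = field(default_factory=set)         # symmetric handle pairs

    def create_handle(self, sequence):
        node_id = len(self.sequences) + 1
        self.sequences[node_id] = sequence
        return Handle(node_id)

    def create_edge(self, left, right):
        # The edge leaving the right side of `left` into the left side of
        # `right` is the same edge as its flipped counterpart, so it is
        # stored as an unordered pair of handles.
        self.edges.add(frozenset((left, right.flip())))

    def get_sequence(self, handle):
        seq = self.sequences[handle.node_id]
        return seq.translate(_COMPLEMENT)[::-1] if handle.is_reverse else seq


if __name__ == "__main__":
    g = ToyBidirectedGraph()
    a = g.create_handle("GATT")
    b = g.create_handle("ACA")
    g.create_edge(a, b)
    print(g.get_sequence(a), g.get_sequence(b.flip()))   # GATT TGT
```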


1990 ◽  
Author(s):  
Σαράντος Καπιδάκης

We compute the expected value of various quantities for a variety of graph search methods, such as depth-first search and breadth-first search. Our analysis applies to directed and undirected random graphs and covers the range of interesting graph densities, including densities at which the random graph consists of more than one component and has a giant component. We compute the number of edges examined during the search, since this number is proportional to the running time of the algorithm. We find that for graphs that are just barely connected, all edges may be examined, but for denser graphs far fewer edges are generally needed. We prove that any search algorithm examines Θ(n log n) edges, provided they exist, in all random graphs with n nodes, but not necessarily in complete graphs. One property of some search algorithms is the maximum depth of the search. In depth-first search, this depth can be used to determine the space required for the recursion stack. For random graphs of any density, even for disconnected graphs, we prove that this space is Θ(n). On the other hand, the depth reached by breadth-first search is Θ(log n / log(pn)), where p is the probability that any given edge exists. We prove that the size of the data structure required by any search algorithm is Θ(n). If the search terminates as soon as a specific node is reached, we prove that any search algorithm needs a data structure of size Θ(n) and examines only Θ(n) edges. Finally, we derive similar results for variants of the above search algorithms, for more general classes of search algorithms, and for random graphs with multiple edges. These results are confirmed by simulations. The techniques used to make the simulation of large graphs (with several million nodes) possible, and the results obtained, are of general interest, especially to those carrying out similar experiments.
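A small experiment in the spirit of these simulations (the size, density, and seed below are arbitrary choices of mine): generate a G(n, p) random graph, run BFS and DFS from a random start vertex, and record the number of edge examinations and the maximum search depth, the quantities analysed above.

```python
import random
from collections import deque


def gnp(n, p, rng):
    """Erdős–Rényi random graph G(n, p) as an adjacency list."""
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj


def bfs_stats(adj, start):
    """(edges examined, maximum level reached) for breadth-first search."""
    level = {start: 0}
    queue = deque([start])
    edges = 0
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            edges += 1
            if u not in level:
                level[u] = level[v] + 1
                queue.append(u)
    return edges, max(level.values())


def dfs_stats(adj, start):
    """(edges examined, maximum recursion-stack depth) for depth-first search."""
    visited = {start}
    stack = [(start, iter(adj[start]))]    # explicit recursion stack
    edges = 0
    max_depth = 1
    while stack:
        v, it = stack[-1]
        for u in it:
            edges += 1
            if u not in visited:
                visited.add(u)
                stack.append((u, iter(adj[u])))
                break
        else:                              # iterator exhausted: backtrack
            stack.pop()
        max_depth = max(max_depth, len(stack))
    return edges, max_depth


if __name__ == "__main__":
    rng = random.Random(1)
    n = 2000
    p = 2.0 / n     # average degree 2: a giant component, but not connected
    adj = gnp(n, p, rng)
    start = rng.randrange(n)
    print("BFS (edges, depth):", bfs_stats(adj, start))
    print("DFS (edges, depth):", dfs_stats(adj, start))
```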


Author(s):  
Rizka Ardiansyah

The Department of Information Technology at Tadulako University currently does not have a digital repository system that can accommodate student work, whether in the form of papers, research reports, or scientific posters. Student work is stored as physical archives, which risk being damaged or lost and are difficult for other students to access. A digital repository system can also serve as a benchmark to develop students' creativity in producing scientific work. The search mechanism is one of the crucial parts of a repository system. Several algorithms are commonly used, such as depth-first search and breadth-first search, each with its own strengths and weaknesses. Therefore, to optimize the search mechanism of the proposed digital repository system, the author proposes a hybrid of the depth-first search and breadth-first search algorithms. The result of this study is the design of a prototype repository system able to manage and store student scientific documents digitally. The system also has a fast and accurate data search feature.
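The abstract does not describe how the two searches are combined, so as one possible illustration the sketch below uses iterative deepening, a standard hybrid that re-runs depth-limited DFS with a growing limit and therefore finds documents in BFS (shallowest-first) order while keeping only a DFS stack in memory; the category tree and all names are hypothetical.

```python
def depth_limited(node, children, limit, match):
    """DFS that reports matches found exactly at depth `limit`, so that
    iterative deepening reports each document once (assumes a tree, so no
    cycle checking is needed)."""
    if limit == 0:
        if match(node):
            yield node
        return
    for child in children(node):
        yield from depth_limited(child, children, limit - 1, match)


def iterative_deepening(root, children, match, max_depth=10):
    """Re-run depth-limited DFS with a growing limit: documents are found
    shallowest-first, as in BFS, with only a DFS stack held in memory."""
    for limit in range(max_depth + 1):
        yield from depth_limited(root, children, limit, match)


if __name__ == "__main__":
    # Hypothetical repository category tree: node -> child nodes.
    tree = {"root": ["papers", "posters"],
            "papers": ["2022", "2023"], "posters": [],
            "2022": [], "2023": []}
    hits = iterative_deepening("root",
                               lambda n: tree.get(n, []),
                               lambda n: n.startswith("20"))
    print(list(hits))   # ['2022', '2023']
```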


2010 ◽  
Vol 10 (4-6) ◽  
pp. 401-416 ◽  
Author(s):  
PABLO CHICO DE GUZMÁN ◽  
MANUEL CARRO ◽  
DAVID S. WARREN

Abstract: One of the differences among the various approaches to suspension-based tabled evaluation is the scheduling strategy. The two most popular strategies are local and batched evaluation. The former collects all the solutions to a tabled predicate before making any one of them available outside the tabled computation. The latter returns answers one by one before computing them all, which in principle is better if only one answer (or a subset of the answers) is desired. Batched evaluation is closer to SLD evaluation in that it computes solutions lazily as they are demanded, but it may need arbitrarily more memory than local evaluation, which is able to reclaim memory sooner. Some programs which in practice can be executed under the local strategy quickly run out of memory under batched evaluation. This has led to the general adoption of local evaluation at the expense of the more depth-first batched strategy. In this paper we study the reasons for the high memory consumption of batched evaluation and propose a new scheduling strategy which we have termed swapping evaluation. Swapping evaluation also returns answers one by one before completing a tabled call, but its memory usage can be orders of magnitude less than batched evaluation. An experimental implementation in the XSB system shows that swapping evaluation is a feasible memory-scalable strategy that need not compromise execution speed.
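Outside Prolog, the contrast between the two classical strategies can be pictured with an eager-versus-lazy analogy (this is only an analogy in Python, not XSB's machinery): local evaluation materialises the whole answer table before yielding anything, while batched evaluation hands each answer to the consumer as soon as it is derived.

```python
def answers():
    """Stand-in for a tabled predicate that derives answers one by one."""
    for i in range(10**6):
        yield i * i


def local_style():
    """Local evaluation: complete the whole answer table first, then make
    the answers available; memory grows with the full answer set."""
    return list(answers())


def batched_style():
    """Batched evaluation: hand each answer to the consumer as soon as it
    is derived; ideal when only one answer (or a few) is needed."""
    yield from answers()


if __name__ == "__main__":
    first_batched = next(batched_style())   # one answer derived
    first_local = local_style()[0]          # all 10**6 answers derived first
    print(first_batched, first_local)       # 0 0
```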


2000 ◽  
Vol 10 (4) ◽  
pp. 397-408 ◽  
Author(s):  
MICHAEL SPIVEY

Every functional programmer knows the technique of “replacing failure by a list of successes” (Wadler, 1985), but wise programmers are aware also of the possibility that the list will be empty or (worse) divergent. In fact, the “lists of successes” technique is equivalent to the incomplete depth-first search strategy used in Prolog. At heart, the idea is quite simple: whenever we might want to use a ‘multi-function’ such as f :: α ↠ β that can return many results or none, we replace it by a genuine function f :: α → β stream that returns a lazy stream of results, and rely on lazy evaluation to compute the answers one at a time, and only as they are needed. For the sake of clarity, I will distinguish between the types of finite lists (α list) and of potentially infinite, lazy streams (α stream), though both may be implemented in the same way. Following the conventions used in ML, type constructors follow their argument types.
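In Python the same idea can be written with generators standing in for lazy streams (a loose transcription, not Spivey's Haskell/ML code): a ‘multi-function’ from α to β becomes an ordinary function returning a generator of β values, and the consumer pulls answers only as it needs them.

```python
from typing import Iterator


def divisors(n: int) -> Iterator[int]:
    """A 'multi-function': it may return many results or none, encoded as
    a lazy stream of successes."""
    return (d for d in range(2, n) if n % d == 0)


def first_or_failure(stream: Iterator[int]):
    """Failure is simply the empty stream; a divergent computation would be
    a stream that never produces its next element."""
    return next(stream, None)


if __name__ == "__main__":
    print(first_or_failure(divisors(12)))   # 2     (a success)
    print(first_or_failure(divisors(13)))   # None  (failure: empty stream)
```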


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Xiwei Tang ◽  
Jianxin Wang ◽  
Min Li ◽  
Yiming He ◽  
Yi Pan

Most biological processes are carried out by protein complexes. A substantial number of false positives in protein-protein interaction (PPI) data can compromise the utility of the datasets for complex reconstruction. In order to reduce the impact of such discrepancies, a number of data integration and affinity scoring schemes have been devised. These methods encode the reliability (confidence) of the physical interaction between each pair of proteins. The challenge now is to identify novel and meaningful protein complexes from the weighted PPI network. To address this problem, a novel protein complex mining algorithm, ClusterBFS (Cluster with Breadth-First Search), is proposed. Based on weighted density, ClusterBFS detects protein complexes in the weighted network by breadth-first search, starting from a given seed protein. The experimental results show that ClusterBFS performs significantly better than other computational approaches at identifying protein complexes.
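A sketch of the seeded expansion that such an algorithm performs, with the weighted-density rule reduced to a simple threshold test (the threshold, the density formula, and the toy network below are assumptions for illustration, not the published parameters):

```python
from collections import deque
from itertools import combinations


def weighted_density(cluster, w):
    """Sum of the weights of edges inside `cluster`, divided by the number
    of possible pairs (one common definition of weighted density)."""
    if len(cluster) < 2:
        return 1.0
    pairs = len(cluster) * (len(cluster) - 1) / 2
    inside = sum(w.get(frozenset(p), 0.0) for p in combinations(cluster, 2))
    return inside / pairs


def seeded_bfs_cluster(adj, w, seed, min_density=0.5):
    """Grow a candidate complex from `seed` in breadth-first order, keeping
    a neighbour only if the cluster's weighted density stays high enough."""
    cluster = {seed}
    queue = deque([seed])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in cluster and weighted_density(cluster | {u}, w) >= min_density:
                cluster.add(u)
                queue.append(u)
    return cluster


if __name__ == "__main__":
    # Tiny weighted PPI toy network; the protein names are made up.
    edges = {("A", "B"): 0.9, ("B", "C"): 0.8, ("A", "C"): 0.7, ("C", "D"): 0.1}
    w = {frozenset(e): s for e, s in edges.items()}
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    print(sorted(seeded_bfs_cluster(adj, w, "A")))   # ['A', 'B', 'C']
```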


Author(s):  
Arturo Mascorro ◽  
Francisco Mesa ◽  
Jose Alvarez ◽  
Laura Cruz

Abstract: A comparative study of computational cost was carried out with applications written in both Java and C. The computational routines consisted of matrix multiplication, the discrete cosine transform, and the bubble-sort algorithm. Memory usage and runtime were measured for each application. The runtime of matrix multiplication in Java was between 200 and 300 milliseconds, whereas the application developed in C proved stable with an execution time of less than 20 milliseconds. For the bubble-sort ordering algorithm, Java was observed to be very slow compared to C. In addition, memory usage was lower in most of the applications, with only a minimal difference. Applications were tested on both an LG-E510f mobile phone and a Toshiba Satellite laptop. The study made it possible to report the gains in both runtime and memory consumption obtained from a native C implementation compared with Java.


2015 ◽  
Vol 1 (2) ◽  
pp. 161-167
Author(s):  
Budi Prasetiyo ◽  
Maulidia Rahmah Hidayah

Winning the game Kamen Rider Decade easily requires the right strategy. This research aims to implement the depth-first search (DFS) and breadth-first search (BFS) methods in the game Kamen Rider Decade, a game whose solution strategy relies on blind search. Data were collected with a qualitative, descriptive approach: the game was played three times each with an always-BFS strategy and an always-DFS strategy. The results show that the greater chance of winning the game comes from the always-BFS strategy, whose strength in this game lies in defending against enemies.

