Efficient algorithms for some graph problems on a tree-structured parallel computer

1987 ◽  
Vol 62 (4) ◽  
pp. 599-615 ◽  
Author(s):  
PRANAY CHAUDHURI

The emergence of Network Science has motivated a renewed interest in classical graph problems for the analysis of the topology of complex networks. For example, important centrality metrics, such as the betweenness, the stress, the eccentricity, and the closeness centralities, are all based on breadth-first search (BFS). On the other hand, the k-core decomposition of graphs defines a hierarchy of internal cores and decomposes large networks layer by layer. The k-core decomposition has been successfully applied in a variety of domains, including large graph visualization and fingerprinting, analysis of large software systems, and fraud detection. In this chapter, the authors review known efficient algorithms for traversing and decomposing large complex networks and provide insights on how the decomposition of graphs into k-cores can be useful for developing novel topology-aware algorithms.
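The layer-by-layer peeling behind the k-core decomposition can be sketched in a few lines of Python. This is a minimal quadratic-time illustration, not the linear-time bucket-based algorithm used in practice; the adjacency-dict input format is an assumption made for the example:

```python
def core_numbers(adj):
    """Compute the core number of every vertex by repeated peeling.

    adj maps each vertex to a list of its neighbours (undirected graph).
    The core number of v is the largest k such that v belongs to the
    k-core, i.e. survives when all vertices of degree < k are removed.
    """
    deg = {v: len(ns) for v, ns in adj.items()}
    core = {}
    k = 0
    remaining = set(adj)
    while remaining:
        # Peel the vertex of minimum remaining degree; the running
        # maximum of these degrees is the current core level.
        v = min(remaining, key=deg.get)
        k = max(k, deg[v])
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return core

# A triangle (vertices 1, 2, 3) with a pendant vertex 4: the triangle
# forms the 2-core, while vertex 4 only belongs to the 1-core.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(core_numbers(adj))  # {4: 1, 1: 2, 2: 2, 3: 2}
```

The peeling order also explains why the decomposition is useful for topology-aware processing: inner cores (high k) identify the densely connected nucleus of the network, and outer layers can be processed or pruned first.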


1995 ◽  
Vol 05 (01) ◽  
pp. 37-48 ◽  
Author(s):  
ARNOLD L. ROSENBERG ◽  
VITTORIO SCARANO ◽  
RAMESH K. SITARAMAN

We propose a design for, and investigate the computational power of, a dynamically reconfigurable parallel computer that we call the Reconfigurable Ring of Processors (RRP, for short). The RRP is a ring of identical processing elements (PEs) that are interconnected via a flexible multi-line reconfigurable bus, each of whose lines has one-packet width and can be configured, independently of the other lines, to establish an arbitrary PE-to-PE connection. A novel aspect of our design is a communication protocol we call COMET — for Cooperative MEssage Transmission — which allows PEs of an RRP to exchange one-packet messages with latency that is logarithmic in the number of PEs the message passes over in transit. The main contribution of this paper is an algorithm that allows an N-PE, N-line RRP to simulate an N-PE hypercube executing a normal algorithm, with slowdown less than 4 log log N, provided that the local state of a hypercube PE can be encoded and transmitted using a single packet. This simulation provides a rich class of efficient algorithms for the RRP, including algorithms for matrix multiplication, sorting, and the Fast Fourier Transform (often using fewer than N bus lines). The resulting algorithms for the RRP are often within a small constant factor of optimal.


Author(s):  
Jose-Maria Carazo ◽  
I. Benavides ◽  
S. Marco ◽  
J.L. Carrascosa ◽  
E.L. Zapata

Obtaining the three-dimensional (3D) structure of negatively stained biological specimens at a resolution of, typically, 2 - 4 nm is becoming a relatively common practice in an increasing number of laboratories. A combination of new conceptual approaches, new software tools, and faster computers has made this situation possible. However, all these 3D reconstruction processes are quite computer intensive, and the medium-term future is full of suggestions entailing an even greater need for computing power. Up to now all published 3D reconstructions in this field have been performed on conventional (sequential) computers, but it is a fact that new parallel computer architectures represent the potential of order-of-magnitude increases in computing power and should, therefore, be considered for their possible application in the most computing intensive tasks.

We have studied both shared-memory-based computer architectures, like the BBN Butterfly, and local-memory-based architectures, mainly hypercubes implemented on transputers, where we have used the algorithmic mapping method proposed by Zapata et al. In this work we have developed the basic software tools needed to obtain a 3D reconstruction from non-crystalline specimens (“single particles”) using the so-called Random Conical Tilt Series Method. We start from a pair of images presenting the same field, first tilted (by ≃55°) and then untilted. It is then assumed that we can supply the system with the image of the particle we are looking for (ideally, a 2D average from a previous study) and with a matrix describing the geometrical relationships between the tilted and untilted fields (this step is now accomplished by interactively marking a few pairs of corresponding features in the two fields). From here on the 3D reconstruction process may be run automatically.


2017 ◽  
Vol 51 (1) ◽  
pp. 261-266 ◽  
Author(s):  
Édouard Bonnet ◽  
Vangelis Th. Paschos

2018 ◽  
Vol 12 ◽  
pp. 25-41
Author(s):  
Matthew C. FONTAINE

Among the most interesting problems in competitive programming are those involving maximum flows. However, efficient algorithms for solving these problems are often difficult for students to understand at an intuitive level. One reason for this difficulty may be a lack of suitable metaphors relating these algorithms to concepts that the students already understand. This paper introduces a novel maximum flow algorithm, Tidal Flow, that is designed to be intuitive to undergraduate and pre-university computer science students.
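The Tidal Flow algorithm itself is not reproduced here, but the classical baseline such algorithms are measured against can be sketched: Edmonds-Karp, which repeatedly finds a shortest augmenting path with BFS and pushes the bottleneck capacity along it. The adjacency-matrix input format is an assumption made for this illustration:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow on an n x n capacity matrix."""
    n = len(capacity)
    residual = [row[:] for row in capacity]  # mutable residual capacities
    flow = 0
    while True:
        # BFS from source for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return flow  # no augmenting path remains: flow is maximum
        # Find the bottleneck capacity along the path found by BFS.
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # Push the bottleneck along the path, updating residual edges.
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Small example: source 0, sink 3; the maximum flow here is 5.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 5
```

The residual graph, with its reverse edges that let flow be "undone", is exactly the kind of non-obvious construct the paper argues students struggle with in the absence of a good metaphor.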

