PARALLEL ALGORITHMS FOR LONGEST INCREASING CHAINS IN THE PLANE AND RELATED PROBLEMS

1999 ◽  
Vol 09 (04) ◽  
pp. 511-520 ◽  
Author(s):  
MIKHAIL J. ATALLAH ◽  
DANNY Z. CHEN ◽  
KEVIN S. KLENK

Given a set [Formula: see text] of n points in the plane such that each point in [Formula: see text] is associated with a nonnegative weight, we consider the problem of computing the single-source longest increasing chains among the points in [Formula: see text]. This problem is a generalization of the planar maximal layers problem. In this paper, we present a parallel algorithm that computes the single-source longest increasing chains in the plane in [Formula: see text] time using [Formula: see text] processors in the CREW PRAM computational model. We also solve a related problem of computing the all-pairs longest paths in an n-node weighted planar st-graph in [Formula: see text] time using [Formula: see text] CREW PRAM processors. Both of our parallel algorithms are improvements over the previously best known results.
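
Although the abstract above targets the CREW PRAM, a small sequential sketch helps pin down the problem. The code below is an illustrative O(n^2) dynamic program, under the assumption that a chain is increasing when both coordinates strictly increase and its weight is the sum of its point weights; the function name and interface are hypothetical, and the paper's faster parallel bounds are not reproduced here.

# Minimal sequential sketch (assumed problem formulation, not the paper's parallel algorithm).
def longest_increasing_chains(points, weights, source):
    # points: list of (x, y); weights: nonnegative numbers; source: index into points.
    # Returns best[i], the maximum total weight of an increasing chain that starts at
    # the source point and ends at point i, or None if no such chain exists.
    n = len(points)
    # Visit points in increasing x-order (ties by y); every valid predecessor of a
    # point is then finalized before the point itself is processed.
    order = sorted(range(n), key=lambda i: points[i])
    best = [None] * n
    best[source] = weights[source]
    for i in order:
        xi, yi = points[i]
        for j in range(n):
            xj, yj = points[j]
            if best[j] is not None and xj < xi and yj < yi:
                cand = best[j] + weights[i]
                if best[i] is None or cand > best[i]:
                    best[i] = cand
    return best

pts = [(0, 0), (1, 2), (2, 1), (3, 3)]
w = [1, 2, 5, 1]
print(longest_increasing_chains(pts, w, source=0))   # [1, 3, 6, 7]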

1995 ◽  
Vol 05 (01n02) ◽  
pp. 93-124 ◽  
Author(s):  
DANNY Z. CHEN

The problem of detecting the weak visibility of an n-vertex simple polygon P is that of finding whether P is weakly visible from one of its edges and (if it is) identifying every edge from which P is weakly visible. In this paper, we present an optimal parallel algorithm for solving this problem. Our algorithm runs in O(log n) time using O(n/log n) processors in the CREW PRAM computational model, and is very different from the sequential algorithms for this problem. Based on this algorithm, several other problems related to weak visibility can be optimally solved in parallel.
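
The optimal parallel algorithm itself is not sketched here, but the brute-force geometric primitive behind visibility queries is easy to state: a point p sees a point q inside a simple polygon when the segment pq crosses no polygon edge. The helpers below are standard orientation and proper-intersection tests (names are illustrative); degenerate cases where the segment grazes a vertex or overlaps an edge are deliberately ignored, and this per-query O(n) check is far from the O(log n)-time, O(n/log n)-processor bound above.

def orientation(a, b, c):
    # > 0 for a left turn a -> b -> c, < 0 for a right turn, 0 if the points are collinear.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_properly_intersect(p1, p2, q1, q2):
    # True when the open segments p1p2 and q1q2 cross at a single interior point.
    d1 = orientation(q1, q2, p1)
    d2 = orientation(q1, q2, p2)
    d3 = orientation(p1, p2, q1)
    d4 = orientation(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def sees(p, q, polygon):
    # Brute-force visibility test inside a simple polygon given as a vertex list:
    # the segment pq must not properly cross any boundary edge (degeneracies ignored).
    n = len(polygon)
    return not any(
        segments_properly_intersect(p, q, polygon[i], polygon[(i + 1) % n])
        for i in range(n)
    )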


1998 ◽  
Vol 08 (03) ◽  
pp. 277-304
Author(s):  
Danny Z. Chen

The problem of determining the weak visibility of an n-vertex simple polygon P from an edge e of P is that of deciding whether every point in P is weakly visible from e. In this paper we present an optimal parallel algorithm for solving this problem. Our algorithm runs in O(log n) time using O(n/log n) processors in the CREW PRAM computational model, and is very different from the sequential algorithms for this problem. We also show how to solve optimally, in parallel, several other problems that are related to the weak visibility of simple polygons.


1998 ◽  
Vol 08 (01) ◽  
pp. 19-28 ◽  
Author(s):  
Vincent Vajnovszki ◽  
Jean Pallo

We present two cost-optimal parallel algorithms generating the set of all well-formed parentheses strings of length 2n with constant delay per generated string. In our first algorithm we generate the well-formed parentheses strings in lexicographic order, represented as bitstrings, and in the second we use the representation by weight sequences. In both cases the computational model is a CREW PRAM architecture, where each processor runs the same algorithm simultaneously on a different set of data. Different processors can read from the shared memory at the same time, in the same or different memory locations, but no two processors are allowed to write into the same memory location simultaneously. These results complement a recent parallel generation algorithm for well-formed parentheses strings on a linear array of processors, due to Akl and Stojmenović.
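
For reference, the sketch below generates the same objects sequentially: all well-formed parentheses strings of length 2n, in ascending lexicographic order of their bitstring representation, assuming '(' is encoded as 1 and ')' as 0 (the encoding and the function name are assumptions of this sketch). The cost-optimal constant-delay CREW PRAM generation is the papers' contribution and is not attempted here.

def well_formed_bitstrings(n):
    # Yield every balanced string of n '(' and n ')' as a tuple of bits.
    def extend(prefix, opens, closes):
        if opens == n and closes == n:
            yield tuple(prefix)
            return
        # Trying bit 0 before bit 1 yields ascending lexicographic order of the bitstrings.
        if closes < opens:                 # ')' is legal only while it has an unmatched '('
            yield from extend(prefix + [0], opens, closes + 1)
        if opens < n:                      # '(' is legal while some remain to be placed
            yield from extend(prefix + [1], opens + 1, closes)
    yield from extend([1], 1, 0)           # every well-formed string starts with '('

for bits in well_formed_bitstrings(3):
    print(''.join('(' if b else ')' for b in bits))
# ()()()  ()(())  (())()  (()())  ((()))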


2002 ◽  
Vol 12 (01) ◽  
pp. 51-64 ◽  
Author(s):  
B. S. PANDA ◽  
VIJAY NATARAJAN ◽  
SAJAL K. DAS

In this paper we propose a parallel algorithm to construct a one-sided monotone polygon from a Hamiltonian 2-separator chordal graph. The algorithm requires O(log n) time and O(n) processors on the CREW PRAM model, where n is the number of vertices and m is the number of edges in the graph. We also propose parallel algorithms to recognize Hamiltonian 2-separator chordal graphs and to construct a Hamiltonian cycle in such a graph. They run in O(log^2 n) time using O(mn) processors on the CRCW PRAM model and O(log^2 n) time using O(m) processors on the CREW PRAM model, respectively.


Biosystems ◽  
2015 ◽  
Vol 131 ◽  
pp. 22-29 ◽  
Author(s):  
Zhaocai Wang ◽  
Dongmei Huang ◽  
Jian Tan ◽  
Taigang Liu ◽  
Kai Zhao ◽  
...  

Author(s):  
BHASKARA REDDY MOOLE ◽  
MARCO VALTORTA

This paper presents a new sequential algorithm that decides whether a set of independence statements (a dependency model) has a causal explanation consistent with a given set of background knowledge. Emphasis is placed on the generality, efficiency, and ease of parallelization of the algorithm. From this sequential algorithm, an efficient, scalable, and easy-to-implement parallel algorithm with very little inter-processor communication is derived.


2006 ◽  
Vol 16 (04) ◽  
pp. 429-440 ◽  
Author(s):  
PRASANTA K. JANA ◽  
BHABANI P. SINHA

Wang and Sahni [4] reported two parallel algorithms for N-point prefix computation on an N-processor OTIS-Mesh optoelectronic computer. The overall time complexity of their first algorithm, for both the SIMD and MIMD models, was shown to be (8N^{1/4} - 1) electronic moves and 2 OTIS moves. This was further reduced to (7N^{1/4} - 1) electronic moves and 2 OTIS moves in their second algorithm. We present here an improved parallel algorithm for N-point prefix computation on an N-processor OTIS-Mesh, which needs (5.5N^{1/4} + 3) electronic moves and 2 OTIS moves. Our algorithm is based on the general scheme of the parallel prefix algorithm proposed in [4], but follows a data distribution and local prefix computation similar to those of [1].
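
As a reminder of the primitive being accelerated, the sketch below performs the N-point prefix computation with the standard recursive-doubling scheme, executed sequentially; in a PRAM or OTIS-Mesh setting every position would apply its update in parallel in each of the log N rounds. Processor layout, OTIS moves, and electronic-move counts are not modeled, and the function name is illustrative.

import operator

def prefix_scan(values, op=operator.add):
    # Inclusive prefix computation: result[i] = values[0] op values[1] op ... op values[i].
    x = list(values)
    n = len(x)
    step = 1
    while step < n:
        nxt = x[:]
        for i in range(step, n):           # done simultaneously by all positions i >= step
            nxt[i] = op(x[i - step], x[i])
        x = nxt
        step *= 2
    return x

print(prefix_scan([3, 1, 4, 1, 5, 9, 2, 6]))   # [3, 4, 8, 9, 14, 23, 25, 31]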


2000 ◽  
Vol 10 (04) ◽  
pp. 315-326
Author(s):  
CHRISTOS KAKLAMANIS ◽  
CHARALAMPOS KONSTANTOPOULOS ◽  
ANDREAS SVOLOS

Dictionary compression belongs to the class of lossless compression methods and is mainly used for compressing text files. The best-known examples of this technique are the algorithms of the LZ coding family, whose common feature is the use of an adaptive dictionary that is adjusted dynamically as the algorithm executes. In this paper, we present a parallel algorithm for one of these coding algorithms, namely the LZ77 coding algorithm, also known as the sliding-window coding algorithm. We also present a parallel algorithm for the corresponding LZ77 decoding algorithm. Although there exist PRAM algorithms for various dictionary compression methods, their rather irregular structure has discouraged their implementation on practical interconnection networks such as the mesh and hypercube. However, in the case of LZ77 coding/decoding, we show how to exploit the specific properties of the algorithm in order to achieve an efficient implementation on the hypercube. Specifically, we show how to encode an N-character string on an N-node hypercube in only O(log^2 N) time. In contrast, a naive simulation of a PRAM algorithm for LZ77 coding on the hypercube would have O(log^3 N) complexity. In addition, we further enhance the performance of our parallel algorithms by using some known heuristics from the field of text compression.
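
A compact sequential sketch of the sliding-window coding discussed above follows: it emits (offset, length, next_char) triples via a greedy longest-match search, and decoding replays the copies. The window and lookahead sizes are illustrative assumptions, and the hypercube parallelization with its O(log^2 N) bound is not reproduced here.

def lz77_encode(text, window=4096, lookahead=32):
    # Greedy LZ77: at each position, take the longest match found in the sliding window.
    i, out = 0, []
    while i < len(text):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            while (length < lookahead and i + length < len(text) - 1
                   and text[j + length] == text[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        out.append((best_off, best_len, text[i + best_len]))
        i += best_len + 1
    return out

def lz77_decode(triples):
    # Rebuild the text by replaying each back-reference one character at a time,
    # which also handles matches that overlap the current position.
    s = []
    for off, length, ch in triples:
        for _ in range(length):
            s.append(s[-off])
        s.append(ch)
    return ''.join(s)

msg = "abracadabra abracadabra"
assert lz77_decode(lz77_encode(msg)) == msg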

