Practical Wavelet Tree Construction

2021 ◽  
Vol 26 ◽  
pp. 1-67
Author(s):  
Patrick Dinklage ◽  
Jonas Ellert ◽  
Johannes Fischer ◽  
Florian Kurpicz ◽  
Marvin Löbel

We present new sequential and parallel algorithms for wavelet tree construction based on a new bottom-up technique. This technique exploits the structure of wavelet trees (the set of characters represented in a node is refined with increasing depth) in the opposite direction: we first compute the leaves (the most refined level) and then propagate this information upwards to the root of the tree. We first describe new sequential algorithms, both in RAM and in external memory. Based on these results, we adapt the algorithms to parallel computers, addressing both shared-memory and distributed-memory settings. In practice, all our algorithms outperform previous ones in both time and memory efficiency, because all auxiliary information can be computed solely from the information obtained while computing the leaves. Most of our algorithms are also adapted to the wavelet matrix, a variant that is particularly suited for large alphabets.
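The abstract contains no code; the following is a minimal sketch of the bottom-up idea it describes (count the leaves first, then derive the node boundaries of every level from those counts alone), not the authors' exact algorithms. The function name wavelet_tree_levelwise and the representation of the level bit vectors as plain byte vectors are our own choices for illustration.

// Minimal sketch of a bottom-up, level-wise wavelet tree construction.
// The starting position of every node on every level is derived purely
// from the leaf histogram, which is the point of the bottom-up technique.
#include <cstdint>
#include <vector>

// Returns one bit vector (stored as bytes for clarity) per level.
std::vector<std::vector<uint8_t>> wavelet_tree_levelwise(
    const std::vector<uint32_t>& text, uint32_t sigma) {
    uint32_t levels = 0;
    while ((1u << levels) < sigma) ++levels;          // ceil(log2(sigma))

    // Leaves: histogram of all symbols.
    std::vector<size_t> hist(size_t{1} << levels, 0);
    for (uint32_t c : text) ++hist[c];

    std::vector<std::vector<uint8_t>> bv(levels, std::vector<uint8_t>(text.size()));
    for (uint32_t l = 0; l < levels; ++l) {
        uint32_t shift = levels - l;                  // the top l bits identify the node
        // Node sizes on level l, obtained purely from the leaf histogram.
        std::vector<size_t> offset(size_t{1} << l, 0);
        for (uint32_t c = 0; c < (1u << levels); ++c) offset[c >> shift] += hist[c];
        // Exclusive prefix sum -> start position of each node within the level.
        size_t sum = 0;
        for (auto& o : offset) { size_t s = o; o = sum; sum += s; }
        // One scan of the text fills the whole level (stable within each node).
        for (uint32_t c : text) {
            bv[l][offset[c >> shift]++] = (c >> (shift - 1)) & 1u;
        }
    }
    return bv;
}

The sketch is sequential; the paper additionally covers external-memory, shared-memory and distributed-memory variants as well as the wavelet matrix.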

2005 ◽  
Vol 18 (2) ◽  
pp. 219-224
Author(s):  
Emina Milovanovic ◽  
Natalija Stojanovic

Because many universities lack the funds to purchase expensive parallel computers, cost-effective alternatives are needed to teach students about parallel processing. Free software is available to support the three major paradigms of parallel computing. Parallaxis is a sophisticated SIMD simulator which runs on a variety of platforms. The jBACI shared-memory simulator supports the MIMD model of computing with a common shared memory. PVM and MPI allow students to treat a network of workstations as a message-passing MIMD multicomputer with distributed memory. Each of these software tools can be used in a variety of courses to give students experience with parallel algorithms.
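As an illustration of the message-passing paradigm mentioned above for PVM and MPI (this example is not taken from the article), a minimal MPI program of the kind students might start with is sketched below: every worker process sends its rank to process 0, which receives and prints the messages.

// Minimal MPI example: distributed-memory message passing.
// Build with an MPI compiler wrapper, e.g. mpic++, and run with mpirun.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        // The "master" collects one integer from every other process.
        for (int src = 1; src < size; ++src) {
            int value = 0;
            MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("received rank %d from process %d\n", value, src);
        }
    } else {
        // Every worker sends its own rank as a message.
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}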


Author(s):  
Wesley Petersen ◽  
Peter Arbenz

In the last few years, courses on parallel computation have been developed and offered in many institutions in the UK, Europe and US as a recognition of the growing significance of this topic in mathematics and computer science. There is a clear need for texts that meet the needs of students and lecturers, and this book, based on the authors' lectures at ETH Zurich, is an ideal practical student guide to scientific computing on parallel computers, working up from the hardware instruction level, to shared-memory machines, and finally to distributed-memory machines. Aimed at advanced undergraduate and graduate students in applied mathematics, computer science and engineering, subjects covered include linear algebra, the fast Fourier transform, and Monte Carlo simulations, with examples in C and, in some cases, Fortran. This book is also ideal for practitioners and programmers.


2017 ◽  
Vol 5 (1) ◽  
pp. 44-56
Author(s):  
Hsuan-Han Chang ◽  
Kuan-Ting Chen ◽  
Pao-Lien Lai

The balanced hypercube is a variant of the hypercube structure and has desirable properties such as connectivity, regularity, and symmetry. The cycle is a popular interconnection topology and has been widely used in distributed-memory parallel computers. Moreover, parallel algorithms on cycles have been extensively developed and used. The problem of how to embed cycles into a host graph has attracted great attention in recent years. However, no systematic method has been proposed to generate the desired cycles in balanced hypercubes. In this paper, the authors develop a systematic linear-time algorithm to construct cycles and Hamiltonian cycles in the balanced hypercube.
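The paper's construction for the balanced hypercube is not reproduced here. Purely as an illustration of systematic, linear-time cycle construction in hypercube-like networks, the sketch below lists the vertices of a Hamiltonian cycle in the ordinary n-dimensional hypercube using the binary reflected Gray code; the function name is ours and the code is not the authors' algorithm.

// Vertices of a Hamiltonian cycle in the n-dimensional hypercube:
// consecutive Gray-code words differ in exactly one bit, and so do the
// last and the first word, closing the cycle. Runs in O(2^n), i.e. linear
// in the number of vertices.
#include <cstdint>
#include <vector>

std::vector<uint64_t> hypercube_hamiltonian_cycle(uint32_t n) {
    std::vector<uint64_t> cycle(uint64_t{1} << n);
    for (uint64_t i = 0; i < cycle.size(); ++i) {
        cycle[i] = i ^ (i >> 1);  // i-th binary reflected Gray code word
    }
    return cycle;
}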


2020 ◽  
Vol 11 (2) ◽  
pp. 95-102
Author(s):  
I Nyoman Aditya Yudiswara ◽  
Abba Suganda

Processor technology currently tends to increase the number of cores rather than the clock speed. This development is an opportunity to improve the performance of sequential algorithms that run on only one core. This paper discusses a sorting algorithm that is executed in parallel by several logical CPUs or cores using the OpenMP library. This algorithm, named QDM Sort, combines the sequential quicksort algorithm with a double-merge algorithm. The study uses a data-parallelism approach to design the parallel algorithm from the sequential algorithms. The data used in this study are both unsorted and presorted integer data, stored in advance in a file. The parameter measured to determine the performance of the QDM Sort algorithm is speedup. When the amount of data exceeds 4096 elements and the number of threads in QDM Sort equals the number of logical CPUs, QDM Sort achieves a better speedup than the other parallel sorting algorithms discussed in this study. For small amounts of data, a sequential sorting algorithm is still preferable.
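The following rough sketch illustrates the general approach described in the abstract (per-thread quicksort of contiguous chunks followed by merging of the sorted runs) using OpenMP. It is not the authors' exact QDM Sort; the function name parallel_chunk_sort and the use of std::sort and std::inplace_merge are our own choices for illustration.

// Data-parallel sort sketch: each OpenMP thread quicksorts one chunk,
// then the sorted runs are merged pairwise until one run remains.
// Build with OpenMP enabled, e.g. g++ -fopenmp.
#include <algorithm>
#include <omp.h>
#include <vector>

void parallel_chunk_sort(std::vector<int>& a) {
    int threads = omp_get_max_threads();
    size_t n = a.size();
    std::vector<size_t> bounds(threads + 1);
    for (int t = 0; t <= threads; ++t) bounds[t] = n * t / threads;

    // Data-parallel phase: every thread sorts its own chunk.
    #pragma omp parallel num_threads(threads)
    {
        int t = omp_get_thread_num();
        std::sort(a.begin() + bounds[t], a.begin() + bounds[t + 1]);
    }

    // Merge phase: pairwise merge of sorted runs (the disjoint pairs in each
    // round could themselves be merged by different threads).
    for (int step = 1; step < threads; step *= 2) {
        for (int t = 0; t + step < threads; t += 2 * step) {
            size_t lo = bounds[t];
            size_t mid = bounds[t + step];
            size_t hi = bounds[std::min(t + 2 * step, threads)];
            std::inplace_merge(a.begin() + lo, a.begin() + mid, a.begin() + hi);
        }
    }
}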


1992 ◽  
Vol 45 (3) ◽  
pp. 325
Author(s):  
Carl Winstead ◽  
Qiyan Sun ◽  
Paul G Hipes ◽  
Marco AP Lima ◽  
Vincent McKoy

We review recent progress in the study of low-energy collisions between electrons and polyatomic molecules that has resulted from the application of distributed-memory parallel computing to this challenging problem. Recent studies of electronically elastic and inelastic scattering from several molecular systems, including ethene, propene, cyclopropane, and disilane, are presented. We also discuss the potential of ab initio methods combined with cost-effective parallel computation to provide critical data for the modeling of materials-processing plasmas.

