blocked algorithms
Recently Published Documents


TOTAL DOCUMENTS: 19 (five years: 3)
H-INDEX: 10 (five years: 1)

2021
Author(s): Joshua P. Chu, Caleb Kemere

Recent technological advances have enabled neural recordings consisting of hundreds to thousands of channels. As these developments continue at a rapid pace, it is imperative to have fast, flexible tools supporting the analysis of neural data gathered by such large-scale modalities. Here we introduce ghostipy (general hub of spectral techniques in Python), an open-source Python toolbox implementing various signal processing and spectral analyses, including optimal digital filters and time-frequency transforms. ghostipy prioritizes performance and efficiency by using parallelized, blocked algorithms. As a result, it outperforms commercial software in both time and space complexity for high-channel-count data and can handle out-of-core computation in a user-friendly manner. Overall, our software suite reduces frequently encountered bottlenecks in the experimental pipeline, and we believe this toolset will enhance both the portability and scalability of neural data analysis.
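The blocked-filtering idea the abstract alludes to can be illustrated with overlap-add convolution, which filters a long signal in fixed-size chunks so memory use stays bounded. This is a minimal sketch using SciPy's `oaconvolve`, not ghostipy's own API; the signal length and filter design here are arbitrary choices for demonstration.

```python
import numpy as np
from scipy.signal import firwin, oaconvolve

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)   # long single-channel signal
taps = firwin(101, 0.2)            # 101-tap low-pass FIR filter

# Overlap-add convolution splits the signal into blocks, filters each
# block via FFT, and stitches the overlapping tails back together.
y_blocked = oaconvolve(x, taps, mode="full")

# Reference: direct convolution over the whole signal at once.
y_direct = np.convolve(x, taps, mode="full")

print(np.allclose(y_blocked, y_direct))
```

The blocked result agrees with the direct one to floating-point precision, which is what makes blocked algorithms attractive: the same answer with far better scaling in memory.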


Author(s): Cristian Ramon-Cortes, Ramon Amela, Jorge Ejarque, Philippe Clauss, Rosa M. Badia

Recent improvements in programming languages and models have focused on simplicity and abstraction, propelling Python to the top of the programming-language rankings. However, there is still room for improvement in shielding users from distributed and parallel computing issues. This paper proposes and evaluates AutoParallel, a Python module that automatically finds an appropriate task-based parallelisation of affine loop nests and executes them in parallel on a distributed computing infrastructure. It is based on sequential programming and requires a single annotation (in the form of a Python decorator), so that anyone with intermediate-level programming skills can scale up an application to hundreds of cores. The evaluation demonstrates that AutoParallel goes one step further in easing the development of distributed applications. On the one hand, the programmability evaluation highlights the benefits of using a single Python decorator instead of manually annotating each task and its parameters or, even worse, having to develop the parallel code explicitly (e.g., using OpenMP or MPI). On the other hand, the performance evaluation demonstrates that AutoParallel automatically generates task-based workflows from sequential Python code while achieving the same performance as manually taskified versions of established state-of-the-art algorithms (i.e., Cholesky, LU, and QR decompositions). Finally, AutoParallel can also automatically build data blocks to increase task granularity, freeing the user from creating the data chunks and redesigning the algorithm. For advanced users, we believe this feature can serve as a baseline for designing blocked algorithms.
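The closing point about data blocks can be sketched with a plain NumPy blocked matrix multiplication: each block-level update is an independent, coarse-grained unit of work of the kind a task-based framework could schedule in parallel. `blocked_matmul` and the block size `bs` are illustrative names, not part of AutoParallel's API, and no parallel scheduling is shown here.

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """Compute A @ B by iterating over bs-by-bs tiles.

    Each update C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    touches only three tiles, so it forms a natural task with a small,
    explicit data footprint -- the granularity blocked algorithms exploit.
    """
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            for k in range(0, n, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
print(np.allclose(blocked_matmul(A, B), A @ B))
```

Writing the computation in tile form is exactly the restructuring that AutoParallel's automatic data-blocking spares the user from doing by hand.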


2008
Vol. 48 (3), pp. 563-584
Author(s): B. Kågström, D. Kressner, E.S. Quintana-Ortí, G. Quintana-Ortí
