Suffix Arrays: Recently Published Documents

Total documents: 148 (last five years: 13)
H-index: 20 (last five years: 1)

2021, Vol. 6, pp. 13-26
Author(s): Alexander Mitsa, Petr Stetsyuk, Alexander Levchuk, Vasily Petsko, ...

Five ways to speed up the multidimensional search arising in the synthesis of multilayer optical coatings with zero- and first-order optimization methods are considered. The first way is to use an analytical derivative of the quality (objective) function of the multilayer coating. It allows the gradient of a smooth objective function, and the generalized gradient of a non-smooth one, to be computed exactly (within computer arithmetic), while requiring the same number of arithmetic operations as finite-difference computation of the gradient and the generalized gradient. The second way is to speed up the analytical gradient computation by means of prefix and suffix arrays; for large-scale problems this reduces the number of arithmetic operations roughly threefold. The third way is to tabulate the values of the trigonometric functions used to compute the characteristic matrices; depending on the computer, this reduces the time spent multiplying characteristic matrices by a factor of about ten, and on some architectures by more than 140. The fourth way is to use the golden-section method for the one-dimensional optimization steps in optical-coating synthesis; on one particular subproblem, ternary search required approximately 40% more time than the golden-section method. The fifth way is an efficient implementation of matrix multiplication: the order of the second and third loops of the standard triple-loop algorithm is swapped, and the current element of the first matrix is kept in a local variable. This speeds up matrix multiplication significantly; for 1000 x 1000 matrices the acceleration is between 2 and 15 times, depending on the computer.
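A minimal sketch of the second technique, under the assumption that the prefix and suffix arrays of the abstract hold partial products of the 2x2 characteristic matrices: precomputing prefix products P[j] = M_0...M_{j-1} and suffix products S[j] = M_j...M_{n-1} lets the derivative of the full product with respect to layer j be assembled as P[j] * dM_j * S[j+1], so the whole gradient costs O(n) matrix products instead of O(n^2). The function and variable names below are illustrative, not the authors' implementation.

import numpy as np

def layer_product_gradients(matrices, derivatives):
    """matrices[j] is the 2x2 characteristic matrix of layer j and
    derivatives[j] its derivative with respect to that layer's parameter.
    Returns the derivative of M_0 @ ... @ M_{n-1} with respect to each
    layer's parameter, using O(n) matrix products instead of O(n^2)."""
    n = len(matrices)
    eye = np.eye(2)
    prefix = [eye] * (n + 1)   # prefix[j] = M_0 @ ... @ M_{j-1}
    suffix = [eye] * (n + 1)   # suffix[j] = M_j @ ... @ M_{n-1}
    for j in range(n):
        prefix[j + 1] = prefix[j] @ matrices[j]
    for j in range(n - 1, -1, -1):
        suffix[j] = matrices[j] @ suffix[j + 1]
    return [prefix[j] @ derivatives[j] @ suffix[j + 1] for j in range(n)]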


2021, Vol. 11 (2), pp. 283-302
Author(s): Paul Meurer

I describe several new efficient algorithms for querying large annotated corpora. The search algorithms implemented in several popular corpus search engines are less than optimal in two respects: regular-expression string matching in the lexicon is done in linear time, and regular expressions over corpus positions are evaluated starting at those corpus positions that match the constraints of the initial edges of the corresponding network. To address these shortcomings, I have developed an algorithm for regular-expression matching on suffix arrays that allows fast lexicon lookup, and a technique for running finite-state automata from the edges with the lowest corpus counts. The implementation of the lexicon as a suffix array also lends itself to an elegant and efficient treatment of multi-valued and set-valued attributes. The described techniques have been implemented in a fully functional corpus management system and are also used in a treebank query system.
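A minimal sketch (not Meurer's implementation) of the prefix-lookup step that pattern matching on a suffix array builds on: all suffixes sharing a prefix occupy a contiguous interval of the suffix array, so that interval can be located with two binary searches in O(m log n) time for a pattern of length m.

def suffix_array(text):
    """Toy O(n^2 log n) construction; production systems use linear-time builders."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def prefix_interval(text, sa, pattern):
    """Return the half-open interval [lo, hi) of suffix-array entries whose
    suffixes start with `pattern`, found by two binary searches."""
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:                         # first suffix >= pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    hi = len(sa)
    while lo < hi:                         # first suffix whose prefix exceeds pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return start, lo

text = "banana"
sa = suffix_array(text)
lo, hi = prefix_interval(text, sa, "ana")
print([sa[i] for i in range(lo, hi)])      # occurrences of "ana" at positions 3 and 1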


2020, Vol. 15 (1)
Author(s): Felipe A. Louza, Guilherme P. Telles, Simon Gog, Nicola Prezza, Giovanna Rosone

Abstract Background The construction of a suffix array for a collection of strings is a fundamental task in bioinformatics and in many other applications that process strings. Related data structures, such as the Longest Common Prefix (LCP) array, the Burrows–Wheeler transform, and the document array, are often needed alongside the suffix array to efficiently solve a wide variety of problems. While several algorithms have been proposed to construct the suffix array of a single string, less emphasis has been put on algorithms that construct suffix arrays for string collections. Results In this paper we introduce gsufsort, an open-source tool for constructing the suffix array and related indexing data structures for a string collection with N symbols in O(N) time. Our tool is written in ANSI C and is based on the algorithm gSACA-K (Louza et al. in Theor Comput Sci 678:22–39, 2017), the fastest algorithm to construct suffix arrays for string collections. The tool supports large FASTA, FASTQ, and text files with multiple strings as input. Experiments have shown very good performance on different types of strings. Conclusions gsufsort is a fast, portable, and lightweight tool for constructing the suffix array and additional data structures for string collections.
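As a rough illustration only (the naive sort below is not gSACA-K, which runs in linear time), this toy Python sketch shows what the generalized suffix array and document array of a small string collection contain:

def collection_arrays(strings):
    """Toy generalized suffix array (SA) and document array (DA): concatenate
    the strings with a sentinel smaller than every symbol and sort all
    suffixes. Real constructions rank each string's sentinel separately and
    avoid this quadratic-comparison sort."""
    text = "".join(s + "\x01" for s in strings)
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    doc, d = [], 0                 # doc[i] = index of the string that position i falls in
    for ch in text:
        doc.append(d)
        if ch == "\x01":
            d += 1
    da = [doc[i] for i in sa]      # document array, aligned with the SA
    return sa, da

sa, da = collection_arrays(["banana", "anana"])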


2020
Author(s): Ekaterina Benza, Shmuel T. Klein, Dana Shapira

Abstract An alternative to compressed suffix arrays is introduced, based on representing a sequence of integers using Fibonacci encodings, thereby reducing the space requirements of state-of-the-art implementations of the suffix array, while retaining the searching functionalities. Empirical tests support the theoretical space complexity improvements and show that there is no deterioration in the processing times.
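A minimal sketch of the Fibonacci code the abstract builds on (the encoder below is illustrative and independent of the authors' compressed-suffix-array implementation): a positive integer is written as a sum of non-consecutive Fibonacci numbers and terminated by an extra 1-bit, so every codeword ends in "11" and the code is self-delimiting without storing lengths.

def fibonacci_encode(n):
    """Fibonacci (Zeckendorf-based) code of a positive integer n."""
    assert n >= 1
    fibs = [1, 2]                      # F(2), F(3), ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    while fibs[-1] > n:                # keep only Fibonacci numbers <= n
        fibs.pop()
    bits = ["0"] * len(fibs)
    for i in range(len(fibs) - 1, -1, -1):
        if fibs[i] <= n:               # greedy: take the largest number that fits
            bits[i] = "1"
            n -= fibs[i]
    return "".join(bits) + "1"         # the extra '1' makes "11" the delimiter

print([fibonacci_encode(n) for n in (1, 2, 3, 4)])   # ['11', '011', '0011', '1011']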


2020, Vol. 21 (1)
Author(s): Izaak Coleman, Giacomo Corleone, James Arram, Ho-Cheung Ng, Luca Magnani, ...

Author(s): Barış Ekim, Bonnie Berger, Yaron Orenstein

Abstract As the volume of next-generation sequencing data increases, an urgent need arises for algorithms to process the data efficiently. Universal hitting sets (UHS) were recently introduced as an alternative to the central idea of minimizers in sequence analysis, with the hope that they could more efficiently address common tasks such as computing hash functions for read overlap, sparse suffix arrays, and Bloom filters. A UHS is a set of k-mers that hits every sequence of length L, and can thus serve as indices to L-long sequences. Unfortunately, methods for computing small UHSs are not yet practical for real-world sequencing instances due to their serial and deterministic nature, which leads to long runtimes and high memory demands when handling typical values of k (e.g. k > 13). To address this bottleneck, we present two algorithmic innovations that significantly decrease runtime while keeping memory usage low: (i) we leverage advanced theoretical and architectural techniques to parallelize and decrease memory usage in calculating k-mer hitting numbers; and (ii) we build upon techniques from randomized Set Cover to select universal k-mers much faster. We implemented these innovations in PASHA, the first randomized parallel algorithm for generating near-optimal UHSs, which newly handles k > 13. We demonstrate empirically that PASHA produces sets only slightly larger than those of serial deterministic algorithms; moreover, the set size is provably guaranteed to be within a small factor of the optimal size. PASHA's runtime and memory usage are orders of magnitude better than those of the current best algorithms. We expect our newly practical construction of UHSs to be adopted in many high-throughput sequence analysis pipelines.
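A minimal sketch of the universal-hitting-set property itself (not PASHA, whose contribution is computing small such sets quickly): a k-mer set is a UHS for parameters k and L exactly when every length-L window over the alphabet contains at least one k-mer from the set. The brute-force check below is only feasible for tiny k and L.

from itertools import product

def hits_every_window(kmers, k, L, alphabet="ACGT"):
    """Exhaustive check of the UHS property over all |alphabet|**L windows."""
    for window in product(alphabet, repeat=L):
        w = "".join(window)
        if not any(w[i:i + k] in kmers for i in range(L - k + 1)):
            return False               # found a window that no k-mer in the set hits
    return True

all_2mers = {"".join(p) for p in product("ACGT", repeat=2)}
print(hits_every_window(all_2mers, k=2, L=5))   # True: the full 2-mer set hits every window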


2020, pp. 139-234
Author(s): Thomas Mailund
