decoding speed
Recently Published Documents

TOTAL DOCUMENTS: 25 (five years: 10)
H-INDEX: 4 (five years: 1)
2022 ◽  
Vol 23 (1) ◽  
Author(s):  
Miaoshan Lu ◽  
Shaowei An ◽  
Ruimin Wang ◽  
Jinyin Wang ◽  
Changbin Yu

Abstract Background: As the precision of mass spectrometry (MS) increases, MS file sizes grow rapidly. Beyond the widely used open format mzML, near-lossless and lossless compression algorithms and formats have emerged for scenarios with different precision requirements. The data precision is often tied to the instrument and the downstream processing algorithms. Unlike storage-oriented formats, which focus mainly on the lossless compression rate, computation-oriented formats weigh decoding speed as heavily as the compression rate. Results: Here we introduce "Aird", an open-source, computation-oriented format with controllable precision, flexible indexing strategies, and a high compression rate. Aird provides a novel compressor called Zlib-Diff-PforDelta (ZDPD) for m/z data. Compared with Zlib alone, m/z data in Aird is about 55% smaller on average. Thanks to the high-speed encoding and decoding of the single-instruction-multiple-data (SIMD) technology used in ZDPD, Aird takes only 33% of the decoding time of Zlib. We downloaded seven datasets from ProteomeXchange and MetaboLights, acquired on different SCIEX, Thermo, and Agilent instruments, converted the raw data into the mzML, mgf, and mz5 file formats with MSConvert, and compared them with the Aird format. Aird uses JavaScript Object Notation (JSON) for metadata storage. Aird-SDK is written in Java, and AirdPro, a GUI client for converting vendor files, is written in C#. They are freely available at https://github.com/CSi-Studio/Aird-SDK and https://github.com/CSi-Studio/AirdPro. Conclusions: As MS acquisition modes evolve, MS data characteristics keep changing. New data features can enable more effective compression methods and new index modes that achieve high search performance, and MS data storage will become increasingly specialized and customized. ZDPD exploits multiple numeric features of MS data, and researchers can also use it in other formats such as mzML. Aird is designed to be a computation-oriented data format with high scalability, a high compression rate, and fast decoding speed.
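ZDPD builds on the observation that m/z values within a spectrum are stored in sorted order, so the differences between consecutive values are small and highly compressible. A minimal Python sketch of the Diff + Zlib stages (the PForDelta stage and SIMD acceleration are omitted; the function names are illustrative, not part of the Aird-SDK API):

```python
import struct
import zlib

def compress_mz(mz_ints):
    """Delta-encode a sorted list of integerized m/z values, then Zlib-compress.

    Sketch of the Diff + Zlib stages of a ZDPD-style pipeline;
    the PForDelta stage is intentionally omitted.
    """
    deltas = [mz_ints[0]] + [b - a for a, b in zip(mz_ints, mz_ints[1:])]
    raw = struct.pack(f"<{len(deltas)}i", *deltas)  # little-endian int32
    return zlib.compress(raw)

def decompress_mz(blob, n):
    """Invert compress_mz: inflate, then prefix-sum the deltas."""
    deltas = struct.unpack(f"<{n}i", zlib.decompress(blob))
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out
```

Because sorted m/z deltas cluster around small values, the deflate stage sees far more repetition than it would on the raw values, which is where the size reduction comes from.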


2021 ◽  
Vol 11 (2) ◽  
pp. 171 ◽  
Author(s):  
Sara Bertoni ◽  
Sandro Franceschini ◽  
Giovanna Puccio ◽  
Martina Mancarella ◽  
Simone Gori ◽  
...  

Reading acquisition is extremely difficult for about 5% of children because they are affected by a heritable neurobiological disorder called developmental dyslexia (DD). Intervention studies can be used to investigate the causal role of neurocognitive deficits in DD. Recently, it has been proposed that action video games (AVGs), which enhance attentional control, could improve perception and working memory as well as reading skills. In a partial crossover intervention study, we investigated the effect of AVG and non-AVG training on attentional control, using a conjunction visual search task, in children with DD. We also measured non-alphanumeric rapid automatized naming (RAN), phonological decoding, and word reading before and after AVG and non-AVG training. After both video game training sessions, no effect was found on non-alphanumeric RAN or word reading performance. However, after only 12 h of AVG training, attentional control improved (i.e., the set-size slopes in visual search were flatter) and phonological decoding speed accelerated. Crucially, attentional control and phonological decoding speed increased only in children with DD whose video game score was highly efficient after the AVG training. We demonstrated that only efficient AVG training induces plasticity of fronto-parietal attentional control, linked to a selective phonological decoding improvement in children with DD.


2021 ◽  
Vol 7 (2) ◽  
pp. 14-17
Author(s):  
B.I. Filippov ◽  

In algebraic decoding of BCH codes over the field GF(q) with word length n = q^m − 1, correcting t errors, in both the time and frequency domains, one must find the error-locator polynomial σ(x) as the least-degree polynomial satisfying the key equation. Berlekamp proposed a simple iterative scheme, now known as the Berlekamp-Massey algorithm, which is currently used in most practical applications. Comparative statistical tests of the proposed decoder and a decoder using the Berlekamp-Massey algorithm showed that they differ only slightly in decoding speed. The proposed algorithm is implemented in Turbo Pascal and can be applied to the entire family of BCH codes by replacing the primitive Galois polynomial.
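The Berlekamp-Massey algorithm finds the shortest LFSR (connection polynomial) that reproduces the syndrome sequence, which yields the error-locator polynomial. A minimal sketch over GF(2), the binary special case of the GF(q) setting discussed above (not the author's Turbo Pascal implementation):

```python
def berlekamp_massey(s):
    """Shortest LFSR for a binary sequence s over GF(2).

    Returns (C, L): C is the coefficient list of the connection
    polynomial C(x) = 1 + c1*x + ... + cL*x^L, and L is the LFSR length.
    """
    n = len(s)
    C = [1] + [0] * n   # current connection polynomial
    B = [1] + [0] * n   # copy of C from the last length change
    L, m = 0, 1         # current LFSR length; steps since last change
    for i in range(n):
        # discrepancy d = s[i] + sum_{j=1..L} C[j]*s[i-j]  (mod 2)
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:
            T = C[:]                      # C(x) += x^m * B(x), length grows
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            L, B, m = i + 1 - L, T, 1
        else:
            for j in range(n + 1 - m):    # C(x) += x^m * B(x), length kept
                C[j + m] ^= B[j]
            m += 1
    return C[:L + 1], L
```

For example, the sequence 1, 1, 0, 1, 1, 0 obeys s_i = s_{i-1} + s_{i-2} (mod 2), so the algorithm returns C(x) = 1 + x + x^2 with L = 2. In a BCH decoder the input would be the syndrome sequence, and field arithmetic would be over GF(q) rather than GF(2).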


2021 ◽  
Vol 9 ◽  
pp. 311-328
Author(s):  
Weijia Xu ◽  
Marine Carpuat

Abstract We introduce an Edit-Based TransfOrmer with Repositioning (EDITOR), which makes sequence generation flexible by seamlessly allowing users to specify preferences in output lexical choice. Building on recent models for non-autoregressive sequence generation (Gu et al., 2019), EDITOR generates new sequences by iteratively editing hypotheses. It relies on a novel reposition operation designed to disentangle lexical choice from word positioning decisions, while enabling efficient oracles for imitation learning and parallel edits at decoding time. Empirically, EDITOR uses soft lexical constraints more effectively than the Levenshtein Transformer (Gu et al., 2019) while speeding up decoding dramatically compared to constrained beam search (Post and Vilar, 2018). EDITOR also achieves comparable or better translation quality with faster decoding speed than the Levenshtein Transformer on standard Romanian-English, English-German, and English-Japanese machine translation tasks.
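The insertion and deletion operations that edit-based generators predict can be illustrated with the classic Levenshtein dynamic program. A toy oracle that recovers a minimal edit script between a hypothesis and a reference (this shows only the classic edit operations, not EDITOR's neural policy or its novel reposition operation):

```python
def edit_script(hyp, ref):
    """Minimal edit script (keep/sub/ins/del) turning hyp into ref.

    Toy dynamic program illustrating the kind of edit operations that
    edit-based sequence generators learn to predict.
    """
    m, n = len(hyp), len(ref)
    D = [[0] * (n + 1) for _ in range(m + 1)]  # D[i][j]: cost hyp[:i] -> ref[:j]
    for i in range(m + 1):
        D[i][0] = i
    for j in range(n + 1):
        D[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,       # delete hyp[i-1]
                          D[i][j - 1] + 1,       # insert ref[j-1]
                          D[i - 1][j - 1] + cost)  # keep or substitute
    ops, i, j = [], m, n  # backtrace an optimal path
    while i > 0 or j > 0:
        if i > 0 and j > 0 and D[i][j] == D[i - 1][j - 1] + (0 if hyp[i - 1] == ref[j - 1] else 1):
            ops.append(("keep" if hyp[i - 1] == ref[j - 1] else "sub", hyp[i - 1], ref[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and D[i][j] == D[i - 1][j] + 1:
            ops.append(("del", hyp[i - 1], None))
            i -= 1
        else:
            ops.append(("ins", None, ref[j - 1]))
            j -= 1
    return ops[::-1]
```

The point of EDITOR's reposition operation is precisely that a model limited to these operations must delete and re-insert a token to move it, whereas repositioning can reuse it in place.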


Author(s):  
Adrian Bărbulescu ◽  
Daniel I. Morariu

Abstract In this paper, we present a wide range of models based on non-adaptive and adaptive approaches for a PoS tagging system. The parameters for the adaptive approach are based on n-gram Hidden Markov Models, evaluated for bigrams and trigrams, with three different decoding methods: forward, backward, and bidirectional. We used the Brown Corpus for the training and testing phases. The bidirectional trigram model almost reaches state-of-the-art accuracy but is held back by its decoding time, while the backward trigram reaches almost the same results with a considerably better decoding time. From these results, we conclude that decoding performs considerably better when it evaluates the sentence from the last word to the first, and although the backward trigram model is very good, we still recommend the bidirectional trigram model when good precision on real data is required.
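Decoding a bigram HMM tagger is standardly done with the Viterbi algorithm. A forward (left-to-right) toy sketch with hand-set probability tables (illustrative values, not Brown Corpus estimates); the backward variant described above would simply run over the reversed sentence with reversed transition statistics:

```python
import math

def viterbi(words, tags, trans, emit, start):
    """Left-to-right Viterbi decoding for a bigram HMM PoS tagger.

    trans[t1][t2], emit[t][word], start[t] are probabilities; unseen
    words get a tiny floor probability instead of proper smoothing.
    """
    V = [{t: math.log(start[t]) + math.log(emit[t].get(words[0], 1e-9))
          for t in tags}]
    back = [{}]
    for w in words[1:]:
        col, bp = {}, {}
        for t in tags:
            prev = max(tags, key=lambda p: V[-1][p] + math.log(trans[p][t]))
            col[t] = (V[-1][prev] + math.log(trans[prev][t])
                      + math.log(emit[t].get(w, 1e-9)))
            bp[t] = prev
        V.append(col)
        back.append(bp)
    # backtrace the best tag sequence
    path = [max(tags, key=lambda t: V[-1][t])]
    for bp in reversed(back[1:]):
        path.append(bp[path[-1]])
    return path[::-1]
```

A trigram model replaces the single previous tag with a pair of previous tags in both the transition table and the Viterbi state, which is exactly what makes it more accurate but slower to decode.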


2020 ◽  
pp. 1-1
Author(s):  
Hosein K. Nazari ◽  
Kiana Ghassabi ◽  
Peyman Pahlevani ◽  
Daniel E. Lucani

Author(s):  
Nikita Markovnikov ◽  
Irina Kipyatkova

Problem: Classical automatic speech recognition systems are traditionally built from an acoustic model based on hidden Markov models and a statistical language model. Such systems demonstrate high recognition accuracy but consist of several independent, complex parts, which can cause problems when building models. Recently, an end-to-end recognition approach using deep artificial neural networks has become widespread. This approach makes it easy to implement models using just one neural network, and end-to-end models often demonstrate better performance in terms of both speed and accuracy of speech recognition. Purpose: Implementation of end-to-end models for the recognition of continuous Russian speech, their tuning, and their comparison with hybrid baseline models in terms of recognition accuracy and computational characteristics such as training and decoding speed. Methods: Creating an encoder-decoder speech recognition model with an attention mechanism; applying neural network stabilization and regularization techniques; augmenting the training data; using subword units as the neural network output. Results: We obtained an encoder-decoder model with an attention mechanism for recognizing continuous Russian speech without feature extraction or a language model. As elements of the output sequence, we used subword units from the training set. The resulting model could not surpass the baseline hybrid models, but it surpassed the other baseline end-to-end models in both recognition accuracy and decoding/training speed. The word error rate was 24.17% and the decoding speed was 0.3 of real time, which is 6% faster than the baseline end-to-end model and 46% faster than the baseline hybrid model. We showed that end-to-end models can work without language models for Russian while demonstrating a higher decoding speed than hybrid models. The resulting model was trained on raw data without extracting any features. We found that for Russian, a hybrid attention mechanism gives the best result compared with location-based or content-based attention mechanisms. Practical relevance: The resulting models require less memory and less speech decoding time than traditional hybrid models, which allows them to be used locally on mobile devices without relying on computation on remote servers.
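The content-based half of such a hybrid attention mechanism can be sketched in plain Python: score each encoder state against the current decoder state, softmax-normalize the scores, and form a weighted context vector. A toy sketch with raw dot-product scoring and no learned parameters (a location-based component would additionally condition on the previous step's attention weights):

```python
import math

def content_attention(query, keys, values):
    """Content-based attention over encoder states.

    query: decoder state (list of floats); keys/values: per-timestep
    encoder vectors. Returns (weights, context). Toy version: the score
    is a raw dot product rather than a learned compatibility function.
    """
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    mx = max(scores)                                # stabilize the softmax
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(values[0])
    context = [sum(w * v[d] for w, v in zip(weights, values))
               for d in range(dim)]
    return weights, context
```

At each decoding step the context vector is concatenated with the decoder state to predict the next output unit, so attention is what lets the model align output subwords to input frames without an explicit alignment model.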

