Research on Machine Learning-Based Error Correction Algorithm for Spoken French

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Jie Gao

To overcome the low error-capture accuracy and long response time of traditional spoken-French error correction algorithms, this study designed a machine learning-based error correction algorithm for spoken French. After constructing a signal model of spoken French pronunciation, the algorithm analyzes the spectral features of the pronunciation, then selects and classifies those features to capture abnormal pronunciation signals. On this basis, the machine learning network architecture and its training process are designed, along with the algorithm's operational structure, program, development environment, and oral-error identification, completing the correction of spoken French errors. Experimental results show that the proposed algorithm achieves high error-capture accuracy and short response time, demonstrating its efficiency and timeliness.
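The abstract does not give implementation details, but the feature step it describes (frame the pronunciation signal, take spectral features, flag abnormal frames) can be sketched roughly as follows. All function names, parameters, and the distance-threshold classifier are hypothetical illustrations, not the paper's method.

```python
import numpy as np

def spectral_features(signal, frame_len=256, hop=128):
    """Frame the pronunciation signal and return per-frame magnitude spectra
    (an illustrative stand-in for the paper's spectral-feature step)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    window = np.hanning(frame_len)
    return np.array([np.abs(np.fft.rfft(f * window)) for f in frames])

def flag_abnormal_frames(features, reference, threshold=2.0):
    """Flag frames whose spectral distance from a reference pronunciation
    exceeds a threshold -- a toy proxy for 'capturing abnormal signals'."""
    dists = np.linalg.norm(features - reference, axis=1)
    return dists > threshold
```

In the paper, the flagged abnormal frames would then be fed to the trained machine learning network rather than a fixed threshold.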

2017 ◽  
Author(s):  
Can Firtina ◽  
Ziv Bar-Joseph ◽  
Can Alkan ◽  
A. Ercument Cicek

Abstract
Motivation: Choosing between second- and third-generation sequencing platforms can lead to trade-offs between accuracy and read length. Several applications, including de novo assembly and fusion and structural variation detection, require reads that are both long and accurate. In such cases researchers often combine both technologies, correcting the more erroneous long reads with the short reads. Current approaches rely on various graph-based alignment techniques and do not take the error profile of the underlying technology into account. Memory- and time-efficient machine learning algorithms that address these shortcomings have the potential to achieve better and more accurate integration of the two technologies.
Results: We designed and developed Hercules, the first machine learning-based long-read error correction algorithm. The algorithm models every long read as a profile Hidden Markov Model with respect to the underlying platform's error profile, learns a posterior transition/emission probability distribution for each long read, and uses it to correct errors in these reads. Using datasets from two DNA-seq BAC clones (CH17-157L1 and CH17-227A2) and human brain cerebellum polyA RNA-seq, we show that Hercules-corrected reads have the highest mapping rate among all competing algorithms and the highest accuracy when most of the base pairs of a long read are covered with short reads.
Availability: Hercules source code is available at https://github.com/BilkentCompGen/Hercules
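Hercules learns a full profile HMM per long read; the core idea of using short-read support at each long-read position can be illustrated by this drastically simplified consensus sketch, where posterior base probabilities collapse to per-position base frequencies. The function name and the `(offset, sequence)` alignment format are illustrative assumptions, not Hercules's actual interface.

```python
from collections import Counter

def correct_long_read(long_read, aligned_short_reads):
    """Toy consensus-style correction. Each element of aligned_short_reads is
    (offset, sequence): where a short read aligns on the long read. Each
    covered position is replaced by the majority base among the short reads;
    uncovered positions keep the original (error-prone) base."""
    counts = [Counter() for _ in long_read]
    for offset, seq in aligned_short_reads:
        for i, base in enumerate(seq):
            pos = offset + i
            if 0 <= pos < len(long_read):
                counts[pos][base] += 1
    corrected = []
    for orig, c in zip(long_read, counts):
        corrected.append(c.most_common(1)[0][0] if c else orig)
    return "".join(corrected)
```

Unlike this sketch, the pHMM formulation also handles insertions and deletions (the dominant error type in long reads) via match/insert/delete states, which is why a simple substitution consensus is not sufficient in practice.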


2015 ◽  
Author(s):  
Sara Goodwin ◽  
James Gurtowski ◽  
Scott Ethe-Sayers ◽  
Panchajanya Deshpande ◽  
Michael Schatz ◽  
...  

Monitoring the progress of DNA molecules through a membrane pore has been postulated as a method for sequencing DNA for several decades. Recently, a nanopore-based sequencing instrument, the Oxford Nanopore MinION, has become available, which we used for sequencing the S. cerevisiae genome. To make use of these data, we developed a novel open-source hybrid error correction algorithm, Nanocorr (https://github.com/jgurtowski/nanocorr), specifically for Oxford Nanopore reads, as existing packages were incapable of assembling such long reads (5-50 kbp) at such high error rates (between ~5% and 40%). With this new method we were able to perform a hybrid error correction of the nanopore reads using complementary MiSeq data and produce a de novo assembly that is highly contiguous and accurate: the contig N50 length is more than ten times greater than that of an Illumina-only assembly (678 kbp versus 59.9 kbp), with greater than 99.88% consensus identity against the reference. Furthermore, the assembly with the long nanopore reads presents a much more complete representation of the features of the genome and correctly assembles gene cassettes, rRNAs, transposable elements, and other genomic features that were almost entirely absent in the Illumina-only assembly.
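The contiguity metric the abstract compares (contig N50) has a standard definition that can be stated as a few lines of code. This is the conventional computation, not code from Nanocorr itself.

```python
def n50(contig_lengths):
    """N50: the largest length L such that contigs of length >= L together
    cover at least half of the total assembly length."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0  # empty assembly
```

By this definition, the 678 kbp versus 59.9 kbp comparison means half of the hybrid assembly lies in contigs of at least 678 kbp, versus only 59.9 kbp for the Illumina-only assembly.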


2019 ◽  
Vol 17 (02) ◽  
pp. 1950013
Author(s):  
Shi-Biao Tang ◽  
Jie Cheng

In the process of quantum key distribution (QKD), an error correction algorithm is used to correct the error bits of the keys held at both ends. Existing deployed QKD systems have low key rates, generally on the order of Kbps, so the performance requirements on data processing stages such as error correction are not high. To meet the demands of future high-speed QKD systems, this paper introduces the Winnow algorithm to realize high-speed parity and Hamming error correction on a Field Programmable Gate Array (FPGA), and explores the performance limit of this algorithm. The FPGA hardware implementation can achieve Mbps-scale bandwidth by choosing different sifted-key block lengths for different error rates, and can achieve higher error correction efficiency by reducing the information leakage during error correction, improving the QKD system's secure key rate and thus supporting future high-speed QKD systems.
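The parity-plus-Hamming step at the heart of Winnow can be sketched in software as follows: the two parties compare block parities, and on a mismatch the difference of Hamming syndromes locates the single most likely flipped bit. This is a minimal single-block illustration (function names are mine); a real Winnow implementation also discards bits after each exchange to bound the information leaked, and the paper's contribution is the FPGA realization, not this software form.

```python
def parity(bits):
    """Single parity bit of a block (list of 0/1)."""
    return sum(bits) % 2

def hamming_syndrome(bits):
    """Syndrome of a block of length 2^m - 1 (e.g. 7): XOR of the 1-based
    indices of the set bits. A single bit flip at index j changes the
    syndrome by exactly j."""
    s = 0
    for i, b in enumerate(bits, start=1):
        if b:
            s ^= i
    return s

def winnow_correct(alice_block, bob_block):
    """One Winnow step (sketch): on a parity mismatch, Bob flips the bit
    located by the syndrome difference; on a match, the block passes."""
    if parity(alice_block) == parity(bob_block):
        return bob_block  # no odd-weight error detected this round
    diff = hamming_syndrome(alice_block) ^ hamming_syndrome(bob_block)
    corrected = bob_block[:]
    if 1 <= diff <= len(bob_block):
        corrected[diff - 1] ^= 1
    return corrected
```

Because both parity and syndrome computation reduce to XOR trees over the block, the step maps naturally onto FPGA logic, which is what makes Mbps-scale throughput plausible.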

