Latent variable model for aligning barcoded short-reads improves downstream analyses

2017
Author(s):  
Ariya Shajii ◽  
Ibrahim Numanagić ◽  
Bonnie Berger

Abstract Recent years have seen the emergence of several "third-generation" sequencing platforms, each of which aims to address shortcomings of standard next-generation short-read sequencing by producing data that capture long-range information, thereby allowing us to access regions of the genome that are inaccessible with short reads alone. These technologies either produce physically longer reads, typically with higher error rates, or instead capture long-range information at low error rates by virtue of read "barcodes", as in 10x Genomics' Chromium platform. As with virtually all sequencing data, sequence alignment for third-generation sequencing data is the foundation on which all downstream analyses are based. Here we introduce a latent variable model for improving barcoded read alignment, thereby enabling improved downstream genotyping and phasing. We demonstrate the feasibility of this approach by developing EMerAld (EMA for short) and testing it on the barcoded short reads produced by 10x's sequencing technologies. EMA not only produces more accurate alignments but, unlike other methods, also assigns interpretable probabilities to the alignments it generates. We show that genotypes called from EMA's alignments contain over 30% fewer false positives than those called from Lariat's (the current 10x alignment tool), with fewer false negatives, on datasets of NA12878 and NA24385 as compared to NIST GIAB gold-standard variant calls. Moreover, we demonstrate that EMA is able to effectively resolve alignments in regions containing nearby homologous elements, a particularly challenging problem in read mapping, through the introduction of a novel statistical binning optimization framework, which allows us to find variants in the pharmacogenomically important CYP2D region that go undetected when using Lariat or BWA. Lastly, we show that EMA's alignments improve phasing performance compared to Lariat's in both NA12878 and NA24385, producing fewer switch/mismatch errors and larger phase blocks on average. EMA software and datasets used are available at http://ema.csail.mit.edu.
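To make the latent variable idea concrete, the sketch below runs a toy expectation-maximization loop in which the hidden variable is each read's true alignment location, and reads sharing a barcode inform one another through common "cloud" weights. This is an illustration only, not EMA's actual model; the cloud_id/score inputs and the flat exponential scoring are assumptions.

```python
import math

def em_alignment_probs(candidates, n_iters=20):
    """Toy EM sketch: each read has candidate alignment locations with
    alignment scores; reads sharing a barcode tend to come from the same
    long molecule, so candidates are grouped into putative molecules
    ("clouds"). The latent variable is which candidate is the true origin.

    candidates: dict read_id -> list of (cloud_id, score)
    Returns: dict read_id -> list of posterior probabilities (same order).
    """
    clouds = {c for cands in candidates.values() for c, _ in cands}
    weight = {c: 1.0 / len(clouds) for c in clouds}  # uniform init

    posteriors = {}
    for _ in range(n_iters):
        # E-step: posterior of each candidate is proportional to
        # exp(alignment score) times the weight of its cloud.
        counts = {c: 1e-9 for c in clouds}
        for read, cands in candidates.items():
            unnorm = [math.exp(s) * weight[c] for c, s in cands]
            z = sum(unnorm)
            posteriors[read] = [u / z for u in unnorm]
            for (c, _), p in zip(cands, posteriors[read]):
                counts[c] += p
        # M-step: re-estimate cloud weights from expected read counts.
        total = sum(counts.values())
        weight = {c: n / total for c, n in counts.items()}
    return posteriors

# Two reads share barcode-derived cloud "A"; the model pulls the ambiguous
# second read toward the cloud supported by the first.
cands = {"r1": [("A", 2.0)], "r2": [("A", 1.0), ("B", 1.0)]}
print(em_alignment_probs(cands)["r2"])  # probability mass shifts to "A"
```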

GigaScience
2020
Vol 9 (10)
Author(s):  
Davide Bolognini ◽  
Alberto Magi ◽  
Vladimir Benes ◽  
Jan O Korbel ◽  
Tobias Rausch

Abstract Background Tandem repeat sequences are widespread in the human genome, and their expansions cause multiple repeat-mediated disorders. Genome-wide discovery approaches are needed to fully elucidate their roles in health and disease, but resolving tandem repeat variation accurately remains a challenging task. While traditional mapping-based approaches using short-read data have severe limitations in the size and type of tandem repeats they can resolve, recent third-generation sequencing technologies exhibit substantially higher sequencing error rates, which complicates repeat resolution. Results We developed TRiCoLOR, a freely available tool for tandem repeat profiling using error-prone long reads from third-generation sequencing technologies. The method can identify repetitive regions in sequencing data without prior knowledge of their motifs or locations and resolve repeat multiplicity and period size in a haplotype-specific manner. The tool includes methods to interactively visualize the identified repeats and to trace their Mendelian consistency in pedigrees. Conclusions TRiCoLOR demonstrates excellent performance and improved sensitivity and specificity compared with alternative tools on synthetic data. For real human whole-genome sequencing data, TRiCoLOR achieves high validation rates, suggesting its suitability for identifying tandem repeat variation in personal genomes.
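As an illustration of what "repeat multiplicity and period size" means, the brute-force sketch below finds the smallest approximate period of a sequence while tolerating a mismatch budget, loosely mimicking error-prone long reads. It is not TRiCoLOR's algorithm; the mismatch threshold is an assumed parameter.

```python
def smallest_period(s, max_mismatch_frac=0.2):
    """Find the smallest period p such that s is (approximately) a tandem
    repetition of its first p characters, tolerating a fraction of
    mismatches to mimic sequencing errors in long reads."""
    n = len(s)
    for p in range(1, n // 2 + 1):
        motif = s[:p]
        mismatches = sum(1 for i in range(n) if s[i] != motif[i % p])
        if mismatches <= max_mismatch_frac * n:
            return motif, n / p  # motif (period) and fractional copy number
    return s, 1.0

motif, copies = smallest_period("ACGACGACGACTACG")  # one error in 5 copies
print(motif, round(copies, 2))  # ACG 5.0
```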


2020
Vol 15
Author(s):  
Hongdong Li ◽  
Wenjing Zhang ◽  
Yuwen Luo ◽  
Jianxin Wang

Aims: Accurately detect isoforms from third-generation sequencing data. Background: Transcriptome annotation is the basis for the analysis of gene expression and regulation. The transcriptome annotation of many organisms, such as humans, is far from complete, due partly to the challenge of identifying isoforms that are produced from the same gene through alternative splicing. Third-generation sequencing (TGS) reads provide an unprecedented opportunity for detecting isoforms because their length exceeds that of most isoforms. One limitation of current TGS read-based isoform detection methods is that they rely exclusively on sequence reads, without incorporating the sequence information of known isoforms. Objective: Develop an efficient method for isoform detection. Method: Based on annotated isoforms, we propose a splice isoform detection method called IsoDetect. First, the sequence at each exon-exon junction is extracted from annotated isoforms as a "short feature sequence", which is used to distinguish different splice isoforms. Second, these feature sequences are aligned to long reads, and the long reads are divided into groups that contain the same set of feature sequences, thereby avoiding pairwise comparisons among the large number of long reads (see the sketch below). Third, clustering and consensus generation are carried out based on sequence similarity. For long reads that do not contain any short feature sequence, clustering analysis based on sequence similarity is performed to identify isoforms. Result: Tested on two datasets from Calypte anna and zebra finch, IsoDetect showed higher speed and compelling accuracy compared with four existing methods. Conclusion: IsoDetect is a promising method for isoform detection. Other: This paper was accepted by the CBC2019 conference.
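The grouping step above can be sketched as follows: reads are keyed by the set of junction feature sequences they contain, so later comparisons happen only within groups. Exact substring matching stands in for the alignment step, and all names (group_reads_by_features, the j1/j2 features) are hypothetical.

```python
from collections import defaultdict

def group_reads_by_features(reads, features):
    """Group long reads by the set of junction "feature sequences" they
    contain. Reads sharing the same feature set are candidates for the
    same isoform, so only within-group comparisons are needed later."""
    groups = defaultdict(list)
    for read_id, seq in reads.items():
        present = frozenset(f for f, fseq in features.items() if fseq in seq)
        groups[present].append(read_id)
    return groups

features = {"j1": "ACGTTGCA", "j2": "GGGTACCC"}
reads = {"r1": "TTACGTTGCATT",          # contains j1 only
         "r2": "AAACGTTGCAGGGTACCCTT",  # contains j1 and j2
         "r3": "CCCCCC"}                # no feature sequence
for fset, ids in group_reads_by_features(reads, features).items():
    print(sorted(fset), ids)
```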


2020
Vol 17 (12)
pp. 5205-5209
Author(s):  
Ali Elbialy ◽  
M. A. El-Dosuky ◽  
Ibrahim M. El-Henawy

Third-generation sequencing (TGS) produces long reads but with relatively high error rates, which makes read quality assessment an active research topic. This paper combines and investigates three quality-related metrics: basecalling accuracy, Phred quality scores, and GC content. For basecalling accuracy, a deep neural network is adopted; the measured loss does not exceed 5.42.
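For reference, the two non-learned metrics are standard and easy to compute: the Phred quality score is Q = -10 log10(P_error), and GC content is the fraction of G and C bases. The sketch below shows both; it is generic, not the paper's code.

```python
import math

def phred_quality(error_prob):
    """Phred quality score: Q = -10 * log10(P_error)."""
    return -10 * math.log10(error_prob)

def gc_content(seq):
    """Fraction of G and C bases in a sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

print(phred_quality(0.001))    # ~30: one expected error in 1000 bases
print(gc_content("ACGTGGCC"))  # 0.75
```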


2017
Author(s):  
Krešimir Križanović ◽  
Ivan Sović ◽  
Ivan Krpelnik ◽  
Mile Šikić

Abstract Next-generation sequencing technologies have made RNA sequencing widely accessible and applicable in many areas of research. In recent years, third-generation sequencing technologies have matured and are slowly replacing NGS for DNA sequencing. This paper presents a novel tool for RNA mapping guided by gene annotations. The tool is an adapted version of a previously developed DNA mapper, GraphMap, tailored for third-generation sequencing data such as those produced by Pacific Biosciences or Oxford Nanopore Technologies devices. It uses gene annotations to generate a transcriptome, uses a DNA mapping algorithm to map reads to the transcriptome, and finally transforms the mappings back to genome coordinates. The modified version of GraphMap is compared on several synthetic datasets to state-of-the-art RNA-seq mappers that can work with third-generation sequencing data. The results show that our tool outperforms other tools in general mapping quality.
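The final coordinate transformation can be sketched as follows: a position on the transcript is walked through the transcript's exon intervals to recover the genomic coordinate. This minimal version assumes a forward-strand transcript with 0-based, end-exclusive exons and is not GraphMap's implementation.

```python
def transcript_to_genome(pos, exons):
    """Map a 0-based position on a transcript back to a genomic coordinate,
    given the transcript's exons as (genome_start, genome_end) intervals
    (0-based, end-exclusive, forward strand)."""
    offset = pos
    for start, end in exons:
        length = end - start
        if offset < length:
            return start + offset  # position falls inside this exon
        offset -= length           # skip over this exon and the intron
    raise ValueError("position beyond transcript length")

exons = [(100, 150), (300, 360)]        # two exons: 50 bp + 60 bp
print(transcript_to_genome(10, exons))  # 110 (10 bp into exon 1)
print(transcript_to_genome(70, exons))  # 320 (20 bp into exon 2)
```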


BMC Genomics
2020
Vol 21 (S10)
Author(s):  
Jiaqi Liu ◽  
Jiayin Wang ◽  
Xiao Xiao ◽  
Xin Lai ◽  
Daocheng Dai ◽  
...  

Abstract Background The emergence of third-generation sequencing technology, featuring longer read lengths, has brought great advances over next-generation sequencing technology and greatly promoted biological research. However, third-generation sequencing data have high sequencing error rates, which inevitably affect downstream analysis. Although sequencing error rates have been improving in recent years, large amounts of data were produced at high error rates, and discarding them would be a huge waste. Thus, error correction for third-generation sequencing data is especially important. Existing error correction methods perform poorly at heterozygous sites, which are ubiquitous in diploid and polyploid organisms. Error correction algorithms for heterozygous loci are therefore lacking, especially at low coverage. Results In this article, we propose an error correction method named QIHC. QIHC is a hybrid correction method, which requires both next-generation and third-generation sequencing data. QIHC greatly enhances the sensitivity of distinguishing heterozygous sites from sequencing errors, which leads to high accuracy in error correction. To achieve this, QIHC establishes a set of probabilistic models based on a Bayesian classifier to estimate the heterozygosity of a site and makes a judgment by calculating the posterior probabilities. The proposed method consists of three modules, which respectively generate a pseudo-reference sequence, obtain the read alignments, and estimate the heterozygosity of sites and correct the reads harboring them. The last module is the core module of QIHC and is designed to handle the calculations of multiple cases at a heterozygous site. The other two modules enable mapping the reads to the pseudo-reference sequence, which overcomes the inefficiency of the multiple mappings adopted by existing error correction methods. Conclusions To verify the performance of our method, we selected Canu and Jabba to compare with QIHC in several aspects. As QIHC is a hybrid correction method, we first conducted a group of experiments under different coverages of next-generation sequencing data; QIHC is far ahead of Jabba in accuracy. We then varied the coverage of the third-generation sequencing data and compared performance again among Canu, Jabba, and QIHC. QIHC outperforms the other two methods in the accuracy of both correcting sequencing errors and identifying heterozygous sites, especially at low coverage. We also carried out a comparison between Canu and QIHC at different error rates of the third-generation sequencing data; QIHC still performs better. Therefore, QIHC is superior to existing error correction methods when heterozygous sites are present.
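The core Bayesian judgment can be sketched for a single site: compare a homozygous model, in which alternate bases arise only from sequencing error, against a heterozygous model, in which they are expected at roughly 50%, and report the posterior. The error rate and prior below are assumed values, and the two binomial models are a simplification of QIHC's full set of models.

```python
from math import comb

def posterior_heterozygous(n_ref, n_alt, error_rate=0.01, prior_het=0.001):
    """Toy Bayesian classifier for one site: given counts of reference- and
    alternate-supporting bases, return P(heterozygous | counts) under
    binomial likelihoods for the alternate-allele count."""
    n = n_ref + n_alt
    # Homozygous: alt bases are sequencing errors at rate error_rate.
    lik_hom = comb(n, n_alt) * error_rate**n_alt * (1 - error_rate)**n_ref
    # Heterozygous: alt bases drawn at ~50% from the second haplotype.
    lik_het = comb(n, n_alt) * 0.5**n
    post = lik_het * prior_het
    return post / (post + lik_hom * (1 - prior_het))

print(round(posterior_heterozygous(12, 8), 4))  # ~1.0: balanced counts
print(round(posterior_heterozygous(19, 1), 4))  # ~0.0: likely an error
```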


2020
Author(s):  
Abdulqader Jighly

Abstract Indexing of DNA sequences is the art of sorting massive genomic data into a user-friendly structure to enable rapid access and comparison of different patterns in the data. Current genome assemblers use general algorithms for string indexing that do not exploit the special structural arrangement of genomes. Here, I propose a new algorithm that indexes only the configuration of microsatellite motifs along reads, assuming that the order of microsatellites will be the same in overlapping sequences. The index is >1,000 times smaller than currently used indices and has higher tolerance to the high error rates produced by third-generation sequencing platforms. The results showed that the proposed algorithm can rapidly detect overlaps among a considerable proportion of uncorrected long reads (~50% of all simulated base pairs, with an average read size of 8.16 kb and a total error rate of 14.4%) to build large initial contigs. Unassembled reads can then be mapped to these contigs or assembled with them using currently available algorithms. Thus, the proposed algorithm can be used efficiently as an initial stage to significantly reduce the number of pairwise sequence comparisons among reads and/or references and to improve the performance of different software, not to replace it. The algorithm was also useful for comparative genomics, detecting large locally collinear blocks and structural variations among ten Saccharomyces cerevisiae strains. The proposed algorithm has the power to make de novo assembly of individuals a routine activity, which can lead to more accurate variant calling and pan-genomics.
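A minimal version of such an index can be sketched by reducing each read to the ordered list of its short tandem repeat motifs and testing overlap on runs of that signature, following the stated assumption that overlapping reads preserve motif order. The motif length cap, copy-number threshold, and k-gram overlap test below are illustrative choices, not the published algorithm.

```python
import re

def motif_signature(read, min_copies=3):
    """Extract the ordered sequence of short tandem repeat motifs (1-3 bp
    units repeated >= min_copies times) along a read. Overlapping reads
    should share a run of this signature even when individual bases err."""
    pattern = re.compile(r"(.{1,3}?)\1{%d,}" % (min_copies - 1))
    return tuple(m.group(1) for m in pattern.finditer(read))

def signatures_overlap(sig_a, sig_b, k=2):
    """Crude overlap test: do the signatures share k consecutive motifs?"""
    ngrams = {sig_a[i:i + k] for i in range(len(sig_a) - k + 1)}
    return any(sig_b[i:i + k] in ngrams for i in range(len(sig_b) - k + 1))

a = motif_signature("TTTTTGG" + "ACACACAC" + "GGA" + "TGTGTGTG")
b = motif_signature("ACACACAC" + "CGA" + "TGTGTGTG" + "AAAAA")
print(a, b, signatures_overlap(a, b))  # shared run ('AC', 'TG') -> True
```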


2021
Author(s):  
Marek Kokot ◽  
Adam Gudys ◽  
Heng Li ◽  
Sebastian Deorowicz

The cost of maintaining the exabytes of data produced by sequencing experiments every year has become a major issue in today's genomics. In spite of the increasing popularity of third-generation sequencing, the existing algorithms for compressing long reads exhibit only a minor advantage over general-purpose gzip. We present CoLoRd, an algorithm able to reduce third-generation sequencing data by an order of magnitude without affecting the accuracy of downstream analyses.


2019
Author(s):  
Camille Marchet ◽  
Pierre Morisse ◽  
Lolita Lecompte ◽  
Arnaud Lefebvre ◽  
Thierry Lecroq ◽  
...  

Abstract Motivation In the last few years, the error rates of third-generation sequencing data have remained capped above 5%, including many insertions and deletions. An increasing number of long-read correction methods, both hybrid and self-correcting, have therefore been proposed to reduce the noise in these sequences. As the quality of the error correction has a huge impact on downstream processes, developing methods that evaluate error correction tools with precise and reliable statistics is a crucial need. Since error correction is often a resource bottleneck in long-read pipelines, a key requirement for assessment methods is efficiency, so that different tools can be compared quickly. Results We propose ELECTOR, a reliable and efficient tool to evaluate long-read correction that supports both hybrid and self-correction methods. Our tool provides a complete and relevant set of metrics to assess the read quality improvement after correction and scales to large datasets. ELECTOR is directly compatible with a wide range of state-of-the-art error correction tools, using either simulated or real long reads. We show that ELECTOR provides a wider range of metrics than the state-of-the-art tool LRCstats and, importantly, decreases the runtime needed for assessment on all the studied datasets. Availability ELECTOR is available at https://github.com/kamimrcht/ELECTOR. Contact: [email protected] or [email protected]
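The kind of summary statistic such an assessment tool reports can be sketched with read triplets: for each (raw, corrected, reference) triple, the error rate is the edit distance to the reference divided by the reference length, computed before and after correction. This is a generic illustration, not ELECTOR's multiple-alignment-based method.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def error_rates(triplets):
    """For (raw, corrected, reference) read triplets, report the error
    rate before and after correction, normalized by reference length."""
    raw_err = sum(edit_distance(r, ref) for r, _, ref in triplets)
    cor_err = sum(edit_distance(c, ref) for _, c, ref in triplets)
    total = sum(len(ref) for _, _, ref in triplets)
    return raw_err / total, cor_err / total

triplets = [("ACGTTAGA", "ACGTTACA", "ACGTTACA"),
            ("TTGACC", "TTGACG", "TTGACG")]
print(error_rates(triplets))  # (before, after) -> (~0.143, 0.0)
```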

