HapCHAT: adaptive haplotype assembly for efficiently leveraging high coverage in long reads

2018 ◽  
Vol 19 (1) ◽  
Author(s):  
Stefano Beretta ◽  
Murray D. Patterson ◽  
Simone Zaccaria ◽  
Gianluca Della Vedova ◽  
Paola Bonizzoni

Abstract
Background: Haplotype assembly is the process of assigning the different alleles of the variants covered by mapped sequencing reads to the two haplotypes of the genome of a human individual. Long reads, which are nowadays cheaper to produce and more widely available than ever before, have been used to reduce the fragmentation of the assembled haplotypes, owing to their ability to span several variants along the genome. These long reads are, however, also characterized by a high error rate, an issue which may be mitigated with larger sets of reads when this error rate is uniform across genome positions. Unfortunately, current state-of-the-art dynamic programming approaches designed for long reads deal only with limited coverages.
Results: Here, we propose a new method for assembling haplotypes which combines and extends the features of previous approaches to deal with long reads and higher coverages. In particular, our algorithm is able to dynamically adapt the estimated number of errors at each variant site, while minimizing the total number of error corrections necessary for finding a feasible solution. This allows our method to significantly reduce the required computational resources and thus to handle datasets with higher coverages. The algorithm has been implemented in a freely available tool, HapCHAT: Haplotype Assembly Coverage Handling by Adapting Thresholds. An experimental analysis on sequencing reads with up to 60× coverage reveals improvements in accuracy and recall achieved by considering a higher coverage, with lower runtimes.
Conclusions: Our method leverages the long-range information of sequencing reads to obtain assembled haplotypes fragmented into a lower number of unphased haplotype blocks. At the same time, our method is also able to deal with higher coverages to better correct the errors in the original reads and, as a result, to obtain more accurate haplotypes.
Availability: HapCHAT is available at http://hapchat.algolab.eu under the GPL license.
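The abstract above describes minimizing the total number of error corrections needed to make a bipartition of reads consistent with two haplotypes. As a rough illustration of that objective only (not HapCHAT's adaptive dynamic programming algorithm), the minimum-error-correction cost of a fixed read bipartition can be scored per variant site; the reads and sites below are invented toy data:

```python
from collections import Counter

def mec_cost(groups):
    """groups: two lists of reads; each read maps variant site -> allele (0/1)."""
    cost = 0
    sites = {s for grp in groups for read in grp for s in read}
    for site in sites:
        for grp in groups:
            alleles = [read[site] for read in grp if site in read]
            if alleles:
                # corrections at this site = reads disagreeing with the
                # group's majority allele
                cost += len(alleles) - max(Counter(alleles).values())
    return cost

reads = [{0: 0, 1: 0, 2: 1}, {0: 0, 1: 1, 2: 1},   # mostly one haplotype
         {0: 1, 1: 1, 2: 0}, {0: 1, 1: 1, 2: 0}]   # mostly the other
print(mec_cost([reads[:2], reads[2:]]))  # one allele must be flipped -> 1
```

Searching over bipartitions for the one minimizing this cost is what makes the full problem computationally hard as coverage grows.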


2019 ◽  
Author(s):  
Alberto Magi

Abstract
Background: Human genomes are diploid: they have two homologous copies of each chromosome, and the assignment of heterozygous variants to each chromosome copy, the haplotype assembly problem, is of fundamental importance for medical and population genetics. While short reads from second-generation sequencing platforms drastically limit haplotype reconstruction, as the great majority of reads cannot link many variants together, novel long reads from third-generation sequencing can span several variants along the genome, allowing much longer haplotype blocks to be inferred. However, the great majority of haplotype assembly algorithms, originally devised for short sequences, fail when applied to noisy long-read data, and although novel algorithms have been developed to deal with the properties of this new generation of sequences, these methods can manage only datasets with limited coverages.
Results: To overcome the limits of currently available algorithms, I propose a novel formulation of the single-individual haplotype assembly problem, based on maximum allele co-occurrence (MAC), and I develop an ultra-fast algorithm that can reconstruct the haplotype structure of a diploid genome from low- and high-coverage long-read datasets with high accuracy. I test my algorithm (MAtCHap) on synthetic and real PacBio and Nanopore human datasets and compare its results with eight other state-of-the-art algorithms. All of these analyses show that MAtCHap outperforms the other methods in terms of accuracy, contiguity, completeness and computational speed.
Availability: MAtCHap is publicly available at https://sourceforge.net/projects/matchap/.
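As a hedged sketch of the allele co-occurrence idea named above (an illustration of the concept only, not the MAtCHap algorithm), a pair of adjacent heterozygous sites can be linked by the allele combination that co-occurs on the most reads; the reads below are invented toy data:

```python
from collections import Counter

def phase_adjacent(reads, site_a, site_b):
    """Link two het sites by the allele pair with maximal read support."""
    pairs = Counter((r[site_a], r[site_b]) for r in reads
                    if site_a in r and site_b in r)
    (a, b), _ = pairs.most_common(1)[0]
    # alleles a and b co-occur most often -> place them on one haplotype,
    # their complements on the other
    return (a, b), (1 - a, 1 - b)

reads = [{0: 0, 1: 1}, {0: 0, 1: 1},   # two reads supporting 0-1 linkage
         {0: 1, 1: 0},                  # one read supporting the complement
         {0: 0, 1: 0}]                  # one noisy read
print(phase_adjacent(reads, 0, 1))      # -> ((0, 1), (1, 0))
```

Chaining such maximally supported links along the genome yields a haplotype structure without enumerating read bipartitions, which is what makes a co-occurrence formulation fast.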


Author(s):  
Guangtu Gao ◽  
Susana Magadan ◽  
Geoffrey C Waldbieser ◽  
Ramey C Youngblood ◽  
Paul A Wheeler ◽  
...  

Abstract
Currently, there is still a need to improve the contiguity of the rainbow trout reference genome and to use multiple genetic backgrounds that represent the genetic diversity of this species. The Arlee doubled haploid line originated from a domesticated hatchery strain that was originally collected from the northern California coast. The Canu pipeline was used to generate a de novo assembly of the Arlee line genome from high-coverage PacBio long-read sequence data. The assembly was further improved with Bionano optical maps and Hi-C proximity ligation sequence data to generate 32 major scaffolds corresponding to the karyotype of the Arlee line (2N = 64). It is composed of 938 scaffolds with an N50 of 39.16 Mb and a total length of 2.33 Gb, of which ∼95% is in 32 chromosome sequences with only 438 gaps between contigs and scaffolds. In rainbow trout the haploid chromosome number can vary from 29 to 32; in the Arlee karyotype it is 32 because chromosomes Omy04, 14 and 25 are divided into six acrocentric chromosomes. Additional structural variations identified in the Arlee genome include the major inversions on chromosomes Omy05 and Omy20 and 15 smaller inversions that will require further validation. This is also the first rainbow trout genome assembly that includes a scaffold with the sex-determination gene (sdY) in the chromosome Y sequence. The utility of this genome assembly is demonstrated through the improved annotation of the duplicated genome loci that harbor the IGH genes on chromosomes Omy12 and Omy13.


Author(s):  
Chian Teng Ong ◽  
Elizabeth M Ross ◽  
Gry B Boe-Hansen ◽  
Conny Turni ◽  
Ben J Hayes ◽  
...  

Abstract Animal metagenomic studies, in which host-associated microbiomes are profiled, are an increasingly important contribution to our understanding of the physiological functions, health and disease susceptibility of livestock. One of the major challenges in these studies is host DNA contamination, which limits the sequencing capacity for metagenomic content and reduces the accuracy of metagenomic profiling. This is the first study to compare the effectiveness of different sequencing methods for profiling bovine vaginal metagenomic samples. We compared the new method of Oxford Nanopore Technologies (ONT) adaptive sequencing, which can be used to target or eliminate defined genetic sequences, to standard ONT sequencing, Illumina 16S rDNA amplicon sequencing, and Illumina shotgun sequencing. The efficiency of each method in recovering the metagenomic data and recalling the metagenomic profiles was assessed. Per 1 Gb of sequence data, ONT adaptive sequencing yielded more metagenomic data than the other methods. This increased sequencing efficiency reduced the amount of raw data needed to provide sufficient coverage for metagenomic samples with a high host-to-microbe DNA ratio. Additionally, the long reads generated by ONT adaptive sequencing retained the continuity of read information, which benefited in-depth annotation of both the taxonomic and functional profiles of the metagenome. The different methods resulted in the identification of different taxa. The genus Clostridium, which was identified at low abundance and categorised under the order "Unclassified Clostridiales" with the 16S rDNA amplicon sequencing method, was found to be the dominant genus in the sample when sequenced with the three other methods. Additionally, higher numbers of annotated genes were identified with ONT adaptive sequencing, which also produced high coverage on most of the commonly annotated genes. 
This study illustrates the advantages of ONT adaptive sequencing in improving the amount of metagenomic data derived from microbiome samples with a high host-to-microbe DNA ratio, and the advantage of long reads in preserving intact information for accurate annotations.
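A back-of-the-envelope sketch of why adaptive sequencing helps with host-dominated samples: a rejected host molecule costs only the few hundred bases sampled before ejection rather than its full read length. All numbers below are hypothetical, not taken from the study:

```python
def raw_needed(target_gb, host_frac, eject=False, sampled_bp=500, read_bp=8000):
    """Raw gigabases required to collect target_gb of microbial sequence."""
    microbe_frac = 1 - host_frac
    # without ejection a host read consumes its full length; with adaptive
    # sampling a rejected host read costs only the sampled prefix
    host_cost = (sampled_bp / read_bp) if eject else 1.0
    per_unit_yield = microbe_frac / (microbe_frac + host_frac * host_cost)
    return target_gb / per_unit_yield

# sample that is 90% host DNA, target of 1 Gb microbial data
print(round(raw_needed(1.0, 0.9), 2))              # standard sequencing
print(round(raw_needed(1.0, 0.9, eject=True), 2))  # adaptive ejection
```

Under these toy assumptions adaptive ejection cuts the raw data requirement several-fold, which mirrors the efficiency argument made in the abstract.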


2014 ◽  
Author(s):  
Konstantin Berlin ◽  
Sergey Koren ◽  
Chen-Shan Chin ◽  
James Drake ◽  
Jane M Landolin ◽  
...  

We report reference-grade de novo assemblies of four model organisms and the human genome from single-molecule, real-time (SMRT) sequencing. Long-read SMRT sequencing is routinely used to finish microbial genomes, but the available assembly methods have not scaled well to larger genomes. Here we introduce the MinHash Alignment Process (MHAP) for efficient overlapping of noisy, long reads using probabilistic, locality-sensitive hashing. Together with Celera Assembler, MHAP was used to reconstruct the genomes of Escherichia coli, Saccharomyces cerevisiae, Arabidopsis thaliana, Drosophila melanogaster, and human from high-coverage SMRT sequencing. The resulting assemblies include fully resolved chromosome arms and close persistent gaps in these important reference genomes, including heterochromatic and telomeric transition sequences. For D. melanogaster, MHAP achieved a 600-fold speedup relative to prior methods and a cloud computing cost of a few hundred dollars. These results demonstrate that single-molecule sequencing alone can produce near-complete eukaryotic genomes at modest cost.
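The probabilistic, locality-sensitive hashing step described above can be illustrated with a toy MinHash sketch: each read is reduced to the smallest hash values of its k-mers, and sketch agreement estimates the Jaccard similarity of the full k-mer sets. This is an illustration of the general technique, not the MHAP implementation, and the reads are invented:

```python
import hashlib

def kmers(seq, k=5):
    # all k-length substrings of the read
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def sketch(seq, k=5, size=16):
    # keep only the `size` smallest hash values as the read's sketch
    hashes = sorted(int(hashlib.sha1(m.encode()).hexdigest(), 16)
                    for m in kmers(seq, k))
    return set(hashes[:size])

def est_similarity(a, b, size=16):
    # fraction of the smallest union hashes present in both sketches
    # estimates the Jaccard similarity of the full k-mer sets
    sa, sb = sketch(a, size=size), sketch(b, size=size)
    merged = sorted(sa | sb)[:size]
    return sum(1 for h in merged if h in sa and h in sb) / len(merged)

overlap = "ACGTACGGTTACGATCCGTA"
read1 = "TTTTGCGC" + overlap          # two reads sharing a 20 bp overlap
read2 = overlap + "GGGGCATA"
print(est_similarity(read1, read1))   # identical reads -> 1.0
print(est_similarity(read1, read2))   # overlapping reads share many k-mers
```

Because sketches are tiny and fixed-size, candidate overlaps can be found without comparing full reads, which is the source of MHAP's speedup.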


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Shaya Akbarinejad ◽  
Mostafa Hadadian Nejad Yousefi ◽  
Maziar Goudarzi

Abstract
Background: Once aligned, long reads can be a useful source of information for identifying the type and position of structural variations. However, due to the high sequencing error rate of long reads, long-read structural variation detection methods are far from precise in low-coverage cases. To be accurate, they need high-coverage data, which in turn results in an extremely time-consuming pipeline, especially in the alignment phase. Therefore, it is of utmost importance to have a structural variation calling pipeline which is both fast and precise for low-coverage data.
Results: In this paper, we present SVNN, a fast yet accurate structural variation calling pipeline for PacBio long reads that takes raw reads as input and detects structural variants larger than 50 bp. Our pipeline utilizes state-of-the-art long-read aligners, namely NGMLR and Minimap2, and structural variation callers, namely Sniffles and SVIM. We found that by using a neural network, we can extract features from Minimap2 output to detect a subset of reads that provide useful information for structural variation detection. By mapping only this subset with NGMLR, which is far slower than Minimap2 but better serves downstream structural variation detection, we can increase sensitivity in an efficient way. As a result of using multiple tools intelligently, SVNN achieves up to 20 percentage points of sensitivity improvement over state-of-the-art methods and is three times faster than a naive combination of state-of-the-art tools achieving almost the same accuracy.
Conclusion: Since the prohibitive cost of high-coverage data has impeded long-read applications, SVNN provides users with a much faster structural variation detection platform for PacBio reads, with high precision and sensitivity in low-coverage scenarios.
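The routing idea described above, aligning everything with the fast aligner and re-aligning only the reads a classifier flags as informative, can be sketched as follows. `looks_sv_informative` is a hypothetical rule-of-thumb stand-in for the paper's neural network, and the feature names and reads are invented:

```python
def looks_sv_informative(features):
    # hypothetical stand-in: reads with large clipped portions or split
    # alignments often signal structural variants
    return features["clipped_frac"] > 0.2 or features["split"]

def route_reads(aligned):
    """Split reads into fast-aligner-only and re-align-with-slow-aligner sets."""
    fast_only, realign = [], []
    for name, features in aligned.items():
        (realign if looks_sv_informative(features) else fast_only).append(name)
    return fast_only, realign

aligned = {"read1": {"clipped_frac": 0.05, "split": False},
           "read2": {"clipped_frac": 0.40, "split": False},
           "read3": {"clipped_frac": 0.02, "split": True}}
print(route_reads(aligned))  # -> (['read1'], ['read2', 'read3'])
```

The speedup comes from the slow, SV-friendly aligner seeing only the small flagged subset instead of the whole read set.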


2021 ◽  
Vol 99 (Supplement_3) ◽  
pp. 241-242
Author(s):  
Karim Karimi ◽  
Duy Ngoc Do ◽  
Younes Miar

Abstract
Development of genome-enabled selection and new insights into the genetic architecture of economically important traits are essential parts of mink breeding programs. Availability of a contiguous genome assembly would underpin fundamental genomic studies in American mink (Neovison vison). Advances in long-read sequencing technologies have provided the opportunity to obtain high-quality, gap-free assemblies for different species. The objective of this study was to generate an accurate genome assembly for American mink using single-molecule high-fidelity (HiFi) sequencing. The whole-genome sequences of 100 mink were analyzed to select the most homozygous individual. A black American mink from Millbank Fur Farm (Rockwood, ON, Canada) was selected for PacBio sequencing. A total of 2,884,047 HiFi reads with an average size of 20 kb were generated from three libraries on the PacBio Sequel II System. Three de novo assemblers, wtdbg, Flye and IPA, were used to obtain initial draft assemblies from the long reads. The draft generated by Flye was selected as the final assembly based on contiguity and completeness metrics. The final assembly included 3,529 contigs with an N50 of 18.26 Mb and a largest contig of 62.16 Mb. The length of the genome assembly was 2.66 Gb, with 85 gaps. These results confirmed that high-coverage, accurate long reads significantly improved the American mink genome assembly and generated a more contiguous assembly. Chromosome conformation capture data will be integrated into the current draft to obtain a chromosome-level genome assembly for American mink in the next step of the project.
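The N50 contiguity metric quoted in several of the abstracts above is the contig length at which half of the total assembly length is contained in contigs at least that long. A small helper with toy lengths (not the mink assembly) shows the computation:

```python
def n50(lengths):
    """Largest length L such that contigs of length >= L cover half the total."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

# total 300; walking down from the largest contig, 100 + 80 = 180 >= 150
print(n50([100, 80, 60, 40, 20]))  # -> 80
```

Unlike the mean contig length, N50 rewards a few very long contigs, which is why it is the standard headline metric for assembly contiguity.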


2015 ◽  
Vol 32 (11) ◽  
pp. 1610-1617 ◽  
Author(s):  
Yuri Pirola ◽  
Simone Zaccaria ◽  
Riccardo Dondi ◽  
Gunnar W. Klau ◽  
Nadia Pisanti ◽  
...  

Author(s):  
Sergey Nurk ◽  
Brian P. Walenz ◽  
Arang Rhie ◽  
Mitchell R. Vollger ◽  
Glennis A. Logsdon ◽  
...  

Abstract
Complete and accurate genome assemblies form the basis of most downstream genomic analyses and are of critical importance. Recent genome assembly projects have relied on a combination of noisy long-read sequencing and accurate short-read sequencing, with the former offering greater assembly continuity and the latter providing higher consensus accuracy. The recently introduced PacBio HiFi sequencing technology bridges this divide by delivering long reads (>10 kbp) with high per-base accuracy (>99.9%). Here we present HiCanu, a significant modification of the Canu assembler designed to leverage the full potential of HiFi reads via homopolymer compression, overlap-based error correction, and aggressive false overlap filtering. We benchmark HiCanu with a focus on the recovery of haplotype diversity, major histocompatibility complex (MHC) variants, satellite DNAs, and segmental duplications. For diploid human genomes sequenced to 30× HiFi coverage, HiCanu achieved superior accuracy and allele recovery compared to the current state of the art. On the effectively haploid CHM13 human cell line, HiCanu achieved an NG50 contig size of 77 Mbp with a per-base consensus accuracy of 99.999% (QV50), surpassing recent assemblies of high-coverage, ultra-long Oxford Nanopore reads in terms of both accuracy and continuity. This HiCanu assembly correctly resolves 337 out of 341 validation BACs sampled from known segmental duplications and provides the first preliminary assemblies of 9 complete human centromeric regions. Although gaps and errors still remain within the most challenging regions of the genome, these results represent a significant advance towards the complete assembly of human genomes.
Availability: HiCanu is implemented within the Canu assembly framework and is available from https://github.com/marbl/canu.
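Of the HiCanu ingredients named above, homopolymer compression is the simplest to illustrate: each run of identical bases is collapsed to a single base before overlapping, since homopolymer length is the dominant HiFi error mode. A minimal sketch of the transformation (not the HiCanu code):

```python
from itertools import groupby

def hpc(seq):
    # keep one base per run of identical bases
    return "".join(base for base, _ in groupby(seq))

print(hpc("GATTTACAAA"))                 # -> GATACA
# reads differing only in homopolymer run lengths compress identically,
# so such differences no longer break overlaps
print(hpc("AAACCCGGG") == hpc("ACCG"))   # -> True
```

Overlaps are computed in compressed space and the run lengths are restored afterwards during consensus.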


2021 ◽  
Vol 66 (2) ◽  
Author(s):  
Anton Shikov ◽  
Viktoriya Tsay ◽  
Mikhail Fedyakov ◽  
Yuri Eismont ◽  
Alena Rudnik ◽  
...  

The emergence of long-read sequencing technologies has made a revolutionary step in genome biology and medicine. However, long reads are characterized by a relatively high error rate, impairing their use for variant calling in routine practice. We therefore examine several popular variant callers on long-read sequences of the human mitochondrial genome, which is convenient owing to its small size and easily obtained high coverage. Mitochondrial DNA from 8 patients was sequenced on the Illumina (MiSeq) and Oxford Nanopore (MinION) platforms, with the former used as a gold standard when evaluating variant-calling accuracy. We used a conventional GATK3-BWA-based pipeline for the paired-end reads and the Guppy basecaller coupled with minimap2 for the MinION data. We then compared the outputs of the Clairvoyante, Nanopolish, GATK3, Longshot, DeepVariant, and Varscan tools applied to the long-read alignments by analyzing false-positive and false-negative rates. While for most callers the raw calls were dominated by false positives due to homopolymer errors, Nanopolish demonstrated both high similarity to the Illumina calls (Jaccard coefficient of 0.82) and a comparable number of calls (140 vs. 154), with the best performance according to AUC (area under the ROC curve, 0.953) as well. In sum, our results, despite being obtained from a small dataset, provide evidence that sufficient coverage coupled with an optimal pipeline could make long reads of mitochondrial DNA applicable for variant calling.
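The Jaccard coefficient used above to compare call sets is simply the size of their intersection divided by the size of their union. A minimal sketch on hypothetical variant calls (position, ref, alt), not the study's data:

```python
def jaccard(calls_a, calls_b):
    """Jaccard coefficient between two sets of variant calls."""
    a, b = set(calls_a), set(calls_b)
    return len(a & b) / len(a | b)

# invented call sets: 3 variants shared out of 5 distinct overall
illumina = {(73, "A", "G"), (263, "A", "G"), (750, "A", "G"), (1438, "A", "G")}
nanopore = {(73, "A", "G"), (263, "A", "G"), (750, "A", "G"), (3107, "C", "T")}
print(jaccard(illumina, nanopore))  # -> 0.6
```

A coefficient of 1.0 means the two callers agree exactly; values near 0 mean almost disjoint call sets.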

