Optical Genome Mapping in Routine Human Genetic Diagnostics—Its Advantages and Limitations

Genes ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 1958
Author(s):  
Paul Dremsek ◽  
Thomas Schwarz ◽  
Beatrix Weil ◽  
Alina Malashka ◽  
Franco Laccone ◽  
...  

In recent years, optical genome mapping (OGM) has developed into a highly promising method of detecting large-scale structural variants in human genomes. It is capable of detecting structural variants considered difficult to detect by other current methods. Hence, it promises to be feasible as a first-line diagnostic tool, permitting insight into a new realm of previously unknown variants. However, due to its novelty, little experience with OGM is available to infer best practices for its application or to clarify which features cannot be detected. In this study, we used the Saphyr system (Bionano Genomics, San Diego, CA, USA) to explore its capabilities in human genetic diagnostics. To this end, we tested 14 DNA samples to confirm a total of 14 different structural or numerical chromosomal variants originally detected by other means, namely, deletions, duplications, inversions, trisomies, and a translocation. Overall, 12 variants could be confirmed; one deletion and one inversion could not. The prerequisites for detection of similar variants were explored by reviewing the OGM data of 54 samples analyzed in our laboratory. Limitations, some owing to the novelty of the method and some inherent to it, are described. Finally, we describe the successful application of OGM in routine diagnostics and some of the challenges that merit consideration when utilizing OGM as a diagnostic tool.
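The confirmation step described here amounts to checking whether an OGM call of the matching type overlaps the coordinates of a variant already known from another method. A minimal sketch of such a check, assuming a simple reciprocal-overlap criterion and invented call coordinates rather than actual Saphyr output:

```python
def reciprocal_overlap(a, b):
    """Overlap length of two (start, end) intervals divided by the longer interval."""
    overlap = min(a[1], b[1]) - max(a[0], b[0])
    if overlap <= 0:
        return 0.0
    return overlap / max(a[1] - a[0], b[1] - b[0])


def confirms(known, calls, min_ro=0.5):
    """True if any OGM call matches the known variant's chromosome, type,
    and coordinates with at least `min_ro` reciprocal overlap."""
    return any(
        c["chrom"] == known["chrom"]
        and c["type"] == known["type"]
        and reciprocal_overlap((c["start"], c["end"]),
                               (known["start"], known["end"])) >= min_ro
        for c in calls
    )


# Hypothetical deletion known from chromosomal microarray, and two OGM calls
known_del = {"chrom": "chr7", "type": "deletion", "start": 1_200_000, "end": 1_450_000}
ogm_calls = [
    {"chrom": "chr7", "type": "deletion", "start": 1_190_000, "end": 1_460_000},
    {"chrom": "chr2", "type": "duplication", "start": 5_000_000, "end": 5_300_000},
]
print(confirms(known_del, ogm_calls))  # True: the chr7 call confirms the deletion
```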

2019 ◽  
Vol 292 ◽  
pp. 03007
Author(s):  
Marko Mijač ◽  
Ruben Picek ◽  
Darko Andročec

From the very beginnings of the software era, enterprise environments have been one of the greatest generators of demand for complex software systems. Attempts to satisfy these ever-growing needs for enterprise software systems have had mixed success. Software reuse has been one of the leverage mechanisms software producers had at their disposal, and over the years different reuse approaches have emerged. One of the most successful large-scale reuse approaches in enterprise environments is Enterprise Resource Planning (ERP) systems, which intend to reuse domain analysis and best practices in doing business, as well as software design and implementation. However, in the literature, ERP systems are seldom viewed and described as a reuse approach. Therefore, in this paper we aim to systematically analyze ERP systems from the software reuse perspective in order to gain better insight into their characteristics, constituent elements, and relationships.


Author(s):  
Lynne Siemens

Humanists are participating in collaborations with others in the academy and beyond to explore increasingly complex research questions with technologically oriented methodologies and access to advice, mentoring, technology, knowledge, and funds. Although these projects have clear benefits for all those involved, these collaborations are not without their challenges. Such styles of partnership tend to be more common on the science side of campus. As a result, little is understood about the ways that they might work within the humanities and the range of benefits that can be available to members within a mature collaboration. To this end, this paper will examine the experiences of Implementing New Knowledge Environments (INKE) as a mature, large-scale collaboration working with academic and non-academic partners and will provide some insight into best practices.


Author(s):  
Yiwei Niu ◽  
Xueyi Teng ◽  
Yirong Shi ◽  
Yanyan Li ◽  
Yiheng Tang ◽  
...  

Abstract Mobile element insertions (MEIs) are a major class of structural variants (SVs) and have been linked to many human genetic disorders, including hemophilia, neurofibromatosis, and various cancers. However, human MEI resources from large-scale genome sequencing are still lacking compared to those for SNPs and SVs. Here, we report a comprehensive map of 36,699 non-reference MEIs constructed from 5,675 genomes, comprising 2,998 Chinese samples (∼26.2X, NyuWa) and 2,677 samples from the 1000 Genomes Project (∼7.4X, 1KGP). We discovered that LINE-1 insertions were highly enriched at centromere regions, implying the role of chromosome context in retroelement insertion. After functional annotation, we estimated that MEIs are responsible for about 9.3% of all protein-truncating events per genome. Finally, we built a companion database named HMEID for public use. This resource represents the latest and largest genome-wide study on MEIs and will have broad utility for exploration of human MEI findings.
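The reported centromere enrichment can be illustrated by comparing the observed fraction of insertions falling into centromeric intervals against the fraction of the genome those intervals cover. A toy sketch with invented coordinates, not the NyuWa/1KGP data:

```python
def fraction_in_regions(positions, regions):
    """Fraction of (chrom, pos) insertion sites that fall inside any region."""
    hits = sum(
        any(chrom == r_chrom and r_start <= pos < r_end
            for r_chrom, r_start, r_end in regions)
        for chrom, pos in positions
    )
    return hits / len(positions)


def fold_enrichment(positions, regions, genome_size):
    """Observed in-region fraction divided by the fraction expected by chance."""
    region_bp = sum(end - start for _, start, end in regions)
    observed = fraction_in_regions(positions, regions)
    expected = region_bp / genome_size
    return observed / expected


# Toy data: one centromeric interval on chr1 and three insertion sites
centromeres = [("chr1", 121_700_000, 125_100_000)]
insertions = [("chr1", 122_000_000), ("chr1", 50_000_000), ("chr1", 123_500_000)]
print(round(fold_enrichment(insertions, centromeres, 248_000_000), 1))
```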


Author(s):  
Lynne Siemens ◽  
INKE Research Group

Background: This article examines Implementing New Knowledge Environments’ (INKE) experiences as a mature, large-scale collaboration working with academic and non-academic partners and provides some insight into best practices. It looks at the sixth year of funded research.
Analysis: The study uses semi-structured interviews with questions focused on the nature of collaboration with selected members of the INKE research team. Data analysis employs a grounded theory approach.
Conclusion and implications: The interviewees found the experience of collaborating within INKE to be positive with some ongoing challenges. The team is winding down as it moves into the final year of funded research. This suggests an arc of collaboration, with intensity of collaboration building from the first year to the most intensive time in the middle years and then winding down in the last years of grant funding. This article contributes to those lessons about collaboration by exploring the lived experience of a long-term, large-scale research project.


BMC Genomics ◽  
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Surajit Bhattacharya ◽  
Hayk Barseghyan ◽  
Emmanuèle C. Délot ◽  
Eric Vilain

Abstract
Background: Whole genome sequencing is effective at identification of small variants, but because it is based on short reads, assessment of structural variants (SVs) is limited. The advent of Optical Genome Mapping (OGM), which utilizes long fluorescently labeled DNA molecules for de novo genome assembly and SV calling, has allowed for increased sensitivity and specificity in SV detection. However, compared to small variant annotation tools, OGM-based SV annotation software has seen little development, and currently available SV annotation tools do not provide sufficient information for determination of variant pathogenicity.
Results: We developed an R-based package, nanotatoR, which provides comprehensive annotation as a tool for SV classification. nanotatoR uses both external (DGV; DECIPHER; Bionano Genomics BNDB) and internal (user-defined) databases to estimate SV frequency. Human genome reference GRCh37/38-based BED files are used to annotate SVs with overlapping, upstream, and downstream genes. Overlap percentages and distances for nearest genes are calculated and can be used for filtration. A primary gene list is extracted from public databases based on the patient’s phenotype and used to filter genes overlapping SVs, providing the analyst with an easy way to prioritize variants. If available, expression of overlapping or nearby genes of interest is extracted (e.g. from an RNA-Seq dataset), allowing the user to assess the effects of SVs on the transcriptome. Most quality-control filtration parameters are customizable by the user. The output is given in an Excel file format, subdivided into multiple sheets based on SV type and inheritance pattern (INDELs, inversions, translocations, de novo, etc.). nanotatoR passed all quality and run time criteria of Bioconductor, where it was accepted in the April 2019 release. We evaluated nanotatoR’s annotation capabilities using publicly available reference datasets: the singleton sample NA12878, mapped with two types of enzyme labeling, and the NA24143 trio. nanotatoR was also able to accurately filter the known pathogenic variants in a cohort of patients with Duchenne Muscular Dystrophy for which we had previously demonstrated the diagnostic ability of OGM.
Conclusions: The extensive annotation enables users to rapidly identify potential pathogenic SVs, a critical step toward use of OGM in the clinical setting.
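The core annotation step, intersecting each SV with a gene list and recording overlapping plus nearest upstream/downstream genes, can be sketched generically. The Python sketch below is not nanotatoR's R API; gene coordinates and field names are illustrative assumptions:

```python
# Generic sketch of SV-to-gene annotation (by coordinate, ignoring strand).
def annotate_sv(sv, genes):
    """Return overlapping genes and the nearest upstream/downstream genes."""
    chrom_genes = [g for g in genes if g["chrom"] == sv["chrom"]]
    overlapping = [g["name"] for g in chrom_genes
                   if g["start"] < sv["end"] and g["end"] > sv["start"]]
    upstream = [g for g in chrom_genes if g["end"] <= sv["start"]]
    downstream = [g for g in chrom_genes if g["start"] >= sv["end"]]
    nearest_up = max(upstream, key=lambda g: g["end"], default=None)
    nearest_down = min(downstream, key=lambda g: g["start"], default=None)
    return {
        "overlapping": overlapping,
        "nearest_upstream": nearest_up["name"] if nearest_up else None,
        "nearest_downstream": nearest_down["name"] if nearest_down else None,
    }


# Illustrative gene intervals and a deletion inside the DMD locus
genes = [
    {"chrom": "chrX", "start": 31_000_000, "end": 33_400_000, "name": "DMD"},
    {"chrom": "chrX", "start": 30_200_000, "end": 30_600_000, "name": "GK"},
]
sv = {"chrom": "chrX", "start": 31_500_000, "end": 31_900_000, "type": "deletion"}
print(annotate_sv(sv, genes))  # overlapping: ['DMD'], nearest upstream: 'GK'
```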


2021 ◽  
Author(s):  
Parsoa Khorsand ◽  
Fereydoun Hormozdiari

Abstract Large-scale catalogs of common genetic variants (including indels and structural variants) are being created using data from second- and third-generation whole-genome sequencing technologies. However, the genotyping of these variants in newly sequenced samples is a nontrivial task that requires extensive computational resources. Furthermore, current approaches are mostly limited to specific types of variants and are generally prone to various errors and ambiguities when genotyping complex events. We propose an ultra-efficient approach for genotyping any type of structural variation that is not limited by the shortcomings and complexities of current mapping-based approaches. Our method, Nebula, utilizes changes in the counts of k-mers to predict the genotype of structural variants. We have shown that Nebula is not only an order of magnitude faster than mapping-based approaches for genotyping structural variants, but also has accuracy comparable to state-of-the-art approaches. Furthermore, Nebula is a generic framework not limited to any specific type of event. Nebula is publicly available at https://github.com/Parsoa/Nebula.
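The underlying idea, that reads supporting the variant allele contribute k-mers absent from the reference junction, can be shown with a toy example. This Python sketch illustrates k-mer-count genotyping in general, not Nebula's implementation; all sequences and thresholds are made up:

```python
from collections import Counter


def kmers(seq, k):
    """All overlapping k-mers of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]


def genotype(reads, ref_only_kmers, alt_only_kmers, k=11):
    """Call 0/0, 0/1, or 1/1 from the ratio of allele-specific k-mer counts."""
    counts = Counter()
    for read in reads:
        for km in kmers(read, k):
            if km in ref_only_kmers:
                counts["ref"] += 1
            elif km in alt_only_kmers:
                counts["alt"] += 1
    total = counts["ref"] + counts["alt"]
    if total == 0:
        return "./."  # no informative k-mers observed
    alt_frac = counts["alt"] / total
    if alt_frac < 0.2:
        return "0/0"
    if alt_frac > 0.8:
        return "1/1"
    return "0/1"


# Synthetic breakpoint-spanning sequences for the reference and variant alleles
ref_junction = "ACGTACGTACGTTTTTGGGGCCCCAAAA"
alt_junction = "ACGTACGTACGTAAAACCCCGGGGTTTT"
ref_k, alt_k = set(kmers(ref_junction, 11)), set(kmers(alt_junction, 11))
reads = [alt_junction[2:25], ref_junction[3:26]]  # one read per allele
print(genotype(reads, ref_k - alt_k, alt_k - ref_k))  # "0/1" for this toy case
```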


ABI-Technik ◽  
2020 ◽  
Vol 40 (4) ◽  
pp. 357-364
Author(s):  
Martin Lee ◽  
Christina Riesenweber

Abstract The authors of this article have been managing a large change project at the university library of Freie Universität Berlin since January 2019. At the time of writing this in the summer of 2020, the project is about halfway completed. With this text, we would like to give some insight into our work and the challenges we faced, thereby starting conversations with similar undertakings in the future.


2021 ◽  
Vol 10 (7) ◽  
pp. 432
Author(s):  
Nicolai Moos ◽  
Carsten Juergens ◽  
Andreas P. Redecker

This paper describes a methodological approach that can analyse socio-demographic and -economic data in large-scale spatial detail. Based on two variables, population density and annual income, we investigate their spatial relationship to identify locations of imbalance or disparities, assisted by bivariate choropleth maps. The aim is to gain a deeper insight into the spatial components of socioeconomic nexuses, such as the relationships between the two variables, especially for high-resolution spatial units. The methodology can assist political decision-making, target-group advertising in the field of geo-marketing, site searches for new shop locations, as well as further socioeconomic research and urban planning. The developed methodology was tested in a national case study in Germany and is easily transferrable to other countries with comparable datasets. The analysis was carried out utilising data about population density and average annual income linked to spatially referenced polygons of postal codes. These were first disaggregated via a readapted three-class dasymetric mapping approach and allocated to large-scale city block polygons. Univariate and bivariate choropleth maps generated from the resulting datasets were then used to identify and compare spatial economic disparities for a study area in North Rhine-Westphalia (NRW), Germany. Subsequently, based on these variables, a multivariate clustering approach was conducted for a demonstration area in Dortmund. The results show that the spatially disaggregated data allow more detailed insight into spatial patterns of socioeconomic attributes than the coarser data related to postal code polygons.
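The disaggregation step can be pictured as splitting each postal-code total across its city blocks in proportion to block area weighted by a land-use class. The following sketch uses invented class weights and block attributes, not the paper's actual three-class parameters:

```python
# Illustrative three-class dasymetric weights (assumed, not from the paper)
CLASS_WEIGHTS = {"residential": 1.0, "mixed": 0.5, "non_residential": 0.05}


def disaggregate(postal_total, blocks):
    """Split a postal-code total over blocks proportionally to area * class weight."""
    scores = [b["area_m2"] * CLASS_WEIGHTS[b["landuse"]] for b in blocks]
    total_score = sum(scores)
    return [
        {**b, "population": postal_total * s / total_score}
        for b, s in zip(blocks, scores)
    ]


# Hypothetical blocks within one postal-code polygon with 1,500 inhabitants
blocks = [
    {"id": "B1", "area_m2": 40_000, "landuse": "residential"},
    {"id": "B2", "area_m2": 60_000, "landuse": "mixed"},
    {"id": "B3", "area_m2": 100_000, "landuse": "non_residential"},
]
for b in disaggregate(1_500, blocks):
    print(b["id"], round(b["population"], 1))  # B1 800.0, B2 600.0, B3 100.0
```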

