MichelaNGLo: sculpting protein views on web pages without coding

2020 ◽  
Vol 36 (10) ◽  
pp. 3268-3270 ◽  
Author(s):  
Matteo P Ferla ◽  
Alistair T Pagnamenta ◽  
David Damerell ◽  
Jenny C Taylor ◽  
Brian D Marsden

Abstract Motivation The sharing of macromolecular structural information online by scientists is predominantly performed via 2D static images, since the embedding of interactive 3D structures in webpages is non-trivial. Whilst the technologies to do so exist, they are often only implementable with significant web coding experience. Results Michelaɴɢʟo is an accessible and open-source web-based application that supports the generation, customization and sharing of interactive 3D macromolecular visualizations for digital media without requiring programming skills. A PyMOL file, PDB file, PDB identifier code or protein/gene name can be provided to form the basis of visualizations using the NGL JavaScript library. Hyperlinks that control the view can be added to text within the page. Protein-coding variants can be highlighted to support interpretation of their potential functional consequences. The resulting visualizations and text can be customized and shared, as well as embedded within existing websites by following instructions and using a self-contained download. Michelaɴɢʟo allows researchers to move away from static images and instead engage, describe and explain their protein to a wider audience in a more interactive fashion. Availability and implementation Michelaɴɢʟo is hosted at michelanglo.sgc.ox.ac.uk. The Python code is freely available at https://github.com/thesgc/MichelaNGLo, along with documentation about its implementation.
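
As an illustration of preparing input for Michelaɴɢʟo, the hedged Python sketch below downloads a PDB entry by identifier from the RCSB archive; the helper name and output path are our own, and a PDB code or PyMOL session could equally be supplied to the tool directly.

```python
# Illustrative only: fetch a PDB file by identifier so it can be uploaded to
# Michelanglo (which also accepts a PDB code or PyMOL session directly).
import urllib.request

def fetch_pdb(pdb_id: str, out_path: str) -> str:
    """Download a PDB entry (e.g. '1UBQ') from the RCSB archive."""
    url = f"https://files.rcsb.org/download/{pdb_id.upper()}.pdb"
    with urllib.request.urlopen(url) as response, open(out_path, "wb") as fh:
        fh.write(response.read())
    return out_path

if __name__ == "__main__":
    print(fetch_pdb("1ubq", "1ubq.pdb"))
```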

2019 ◽  
Vol 47 (W1) ◽  
pp. W106-W113 ◽  
Author(s):  
Jana Marie Schwarz ◽  
Daniela Hombach ◽  
Sebastian Köhler ◽  
David N Cooper ◽  
Markus Schuelke ◽  
...  

Abstract RegulationSpotter is a web-based tool for the user-friendly annotation and interpretation of DNA variants located outside of protein-coding transcripts (extratranscriptic variants). It is designed for clinicians and researchers who wish to assess the potential impact of the considerable number of non-coding variants found in Whole Genome Sequencing runs. It annotates individual variants with underlying regulatory features in an intuitive way by assessing over 100 genome-wide annotations. Additionally, it calculates a score, which reflects the regulatory potential of the variant region. Its dichotomous classifications, ‘functional’ or ‘non-functional’, and a human-readable presentation of the underlying evidence allow a biologically meaningful interpretation of the score. The output shows key aspects of every variant and allows rapid access to more detailed information about its possible role in gene regulation. RegulationSpotter can either analyse single variants or complete VCF files. Variants located within protein-coding transcripts are automatically assessed by MutationTaster as well as by RegulationSpotter to account for possible intragenic regulatory effects. RegulationSpotter offers the possibility of using phenotypic data to focus on known disease genes or genomic elements interacting with them. RegulationSpotter is freely available at https://www.regulationspotter.org.
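
As a hedged sketch of how the dichotomous output might be consumed downstream, the snippet below splits scored variants into 'functional' and 'non-functional' groups; the column names and threshold are hypothetical and do not reflect RegulationSpotter's actual output format.

```python
# Hypothetical post-processing of a RegulationSpotter-style results table:
# split variants into 'functional' and 'non-functional' at an assumed cut-off.
import csv

THRESHOLD = 0.5  # assumed threshold; the tool defines its own classification

def classify_variants(tsv_path: str):
    functional, non_functional = [], []
    with open(tsv_path, newline="") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            score = float(row["regulatory_score"])   # hypothetical column name
            target = functional if score >= THRESHOLD else non_functional
            target.append(row["variant"])            # hypothetical column name
    return functional, non_functional
```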


2016 ◽  
Author(s):  
Valentina Iotchkova ◽  
Graham R.S. Ritchie ◽  
Matthias Geihs ◽  
Sandro Morganella ◽  
Josine L. Min ◽  
...  

Loci discovered by genome-wide association studies (GWAS) predominantly map outside protein-coding genes. The interpretation of functional consequences of non-coding variants can be greatly enhanced by catalogs of regulatory genomic regions in cell lines and primary tissues. However, robust and readily applicable methods are still lacking to systematically evaluate the contribution of these regions to genetic variation implicated in diseases or quantitative traits. Here we propose a novel approach that leverages GWAS findings with regulatory or functional annotations to classify features relevant to a phenotype of interest. Within our framework, we account for major sources of confounding that current methods do not address. We further assess enrichment statistics for 27 GWAS traits within regulatory regions from the ENCODE and Roadmap projects. We characterise unique enrichment patterns for traits and annotations, yielding novel biological insights. The method is implemented as standalone software and as an R package to facilitate its application by the research community.
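
For intuition only, the sketch below shows a naive annotation-enrichment test using a 2x2 Fisher's exact test; it is not the authors' method, which additionally corrects for the confounders mentioned above.

```python
# Naive enrichment of GWAS-significant variants within a regulatory annotation,
# using counts of significant/non-significant variants inside/outside regions.
from scipy.stats import fisher_exact

def enrichment(sig_in: int, sig_out: int, nonsig_in: int, nonsig_out: int):
    table = [[sig_in, sig_out], [nonsig_in, nonsig_out]]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

# Example: 120 of 500 significant variants overlap enhancers vs 800 of 10000 others.
print(enrichment(120, 380, 800, 9200))
```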


JAMIA Open ◽  
2021 ◽  
Vol 4 (3) ◽  
Author(s):  
Elias DeVoe ◽  
Gavin R Oliver ◽  
Roman Zenka ◽  
Patrick R Blackburn ◽  
Margot A Cousin ◽  
...  

Abstract Motivation Genomic data are prevalent, leading to frequent encounters with uninterpreted variants or mutations with unknown mechanisms of effect. Researchers must manually aggregate data from multiple sources and across related proteins, mentally translating effects between the genome and proteome, to attempt to understand mechanisms. Materials and methods P2T2 presents diverse data and annotation types in a unified protein-centric view, facilitating the interpretation of coding variants and hypothesis generation. Information from the primary-sequence, domain, motif, and structural levels is presented and also organized into the first Paralog Annotation Analysis across the human proteome. Results Our tool assists research efforts to interpret genomic variation by aggregating diverse, relevant, and proteome-wide information into a unified interactive web-based interface. Additionally, we provide a REST API enabling automated data queries or repurposing of the data for other studies. Conclusion The unified protein-centric interface presented in P2T2 will help researchers interpret novel variants identified through next-generation sequencing. Code and server link available at github.com/GenomicInterpretation/p2t2.
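
A hedged sketch of an automated query against the P2T2 REST API is shown below; the base URL, route and response fields are placeholders, and the actual endpoints are documented at github.com/GenomicInterpretation/p2t2.

```python
# Placeholder example of querying a P2T2-style REST API for protein-centric
# annotations of a gene; the route and fields are illustrative assumptions.
import requests

BASE_URL = "https://example-p2t2-server.org/api"   # placeholder base URL

def get_protein_annotations(gene_symbol: str) -> dict:
    response = requests.get(f"{BASE_URL}/protein/{gene_symbol}", timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(get_protein_annotations("TP53").keys())   # hypothetical usage
```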


2021 ◽  
Author(s):  
Lambert Moyon ◽  
Camille Berthelot ◽  
Alexandra Louis ◽  
Nga Thi Thuy Nguyen ◽  
Hugues Roest Crollius

Whole genome sequencing (WGS) is increasingly used to diagnose medical conditions of genetic origin. While both coding and non-coding DNA variants contribute to a wide range of diseases, most patients who receive a WGS-based diagnosis today harbour a protein-coding mutation. Functional interpretation and prioritization of non-coding variants represent a persistent challenge, and disease-causing non-coding variants remain largely unidentified. Depending on the disease, WGS fails to identify a candidate variant in 20-80% of patients, severely limiting the usefulness of sequencing for personalised medicine. Here we present FINSURF, a machine-learning approach to predict the functional impact of non-coding variants in regulatory regions. FINSURF outperforms state-of-the-art methods, owing to control optimisation during training. In addition to ranking candidate variants, FINSURF also delivers diagnostic information on the functional consequences of mutations. We applied FINSURF to a diverse set of 30 diseases with described causative non-coding mutations, and correctly identified the disease-causative non-coding variant within the top ten hits in 22 cases. FINSURF is implemented as an online server as well as custom browser tracks, and provides a quick and efficient solution for prioritizing candidate non-coding variants in realistic clinical settings.
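
The snippet below is a generic sketch of the overall strategy (train a supervised classifier on regulatory annotations, then rank a patient's candidate variants by predicted score); the features, model and training data are placeholders and not FINSURF's actual implementation.

```python
# Generic variant-ranking sketch: fit a classifier on annotation features of
# known functional vs control variants, then rank new candidates by score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((1000, 20))          # 20 placeholder annotation features
y_train = rng.integers(0, 2, 1000)        # 1 = functional, 0 = control

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

X_candidates = rng.random((50, 20))       # candidate non-coding variants
scores = model.predict_proba(X_candidates)[:, 1]
print(np.argsort(scores)[::-1][:10])      # indices of the top ten candidates
```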


2021 ◽  
Author(s):  
Ling Li ◽  
Mingming Niu ◽  
Alyssa Erickson ◽  
Jie Luo ◽  
Kincaid Rowbotham ◽  
...  

Abstract Integration of genomics and proteomics (proteogenomics) offers unprecedented promise for in-depth understanding of human diseases. However, sample mix-up is a pervasive, recurring problem, due to the complex sample processing in proteogenomics. Here we present a pipeline for Sample Matching in Proteogenomics (SMAP) for verifying sample identity to ensure data integrity. SMAP infers sample-dependent protein-coding variants from quantitative mass spectrometry (MS), and aligns the MS-based proteomic samples with genomic samples by two discriminant scores. Theoretical analysis with simulation data indicates that SMAP is capable of uniquely matching proteomic and genomic samples when ≥20% of the genotypes of individual samples are available. When SMAP was applied to a large-scale proteomics dataset of 288 biological samples generated by the PsychENCODE BrainGVEX project, we identified and corrected 54 of 288 (18.8%) mismatched samples. The correction was further confirmed by ribosome profiling and assay for transposase-accessible chromatin sequencing data from the same set of samples. Thus, our results demonstrate that SMAP is an effective tool for sample verification in large-scale MS-based proteogenomics studies. The source code, manual, and sample data for SMAP are publicly available at https://github.com/UND-Wanglab/SMAP, and a web-based version of SMAP can be accessed at https://smap.shinyapps.io/smap/.
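
A simplified illustration of the matching idea follows; it scores each proteomic sample against each genomic sample by simple genotype concordance and is not SMAP's two discriminant scores.

```python
# Simplified sample matching: for each MS-based proteomic sample, pick the
# genomic sample whose genotype calls agree most often with the MS-inferred
# protein-coding variants (NaN marks variants not observed in the MS data).
import numpy as np

def concordance(ms_calls: np.ndarray, genotype_calls: np.ndarray) -> float:
    observed = ~np.isnan(ms_calls)
    return float(np.mean(ms_calls[observed] == genotype_calls[observed]))

def match_samples(ms_matrix: np.ndarray, geno_matrix: np.ndarray) -> list:
    """Return, for each proteomic sample, the index of its best genomic match."""
    return [int(np.argmax([concordance(ms, g) for g in geno_matrix]))
            for ms in ms_matrix]
```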


Author(s):  
R. A. Earnshaw

Abstract Where do new ideas come from and how are they generated? Which of these ideas will be potentially useful immediately, and which will be more ‘blue sky’? For the latter, their significance may not be known for a number of years, perhaps even generations. The progress of computing and digital media is a relevant and useful case study in this respect. Which visions of the future in the early days of computing have stood the test of time, and which have vanished without trace? Can this be used as a guide for current and future areas of research and development? If one Internet year is equivalent to seven calendar years, are virtual worlds being utilized as an effective accelerator for these new ideas and their implementation and evaluation? The nature of digital media and its constituent parts, such as electronic devices, sensors, images, audio, games, web pages, social media, e-books, and the Internet of Things, provides a diverse environment which can be viewed as a testbed for current and future ideas. Individual disciplines utilise virtual worlds in different ways. As collaboration is often involved in such research environments, does the technology make these collaborations effective? Have the limits of disciplinary approaches been reached? The importance of interdisciplinary collaborations for the future is proposed and evaluated. The current enablers for progressing interdisciplinary collaborations are presented. The possibility of a new Renaissance between technology and the arts is discussed.


Author(s):  
Doug Downs

Abstract An important step in teaching critical reading for online civic reasoning is building teachers’ own acceptance of and comfort with screen literacies, understanding them not as an alternative to gold-standard book literacies but as normative. To do so, teachers must better understand how web-based texts, and the reading of them, differ from the “classical” critical reading most teachers are used to. This article examines the “quantum” nature of web-based texts—their fundamental instability, their reader-constructedness, and their nature as processes rather than objects—and relates these features to hyper-reading and other reading strategies that research shows allow engaged readers to screen-read critically.


2017 ◽  
Vol 81 (2) ◽  
Author(s):  
Liana Markelova

The present study aims to trace the evolution of public attitudes towards the mentally challenged by means of corpus-based analysis. The raw data come from two of the BYU corpora: Global Web-Based English (GloWbE) and the Corpus of Historical American English (COHA). The former comprises 1.8 million web pages from 20 English-speaking countries (Davies/Fuchs 2015: 1) and provides an opportunity for research at a cross-cultural level, whereas the latter, containing 400 million words from more than 100,000 texts ranging from the 1810s to the 2000s (Davies 2012: 121), allows for diachronic research on the issue. To identify differences in attitudes, collocational profiles of the terms denoting the mentally challenged were created. Having analysed them in terms of their semantic prosody, one might conclude that certain semantic shifts have occurred due to modern usage preferences and a gradual change in the public perception of everything strange, unusual and unique.
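
As a minimal sketch of the kind of collocational profile described here (collocate counts within a fixed window around a target term), consider the snippet below; access to the GloWbE or COHA corpora themselves is not shown.

```python
# Count collocates within a +/-4 word window of a target term; the resulting
# profile can then be inspected for semantic prosody (evaluative colouring).
from collections import Counter

def collocates(tokens, target, window=4):
    profile = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() == target:
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            profile.update(w.lower() for w in left + right)
    return profile

sample = "the poor lunatic was locked away while the gifted lunatic amazed them".split()
print(collocates(sample, "lunatic").most_common(5))
```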


2020 ◽  
Author(s):  
Harriet M J Smith ◽  
Sally Andrews ◽  
Thom Baguley ◽  
Melissa Fay Colloff ◽  
Josh P Davis ◽  
...  

Unfamiliar simultaneous face matching is error prone. Reducing incorrect identification decisions would benefit forensic and security contexts. The absence of view-independent information in static images likely contributes to the difficulty of unfamiliar face matching. We tested whether a novel interactive viewing procedure, which provides the user with 3D structural information as they rotate a facial image to different orientations, would improve face matching accuracy. We tested the performance of ‘typical’ (Experiment 1) and ‘superior’ (Experiment 2) face recognisers, and compared their performance using high-quality (Experiment 3) and pixelated (Experiment 4) Facebook profile images. In each trial, participants judged whether two images featured the same person; one of the images was either a static face, a video providing orientation information, or an interactive image. Taken together, the results show that fluid orientation information and interactivity prompt shifts in criterion and support matching performance. Because typical and superior face recognisers both benefited from the structural information provided by the novel viewing procedures, our results point to a qualitatively similar reliance on pictorial encoding in these groups. This also suggests that interactive viewing tools can be valuable in assisting face matching in high-performing practitioner groups.
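
Since the study reports shifts in criterion, the sketch below shows the standard signal-detection calculation of sensitivity (d') and criterion (c) from hit and false-alarm rates; this is our illustration, not the authors' analysis code.

```python
# Standard signal-detection measures: d' = z(H) - z(FA), c = -(z(H) + z(FA)) / 2.
from scipy.stats import norm

def dprime_and_criterion(hit_rate: float, fa_rate: float):
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

print(dprime_and_criterion(0.85, 0.20))   # e.g. 85% hits, 20% false alarms
```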

