DIA-NN: Deep neural networks substantially improve the identification performance of data-independent acquisition (DIA) in proteomics

2018 ◽  
Author(s):  
Vadim Demichev ◽  
Christoph B. Messner ◽  
Kathryn S. Lilley ◽  
Markus Ralser

Abstract Data-independent acquisition (DIA-MS) strategies, like SWATH-MS, have been developed to increase consistency, quantification precision and proteomic depth in label-free proteomic experiments. They aim to overcome stochasticity in the selection of precursor ions by utilising (mass-) windowed acquisition that is followed by computational reconstruction of the chromatograms. While DIA methods increasingly outperform typical data-dependent methods in identification consistency and precision, specifically on large sample series, possibilities remain for further improvements. At present, only a fraction of the information recorded in the complex DIA spectra is extracted by the software analysis pipelines. Here we present a software tool (DIA-NN) that introduces artificial neural networks and a new quantification strategy to enhance signal processing in DIA data. DIA-NN greatly improves identification of precursor ions and, as a consequence, protein quantification accuracy. The performance of DIA-NN demonstrates that deep learning provides opportunities to boost the analysis of data-independent acquisition workflows in proteomics.
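The abstract does not specify DIA-NN's network architecture, so as a loose, illustrative stand-in the sketch below trains a plain logistic scorer (standard library only, no deep-learning framework) that separates target from decoy precursor candidates using two hypothetical features, fragment co-elution correlation and mass accuracy. The discriminate-then-score idea is the same; the depth and the actual feature set are not.

```python
import math

def train_scorer(features, labels, lr=0.1, epochs=300):
    """Fit a logistic scorer: targets (label 1) vs. decoys (label 0).
    An illustrative stand-in for neural-network candidate scoring."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(target)
            g = p - y                       # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def score(x, w, b):
    """Discriminant score in (0, 1); higher means more target-like."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical features: [fragment co-elution correlation, mass accuracy]
train_x = [[0.9, 0.95], [0.8, 0.85], [0.7, 0.9],   # targets
           [0.2, 0.3], [0.1, 0.2], [0.3, 0.1]]     # decoys
train_y = [1, 1, 1, 0, 0, 0]
w, b = train_scorer(train_x, train_y)
```

In practice the scores of such a classifier feed into target-decoy false-discovery-rate control, which is where the gain in identification numbers is realized.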

2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Mukul K. Midha ◽  
David S. Campbell ◽  
Charu Kapil ◽  
Ulrike Kusebauch ◽  
Michael R. Hoopmann ◽  
...  

Abstract Data-independent acquisition (DIA) mass spectrometry, also known as Sequential Window Acquisition of all Theoretical Mass Spectra (SWATH), is a popular label-free proteomics strategy to comprehensively quantify peptides/proteins utilizing mass spectral libraries to decipher inherently multiplexed spectra collected linearly across a mass range. Although there are many spectral libraries produced worldwide, the quality control of these libraries is lacking. We present the DIALib-QC (DIA library quality control) software tool for the systematic evaluation of a library’s characteristics, completeness and correctness across 62 parameters of compliance, and further provide the option to improve its quality. We demonstrate its utility in assessing and repairing spectral libraries for correctness, accuracy and sensitivity.
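The 62 compliance parameters are detailed in the paper; to illustrate the kind of check such a tool performs, the sketch below validates two hypothetical library fields: that the reported precursor m/z agrees with the peptide's neutral mass and charge state, and that a retention time is present. Field names and tolerances are assumptions, not DIALib-QC's actual schema.

```python
PROTON_MASS = 1.007276  # Da

def check_precursor_mz(row, tol_ppm=20.0):
    """One example compliance check: the reported precursor m/z must
    match the peptide's neutral mass and charge state within tolerance."""
    expected = (row["neutral_mass"] + row["charge"] * PROTON_MASS) / row["charge"]
    ppm_error = abs(row["precursor_mz"] - expected) / expected * 1e6
    return ppm_error <= tol_ppm

def check_library(rows):
    """Run the checks over all library entries and collect violations
    by row index, so offending entries can be repaired or dropped."""
    report = {"bad_precursor_mz": [], "missing_rt": []}
    for i, row in enumerate(rows):
        if not check_precursor_mz(row):
            report["bad_precursor_mz"].append(i)
        if row.get("rt") is None:
            report["missing_rt"].append(i)
    return report

library = [
    {"neutral_mass": 1000.0, "charge": 2, "precursor_mz": 501.007276, "rt": 35.2},
    {"neutral_mass": 1000.0, "charge": 2, "precursor_mz": 501.25, "rt": None},
]
report = check_library(library)
```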


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Christoph N Schlaffner ◽  
Konstantin Kahnert ◽  
Jan Muntel ◽  
Ruchi Chauhan ◽  
Bernhard Y Renard ◽  
...  

Improvements in LC-MS/MS methods and technology have enabled the identification of thousands of modified peptides in a single experiment. However, protein regulation by post-translational modifications (PTMs) is not binary, making methods to quantify the modification extent crucial to understanding the role of PTMs. Here, we introduce FLEXIQuant-LF, a software tool for large-scale identification of differentially modified peptides and quantification of their modification extent without knowledge of the types of modifications involved. We developed FLEXIQuant-LF using label-free quantification of unmodified peptides and robust linear regression to quantify the modification extent of peptides. As proof of concept, we applied FLEXIQuant-LF to data-independent acquisition (DIA) data of the anaphase promoting complex/cyclosome (APC/C) during mitosis. The unbiased FLEXIQuant-LF approach to assess the modification extent in quantitative proteomics data provides a better understanding of the function and regulation of PTMs. The software is available at https://github.com/SteenOmicsLab/FLEXIQuantLF.
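The core idea, fitting a robust regression to unmodified-peptide intensities so that modified peptides fall below the fitted line, can be sketched with a median-of-ratios slope (a Theil-Sen-style estimator through the origin) standing in for the regression used by the tool. The toy data and the interpretation of "missing" intensity as modification extent follow the paper's logic, but this is not the FLEXIQuant-LF implementation.

```python
from statistics import median

def robust_slope(reference, sample):
    """Median of per-peptide intensity ratios: a robust estimate of the
    overall abundance ratio between sample and reference, insensitive
    to the minority of modified-peptide outliers."""
    return median(s / r for r, s in zip(reference, sample) if r > 0)

def modification_extent(reference, sample):
    """Fraction of each peptide's expected intensity that is missing in
    the sample, interpreted as that peptide's modification extent."""
    slope = robust_slope(reference, sample)
    return [max(0.0, 1.0 - s / (slope * r)) for r, s in zip(reference, sample)]

# toy data: the sample runs at half the reference intensity overall;
# the last peptide additionally lost half its signal to a modification
reference = [100.0, 200.0, 300.0, 400.0]
sample = [50.0, 100.0, 150.0, 100.0]
extent = modification_extent(reference, sample)
```

The robustness matters because an ordinary least-squares fit would be dragged down by heavily modified peptides, underestimating their modification extent.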


2017 ◽  
Author(s):  
Jesse G. Meyer ◽  
Sushanth Mukkamalla ◽  
Alexandria K. D’Souza ◽  
Alexey I. Nesvizhskii ◽  
Bradford W. Gibson ◽  
...  

Label-free quantification using data-independent acquisition (DIA) is a robust method for deep and accurate proteome quantification [1,2]. However, when lacking a pre-existing spectral library, as is often the case with studies of novel post-translational modifications (PTMs), samples are typically analyzed several times: one or more data-dependent acquisitions (DDA) are used to generate a spectral library, followed by DIA for quantification. This type of multi-injection analysis incurs significant costs in sample consumption and instrument time for each new PTM study, and may not be possible when sample amount is limiting and/or studies require a large number of biological replicates. Recently developed software (e.g. DIA-Umpire) has enabled combined peptide identification and quantification from a data-independent acquisition without any pre-existing spectral library [3,4]. Still, these tools are designed for protein-level quantification. Here we demonstrate a software tool and workflow that extends DIA-Umpire to allow automated identification and quantification of PTM peptides from DIA. We accomplish this using a custom, open-source graphical user interface, DIA-Pipe (https://github.com/jgmeyerucsd/PIQEDia/releases/tag/v0.1.2) (Figure 1a).


2019 ◽  
Vol 200 ◽  
pp. 51-59 ◽  
Author(s):  
Bing He ◽  
Jian Shi ◽  
Xinwen Wang ◽  
Hui Jiang ◽  
Hao-Jie Zhu

2020 ◽  
Vol 36 (8) ◽  
pp. 2611-2613 ◽  
Author(s):  
Thang V Pham ◽  
Alex A Henneman ◽  
Connie R Jimenez

Abstract
Summary: We present an R package called iq to enable accurate protein quantification for label-free data-independent acquisition (DIA) mass spectrometry-based proteomics, a recently developed global approach with superior quantitative consistency. We implement the popular maximal peptide ratio extraction module of the MaxLFQ algorithm, so far only applicable to data-dependent acquisition mode using the software suite MaxQuant. Moreover, our implementation shows, for each protein separately, the validity of quantification over all samples. Hence, iq exports a state-of-the-art protein quantification algorithm to the emerging DIA mode in an open-source implementation.
Availability and implementation: The open-source R package is available on CRAN, https://github.com/tvpham/iq/releases and oncoproteomics.nl/iq.
Supplementary information: Supplementary data are available at Bioinformatics online.
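Maximal peptide ratio extraction can be sketched for the complete-data case: first take the median pairwise log-ratio between every two samples across a protein's peptides, then reconcile the ratio matrix by least squares into one relative abundance per sample. The sketch below (Python rather than the package's R, and ignoring missing values, which are the hard part iq actually handles) shows the two steps under those simplifying assumptions.

```python
import math
from statistics import median

def maxlfq(peptide_intensities):
    """Relative log2 protein abundance per sample, MaxLFQ-style:
    (1) median pairwise log-ratio between every pair of samples across
    peptides, (2) least-squares reconciliation of the ratio matrix.
    Complete-matrix sketch only; the real algorithm also handles
    missing values and minimum-peptide requirements."""
    n = len(peptide_intensities[0])
    # r[j][k] = median over peptides of log2(intensity_j / intensity_k)
    r = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for k in range(n):
            if j != k:
                r[j][k] = median(math.log2(p[j] / p[k])
                                 for p in peptide_intensities)
    # with a complete, antisymmetric ratio matrix, the least-squares
    # solution under a zero-mean constraint is simply the row mean
    return [sum(row) / n for row in r]

# two peptides of one protein, measured in three samples
peptides = [[1.0, 2.0, 4.0],
            [2.0, 4.0, 8.0]]
profile = maxlfq(peptides)
```

Using the median ratio per sample pair is what makes the estimate robust: a single misquantified peptide shifts one pairwise ratio at most slightly, instead of propagating into the protein profile.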



2021 ◽  
Author(s):  
Maximilian Peter Dammann ◽  
Wolfgang Steger ◽  
Ralph Stelzer

Abstract Product visualization in AR/VR applications requires a largely manual process of data preparation. Previous publications focus on error-free triangulation or transformation of product structure data and display attributes for AR/VR applications. This paper focuses on the preparation of the required geometry data. In this context, a significant reduction in effort can be achieved through automation. The steps of geometry preparation are identified and examined with respect to their automation potential. In addition, possible couplings of sub-steps are discussed. Based on these explanations, a structure for the geometry preparation process is proposed. With this structured preparation process, it becomes possible to consider the available computing power of the target platform during the geometry preparation. The number of objects to be rendered, the tessellation quality, and the level of detail can be controlled by the automated choice of transformation parameters. Through this approach, tedious preparation tasks and iterative performance optimization can be avoided, which also simplifies the integration of AR/VR applications into product development and use. We present a software tool in which partial steps of the automatic preparation are already implemented. After an analysis of the product structure of a CAD file, the transformation is executed for each component. Functions implemented so far allow, for example, the selection of assemblies and parts based on filter options, the transformation of geometries in batch mode, the removal of certain details, and the creation of UV maps. Flexibility, transformation quality, and time savings are described and discussed.
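Choosing transformation parameters against a platform's computing power could, in its simplest form, look like the sketch below: given precomputed level-of-detail variants per part and a triangle budget for the target device, greedily coarsen the currently most expensive part until the scene fits. This is a hypothetical illustration of the idea, not the paper's implementation, which operates on CAD tessellation parameters rather than precomputed variants.

```python
def choose_lod(parts, triangle_budget):
    """Pick one LOD variant per part so the summed triangle count stays
    within the target platform's budget.
    parts: list of per-part variant lists [(name, triangle_count), ...],
    ordered finest to coarsest. Returns chosen names and total count."""
    choice = [0] * len(parts)                    # start with finest LODs
    total = sum(p[0][1] for p in parts)
    while total > triangle_budget:
        # index of the most expensive part that can still be coarsened
        j = max(range(len(parts)),
                key=lambda i: parts[i][choice[i]][1]
                if choice[i] + 1 < len(parts[i]) else -1)
        if choice[j] + 1 >= len(parts[j]):
            break                                # nothing left to coarsen
        total -= parts[j][choice[j]][1] - parts[j][choice[j] + 1][1]
        choice[j] += 1
    return [p[c][0] for p, c in zip(parts, choice)], total

parts = [[("fine", 1000), ("coarse", 200)],
         [("fine", 500), ("coarse", 100)]]
names, total = choose_lod(parts, triangle_budget=800)
```

A greedy policy like this trades visual fidelity on large parts for overall frame rate, which matches the paper's point that performance optimization can be shifted from manual iteration to an automated parameter choice.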



