Effect of normalization methods on rank performance in single valued m-polar fuzzy ELECTRE-I algorithm

Author(s):  
Madan Jagtap ◽  
Prasad Karande

Metabolites ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 8
Author(s):  
Michiel Bongaerts ◽  
Ramon Bonte ◽  
Serwet Demirdas ◽  
Edwin H. Jacobs ◽  
Esmee Oussoren ◽  
...  

Untargeted metabolomics is an emerging technology in the laboratory diagnosis of inborn errors of metabolism (IEM). Analysis of a large number of reference samples is crucial for correcting variations in metabolite concentrations that result from factors such as diet, age, and gender, in order to judge whether metabolite levels are abnormal. However, a large number of reference samples requires the use of out-of-batch samples, which is hampered by the semi-quantitative nature of untargeted metabolomics data, i.e., technical variation between batches. Methods to merge and accurately normalize data from multiple batches are urgently needed. Based on six metrics, we compared existing normalization methods on their ability to reduce batch effects across nine independently processed batches. Many of these showed marginal performance, which motivated us to develop Metchalizer, a normalization method that uses 10 stable isotope-labeled internal standards and a mixed effect model. In addition, we propose a regression model, with age and sex as covariates, fitted on reference samples obtained from all nine batches. Metchalizer applied to log-transformed data showed the most promising performance on batch effect removal as well as in the detection of 195 known biomarkers across 49 IEM patient samples, and performed at least as well as an approach utilizing 15 within-batch reference samples. Furthermore, our regression model indicates that 6.5–37% of the considered features show significant age-dependent variation. Our comprehensive comparison of normalization methods showed that our Log-Metchalizer approach enables the use of out-of-batch reference samples to establish clinically relevant reference values for metabolite concentrations. These findings open the possibility of using large-scale out-of-batch reference samples in a clinical setting, increasing throughput and detection accuracy.
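Metchalizer's full model (a mixed effect model over 10 isotope-labeled internal standards, plus the age/sex regression) is the authors' own; purely as a minimal sketch of the core idea — estimating one additive batch shift on log-transformed data from internal standards and removing it — one might write the following. The function and variable names are illustrative, not the tool's actual API.

```python
import numpy as np

def normalize_batches(log_intensities, batch_ids, is_internal_standard):
    """Align batches by subtracting, per batch, the mean deviation of the
    internal standards from their global mean (one additive shift per
    batch, on the log scale).

    log_intensities      : (n_samples, n_features) log-transformed data
    batch_ids            : (n_samples,) batch label per sample
    is_internal_standard : (n_features,) boolean mask of IS features
    """
    X = np.asarray(log_intensities, dtype=float)
    batch_ids = np.asarray(batch_ids)
    is_mask = np.asarray(is_internal_standard, dtype=bool)

    # Global reference level of the internal standards across all samples
    global_is_mean = X[:, is_mask].mean(axis=0)

    X_norm = X.copy()
    for b in np.unique(batch_ids):
        rows = batch_ids == b
        # Average deviation of this batch's internal standards from the
        # global level; remove it from every feature in the batch
        shift = (X[rows][:, is_mask] - global_is_mean).mean()
        X_norm[rows] -= shift
    return X_norm
```

A batch-wide additive shift on the log scale corresponds to a multiplicative intensity factor on the raw scale, which is the kind of technical between-batch variation the abstract describes.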


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Li Tong ◽  
Po-Yen Wu ◽  
John H. Phan ◽  
Hamid R. Hassazadeh ◽  
...  

Abstract To use next-generation sequencing technologies such as RNA-seq for medical and health applications, choosing proper analysis methods for biomarker identification remains a critical challenge for most users. The US Food and Drug Administration (FDA) has led the Sequencing Quality Control (SEQC) project to conduct a comprehensive investigation of 278 representative RNA-seq data analysis pipelines consisting of 13 sequence mapping, three quantification, and seven normalization methods. In this article, we focus on the joint effects of RNA-seq pipeline components on gene expression estimation as well as on the downstream prediction of disease outcomes. First, we developed and applied three metrics (i.e., accuracy, precision, and reliability) to quantitatively evaluate each pipeline's performance on gene expression estimation. We then investigated the correlation between the proposed metrics and downstream prediction performance using two real-world cancer datasets (the SEQC neuroblastoma dataset and the NIH/NCI TCGA lung adenocarcinoma dataset). We found that RNA-seq pipeline components jointly and significantly impact the accuracy of gene expression estimation, and that this impact extends to the downstream prediction of cancer outcomes. Specifically, RNA-seq pipelines that produced more accurate, precise, and reliable gene expression estimates tended to perform better in predicting disease outcome. Finally, we provide scenarios as guidelines for users to apply these three metrics to select sensible RNA-seq pipelines, improving the accuracy, precision, and reliability of gene expression estimation and, in turn, the downstream gene expression-based prediction of disease outcome.
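The SEQC paper defines its three metrics precisely; as a hedged illustration of the kind of quantities involved — accuracy against an external reference assay and precision across technical replicates, under assumed simple definitions that are not necessarily the paper's own — a sketch could be:

```python
import numpy as np

def accuracy(estimates, reference):
    # Pearson correlation between a pipeline's expression estimates and
    # an external reference (e.g. qPCR) for the same genes; higher is better
    return float(np.corrcoef(estimates, reference)[0, 1])

def precision(replicate_estimates):
    # Mean coefficient of variation of each gene's estimate across
    # technical replicates (rows = replicates, cols = genes); lower is better
    reps = np.asarray(replicate_estimates, dtype=float)
    means = reps.mean(axis=0)
    stds = reps.std(axis=0, ddof=1)
    return float(np.mean(stds / means))
```

Reliability, the third metric, is defined in the paper itself (agreement of estimates across repeated runs of the full pipeline) and would follow the same pattern.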


2012 ◽  
Vol 382 (1-2) ◽  
pp. 211-215 ◽  
Author(s):  
Morgan A. Marks ◽  
Yolanda Eby ◽  
Roslyn Howard ◽  
Patti E. Gravitt

2014 ◽  
Vol 93 ◽  
pp. 3-16 ◽  
Author(s):  
Patrick Giraudeau ◽  
Illa Tea ◽  
Gérald S. Remaud ◽  
Serge Akoka

2012 ◽  
Vol 6 (1) ◽  
pp. 063578-1 ◽  
Author(s):  
Nisha Bao ◽  
Alex M. Lechner ◽  
Andrew Fletcher ◽  
Andrew Mellor ◽  
David Mulligan ◽  
...  

2019 ◽  
Vol 14 (02) ◽  
pp. 2050006
Author(s):  
Ia Shengelia ◽  
Nato Jorjiashvili ◽  
Tea Godoladze ◽  
Zurab Javakhishvili ◽  
Nino Tumanova

Three hundred and thirty-five local earthquakes were processed, and the attenuation properties of the crust in the Racha region were investigated using the records of seven seismic stations. We have estimated the quality factors of coda waves (Q_c) and the direct S waves (Q_s) by the single back-scattering model and the coda normalization method, respectively. Wennerberg's method has been used to estimate the relative contributions of intrinsic (Q_i^-1) and scattering (Q_sc^-1) attenuation to the total attenuation. We have found that Q_c and Q_s are frequency-dependent in the frequency range of 1.5–24 Hz. Q_c values increase both with respect to lapse time window, from 20 s to 60 s, and with frequency. Q_s and Q_i are nearly similar for all frequency bands, but are smaller than Q_c. The obtained results show that intrinsic attenuation has a more significant effect than scattering attenuation on the total attenuation. The increase of Q_c with lapse time shows that the lithosphere becomes more homogeneous with depth.
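In the single back-scattering model (Aki and Chouet), coda amplitude at frequency f and lapse time t follows A(f, t) = S(f) t^-1 exp(-pi f t / Qc), so ln(A t) decays linearly in t with slope -pi f / Qc. A minimal sketch of recovering Qc by a least-squares line fit — an illustration of the model, not the authors' processing code — might be:

```python
import numpy as np

def estimate_qc(times, amplitudes, freq):
    """Estimate the coda quality factor Qc from the single
    back-scattering model:

        A(f, t) = S(f) * t**-1 * exp(-pi * f * t / Qc)

    so that ln(A * t) is linear in lapse time t with slope
    -pi * f / Qc.  A degree-1 polynomial fit recovers Qc.
    """
    t = np.asarray(times, dtype=float)
    a = np.asarray(amplitudes, dtype=float)
    # np.polyfit returns coefficients in decreasing power: [slope, intercept]
    slope, _ = np.polyfit(t, np.log(a * t), 1)
    return -np.pi * freq / slope
```

In practice the amplitudes would be RMS coda envelopes band-pass filtered around freq, and the fit repeated per frequency band and per lapse-time window (here 20–60 s) to obtain the frequency and lapse-time dependence reported above.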

