Retrospective evaluation of in silico prediction tools, REVEL and CADD, for supporting-level evidence (PP3/BP4) in genomic variant interpretation

2021 · Vol 132 · pp. S260-S261
Author(s): Benjamin Edward Kang, Bryan Gall, Ezen Choo, Nina Sanapareddy, Irina Rakova, ...
2018 · Vol 11 (1)
Author(s): Corinna Ernst, Eric Hahnen, Christoph Engel, Michael Nothnagel, Jonas Weber, ...

PLoS ONE · 2014 · Vol 9 (2) · pp. e89570
Author(s): Lucie Grodecká, Pavla Lockerová, Barbora Ravčuková, Emanuele Buratti, Francisco E. Baralle, ...

Author(s): Adam C Gunning, Verity Fryer, James Fasham, Andrew H Crosby, Sian Ellard, ...

Abstract
Purpose: Pathogenicity predictors are an integral part of genomic variant interpretation but, despite their widespread usage, an independent validation of performance using a clinically relevant dataset has not been undertaken.
Methods: We derive two validation datasets: an "open" dataset containing variants extracted from publicly available databases, similar to those commonly applied in previous benchmarking exercises, and a "clinically representative" dataset containing variants identified through research/diagnostic exome and diagnostic panel sequencing. Using these datasets, we evaluate the performance of three recently developed meta-predictors, REVEL, GAVIN and ClinPred, and compare their performance against two commonly used in silico tools, SIFT and PolyPhen-2.
Results: Although the newer meta-predictors outperform the older tools, the performance of all pathogenicity predictors is substantially lower in the clinically representative dataset. Using our clinically relevant dataset, REVEL performed best, with an area under the ROC curve of 0.81. A concordance-based approach built on a consensus of multiple tools reduces performance, owing both to discordance between tools and to false concordance, where tools make a common misclassification. Analysis of tool feature usage may give insight into tool performance and misclassification.
Conclusion: Our results support the adoption of meta-predictors over traditional in silico tools, but do not support a consensus-based approach as recommended by current variant classification guidelines.
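The area under the ROC curve reported above can be read as the probability that a randomly chosen pathogenic variant receives a higher predictor score than a randomly chosen benign one. A minimal sketch of that rank-sum (Mann-Whitney) formulation is below; the labels and scores are purely illustrative, not data from the study:

```python
# Illustrative sketch: AUC of a pathogenicity predictor via the
# rank-sum formulation. Scores mimic a REVEL-style 0-1 scale but
# are invented for this example, not taken from the paper.

def roc_auc(labels, scores):
    """AUC = P(score of a pathogenic variant > score of a benign one),
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = pathogenic, 0 = benign (hypothetical truth labels)
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.92, 0.78, 0.55, 0.30, 0.45, 0.60, 0.81, 0.12]
print(roc_auc(labels, scores))  # 0.9375
```

An AUC of 0.5 would mean the predictor ranks pathogenic and benign variants no better than chance; the 0.81 reported for REVEL sits well above that but short of perfect separation.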


Haemophilia · 2015 · Vol 21 (2) · pp. 249-257
Author(s): L. Martorell, I. Corrales, L. Ramirez, R. Parra, A. Raya, ...

2020 · jmedgenet-2020-107003
Author(s): Adam C Gunning, Verity Fryer, James Fasham, Andrew H Crosby, Sian Ellard, ...

Background: Pathogenicity predictors are integral to genomic variant interpretation but, despite their widespread usage, an independent validation of performance using a clinically relevant dataset has not been undertaken.
Methods: We derive two validation datasets: an 'open' dataset containing variants extracted from publicly available databases, similar to those commonly applied in previous benchmarking exercises, and a 'clinically representative' dataset containing variants identified through research/diagnostic exome and panel sequencing. Using these datasets, we evaluate the performance of three recent meta-predictors, REVEL, GAVIN and ClinPred, and compare their performance against two commonly used in silico tools, SIFT and PolyPhen-2.
Results: Although the newer meta-predictors outperform the older tools, the performance of all pathogenicity predictors is substantially lower in the clinically representative dataset. Using our clinically relevant dataset, REVEL performed best, with an area under the receiver operating characteristic curve of 0.82. A concordance-based approach built on a consensus of multiple tools reduces performance, owing both to discordance between tools and to false concordance, where tools make a common misclassification. Analysis of tool feature usage may give insight into tool performance and misclassification.
Conclusion: Our results support the adoption of meta-predictors over traditional in silico tools, but do not support a consensus-based approach as in current practice.


2013 · Vol 34 (5) · pp. 725-734
Author(s): Robert Radloff, Alain Gras, Ulrich M. Zanger, Cécile Masquelier, Karthik Arumugam, ...
