Mindcontrol: A web application for brain segmentation quality control

NeuroImage ◽  
2018 ◽  
Vol 170 ◽  
pp. 365-372 ◽  
Author(s):  
Anisha Keshavan ◽  
Esha Datta ◽  
Ian M. McDonough ◽  
Christopher R. Madan ◽  
Kesshi Jordan ◽  
...  

NeuroImage ◽  
2019 ◽  
Vol 195 ◽  
pp. 11-22 ◽  
Author(s):  
Abhijit Guha Roy ◽  
Sailesh Conjeti ◽  
Nassir Navab ◽  
Christian Wachinger

BMC Genomics ◽  
2019 ◽  
Vol 20 (S5) ◽  
Author(s):  
Patrick Perkins ◽  
Serina Mazzoni-Putman ◽  
Anna Stepanova ◽  
Jose Alonso ◽  
Steffen Heber

Author(s):  
Shikai Guo ◽  
Rong Chen ◽  
Hui Li ◽  
Jian Gao ◽  
Yaqing Liu

Crowdsourcing carried out by cyber citizens instead of hired consultants and professionals has become an increasingly appealing solution for testing feature-rich, interactive web applications. Despite the availability of various online crowdsourced testing services, the benefits of exposure to a wider audience and of harnessing the collective efforts of individuals remain uncertain, especially because quality control is problematic in an open environment. The objective of this paper is to propose a real-time collaborative testing approach (RCTA) that makes crowdsourced testing productive on a dynamic Internet. We implemented a prototype crowdsourcing system, XTurk, and carried out a case study to understand crowdsourced testers' behavior, their trustworthiness, the execution time of test cases, and the accuracy of feedback. Several experiments were carried out, and the experimental results validate the quality, efficiency, and reliability of the present approach; the positive testing feedback is shown to outperform previous methods.
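The abstract does not publish XTurk's aggregation logic, so the following is only a minimal Python sketch of one common quality-control step in crowdsourced testing: trust-weighted majority voting over testers' pass/fail verdicts. All function names, identifiers, and weights are illustrative assumptions, not the paper's method.

```python
from collections import defaultdict

def aggregate_verdicts(reports, trust):
    """Aggregate crowd testers' pass/fail verdicts per test case by
    trust-weighted majority vote. `reports` maps a test-case id to a
    list of (tester_id, verdict) pairs; `trust` maps tester_id to a
    weight in (0, 1]. Both structures are hypothetical."""
    consensus = {}
    for case_id, votes in reports.items():
        score = defaultdict(float)
        for tester, verdict in votes:
            # Unknown testers get a neutral default weight.
            score[verdict] += trust.get(tester, 0.5)
        consensus[case_id] = max(score, key=score.get)
    return consensus

# Example: two trusted testers outvote one low-trust tester.
reports = {"login-form": [("t1", "fail"), ("t2", "fail"), ("t3", "pass")]}
trust = {"t1": 0.9, "t2": 0.8, "t3": 0.4}
print(aggregate_verdicts(reports, trust))  # {'login-form': 'fail'}
```

In a real-time setting the trust weights themselves would be updated as testers' feedback is validated, which is the kind of feedback loop the RCTA approach targets.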


2016 ◽  
Author(s):  
Anisha Keshavan ◽  
Esha Datta ◽  
Ian McDonough ◽  
Christopher R. Madan ◽  
Kesshi Jordan ◽  
...  

Abstract Tissue classification plays a crucial role in the investigation of normal neural development, brain-behavior relationships, and the disease mechanisms of many psychiatric and neurological illnesses. Ensuring the accuracy of tissue classification is important for quality research and, in particular, the translation of imaging biomarkers to clinical practice. Assessment with the human eye is vital to correct various errors inherent to all currently available segmentation algorithms. Manual quality assurance becomes methodologically difficult at a large scale, a problem of increasing importance as the number of data sets is on the rise. To make this process more efficient, we have developed Mindcontrol, an open-source web application for the collaborative quality control of neuroimaging processing outputs. The Mindcontrol platform consists of a dashboard to organize data, descriptive visualizations to explore the data, an imaging viewer, and an in-browser annotation and editing toolbox for data curation and quality control. Mindcontrol is flexible and can be configured for the outputs of any software package in any data organization structure. Example configurations for three large, open-source datasets are presented: the 1000 Functional Connectomes Project (FCP), the Consortium for Reliability and Reproducibility (CoRR), and the Autism Brain Imaging Data Exchange (ABIDE) Collection. These demo applications link descriptive quality control metrics, regional brain volumes, and thickness scalars to a 3D imaging viewer and editing module, resulting in an easy-to-implement quality control protocol that can be scaled for any size and complexity of study.
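The abstract states that Mindcontrol can be configured for any software package's outputs; the project is reportedly driven by a JSON settings file. The Python sketch below writes a minimal, hypothetical configuration of that kind. Every key name (`modules`, `entry_type`, `staticURL`, `fields`) is an illustrative assumption, not Mindcontrol's documented schema.

```python
import json

# Hypothetical Mindcontrol-style settings file: one dashboard module per
# processing pipeline, with the columns/metrics to display per subject.
settings = {
    "modules": [
        {
            "name": "FreeSurfer QC",               # dashboard section title
            "entry_type": "freesurfer",            # one row per processed subject
            "staticURL": "https://example.org/derivatives/",  # placeholder image server
            "fields": [
                {"id": "subject_id", "name": "Subject"},
                {"id": "quality_check.QC", "name": "QC rating"},
                {"id": "aseg.volume.Left-Hippocampus", "name": "L hippocampus (mm^3)"},
            ],
        }
    ]
}

with open("mindcontrol_settings.json", "w") as f:
    json.dump(settings, f, indent=2)
```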


Author(s):  
Sebastian Nowak ◽  
Maike Theis ◽  
Barbara D. Wichtmann ◽  
Anton Faron ◽  
Matthias F. Froelich ◽  
...  

Abstract
Objectives: To develop a pipeline for automated body composition analysis and skeletal muscle assessment with integrated quality control for large-scale application in opportunistic imaging.
Methods: First, a convolutional neural network for extraction of a single slice at the L3/L4 lumbar level was developed on CT scans of 240 patients using the nnU-Net framework. Second, a 2D competitive dense fully convolutional U-Net for segmentation of visceral and subcutaneous adipose tissue (VAT, SAT) and skeletal muscle (SM), with subsequent determination of the fatty muscle fraction (FMF), was developed on single CT slices of 1143 patients. For both steps, automated quality control was integrated: a logistic regression model classifies the presence of the L3/L4 level, and a linear regression model predicts segmentation quality in terms of the Dice score. To evaluate the performance of the entire pipeline end-to-end, body composition metrics and FMF were compared to manual analyses of 364 patients from two centers.
Results: Excellent results were observed for slice extraction (z-deviation = 2.46 ± 6.20 mm) and segmentation (Dice score for SM = 0.95 ± 0.04, VAT = 0.98 ± 0.02, SAT = 0.97 ± 0.04) on the dual-center test set, excluding cases with artifacts due to metallic implants. No data were excluded for the end-to-end performance analyses. With a restrictive setting of the integrated segmentation quality control, 39 of 364 patients were excluded, including 8 cases with metallic implants. This setting ensured high agreement between manual and fully automated analyses, with mean relative area deviations of ΔSM = 3.3 ± 4.1%, ΔVAT = 3.0 ± 4.7%, ΔSAT = 2.7 ± 4.3%, and ΔFMF = 4.3 ± 4.4%.
Conclusions: This study presents an end-to-end automated deep learning pipeline for large-scale opportunistic assessment of body composition metrics and sarcopenia biomarkers in clinical routine.
Key Points:
• Body composition metrics and skeletal muscle quality can be opportunistically determined from routine abdominal CT scans.
• A pipeline consisting of two convolutional neural networks allows an end-to-end automated analysis.
• Machine-learning-based quality control ensures high agreement between manual and automatic analysis.
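The pipeline's segmentation quality control is described only at a high level: a regression model predicts each case's Dice score, and low-scoring cases are excluded from automated analysis. The Python sketch below shows that gating idea with scikit-learn. The features, training data, and threshold are placeholders, not the paper's actual model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder training data: per-case QC features and observed Dice
# scores. The paper does not specify the regression features.
rng = np.random.default_rng(0)
X_train = rng.random((100, 5))
y_train = 0.90 + 0.08 * rng.random(100)
dice_model = LinearRegression().fit(X_train, y_train)

def gate_segmentations(cases, model, min_dice=0.9):
    """Flag cases whose predicted Dice score falls below `min_dice`
    for manual review and accept the rest, mirroring the paper's
    regression-based QC gate (the threshold is an assumption)."""
    accepted, flagged = [], []
    for case in cases:
        pred = float(model.predict([case["features"]])[0])
        (accepted if pred >= min_dice else flagged).append(case["id"])
    return accepted, flagged

cases = [{"id": "pat-001", "features": rng.random(5)}]
print(gate_segmentations(cases, dice_model))
```

Making the threshold more restrictive trades coverage for agreement with manual analysis, which is exactly the trade-off reported in the Results (39 of 364 cases excluded).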


2020 ◽  
Vol 66 (8) ◽  
pp. 1072-1083 ◽  
Author(s):  
Andreas Bietenbeck ◽  
Mark A Cervinski ◽  
Alex Katayev ◽  
Tze Ping Loh ◽  
Huub H van Rossum ◽  
...  

Abstract
Background: Patient-based real-time quality control (PBRTQC) avoids limitations of traditional quality control methods based on the measurement of stabilized control samples. However, PBRTQC needs to be adapted to individual laboratories through parameters such as the algorithm, truncation, block size, and control limit.
Methods: In a computer simulation, biases were added to real patient results for 10 analytes with diverse properties. Different PBRTQC methods were assessed on their ability to detect these biases early.
Results: The simulation, based on 460,000 historical patient measurements per analyte, yielded several recommendations for PBRTQC. Calculating control limits from "percentiles of daily extremes" led to effective limits and allowed specification of the percentage of days with false alarms; however, changes in the measurement distribution easily increased false alarms. Box–Cox, but not logarithmic, transformation improved error detection. Winsorization of outlying values often performed better than simple outlier removal. For medians and Harrell–Davis 50th percentile estimators (HD50s), no truncation was necessary. Block size influenced medians substantially and HD50s to a lesser extent; conversely, a change of truncation limits affected means and exponentially weighted moving averages more than a change of block size. A large spread of patient measurements impeded error detection, and PBRTQC methods were not always able to detect an allowable bias within the simulated 1000 erroneous measurements. A web application was developed to estimate PBRTQC performance.
Conclusions: Computer simulations can optimize PBRTQC, but some parameter choices are generally superior and can be taken as defaults.
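To make the moving parts concrete (block size, winsorization, control limits), here is a minimal Python sketch of one PBRTQC variant discussed in the abstract: a moving median over winsorized patient results. The fixed control limits and the example values are placeholders; the paper derives limits from historical data (e.g., percentiles of daily extremes) and tunes all parameters per analyte by simulation.

```python
import numpy as np

def pbrtqc_moving_median(results, block_size=50, low=None, high=None,
                         lcl=None, ucl=None):
    """Winsorize incoming patient results, compute a moving median over
    the last `block_size` values, and return the indices where the
    median exits the control limits [lcl, ucl]. In practice lcl/ucl
    would come from historical data, e.g. percentiles of daily extremes."""
    window, alarms = [], []
    for i, x in enumerate(results):
        if low is not None:
            x = max(x, low)    # winsorize instead of discarding outliers
        if high is not None:
            x = min(x, high)
        window.append(x)
        if len(window) > block_size:
            window.pop(0)
        stat = float(np.median(window))
        if len(window) == block_size and (
            (lcl is not None and stat < lcl) or
            (ucl is not None and stat > ucl)):
            alarms.append(i)
    return alarms

# Example: simulated sodium results (mmol/L) with an upward shift
# after the first 200 measurements; values are illustrative only.
rng = np.random.default_rng(1)
results = np.concatenate([rng.normal(140, 3, 200), rng.normal(143, 3, 200)])
print(pbrtqc_moving_median(results, block_size=50, low=120, high=160,
                           lcl=138, ucl=142)[:5])
```

The sketch also illustrates the abstract's point that block size matters for medians: a larger `block_size` smooths the statistic but delays how quickly the shift after index 200 is flagged.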

