Numerical uncertainty in analytical pipelines lead to impactful variability in brain networks

PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0250755
Author(s):  
Gregory Kiar ◽  
Yohan Chatelain ◽  
Pablo de Oliveira Castro ◽  
Eric Petit ◽  
Ariel Rokem ◽  
...  

The analysis of brain-imaging data requires complex processing pipelines to support findings on brain function or pathologies. Recent work has shown that variability in analytical decisions, small amounts of noise, or computational environments can lead to substantial differences in the results, endangering trust in conclusions. We explored the instability of results by instrumenting a structural connectome estimation pipeline with Monte Carlo Arithmetic to introduce random noise throughout. We evaluated the reliability of the connectomes, the robustness of their features, and the eventual impact on analysis. The stability of results was found to range from perfectly stable (i.e., all digits significant) to highly unstable (i.e., 0–1 significant digits). This paper highlights the potential of leveraging induced variance in estimates of brain connectivity to reduce the bias in networks without compromising reliability, alongside increasing the robustness and potential upper bound of their applications in the classification of individual differences. We demonstrate that stability evaluations are necessary for understanding error inherent to brain imaging experiments, and how numerical analysis can be applied to typical analytical workflows both in brain imaging and other domains of computational science, as the techniques used are data- and context-agnostic. Overall, while the extreme variability in results due to analytical instabilities could severely hamper our understanding of brain organization, it also affords us the opportunity to increase the robustness of findings.
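To make the Monte Carlo Arithmetic idea concrete, the sketch below perturbs each floating-point operation with uniform noise at a chosen virtual precision and estimates the number of significant digits from the spread of repeated runs, via s = -log10(σ/|μ|). This is a minimal illustration only: the study instrumented an existing pipeline at the compiler/library level rather than rewriting it, and the names `inexact`, `mca_dot`, and `VIRTUAL_PRECISION` are hypothetical.

```python
# Minimal sketch of Monte Carlo Arithmetic (MCA): NOT the instrumentation
# used in the paper, just an illustration of the underlying technique.
import math
import random

VIRTUAL_PRECISION = 53  # bits of virtual precision ("t" in the MCA literature)

def inexact(x: float) -> float:
    """Perturb x with uniform noise in the last bits of its significand."""
    if x == 0.0:
        return x
    e = math.frexp(x)[1]  # exponent of x, where x = m * 2**e, 0.5 <= |m| < 1
    noise = (random.random() - 0.5) * 2.0 ** (e - VIRTUAL_PRECISION)
    return x + noise

def mca_dot(a, b):
    """Dot product with every intermediate operation randomly perturbed."""
    acc = 0.0
    for ai, bi in zip(a, b):
        acc = inexact(acc + inexact(ai * bi))
    return acc

def significant_digits(samples):
    """Estimate significant digits as s = -log10(sigma/|mu|); assumes mu != 0."""
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / (len(samples) - 1)
    sigma = math.sqrt(var)
    if sigma == 0.0:
        return float("inf")  # perfectly stable: all digits significant
    return -math.log10(sigma / abs(mu))

# Catastrophic cancellation: the exact answer is 1.0, but roughly eight
# digits are lost to the intermediate magnitudes near 1e8.
a = [1e8, 1.0, -1e8]
b = [1.0, 1.0, 1.0]
runs = [mca_dot(a, b) for _ in range(100)]
print(f"~{significant_digits(runs):.1f} significant digits")
```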

2020 ◽  
Author(s):  
Gregory Kiar ◽  
Yohan Chatelain ◽  
Pablo de Oliveira Castro ◽  
Eric Petit ◽  
Ariel Rokem ◽  
...  

Abstract: The analysis of brain-imaging data requires complex processing pipelines to support findings on brain function or pathologies. Recent work has shown that variability in analytical decisions can lead to substantial differences in the results, endangering trust in conclusions [1–7]. We explored the instability of results by instrumenting a connectome estimation pipeline with Monte Carlo Arithmetic [8,9] to introduce random noise throughout. We evaluated the reliability of the connectomes, their features [10,11], and the impact on analysis [12,13]. The stability of results was found to range from perfectly stable to highly unstable. This paper highlights the potential of leveraging induced variance in estimates of brain connectivity to reduce the bias in networks alongside increasing the robustness of their applications in the classification of individual differences. We demonstrate that stability evaluations are necessary for understanding error inherent to scientific computing, and how numerical analysis can be applied to typical analytical workflows. Overall, while the extreme variability in results due to analytical instabilities could severely hamper our understanding of brain organization, it also leads to an increase in the reliability of datasets.
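As a complementary sketch of how such perturbed outputs can be used, the snippet below summarizes per-edge stability of a stack of connectomes from repeated perturbed runs with the same significant-digits measure, and averages the runs into a variance-reduced consensus network. The array shapes and the synthetic data are assumptions for illustration; the paper's actual pipeline and I/O are not reproduced.

```python
# Illustrative only: per-edge stability map and consensus network from a
# stack of connectomes produced by repeated MCA-perturbed pipeline runs.
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_nodes = 20, 10
base = rng.random((n_nodes, n_nodes))
base = (base + base.T) / 2  # synthetic stand-in for a "true" connectome
# Simulate small run-to-run numerical perturbations around it.
connectomes = base + 1e-6 * rng.standard_normal((n_runs, n_nodes, n_nodes))

mu = connectomes.mean(axis=0)
sigma = connectomes.std(axis=0, ddof=1)

# Per-edge significant digits, s = -log10(sigma/|mu|), clipped to [0, 16].
with np.errstate(divide="ignore", invalid="ignore"):
    sig_digits = np.clip(-np.log10(sigma / np.abs(mu)), 0, 16)

consensus = mu  # averaging perturbed runs reduces variance in the estimate
least_stable = np.unravel_index(np.nanargmin(sig_digits), sig_digits.shape)
print("least stable edge:", least_stable)
```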


Author(s):  
Tewodros Mulugeta Dagnew ◽  
Letizia Squarcina ◽  
Massimo W. Rivolta ◽  
Paolo Brambilla ◽  
Roberto Sassi

2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Min Song ◽  
Minseok Kang ◽  
Hyeonsu Lee ◽  
Yong Jeong ◽  
Se-Bum Paik

2021 ◽  
Vol 15 ◽  
Author(s):  
Laura Tomaz Da Silva ◽  
Nathalia Bianchini Esper ◽  
Duncan D. Ruiz ◽  
Felipe Meneguzzi ◽  
Augusto Buchweitz

Problem: Brain imaging studies of mental health and neurodevelopmental disorders have recently included machine learning approaches to identify patients based solely on their brain activation. The goal is to identify brain-related features that generalize from smaller samples of data to larger ones; in the case of neurodevelopmental disorders, finding these patterns can help understand differences in brain function and development that underpin early signs of risk for developmental dyslexia. The success of machine learning classification algorithms on neurofunctional data has been limited to typically homogeneous data sets of a few dozen participants. More recently, larger brain imaging data sets have allowed deep learning techniques to classify brain states and clinical groups solely from neurofunctional features. Deep learning techniques can provide helpful tools for classification in healthcare applications, including the classification of structural 3D brain images. The adoption of deep learning approaches yields incremental improvements in classification performance on larger functional brain imaging data sets, but still lacks diagnostic insight into the brain mechanisms underlying the disorders; a related challenge is providing clinically relevant explanations from the neural features that inform classification.

Methods: We target this challenge by leveraging two network visualization techniques in the convolutional neural network layers responsible for learning high-level features. Using such techniques, we are able to provide meaningful images for expert-backed insights into the condition being classified. We address this challenge using a dataset that includes children diagnosed with developmental dyslexia and typical reader children.

Results: Our results show accurate classification of developmental dyslexia (94.8%) from the brain imaging alone, while providing automatic visualizations of the features involved that match contemporary neuroscientific knowledge (brain regions involved in the reading process for the dyslexic reader group, and brain regions associated with strategic control and attention processes for the typical reader group).

Conclusions: Our visual explanations of deep learning models turn their accurate yet opaque conclusions into evidence about the condition being studied.
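The abstract does not spell out which two visualization techniques were used; as one common member of this family of methods, the sketch below computes a Grad-CAM-style relevance volume for a toy 3D CNN in PyTorch. The model, layer choice, and input shape are hypothetical stand-ins, not the authors' architecture.

```python
# Illustrative Grad-CAM-style visualization for a 3D brain-volume classifier.
# The toy model below is a hypothetical stand-in, not the paper's network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),  # last conv block
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 2),
)

acts, grads = {}, {}

def save_act(module, inputs, output):
    acts["a"] = output          # feature maps of the high-level conv layer

def save_grad(module, grad_input, grad_output):
    grads["g"] = grad_output[0]  # gradient of the class score w.r.t. those maps

layer = model[2]                 # the last convolutional layer
layer.register_forward_hook(save_act)
layer.register_full_backward_hook(save_grad)

x = torch.randn(1, 1, 32, 32, 32)   # one synthetic brain volume
score = model(x)[0, 1]              # logit of the "dyslexic reader" class
score.backward()

weights = grads["g"].mean(dim=(2, 3, 4), keepdim=True)  # GAP of gradients
cam = torch.relu((weights * acts["a"]).sum(dim=1))      # weighted feature maps
cam = cam / cam.max().clamp_min(1e-8)                   # normalize to [0, 1]
print(cam.shape)  # (1, 32, 32, 32): voxel relevance map to overlay on the scan
```

In practice the relevance volume would be upsampled to the scan's native resolution and overlaid on the anatomy, which is what lets domain experts check whether the highlighted regions match known reading-network anatomy.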


Author(s):  
Annamária Szenkovits ◽  
Regina Meszlényi ◽  
Krisztian Buza ◽  
Noémi Gaskó ◽  
Rodica Ioana Lung ◽  
...  

2021 ◽  
Vol 14 ◽  
Author(s):  
Carlo Mengucci ◽  
Daniel Remondini ◽  
Gastone Castellani ◽  
Enrico Giampieri

WISDoM (Wishart Distributed Matrices) is a framework for quantifying the deviation of symmetric positive-definite matrices associated with experimental samples, such as covariance or correlation matrices, from expected ones governed by the Wishart distribution. WISDoM can be applied to supervised learning tasks such as classification, in particular when such matrices are generated from data of different dimensionality (e.g., time series with the same number of variables but different time sampling). We show the application of the method in two different scenarios. The first is the ranking of features associated with electroencephalogram (EEG) data in a time series design, providing a theoretically sound approach for this type of study. The second is the classification of autistic subjects from the Autism Brain Imaging Data Exchange (ABIDE) study using brain connectivity measurements.
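A minimal sketch of the underlying idea, assuming SciPy's Wishart distribution: estimate a class-wise scale matrix from training scatter matrices (using E[W] = ν·Σ, so Σ ≈ mean scatter / ν), then classify a held-out matrix by its Wishart log-likelihood ratio between classes. The scale estimator and the synthetic two-class setup are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative Wishart-likelihood classification of covariance-like matrices.
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(1)
p, df = 5, 40   # variables per time series, degrees of freedom (time points)

def sample_scatter(scale):
    """Simulate one subject's scatter matrix from a class-specific scale."""
    return wishart(df=df, scale=scale).rvs(random_state=rng)

scale_a = np.eye(p)
scale_b = np.eye(p) + 0.3        # class B: uniformly stronger correlations

train_a = [sample_scatter(scale_a) for _ in range(30)]
train_b = [sample_scatter(scale_b) for _ in range(30)]

# Estimate each class's scale matrix: E[W] = df * scale  =>  scale = mean / df.
est_a = np.mean(train_a, axis=0) / df
est_b = np.mean(train_b, axis=0) / df

def log_likelihood_ratio(S):
    """Positive favors class A, negative favors class B."""
    return (wishart(df=df, scale=est_a).logpdf(S)
            - wishart(df=df, scale=est_b).logpdf(S))

test = sample_scatter(scale_b)
print("LLR:", log_likelihood_ratio(test))  # expected negative: class B
```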


2021 ◽  
Author(s):  
Elise Bannier ◽  
Gareth Barker ◽  
Valentina Borghesani ◽  
Nils Broeckx ◽  
Patricia Clement ◽  
...  
