Role of brain imaging in disorders of brain–gut interaction: a Rome Working Team Report

Gut ◽  
2019 ◽  
Vol 68 (9) ◽  
pp. 1701-1715 ◽  
Author(s):  
Emeran A Mayer ◽  
Jennifer Labus ◽  
Qasim Aziz ◽  
Irene Tracey ◽  
Lisa Kilpatrick ◽  
...  

Imaging of the living human brain is a powerful tool to probe the interactions between brain, gut and microbiome in health and in disorders of brain–gut interactions, in particular IBS. While altered signals from the viscera contribute to clinical symptoms, the brain integrates these interoceptive signals with emotional, cognitive and memory related inputs in a non-linear fashion to produce symptoms. Tremendous progress has occurred in the development of new imaging techniques that look at structural, functional and metabolic properties of brain regions and networks. Standardisation in image acquisition and advances in computational approaches have made it possible to study large data sets of imaging studies, identify network properties and integrate them with non-imaging data. These approaches are beginning to generate brain signatures in IBS that share some features with those obtained in other often overlapping chronic pain disorders such as urological pelvic pain syndromes and vulvodynia, suggesting shared mechanisms. Despite this progress, the identification of preclinical vulnerability factors and outcome predictors has been slow. To overcome current obstacles, the creation of consortia and the generation of standardised multisite repositories for brain imaging and metadata from multisite studies are required.

2021 ◽  
Vol 15 ◽  
Author(s):  
Laura Tomaz Da Silva ◽  
Nathalia Bianchini Esper ◽  
Duncan D. Ruiz ◽  
Felipe Meneguzzi ◽  
Augusto Buchweitz

Problem: Brain imaging studies of mental health and neurodevelopmental disorders have recently included machine learning approaches to identify patients based solely on their brain activation. The goal is to identify brain-related features that generalize from smaller samples of data to larger ones; in the case of neurodevelopmental disorders, finding these patterns can help us understand the differences in brain function and development that underpin early signs of risk for developmental dyslexia. The success of machine learning classification algorithms on neurofunctional data has been limited to typically homogeneous data sets of a few dozen participants. More recently, larger brain imaging data sets have allowed deep learning techniques to classify brain states and clinical groups solely from neurofunctional features. Indeed, deep learning techniques can provide helpful tools for classification in healthcare applications, including the classification of structural 3D brain images. The adoption of deep learning approaches allows for incremental improvements in classification performance on larger functional brain imaging data sets, but still lacks diagnostic insight into the underlying brain mechanisms associated with disorders; a related challenge is providing more clinically relevant explanations from the neural features that inform classification.

Methods: We target this challenge by leveraging two network visualization techniques in the convolutional neural network layers responsible for learning high-level features. Using such techniques, we are able to provide meaningful images for expert-backed insights into the condition being classified. We address this challenge using a dataset that includes children diagnosed with developmental dyslexia and typical reader children.

Results: Our results show accurate classification of developmental dyslexia (94.8%) from the brain imaging alone, while providing automatic visualizations of the features involved that match contemporary neuroscientific knowledge (brain regions involved in the reading process for the dyslexic reader group, and brain regions associated with strategic control and attention processes for the typical reader group).

Conclusions: Our visual explanations of deep learning models turn the accurate yet opaque conclusions of the models into evidence for the condition being studied.
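The abstract does not specify which two visualization techniques were used, but the general idea of attributing a classifier's decision to image regions can be sketched with occlusion sensitivity mapping, a classic CNN-inspection method. The sketch below is a minimal NumPy version in which `score_fn` is a hypothetical stand-in for a trained network's class score:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    # Slide a zero patch across the image; the score drop at each
    # location indicates how much the classifier relied on that region.
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy stand-in for a trained classifier: it scores by mean intensity of
# the top-left quadrant, so that region should dominate the heat map.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
score = lambda x: float(x[:8, :8].mean())
heat = occlusion_map(img, score)
```

Applied to a real network, the resulting heat map is the kind of "meaningful image" an expert can inspect against known reading-related brain regions.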


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Michele Allegra ◽  
Elena Facco ◽  
Francesco Denti ◽  
Alessandro Laio ◽  
Antonietta Mira

Abstract One of the founding paradigms of machine learning is that a small number of variables is often sufficient to describe high-dimensional data. The minimum number of variables required is called the intrinsic dimension (ID) of the data. Contrary to common intuition, there are cases where the ID varies within the same data set. This fact has been highlighted in technical discussions, but seldom exploited to analyze large data sets and obtain insight into their structure. Here we develop a robust approach to discriminate regions with different local IDs and segment the points accordingly. Our approach is computationally efficient and can be proficiently used even on large data sets. We find that many real-world data sets contain regions with widely heterogeneous dimensions. These regions host points differing in core properties: folded versus unfolded configurations in a protein molecular dynamics trajectory, active versus non-active regions in brain imaging data, and firms with different financial risk in company balance sheets. A simple topological feature, the local ID, is thus sufficient to achieve an unsupervised segmentation of high-dimensional data, complementary to the one given by clustering algorithms.


2021 ◽  
Author(s):  
Karl Magtibay

Ventricular Fibrillation (VF) has been described as seemingly random activation of the ventricles of the mammalian heart and is one of the causes of Sudden Cardiac Death (SCD). Medical imaging techniques such as Magnetic Resonance Imaging (MRI) could provide a better way of collecting data and understanding the true nature of VF than the techniques currently employed. In addition, given the wide variety of MR techniques, fusing and jointly analyzing complementary data sets could provide parameters that are informative for studying VF and are otherwise unobservable by inspection. In this thesis, the author explores the combination of two MRI techniques, Current Density Imaging (CDI) and Diffusion Tensor Imaging (DTI), as a novel tool for studying VF. This was accomplished with two feature-based data fusion techniques, Joint Independent Component Analysis (jICA) and Canonical Correlation Analysis (CCA). Using 12 imaging data sets from 10 live porcine heart experiments, both data fusion techniques provided unique ways of using the variations in the CDI and DTI data sets to distinguish cardiac states. The jICA approach discriminated between VF and non-VF subjects (p = 0.020) using the jICA loadings, with evidence of a significant increase in mutual information post fusion. The CCA approach, using the pairwise mixing profiles, discriminated between VF and non-VF subjects (p = 0.023) with a 7.25% increase in average correlation between the modalities post fusion. The results demonstrate that fusing the CDI and DTI data sets captures and enhances variations in electrical current pathways, in relation to myocardial structure, that are unique to a cardiac state such as VF. This study serves as a strong precursor for exploring MRI and data fusion techniques in studying VF; such work could provide greater insight into VF characteristics, inspiring better treatment options for patients vulnerable to VF.
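The CCA half of this fusion strategy can be illustrated with a small numerical sketch. The `cdi` and `dti` matrices below are hypothetical stand-ins for per-subject imaging features, sharing a single latent state (analogous to a cardiac state driving both modalities); this is not the thesis pipeline, just the core CCA computation via QR and SVD:

```python
import numpy as np

def cca_correlations(X, Y):
    # Canonical correlations between two modality matrices
    # (rows = subjects, columns = features). After centering, the
    # singular values of Qx.T @ Qy are the canonical correlations.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(2)
shared = rng.standard_normal((12, 1))  # latent state common to both modalities
cdi = shared @ rng.standard_normal((1, 6)) + 0.1 * rng.standard_normal((12, 6))
dti = shared @ rng.standard_normal((1, 5)) + 0.1 * rng.standard_normal((12, 5))
rho = cca_correlations(cdi, dti)
# The first canonical correlation should be high because of the shared latent.
```

The increase in cross-modal correlation reported post fusion corresponds to this kind of leading canonical correlation capturing variation common to both data sets.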


2020 ◽  
Vol 10 (9) ◽  
pp. 578
Author(s):  
Lauren C. Smith ◽  
Adam Kimbrough

Alcohol use disorder is a pervasive healthcare issue with significant socioeconomic consequences. There is a plethora of neural imaging techniques available at the clinical and preclinical level, including magnetic resonance imaging and three-dimensional (3D) tissue imaging techniques. Network-based approaches can be applied to imaging data to create neural networks that model the functional and structural connectivity of the brain. These networks can be used to examine changes in brain-wide neural signaling caused by brain states associated with alcohol use. Neural networks can be further used to identify key brain regions or neural “hubs” involved in alcohol drinking. Here, we briefly review the current imaging and neurocircuit manipulation methods. Then, we discuss clinical and preclinical studies using network-based approaches related to substance use disorders and alcohol drinking. Finally, we discuss how preclinical 3D imaging in combination with network approaches can be applied alone and in combination with other approaches to better understand alcohol drinking.


2019 ◽  
Vol 92 (1101) ◽  
pp. 20180910 ◽  
Author(s):  
Ashley N. Anderson ◽  
Jace B. King ◽  
Jeffrey S Anderson

Neuroimaging has been a dominant force in guiding research into psychiatric and neurodevelopmental disorders for decades, yet researchers have been unable to formulate sensitive or specific imaging tests for these conditions. The search for neuroimaging biomarkers has been constrained by limited reproducibility of imaging techniques, limited tools for evaluating neurochemistry, heterogeneity of patient populations not defined by brain-based phenotypes, limited exploration of temporal components of brain function, and relatively few studies evaluating developmental and longitudinal trajectories of brain function. Opportunities for development of clinically impactful imaging metrics include longer duration functional imaging data sets, new engineering approaches to mitigate suboptimal spatiotemporal resolution, improvements in image post-processing and analysis strategies, big data approaches combined with data sharing of multisite imaging samples, and new techniques that allow dynamical exploration of brain function across multiple timescales. Despite narrow clinical impact of neuroimaging methods, there is reason for optimism that imaging will contribute to diagnosis, prognosis, and treatment monitoring for psychiatric and neurodevelopmental disorders in the near future.


2009 ◽  
Vol 21 (S2) ◽  
pp. 65-66
Author(s):  
Umberto Volpe

Abstract: Electroencephalography probably represents the first modern and scientifically sound attempt to functionally explore the in vivo activity of the human brain, and it has long attracted the attention of psychiatrists, from both the clinical and the research viewpoint. Probably owing to their traditionally low spatial resolution, the use of psychophysiological techniques in psychiatry has not been continuous over the last century; however, the availability of newer EEG-based brain imaging techniques has recently renewed interest (1). Furthermore, recent theories propose that psychopathology may result from a failure to integrate the activity of the different areas involved in cognitive processes, rather than from the impairment of one or more brain areas (2); within this view, a reliable brain imaging tool should be able to explore the dynamics of complex interactions among brain regions, with high sensitivity to subtle deviations in complex processes that last fractions of a second. Psychophysiological techniques indeed offer the possibility to explore the functional correlates of major psychiatric illnesses, as well as to understand the effects of psychotropic drugs on the central nervous system, with incomparable time resolution. Finally, the recent technical possibility of combining different brain imaging approaches has further fostered renewed enthusiasm toward the use of EEG-based techniques in psychiatry. This contribution will provide a historical overview of EEG-based brain imaging techniques and an update on recent advances in the use of such techniques within the psychiatric field. Finally, some examples of psychophysiological and 'multimodal' imaging investigations in subjects with different psychiatric conditions will be provided.


2006 ◽  
Vol 2 (14) ◽  
pp. 592-592
Author(s):  
Paresh Prema ◽  
Nicholas A. Walton ◽  
Richard G. McMahon

Observational astronomy is entering an exciting new era, with large surveys delivering deep multi-wavelength data over a wide range of the electromagnetic spectrum. The last ten years have seen a growth in the study of high-redshift galaxies discovered with the method pioneered by Steidel et al. (1995) to identify galaxies at z > 1. The technique is designed to take advantage of the multi-wavelength data now available to astronomers, which can extend from X-rays to radio wavelengths. It is fast becoming a useful way to study large samples of objects at these high redshifts, and we are currently designing and implementing an automated technique to study such samples. However, large surveys produce large data sets, which have now reached terabytes in size (e.g. the Sloan Digital Sky Survey, <http://www.sdss.org>) and will reach petabytes over the next 10 years (e.g. LSST, <http://www.lsst.org>). The Virtual Observatory now provides a means to deal with this issue, and users are able to access many data sets in a quicker, more useful form.


2017 ◽  
Vol 24 (1) ◽  
pp. 84-96 ◽  
Author(s):  
Luke A. Henderson ◽  
Kevin A. Keay

While acute pain serves as a protective mechanism designed to warn an individual of potential or actual damaging stimuli, chronic pain provides no benefit and is now considered a disease in its own right. Since the advent of human brain imaging techniques, many investigations that have explored the central representation of acute and chronic pain have focused on changes in higher order brain regions. In contrast, far fewer have explored brainstem and spinal cord function, mainly due to significant technical difficulties. In this review, we present some of the recent human brain imaging studies that have specifically explored brainstem and spinal cord function during acute noxious stimuli and in individuals with chronic pain. We focus particularly on investigations that explore changes in areas that receive nociceptor afferents and compare humans and experimental animal data in an attempt to describe both microscopic and macroscopic changes associated with acute and chronic pain.


2019 ◽  
Author(s):  
Noah Lewis ◽  
Harshvardhan Gazula ◽  
Sergey M. Plis ◽  
Vince D. Calhoun

Abstract

Background: In this age of big data, large data stores allow researchers to compose robust models that are accurate and informative. In many cases, the data are stored in separate locations, requiring data transfer between local sites, which can raise various practical hurdles such as privacy concerns or heavy network load. This is especially true for medical imaging data, which can be constrained by the Health Insurance Portability and Accountability Act (HIPAA). Medical imaging datasets can also contain many thousands or millions of features, requiring heavy network load.

New Method: Our research expands upon current decentralized classification research by implementing a new single-shot method for both neural networks and support vector machines. Our approach is to estimate the statistical distribution of the data at each local site and pass this information to the other local sites, where each site resamples from the individual distributions and trains a model on both the locally available data and the resampled data.

Results: We show applications of our approach to handwritten digit classification as well as to multi-subject classification of brain imaging data collected from patients with schizophrenia and healthy controls. Overall, the results show classification accuracy comparable to the centralized model, with lower network load than multi-shot methods.

Comparison with Existing Methods: Many decentralized classifiers are multi-shot, requiring heavy network traffic. Our model attempts to alleviate this load while preserving prediction accuracy.

Conclusions: Our proposed approach performs comparably to a centralized approach while minimizing network traffic compared to multi-shot methods.

Highlights: A novel yet simple approach to decentralized classification; reduces total network load compared to current multi-shot algorithms; maintains prediction accuracy comparable to the centralized approach.
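The single-shot scheme described above can be sketched end to end with a toy example. The abstract's method trains neural networks and SVMs; here a nearest-centroid classifier and per-class Gaussian summaries stand in for those, purely to illustrate the share-summary / resample / train-locally pattern:

```python
import numpy as np

def site_summary(X, y):
    # What a site shares in a single shot: per-class feature means and
    # (diagonal) standard deviations -- never the raw subject data.
    return {int(c): (X[y == c].mean(axis=0), X[y == c].std(axis=0))
            for c in np.unique(y)}

def resample(summary, n, rng):
    # Another site draws synthetic samples from the shared distributions.
    Xs, ys = [], []
    for c, (mu, sd) in summary.items():
        Xs.append(rng.standard_normal((n, mu.size)) * sd + mu)
        ys.append(np.full(n, c))
    return np.vstack(Xs), np.concatenate(ys)

def fit_centroids(X, y):
    # Stand-in for the local model (the paper uses NNs and SVMs).
    return {int(c): X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

rng = np.random.default_rng(4)
# Site A holds only class-0 data; site B holds only class-1 data.
Xa, ya = rng.standard_normal((80, 2)), np.zeros(80, int)
Xb, yb = rng.standard_normal((80, 2)) + 3.0, np.ones(80, int)

# One round of communication: B shares its summary; A resamples and
# trains on its local data plus the synthetic class-1 samples.
Xsyn, ysyn = resample(site_summary(Xb, yb), 80, rng)
model = fit_centroids(np.vstack([Xa, Xsyn]), np.concatenate([ya, ysyn]))
```

Only the summary dictionary crosses the network, which is why the method needs a single round of communication rather than the repeated exchanges of multi-shot schemes.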


2021 ◽  
Author(s):  
Christoph von Hagke

Understanding the formation of mountain belts requires integrating quantitative insights on multiple scales. While this has long been known, it is now possible to enlarge the scales of observation by exploiting global data sets, making use of data sets covering large regions, or including automated data analysis. At the same time, the lower limit of observation is being pushed farther: structures can now be routinely analyzed at the micro- or even nano-scale over large areas using digital imaging techniques.

In this talk I will present results from a variety of geological settings illustrating the use of large data sets for a better understanding of mountain-belt dynamics. To this end, I will integrate microstructural work, numerical and analog models, and regional studies of fault geometries and their time evolution, constrained by digital field techniques and low-temperature thermochronometry. A particular focus will be laid on the role of mechanical heterogeneity and strain localization through time. It is shown that in some regions geodynamic processes are responsible for local fault geometries, while in others much more local factors, such as the rheological contrasts of individual layers or even changes in rheology through time, play a major role. Multiscale studies that exploit digital techniques and include the dimension of time provide an exciting avenue for state-of-the-art and future geological studies.

