Single-participant structural similarity matrices lead to greater accuracy in classification of participants than function in autism in MRI

2021 ◽ Vol 12 (1) ◽ Author(s): Matthew J. Leming ◽ Simon Baron-Cohen ◽ John Suckling

Abstract

Background: Autism has previously been characterized by both structural and functional differences in brain connectivity. However, while the literature on single-subject derivations of functional connectivity is extensively developed, similar methods of structural connectivity or similarity derivation from T1 MRI are less studied.

Methods: We introduce a technique for deriving symmetric similarity matrices from regional histograms of grey matter volumes estimated from T1-weighted MRIs. We then validated the technique by inputting the similarity matrices into a convolutional neural network (CNN) to classify between participants with autism and age-, motion-, and intracranial-volume-matched controls from six different databases (29,288 total connectomes, mean age = 30.72, range 0.42–78.00, including 1555 subjects with autism). We compared this method to similar classifications of the same participants using fMRI connectivity matrices as well as univariate estimates of grey matter volumes. We further applied graph-theoretical metrics to the output class activation maps to identify areas of the matrices that the CNN preferentially used to make the classification, focusing particularly on hubs.

Limitations: While this study used a large sample size, the majority of data was from a young age group; furthermore, to make a viable machine learning study, we treated autism, a highly heterogeneous condition, as a binary label. Thus, these results are not necessarily generalizable to all subtypes and age groups in autism.

Results: Our models gave AUROCs of 0.7298 (69.71% accuracy) when classifying by structural similarity alone, 0.6964 (67.72% accuracy) when classifying by functional connectivity alone, and 0.7037 (66.43% accuracy) when classifying by univariate grey matter volumes. Combining structural similarity and functional connectivity gave an AUROC of 0.7354 (69.40% accuracy). Analysis of classification performance across age revealed the greatest accuracy in adolescents, where most of the data were concentrated. Graph analysis of class activation maps revealed no distinguishable network patterns for functional inputs, but did reveal localized differences between groups in bilateral Heschl's gyrus and the upper vermis for structural similarity.

Conclusion: This study provides a simple means of feature extraction for inputting large numbers of structural MRIs into machine learning models. Our methods revealed a unique emphasis of the deep learning model on the structure of the bilateral Heschl's gyrus when characterizing autism.
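For readers who want a concrete picture of the feature extraction described in the Methods, the following is a minimal sketch of building a symmetric region-by-region similarity matrix from regional grey-matter histograms. The region names, histogram settings, and the use of Pearson correlation as the similarity measure are illustrative assumptions; the abstract does not specify the exact metric or atlas used.

```python
import numpy as np

def regional_histograms(gm_voxel_values, n_bins=64, value_range=(0.0, 1.0)):
    """Build one normalised grey-matter histogram per region.

    gm_voxel_values: dict mapping region name -> 1-D array of voxel-wise
    grey-matter values (e.g. a probabilistic segmentation) for one participant.
    """
    hists = {}
    for region, values in gm_voxel_values.items():
        h, _ = np.histogram(values, bins=n_bins, range=value_range, density=True)
        hists[region] = h
    return hists

def similarity_matrix(hists):
    """Symmetric region-by-region matrix of histogram similarities.

    Pearson correlation between histograms is used here only as a stand-in
    similarity measure; the paper's exact metric is not given in the abstract.
    """
    regions = sorted(hists)
    H = np.vstack([hists[r] for r in regions])   # (n_regions, n_bins)
    sim = np.corrcoef(H)                         # symmetric, ones on the diagonal
    return regions, sim

# Toy usage with random data standing in for a real T1-derived segmentation.
rng = np.random.default_rng(0)
fake_gm = {f"region_{i:02d}": rng.beta(2, 5, size=500) for i in range(10)}
regions, sim = similarity_matrix(regional_histograms(fake_gm))
print(sim.shape)  # (10, 10) matrix, the kind of input fed to the downstream CNN
```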

2015 ◽ Vol 25 (03) ◽ pp. 1550007 ◽ Author(s): Darya Chyzhyk ◽ Manuel Graña ◽ Dost Öngür ◽ Ann K. Shinn

Auditory hallucinations (AH) are a symptom that is most often associated with schizophrenia, but patients with other neuropsychiatric conditions, and even a small percentage of healthy individuals, may also experience AH. Elucidating the neural mechanisms underlying AH in schizophrenia may offer insight into the pathophysiology associated with AH more broadly across multiple neuropsychiatric disease conditions. In this paper, we address the problem of classifying schizophrenia patients with and without a history of AH, and healthy control (HC) subjects. To this end, we performed feature extraction from resting state functional magnetic resonance imaging (rsfMRI) data and applied machine learning classifiers, testing two kinds of neuroimaging features: (a) functional connectivity (FC) measures computed by lattice auto-associative memories (LAAM), and (b) local activity (LA) measures, including regional homogeneity (ReHo) and fractional amplitude of low frequency fluctuations (fALFF). We show that it is possible to perform classification within each pair of subject groups with high accuracy. Discrimination between patients with and without lifetime AH was highest, while discrimination between schizophrenia patients and HC participants was worst, suggesting that classification according to the symptom dimension of AH may be more valid than discrimination on the basis of traditional diagnostic categories. FC measures seeded in right Heschl's gyrus (RHG) consistently showed stronger discriminative power than those seeded in left Heschl's gyrus (LHG), a finding that appears to support AH models focusing on right hemisphere abnormalities. The cortical brain localizations derived from the features with strong classification performance are consistent with proposed AH models, and include left inferior frontal gyrus (IFG), parahippocampal gyri, the cingulate cortex, as well as several temporal and prefrontal cortical brain regions. Overall, the observed findings suggest that computational intelligence approaches can provide robust tools for uncovering subtleties in complex neuroimaging data, and have the potential to advance the search for more neuroscience-based criteria for classifying mental illness in psychiatry research.
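As an illustration of the kind of local-activity feature extraction and classification pipeline described above, here is a minimal sketch that computes fALFF per ROI and feeds the resulting features to a linear SVM. The data, TR, frequency band, and classifier choice are placeholders; this is not the authors' LAAM-based pipeline, which is a distinct associative-memory method.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def falff(timeseries, tr, low=0.01, high=0.08):
    """Fractional ALFF: amplitude in the low-frequency band divided by the
    amplitude over the full spectrum, for one ROI time series."""
    ts = timeseries - timeseries.mean()
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    amp = np.abs(np.fft.rfft(ts))
    band = (freqs >= low) & (freqs <= high)
    return amp[band].sum() / (amp[1:].sum() + 1e-12)  # skip the DC term

# Hypothetical inputs: one rsfMRI ROI time-series matrix per subject
# (n_timepoints x n_rois) plus a binary label (e.g. AH vs non-AH).
rng = np.random.default_rng(1)
subjects = [rng.standard_normal((200, 90)) for _ in range(40)]
labels = np.array([0] * 20 + [1] * 20)

# One fALFF value per ROI per subject -> (n_subjects, n_rois) feature matrix.
X = np.array([[falff(s[:, r], tr=2.0) for r in range(s.shape[1])] for s in subjects])
scores = cross_val_score(SVC(kernel="linear"), X, labels, cv=5)
print(scores.mean())  # cross-validated accuracy on the toy data
```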


2013 ◽ Vol 143 (2-3) ◽ pp. 260-268 ◽ Author(s): Ann K. Shinn ◽ Justin T. Baker ◽ Bruce M. Cohen ◽ Dost Öngür

2020 ◽ Vol 2 (2) ◽ Author(s): Lv Han ◽ Zhao Pengfei ◽ Liu Chunli ◽ Wang Zhaodi ◽ Wang Xindi ◽ ...

Abstract

To determine the neural mechanism underlying the effects of sound therapy on tinnitus, we hypothesized that sound therapy may be effective by modulating both local neural activity and functional connectivity associated with auditory perception, auditory information storage, or emotional processing. In this prospective observational study, 30 tinnitus patients underwent resting-state functional magnetic resonance imaging scans at baseline and after 12 weeks of sound therapy. Thirty-two age- and gender-matched healthy controls also underwent two scans over a 12-week interval; 30 of these healthy controls were enrolled for data analysis. The amplitude of low-frequency fluctuation and seed-based functional connectivity were analysed to determine whether sound therapy altered spontaneous local brain activity and its connections to other brain regions. Group-by-time interaction effects on local neural activity, as assessed by the amplitude of low-frequency fluctuation, were observed in the left parahippocampal gyrus and the right Heschl's gyrus. Importantly, local functional activity in the left parahippocampal gyrus in the patient group was significantly higher than that in the healthy controls at baseline and was reduced to relatively normal levels after treatment. Conversely, activity in the right Heschl's gyrus was significantly increased and extended beyond a relatively normal range after sound therapy. These changes were positively correlated with tinnitus relief. The functional connectivity between the left parahippocampal gyrus and the cingulate cortex was higher in tinnitus patients after treatment. The alterations of local activity and functional connectivity in the left parahippocampal gyrus and right Heschl's gyrus were associated with tinnitus relief. Resting-state functional magnetic resonance imaging can provide functional information to explain and ‘visualize’ the mechanism underlying the effect of sound therapy on the brain.
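To make the seed-based functional connectivity analysis concrete, the sketch below computes Fisher-z connectivity from a single seed region and runs a paired comparison between baseline and post-treatment scans. The ROI count, seed index, scan lengths, and random placeholder data are assumptions for illustration only and do not reproduce the authors' preprocessing or statistics.

```python
import numpy as np
from scipy import stats

def seed_fc(roi_ts, seed_index):
    """Seed-based functional connectivity: Fisher-z transformed Pearson
    correlation between the seed ROI time series and every ROI.

    roi_ts: array of shape (n_timepoints, n_rois).
    """
    seed = roi_ts[:, seed_index]
    r = np.array([np.corrcoef(seed, roi_ts[:, j])[0, 1] for j in range(roi_ts.shape[1])])
    r = np.clip(r, -0.999999, 0.999999)   # keep arctanh finite at the seed itself
    return np.arctanh(r)                  # Fisher z

# Hypothetical paired comparison: FC from a single seed (standing in for the
# left parahippocampal gyrus) before and after 12 weeks of sound therapy,
# in 30 patients, with random data as a placeholder.
rng = np.random.default_rng(2)
n_rois, seed = 90, 40
baseline = np.array([seed_fc(rng.standard_normal((180, n_rois)), seed) for _ in range(30)])
followup = np.array([seed_fc(rng.standard_normal((180, n_rois)), seed) for _ in range(30)])

mask = np.arange(n_rois) != seed                       # exclude the seed-to-seed entry
t, p = stats.ttest_rel(followup[:, mask], baseline[:, mask], axis=0)
print(p.min())  # candidate connections for correction and correlation with tinnitus relief
```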


Author(s): Hernán C. Külsgaard ◽ José I. Orlando ◽ Mariana Bendersky ◽ Juan P. Princich ◽ Luis S.R. Manzanera ◽ ...

1999 ◽ Vol 82 (5) ◽ pp. 2346-2357 ◽ Author(s): Mitchell Steinschneider ◽ Igor O. Volkov ◽ M. Daniel Noh ◽ P. Charles Garell ◽ Matthew A. Howard

Voice onset time (VOT) is an important parameter of speech that denotes the time interval between consonant onset and the onset of low-frequency periodicity generated by rhythmic vocal cord vibration. Voiced stop consonants (/b/, /g/, and /d/) in syllable-initial position are characterized by short VOTs, whereas unvoiced stop consonants (/p/, /k/, and /t/) contain prolonged VOTs. As the VOT is increased in incremental steps, perception rapidly changes from a voiced stop consonant to an unvoiced consonant at an interval of 20–40 ms. This abrupt change in consonant identification is an example of categorical speech perception and is a central feature of phonetic discrimination. This study tested the hypothesis that VOT is represented within auditory cortex by transient responses time-locked to consonant and voicing onset. Auditory evoked potentials (AEPs) elicited by stop consonant-vowel (CV) syllables were recorded directly from Heschl's gyrus, the planum temporale, and the superior temporal gyrus in three patients undergoing evaluation for surgical remediation of medically intractable epilepsy. Voiced CV syllables elicited a triphasic sequence of field potentials within Heschl's gyrus. AEPs evoked by unvoiced CV syllables contained additional response components time-locked to voicing onset. Syllables with a VOT of 40, 60, or 80 ms evoked components time-locked to consonant release and voicing onset. In contrast, the syllable with a VOT of 20 ms evoked a markedly diminished response to voicing onset and elicited an AEP very similar in morphology to that evoked by the syllable with a 0-ms VOT. Similar response features were observed in the AEPs evoked by click trains. In this case, there was a marked decrease in amplitude of the transient response to the second click in trains with interpulse intervals of 20–25 ms. Speech-evoked AEPs recorded from the posterior superior temporal gyrus lateral to Heschl's gyrus displayed comparable response features, whereas field potentials recorded from three locations in the planum temporale did not contain components time-locked to voicing onset. This study demonstrates that VOT is at least partially represented in primary and specific secondary auditory cortical fields by synchronized activity time-locked to consonant release and voicing onset. Furthermore, AEPs exhibit features that may facilitate categorical perception of stop consonants, and these response patterns appear to be based on temporal processing limitations within auditory cortex. Demonstrations of similar speech-evoked response patterns in animals support a role for these experimental models in clarifying selected features of speech encoding.
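The core measurement here is an averaged evoked response time-locked to consonant release, with a second component expected at voicing onset (consonant onset plus the VOT). The sketch below shows plain epoch averaging on a simulated field-potential trace; the sampling rate, epoch window, and data are hypothetical and do not reproduce the authors' intracranial recording pipeline.

```python
import numpy as np

def evoked_average(recording, onsets, fs, pre=0.05, post=0.30):
    """Average epochs time-locked to stimulus (consonant) onset.

    recording: 1-D field-potential trace; onsets: sample indices of
    consonant release; fs: sampling rate in Hz; pre/post: window in seconds.
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = [recording[o - n_pre:o + n_post] for o in onsets
              if o - n_pre >= 0 and o + n_post <= len(recording)]
    return np.mean(epochs, axis=0)

# Hypothetical illustration: inspect the averaged waveform at the latency of
# voicing onset (consonant onset + VOT) for a 40-ms VOT syllable.
fs = 1000
rng = np.random.default_rng(3)
recording = rng.standard_normal(60_000)          # placeholder for a recorded trace
onsets = np.arange(1000, 59_000, 600)            # placeholder stimulus onset samples
aep = evoked_average(recording, onsets, fs)

vot_ms = 40
voicing_idx = int(0.05 * fs) + int(vot_ms / 1000 * fs)   # offset of voicing onset within the epoch
print(aep[voicing_idx])   # amplitude at the expected voicing-locked component
```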

