7T fMRI
Recently Published Documents


TOTAL DOCUMENTS: 78 (FIVE YEARS: 36)

H-INDEX: 14 (FIVE YEARS: 3)

2022 ◽ Vol 12 ◽ Author(s): Olivia Campbell ◽ Tamara Vanderwal ◽ Alexander Mark Weber

Background: Temporal fractals are characterized by prominent scale-invariance and self-similarity across time scales. Monofractal analysis quantifies this scaling behavior in a single parameter, the Hurst exponent (H). Higher H reflects greater long-range correlation in the signal structure, which is taken as being more fractal. Previous fMRI studies have observed lower H during conventional tasks relative to resting-state conditions, and have shown that H is negatively correlated with task difficulty and novelty. To date, no study has investigated the fractal dynamics of the BOLD signal under naturalistic conditions.

Methods: We performed fractal analysis on Human Connectome Project 7T fMRI data (n = 72, 41 females, mean age 29.46 ± 3.76 years) to compare H across movie-watching and rest.

Results: In contrast to previous work using conventional tasks, we found higher H values for movie-watching relative to rest (mean difference = 0.014; p = 5.279 × 10⁻⁷; 95% CI [0.009, 0.019]). H was significantly higher during movie-watching than rest in the visual, somatomotor, and dorsal attention networks, but significantly lower during movie-watching in the frontoparietal and default networks. We found no cross-condition differences in the test-retest reliability of H. Finally, movie-derived stimulus properties (e.g., luminance changes) were fractal, whereas head motion estimates were not.

Conclusions: Overall, our findings suggest that movie-watching induces fractal signal dynamics. In line with recent work characterizing connectivity-based brain state dynamics during movie-watching, we speculate that these fractal dynamics reflect the configuring and reconfiguring of brain states that occurs during naturalistic processing, and that they differ markedly from the dynamics observed during conventional tasks.
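The Hurst exponent discussed above is commonly estimated with detrended fluctuation analysis (DFA); the abstract does not specify which estimator the authors used, so the following is a minimal illustrative sketch only, with the function name `hurst_dfa` and the choice of window scales being our own assumptions:

```python
import numpy as np

def hurst_dfa(signal, scales=None):
    """Estimate the Hurst exponent of a 1-D time series via
    detrended fluctuation analysis (DFA)."""
    x = np.asarray(signal, dtype=float)
    y = np.cumsum(x - x.mean())          # integrated profile of the signal
    n = len(y)
    if scales is None:
        # log-spaced window sizes between 4 samples and a quarter of the series
        scales = np.unique(np.logspace(np.log10(4), np.log10(n // 4), 12).astype(int))
    flucts = []
    for s in scales:
        n_seg = n // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # detrend each window with a linear fit; keep the RMS residual
        rms = []
        for seg in segs:
            coeffs = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        flucts.append(np.mean(rms))
    # the slope of log F(s) versus log s estimates the scaling exponent
    h, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return h
```

For a signal with uncorrelated samples (white noise) the DFA exponent falls near 0.5, while a strongly correlated signal such as a random walk yields values above 1 — which is the sense in which higher H indicates a more correlated, more fractal signal.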


Author(s): Emily J. Allen ◽ Ghislain St-Yves ◽ Yihan Wu ◽ Jesse L. Breedlove ◽ Jacob S. Prince ◽ ...

2021 ◽ Author(s): Omer Burak Demirel ◽ Burhaneddin Yaman ◽ Logan Dowdle ◽ Steen Moeller ◽ Luca Vizioli ◽ ...

2021 ◽ pp. JN-RM-0806-21 ◽ Author(s): Lewis Crawford ◽ Emily Mills ◽ Theo Hanson ◽ Paul M. Macey ◽ Rebecca Glarin ◽ ...

2021 ◽ Vol 21 (9) ◽ pp. 2055 ◽ Author(s): Rohit S. Kamath ◽ Kimberly B. Weldon ◽ Hannah R. Moser ◽ Philip C. Burton ◽ Scott R. Sponheim ◽ ...

2021 ◽ Vol 15 ◽ Author(s): Pei Huang ◽ Marta M. Correia ◽ Catarina Rua ◽ Christopher T. Rodgers ◽ Richard N. Henson ◽ ...

The arrival of submillimeter ultra-high-field fMRI makes it possible to compare activation profiles across cortical layers. However, the blood oxygenation level dependent (BOLD) signal measured by gradient echo (GE) fMRI is biased toward superficial layers of the cortex, which is a serious confound for laminar analysis. Several univariate and multivariate analysis methods have been proposed to correct this bias. We compared these methods using computational simulations of 7T fMRI data from regions of interest (ROIs) during a visual attention paradigm, and also tested them on a pilot human 7T fMRI dataset. The simulations show that two methods, the ratio of ROI means across conditions and a novel application of Deming regression, offer the most robust correction for superficial bias. Deming regression has the additional advantage of not requiring that the conditions differ in their mean activation over voxels within an ROI. When applied to the pilot dataset, we observed strikingly different layer profiles depending on which attention metric was used, but were unable to discern any laminar differences in attention once Deming regression or the ROI ratio was applied. Our simulations demonstrate that accurate correction of superficial bias is crucial to avoid drawing erroneous conclusions from laminar analyses of GE fMRI data, a conclusion affirmed by the results from our pilot 7T fMRI data.
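Deming regression is attractive here because it fits a line while allowing measurement error in both variables, whereas ordinary least squares assumes the predictor is noise-free and its slope is attenuated when it is not. The abstract does not give the authors' implementation; the sketch below is the generic closed-form Deming fit, with the name `deming_slope` and the equal-error-variance default (`delta = 1`, i.e. orthogonal regression) being our assumptions:

```python
import numpy as np

def deming_slope(x, y, delta=1.0):
    """Closed-form Deming regression: fit y = slope * x + intercept
    allowing measurement error in both x and y.

    delta is the assumed ratio of the y-error variance to the
    x-error variance (delta = 1 gives orthogonal regression)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sxx = np.var(x, ddof=1)                  # sample variance of x
    syy = np.var(y, ddof=1)                  # sample variance of y
    sxy = np.cov(x, y, ddof=1)[0, 1]         # sample covariance
    slope = (
        syy - delta * sxx
        + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
    ) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept
```

In the laminar setting one could imagine x and y as voxelwise responses in two conditions within an ROI, with the slope capturing a multiplicative gain that is unaffected by a shared superficial-bias profile; that reading is our interpretation, not a detail stated in the abstract.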


NeuroImage ◽ 2021 ◽ pp. 118308 ◽ Author(s): Ashley A. Huggins ◽ Carissa N. Weis ◽ Elizabeth A. Parisi ◽ Kenneth P. Bennett ◽ Vladimir Miskovic ◽ ...

2021 ◽ Author(s): Oiwi Parker Jones ◽ Natalie L Voets

A recent result shows that inner speech can, with proper care, be decoded to the same high level of accuracy as articulated speech. This relies, however, on neural data obtained while subjects perform elicited tasks, such as covert reading and repeating, whereas a neural speech prosthetic will require the decoding of inner speech that is self-generated. Prior work has, moreover, emphasised differences between these two kinds of inner speech, raising the question of how well a decoder optimised for one will generalise to the other. In this study, we trained phoneme-level decoders on an atypically large elicited inner speech dataset, previously acquired using 7T fMRI in a single subject. We then acquired a second, self-generated inner speech dataset in the same subject. Although the decoders were trained exclusively on neural recordings obtained during elicited inner speech, they predicted unseen phonemes accurately in both elicited and self-generated test conditions, illustrating the viability of zero-shot task transfer. This has significant practical importance for the development of a neural speech prosthetic, as labelled data are far easier to acquire at scale for elicited than for self-generated inner speech. Indeed, elicited tasks may be the only option for acquiring labelled data in critical patient populations who cannot control their vocal articulators.
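The train-on-elicited, test-on-self-generated design can be illustrated with a toy cross-task transfer evaluation. This is not the authors' decoder: the nearest-centroid classifier, the simulated "task shift", and every name below are hypothetical stand-ins for the general pattern of fitting on one condition's data and scoring on another's.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Train a nearest-centroid decoder: one mean activity pattern per class."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids):
    """Assign each pattern to the class whose centroid is closest."""
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[dists.argmin(axis=1)]

def transfer_accuracy(X_train, y_train, X_test, y_test):
    """Fit on one task's data, score on another's (zero-shot task transfer)."""
    classes, centroids = nearest_centroid_fit(X_train, y_train)
    preds = nearest_centroid_predict(X_test, classes, centroids)
    return (preds == y_test).mean()
```

If the class structure (here, phoneme identity) is shared across tasks while a condition-specific shift is not, a decoder trained on one condition can still classify the other well above chance, which is the logic the study tests with real neural data.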


NeuroImage ◽ 2021 ◽ Vol 231 ◽ pp. 117818 ◽ Author(s): Sunhang Shi ◽ Augix Guohua Xu ◽ Yun-Yun Rui ◽ Xiaotong Zhang ◽ Lizabeth M. Romanski ◽ ...
