Assessment of intra- and inter-regional interrelations between GABA+, Glx and BOLD during pain perception in the human brain – A combined 1H fMRS and fMRI study

Neuroscience ◽ 2017 ◽ Vol 365 ◽ pp. 125-136
Author(s): Marianne Cleve, Alexander Gussew, Gerd Wagner, Karl-Jürgen Bär, Jürgen R. Reichenbach

2018 ◽ Vol 40 (1) ◽ pp. 151-162
Author(s): Seyyed Iman Shirinbayan, Alexander M. Dreyer, Jochem W. Rieger

2017
Author(s): Stefania Bracci, Ioannis Kalfas, Hans Op de Beeck

Abstract Recent studies have shown agreement between how the human brain and neural networks represent objects, suggesting that we might start to understand the underlying computations. However, we know that the human brain is prone to biases at many perceptual and cognitive levels, often shaped by learning history and evolutionary constraints. Here we explored one such bias, the bias to perceive animacy, and used the performance of neural networks as a benchmark. We performed an fMRI study that dissociated object appearance (what an object looks like) from object category (animate or inanimate) by constructing a stimulus set that included animate objects (e.g., a cow), typical inanimate objects (e.g., a mug), and, crucially, inanimate objects that look like the animate objects (e.g., a cow-mug). Behavioral judgments and deep neural networks categorized images mainly by animacy, setting all objects (lookalike and inanimate) apart from the animate ones. In contrast, activity patterns in ventral occipitotemporal cortex (VTC) were strongly biased towards object appearance: animals and lookalikes were represented similarly and separated from the inanimate objects. Furthermore, this bias interfered with proper object identification, such as failing to signal that a cow-mug is a mug. The bias in VTC to represent a lookalike as animate was present even when participants performed a task requiring them to report the lookalikes as inanimate. In conclusion, VTC representations, in contrast to neural networks, fail to veridically represent objects when visual appearance is dissociated from animacy, probably due to biased processing of visual features typical of animate objects.
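The DNN benchmark described in this abstract can be illustrated with a minimal sketch: extract features for each image from a pretrained network and check whether lookalikes are more similar to inanimate objects than to animals in feature space. The model choice (torchvision's pretrained ResNet-50), the stimulus filenames, and the condition groupings below are assumptions for illustration, not the authors' actual pipeline; the penultimate layer is used because late DNN layers are the ones conventionally compared with high-level visual cortex.

```python
# Minimal sketch of a DNN animacy-similarity check (assumed setup,
# not the authors' pipeline): extract penultimate-layer features from
# a pretrained ResNet-50 and correlate them across stimulus conditions.
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical stimulus files for the three conditions in the abstract.
stimuli = {
    "animate":   ["cow.jpg", "dog.jpg"],
    "lookalike": ["cow_mug.jpg", "dog_slipper.jpg"],
    "inanimate": ["mug.jpg", "slipper.jpg"],
}

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Identity()  # keep the 2048-d penultimate features
model.eval()

def features(path: str) -> torch.Tensor:
    """Return the network's penultimate-layer activation for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0)

def corr(a: torch.Tensor, b: torch.Tensor) -> float:
    """Pearson correlation between two feature vectors."""
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / (a.norm() * b.norm()))

feats = {c: [features(p) for p in paths] for c, paths in stimuli.items()}

# If the network categorizes by animacy (as the abstract reports),
# lookalike-inanimate similarity should exceed lookalike-animate similarity.
for pair in [("lookalike", "inanimate"), ("lookalike", "animate")]:
    vals = [corr(x, y) for x in feats[pair[0]] for y in feats[pair[1]]]
    print(pair, sum(vals) / len(vals))
```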


NeuroImage ◽ 2003 ◽ Vol 18 (4) ◽ pp. 928-937
Author(s): J. Grèzes, J. L. Armony, J. Rowe, R. E. Passingham

2020
Author(s): Long Tang, Toshimitsu Takahashi, Tamami Shimada, Masayuki Komachi, Noriko Imanishi, ...

Abstract The position of any event in time can be in the present, past, or future. This temporal discrimination is vitally important in our daily conversations, yet how the human brain distinguishes among the past, present, and future remains elusive. To address this issue, we searched for neural correlates of presentness, pastness, and futurity, each of which is automatically evoked when we hear sentences such as “it is raining now,” “it rained yesterday,” or “it will rain tomorrow.” Here, we show that sentences that evoked “presentness” activated the bilateral precuneus more strongly than those that evoked “pastness” or “futurity.” Interestingly, this contrast was shared across native speakers of Japanese, English, and Chinese, languages that vary considerably in their verb tense systems. The results suggest that the precuneus serves as a key region providing the origin (that is, the Now) of our time perception, irrespective of differences in tense systems across languages.


NeuroImage ◽ 2000 ◽ Vol 11 (5) ◽ pp. S250
Author(s): Håkan Fischer, Christopher I. Wright, Paul J. Whalen, Sean C. McInerney, Lisa M. Shin, ...

2011 ◽ Vol 2011.49 (0) ◽ pp. 203-204
Author(s): Yuya Kawata, Chunlin Li, Satoshi Takahashi, Jinglong Wu

Author(s): I-Wen Su, Fang-Wei Wu, Keng-Chen Liang, Kai-Yuan Cheng, Sung-Tsang Hsieh, ...
