Reconstructing meaning from bits of information

2018 ◽  
Author(s):  
Sasa L. Kivisaari ◽  
Marijn van Vliet ◽  
Annika Hultén ◽  
Tiina Lindh-Knuutila ◽  
Ali Faisal ◽  
...  

Abstract We can easily identify a dog merely by the sound of its barking or an orange by its citrus scent. In this work, we study the neural underpinnings of how the brain combines bits of information into meaningful object representations. Modern theories of semantics posit that the meaning of words can be decomposed into a unique combination of individual semantic features (e.g., “barks”, “has citrus scent”). Here, participants received clues to individual objects in the form of three isolated semantic features, given as verbal descriptions. We used machine-learning-based neural decoding to learn a mapping between individual semantic features and BOLD activation patterns. We discovered that the recorded brain patterns were best decoded using not only the three semantic features that were presented as clues, but a far richer set of semantic features typically linked to the target object. We conclude that our experimental protocol allowed us to observe how fragmented information is combined into a complete semantic representation of an object, and we suggest neuroanatomical underpinnings for this process.
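The feature-based decoding approach this abstract describes can be sketched in a few lines. The sketch below uses entirely synthetic data; the object/feature/voxel counts, the ridge penalty, and the correlation-based identification step are illustrative assumptions, not details taken from the study. The idea is to learn a linear map from binary semantic-feature vectors to voxel patterns, then identify a held-out pattern by asking which object's full feature vector predicts it best.

```python
# Hypothetical sketch of feature-based neural decoding (synthetic data).
# A linear mapping from semantic features to voxel activations is fit
# with ridge regression, then a noisy held-out pattern is identified by
# correlating it with the predicted pattern of each candidate object.
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_features, n_voxels = 60, 20, 300

# Binary object-by-feature matrix (e.g., "barks", "has citrus scent")
features = rng.integers(0, 2, (n_objects, n_features)).astype(float)
W = rng.normal(size=(n_features, n_voxels))      # "true" feature-to-voxel weights
bold = features @ W + rng.normal(scale=0.5, size=(n_objects, n_voxels))

# Ridge regression from features to voxels (closed form)
lam = 1.0
W_hat = np.linalg.solve(features.T @ features + lam * np.eye(n_features),
                        features.T @ bold)

def decode(pattern, candidate_features):
    """Return the index of the candidate whose predicted pattern correlates best."""
    preds = candidate_features @ W_hat
    corrs = [np.corrcoef(pattern, p)[0, 1] for p in preds]
    return int(np.argmax(corrs))

# A fresh noisy pattern for object 3 should be decoded back to index 3
test_pattern = features[3] @ W + rng.normal(scale=0.5, size=n_voxels)
print(decode(test_pattern, features))
```

The richer-than-presented decoding the authors report would correspond, in this toy setup, to a held-out pattern being fit better by an object's full feature vector than by only the three clue features.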


Author(s):  
Xiujun Li ◽  
Chunlin Li ◽  
Jinglong Wu ◽  
Qiyong Guo

Recent event-related fMRI studies suggest that a left-lateralized network exists for reading Chinese words (contrasting two-character Chinese words with figures). In this study, the authors used 3T fMRI to investigate brain activation when processing characters and figures in a visual discrimination task. Thirteen Chinese individuals were shown pairs of Chinese characters (36 pairs) or pairs of figures (36 pairs). The control task (figure pairs) was used to eliminate non-linguistic visual and motor confounds. The results showed that discrimination of Chinese characters is performed by a bilateral network that processes orthographic, phonological, and semantic features. Significant activation patterns were observed in the occipital region (BA17, 18, 19, and 37), temporal region (BA22 and 38), parietal region (BA7, 39, and 40), and frontal region (BA4, 6, 10, and 46) of the brain and in the cerebellum. The study concludes that a constellation of neural substrates provides a bilateral network that processes Chinese characters.



2018 ◽  
Vol 29 (6) ◽  
pp. 2396-2411 ◽  
Author(s):  
Andrew James Anderson ◽  
Edmund C Lalor ◽  
Feng Lin ◽  
Jeffrey R Binder ◽  
Leonardo Fernandino ◽  
...  

Abstract Deciphering how sentence meaning is represented in the brain remains a major challenge to science. Semantically related neural activity has recently been shown to arise concurrently in distributed brain regions as successive words in a sentence are read. However, what semantic content is represented by different regions, what is common across them, and how this relates to words in different grammatical positions of sentences is poorly understood. To address these questions, we apply a semantic model of word meaning to interpret brain activation patterns elicited in sentence reading. The model is based on human ratings of 65 sensory/motor/emotional and cognitive features of experience with words (and their referents). Through a process of mapping functional Magnetic Resonance Imaging activation back into model space we test: which brain regions semantically encode content words in different grammatical positions (e.g., subject/verb/object); and what semantic features are encoded by different regions. In left temporal, inferior parietal, and inferior/superior frontal regions we detect the semantic encoding of words in all grammatical positions tested and reveal multiple common components of semantic representation. This suggests that sentence comprehension involves a common core representation of multiple words’ meaning being encoded in a network of regions distributed across the brain.
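The "mapping activation back into model space" step can be illustrated with a small synthetic example. Everything below is an assumption for illustration: the word counts, voxel counts, the ridge penalty, and the train/test split are invented; only the 65-dimensional experiential feature space mirrors the abstract. The sketch regresses voxel patterns onto the feature space, then checks that a held-out word's decoded feature vector matches its own model vector better than another word's.

```python
# Illustrative sketch (synthetic data) of decoding fMRI patterns back
# into a 65-dimensional semantic feature space, then testing whether
# held-out decoded vectors match the correct word's model vector.
import numpy as np

rng = np.random.default_rng(1)
n_words, n_dims, n_voxels = 110, 65, 200

model = rng.normal(size=(n_words, n_dims))   # word-by-feature ratings
B = rng.normal(size=(n_dims, n_voxels))      # "true" feature-to-voxel map
brain = model @ B + rng.normal(scale=1.0, size=(n_words, n_voxels))

train, test = slice(0, 100), slice(100, 110)

# Ridge regression from voxel space back into model space
lam = 10.0
X = brain[train]
M = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ model[train])
decoded = brain[test] @ M                    # decoded feature vectors

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Pairwise match check: a held-out word's decoded vector should
# correlate more with its own model vector than with another word's
print(corr(decoded[0], model[test][0]) > corr(decoded[0], model[test][1]))
```

Comparing which of the 65 feature dimensions survive this round trip in which regions is, roughly, how one would ask what semantic content each region encodes.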



2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Florian Bitsch ◽  
Philipp Berger ◽  
Andreas Fink ◽  
Arne Nagels ◽  
Benjamin Straube ◽  
...  

Abstract The ability to generate humor gives rise to positive emotions and thus facilitates the successful resolution of adversity. Although there is consensus that inhibitory processes may help broaden thinking, the neural underpinnings of these mechanisms are largely unknown. Here, we use functional Magnetic Resonance Imaging, a humorous alternative uses task, and a Stroop task to investigate the brain mechanisms underlying the emergence of humorous ideas in 24 subjects. Neuroimaging results indicate that greater cognitive control abilities are associated with increased activation in the amygdala, the hippocampus, and the superior and medial frontal gyri during the generation of humorous ideas. Examining the neural mechanisms more closely shows that hypoactivation of frontal brain regions is associated with hyperactivation in the amygdala and vice versa. This antagonistic connectivity is concurrently linked with an increased number of humorous ideas and enhanced amygdala responses during the task. Our data therefore suggest that a neural antagonism previously related to the emergence and regulation of negative affective responses is linked with the generation of emotionally positive ideas and may represent an important neural pathway supporting mental health.



2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Bob Jacobs ◽  
Heather Rally ◽  
Catherine Doyle ◽  
Lester O’Brien ◽  
Mackenzie Tennison ◽  
...  

Abstract The present review assesses the potential neural impact of impoverished, captive environments on large-brained mammals, with a focus on elephants and cetaceans. These species share several characteristics: they are large, wide-ranging, long-lived, cognitively sophisticated, highly social mammals with large brains. Although the impact of the captive environment on physical and behavioral health has been well-documented, relatively little attention has been paid to the brain itself. Here, we explore the potential neural consequences of living in captive environments, with a focus on three levels: (1) the effects of environmental impoverishment/enrichment on the brain, emphasizing the negative neural consequences of the captive/impoverished environment; (2) the neural consequences of stress on the brain, with an emphasis on corticolimbic structures; and (3) the neural underpinnings of stereotypies, often observed in captive animals, underscoring dysregulation of the basal ganglia and associated circuitry. To this end, we provide a substantive hypothesis about the negative impact of captivity on the brains of large mammals (e.g., cetaceans and elephants) and how these neural consequences are related to documented evidence for compromised physical and psychological well-being.



2021 ◽  
Author(s):  
Mo Shahdloo ◽  
Emin Çelik ◽  
Burcu A Urgen ◽  
Jack L. Gallant ◽  
Tolga Çukur

Object and action perception in cluttered dynamic natural scenes relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. It has been suggested that during visual search for objects, distributed semantic representation of hundreds of object categories is warped to expand the representation of targets. Yet, little is known about whether and where in the brain visual search for action categories modulates semantic representations. To address this fundamental question, we studied human brain activity recorded via functional magnetic resonance imaging while subjects viewed natural movies and searched for either communication or locomotion actions. We find that attention directed to action categories elicits tuning shifts that warp semantic representations broadly across neocortex, and that these shifts interact with intrinsic selectivity of cortical voxels for target actions. These results suggest that attention serves to facilitate task performance during social interactions by dynamically shifting semantic selectivity towards target actions, and that tuning shifts are a general feature of conceptual representations in the brain.



2020 ◽  
Author(s):  
Bryony Goulding Mew ◽  
Darije Custovic ◽  
Eyal Soreq ◽  
Romy Lorenz ◽  
Ines Violante ◽  
...  

Abstract Flexible behaviour requires cognitive-control mechanisms to efficiently resolve conflict between competing information and alternative actions. Whether a global neural resource mediates all forms of conflict, or whether this is achieved within domain-specific systems, remains debated. We use a novel fMRI paradigm to orthogonally manipulate rule, response, and stimulus-based conflict within a full-factorial design. Whole-brain voxelwise analyses show that activation patterns associated with these conflict types are distinct but partially overlapping within Multiple Demand Cortex (MDC), the brain regions that are most commonly active during cognitive tasks. Region-of-interest analysis shows that most MDC sub-regions are activated for all conflict types, but to significantly varying levels. We propose that conflict resolution is an emergent property of distributed brain networks, the functional-anatomical components of which lie on a continuous, not categorical, scale from domain-specialised to domain-general. MDC brain regions lie towards one end of that scale but display considerable functional heterogeneity.



2018 ◽  
Author(s):  
Maria Montefinese ◽  
Erin Michelle Buchanan ◽  
David Vinson

Models of semantic representation predict that automatic priming is determined by associative and co-occurrence relations (i.e., spreading-activation accounts) or by similarity in words' semantic features (i.e., featural models). Although these three factors are correlated in characterizing semantic representation, they seem to tap different aspects of meaning. We designed two lexical decision experiments to dissociate these three different types of meaning similarity. For unmasked primes, we observed priming due only to association strength, not the other two measures, and no evidence for differences in priming between concrete and abstract concepts. For masked primes there was no priming regardless of the semantic relation. These results challenge theoretical accounts of automatic priming. Rather, they are in line with the idea that priming may be due to participants’ controlled strategic processes. These results provide important insight into the nature of priming and how association strength, as determined from word-association norms, relates to the nature of semantic representation.



Psihologija ◽  
2010 ◽  
Vol 43 (2) ◽  
pp. 155-165 ◽  
Author(s):  
Vanja Kovic ◽  
Kim Plunkett ◽  
Gert Westermann

In this paper we present an ERP study examining the underlying nature of the semantic representation of animate and inanimate objects. By time-locking ERP signatures to the onset of auditory stimuli, we found topological similarities in animate and inanimate object processing. Moreover, we found no difference between animates and inanimates in N400 amplitude when mapping a more specific to a more general representation (visual to auditory stimuli). These studies provide further evidence for the theory of unitary semantic organization, but no support for the feature-based prediction of segregated conceptual organization. Further comparisons of animate vs. inanimate matches and of within- vs. between-category mismatches revealed the following results: processing of animate matches elicited more positivity than processing of inanimate matches within the N400 time window, and inanimate mismatches elicited a stronger N400 than did animate mismatches. Based on these findings, we argue that one possible explanation for the different and sometimes contradictory results in the literature regarding the processing and representation of animates and inanimates in the brain lies in the variability of the items selected within each category, that is, the homogeneity of the categories.



2019 ◽  
Author(s):  
David A. Tovar ◽  
Micah M. Murray ◽  
Mark T. Wallace

Abstract Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction amongst objects is between those that are animate versus inanimate. Many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of human EEG signals, we show enhanced encoding of audiovisual objects when compared to their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantages for animate objects were not evident in a multisensory context, due to greater neural enhancement of inanimate objects, the more weakly encoded objects under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the neural enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between the neural findings and behavior. Improvements in neural decoding at the individual-exemplar level for audiovisual inanimate objects predicted reaction-time differences between multisensory and unisensory presentations during a go/no-go animate categorization task. Interestingly, links between neural activity and behavioral measures were most prominent 100 to 200 ms and 350 to 500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.

Significance Statement: Our world is filled with an ever-changing milieu of sensory information that we are able to seamlessly transform into meaningful perceptual experience. We accomplish this feat by combining different features from our senses to construct objects. However, despite the fact that our senses do not work in isolation but rather in concert with each other, little is known about how the brain combines the senses to form object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that non-living objects, the objects that were more difficult to process with one sense alone, benefited the most from engaging multiple senses.
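The representational similarity analysis (RSA) used in studies like this one can be sketched with synthetic data. Everything in the example is an illustrative assumption (object and channel counts, noise levels, the framing of "audiovisual" as a lower-noise condition); it is not modeled on the study's actual data. RSA compares conditions through the geometry of their pairwise pattern dissimilarities rather than through raw activations.

```python
# Toy RSA sketch (synthetic data): build representational dissimilarity
# matrices (RDMs) for two simulated conditions and compare each, at the
# second-order level, to the RDM of the underlying object structure.
import numpy as np

rng = np.random.default_rng(2)
n_objects, n_channels = 12, 64

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Off-diagonal upper triangle, the informative part of an RDM."""
    return m[np.triu_indices_from(m, k=1)]

# Shared object structure plus condition-specific noise; the simulated
# "audiovisual" condition gets less noise (i.e., stronger encoding)
structure = rng.normal(size=(n_objects, n_channels))
visual = structure + rng.normal(scale=1.0, size=structure.shape)
audiovisual = structure + rng.normal(scale=0.3, size=structure.shape)

r_vis = np.corrcoef(upper(rdm(visual)), upper(rdm(structure)))[0, 1]
r_av = np.corrcoef(upper(rdm(audiovisual)), upper(rdm(structure)))[0, 1]

# The lower-noise condition's RDM should track the structure more closely
print(r_av > r_vis)
```

In the same spirit, "enhanced encoding of audiovisual objects" corresponds to the audiovisual RDM preserving object distinctions more faithfully than the unisensory RDMs.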



2002 ◽  
Vol 45 (2) ◽  
pp. 332-346 ◽  
Author(s):  
Karla K. McGregor ◽  
Rena M. Friedman ◽  
Renée M. Reilly ◽  
Robyn M. Newman

Children's semantic representations and semantic naming errors were the focus of this study. In Experiment 1, 25 normally developing children (mean age=5 years 4 months) named, drew, and defined 20 age-appropriate objects. The results suggested that functional and physical properties are core aspects of object representations in the semantic lexicon and that these representations are often organized and accessed according to a taxonomic hierarchy. Results of a new procedure, comparative picture naming/picture drawing, suggested that the degree of knowledge in the semantic lexicon makes words more or less vulnerable to retrieval failure. Most semantic naming errors were associated with limited semantic knowledge, manifested as either lexical gaps or fragile representations. Comparison of definitions for correctly named and semantically misnamed objects provided converging evidence for this conclusion. In Experiment 2, involving 16 normally developing children (mean age=5 years 5 months), the comparative picture naming/picture drawing results were replicated with a stimulus set that allowed a priori matching of the visual complexity of items drawn from correct and semantic error pools. Discussion focuses on the dynamic nature of semantic representations and the relation between semantic representation and naming during a period of slow mapping. The value of comparative picture naming/picture drawing as a new method for exploring children's semantic representations is emphasized.


