Contextual Information Rhythmically Processed in the Brain

2000 ◽  
Vol 120 (8-9) ◽  
pp. 1068-1071
Author(s):  
Yoko Yamaguchi
2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Katrina R. Quinn ◽  
Lenka Seillier ◽  
Daniel A. Butts ◽  
Hendrikje Nienborg

Abstract Feedback in the brain is thought to convey contextual information that underlies our flexibility to perform different tasks. Empirical and computational work on the visual system suggests this is achieved by targeting task-relevant neuronal subpopulations. We combined two tasks, each resulting in selective modulation by feedback, to test whether the feedback reflected the combination of both selectivities. We used a visual feature-discrimination task specified at one of two possible locations and uncoupled the formation of the decision from the motor plans used to report it, while recording in macaque mid-level visual areas. Here we show that although the behavior is spatially selective, using only task-relevant information, modulation by decision-related feedback is spatially unselective. Population responses reveal similar stimulus-choice alignments irrespective of stimulus relevance. The results suggest a common mechanism across tasks, independent of the spatial selectivity these tasks demand. This may reflect biological constraints and facilitate generalization across tasks. Our findings also support a previously hypothesized link between feature-based attention and decision-related activity.


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 320
Author(s):  
Yue Zhao ◽  
Xiaoqiang Ren ◽  
Kun Hou ◽  
Wentao Li

Automated brain tumor segmentation based on 3D magnetic resonance imaging (MRI) is critical to disease diagnosis, yet robust and accurate automatic extraction of tumors remains challenging because of the inherent heterogeneity of tumor structure. In this paper, we present an efficient semantic segmentation network, the 3D recurrent multi-fiber network (RMFNet), which builds on an encoder–decoder architecture to segment brain tumors accurately. The 3D RMFNet combines two components: a 3D recurrent unit and a 3D multi-fiber unit. First, recurrent units are connected with convolutional layers, which strengthens the model’s ability to integrate contextual information. Then, a 3D multi-fiber unit is added to the network to reduce the high computational cost that a 3D architecture incurs when capturing local features. The 3D RMFNet thus combines the advantages of both units. Extensive experiments on the Brain Tumor Segmentation (BraTS) 2018 challenge dataset show that RMFNet outperforms state-of-the-art methods, achieving average Dice scores of 89.62%, 83.65% and 78.72% for the whole tumor, tumor core and enhancing tumor, respectively. The experimental results demonstrate that the architecture is an efficient and accurate tool for brain tumor segmentation.
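The abstract does not include implementation details, but its two building blocks are easy to sketch. Below is a minimal illustration in PyTorch of what a 3D multi-fiber unit (grouped 3D convolutions) and a 3D recurrent convolutional unit might look like; the class names, channel counts, group counts, and recurrence depth are placeholders chosen for the example, not the authors' RMFNet configuration.

```python
# Minimal sketch of the two building blocks named in the abstract, assuming
# PyTorch; layer sizes and structure are illustrative, not the RMFNet code.
import torch
import torch.nn as nn

class MultiFiber3D(nn.Module):
    """Grouped 3D convolution: channels are split into parallel 'fibers',
    which cuts the parameter and FLOP count of a dense 3D convolution."""
    def __init__(self, channels, groups=4):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, kernel_size=3,
                              padding=1, groups=groups, bias=False)
        self.bn = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

class RecurrentConv3D(nn.Module):
    """A convolution applied repeatedly to its own output, so each voxel
    accumulates contextual information from a growing neighborhood."""
    def __init__(self, channels, steps=2):
        super().__init__()
        self.steps = steps
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        h = x
        for _ in range(self.steps):
            h = self.relu(self.conv(x + h))  # same weights reused each step
        return h

# Example: one encoder stage applied to a small MRI patch (N, C, D, H, W).
block = nn.Sequential(RecurrentConv3D(16), MultiFiber3D(16))
out = block(torch.randn(1, 16, 32, 32, 32))
print(out.shape)  # torch.Size([1, 16, 32, 32, 32])
```

The grouped convolution is where the computational saving comes from: splitting 16 channels into 4 fibers divides the weight count of the 3×3×3 convolution by 4, while the recurrent unit reuses a single set of convolutional weights across steps to enlarge each voxel's effective receptive field at little extra parameter cost.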


2012 ◽  
Vol 367 (1591) ◽  
pp. 932-941 ◽  
Author(s):  
P. C. Klink ◽  
R. J. A. van Wezel ◽  
R. van Ee

Ambiguous visual stimuli provide the brain with sensory information that contains conflicting evidence for multiple mutually exclusive interpretations. Two distinct aspects of the phenomenological experience associated with viewing ambiguous visual stimuli are the apparent stability of perception whenever one perceptual interpretation is dominant, and the instability of perception that causes perceptual dominance to alternate between perceptual interpretations upon extended viewing. This review summarizes several ways in which contextual information can help the brain resolve visual ambiguities and construct temporarily stable perceptual experiences. Temporal context through prior stimulation or internal brain states brought about by feedback from higher cortical processing levels may alter the response characteristics of specific neurons involved in rivalry resolution. Furthermore, spatial or crossmodal context may strengthen the neuronal representation of one of the possible perceptual interpretations and consequently bias the rivalry process towards it. We suggest that contextual influences on perceptual choices with ambiguous visual stimuli can be highly informative about the neuronal mechanisms of context-driven inference in the general processes of perceptual decision-making.


2021 ◽  
Author(s):  
Robert Hoskin ◽  
Deborah Talmi

Background: To reduce the computational demands of the task of determining values, the brain is thought to engage in adaptive coding, where the sensitivity of some neurons to value is modulated by contextual information. There is good behavioural evidence that pain is coded adaptively, but controversy regarding the underlying neural mechanism. Additionally, there is evidence that reward prediction errors are coded adaptively, but no parallel evidence regarding pain prediction errors. Methods: We tested the hypothesis that pain prediction errors are coded adaptively by scanning 19 healthy adults with fMRI while they performed a cued pain task. Our analysis followed an axiomatic approach. Results: We found that the left anterior insula was the only region which was sensitive both to predicted pain magnitudes and the unexpectedness of pain delivery, but not to the magnitude of delivered pain. Conclusions: This pattern suggests that the left anterior insula is part of a neural mechanism that serves the adaptive prediction error of pain.
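For readers unfamiliar with the terminology, the quantities involved are simple to write down. The snippet below is a toy illustration of an adaptively (range-)coded pain prediction error, assuming the common range-normalization form of adaptive coding; the function names, pain magnitudes, and context ranges are invented for the example and are not taken from the study.

```python
# Toy illustration of adaptive coding of a pain prediction error, assuming
# the usual range-normalization form; all numbers are made up for the example.

def prediction_error(delivered, predicted):
    """Prediction error: delivered pain minus expected pain."""
    return delivered - predicted

def adaptive_code(value, context_min, context_max):
    """Rescale a value by the range of values possible in the current
    context, so a neuron with a fixed dynamic range can cover narrow and
    wide pain ranges alike."""
    return (value - context_min) / (context_max - context_min)

pe = prediction_error(delivered=6.0, predicted=4.0)          # +2 pain units
print(adaptive_code(pe, context_min=0.0, context_max=4.0))   # 0.5 in a narrow context
print(adaptive_code(pe, context_min=0.0, context_max=10.0))  # 0.2 in a wide context
```

The same absolute error thus maps to a larger normalized response in a narrow pain context than in a wide one; this context dependence of sensitivity is what "adaptive coding" refers to in the Background.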


2021 ◽  
Vol 17 (5) ◽  
pp. e1008985
Author(s):  
Olivia L. Calvin ◽  
A. David Redish

Poor context integration, the process of incorporating both previous and current information in decision making, is a cognitive symptom of schizophrenia. The maintenance of contextual information has been shown to be sensitive to changes in excitation-inhibition (EI) balance. Many regions of the brain are sensitive to EI imbalances, however, so it is unknown how systemic manipulations affect the specific regions that are important for context integration. We constructed a multi-structure, biophysically realistic agent that could perform context integration as assessed by the dot pattern expectancy task. The agent included a perceptual network, a memory network, and a decision-making system, and successfully performed the task. Systemic manipulation of the agent’s EI balance produced localized dysfunction of the memory structure, which resulted in schizophrenia-like deficits in context integration. When the agent’s pyramidal cells were made less excitatory, the agent fixated on the cue and initiated responding later than the default agent, deficits resembling those one would predict for individuals on the autism spectrum. This modelling suggests that it may be possible to distinguish between different types of context integration deficits by adding distractors to context integration tasks and by closely examining participants’ reaction times.
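As a concrete reference for the behavioural task, here is a minimal sketch of a DPX-style (AX-CPT-like) trial generator and response rule in Python; the cue/probe labels and trial proportions are illustrative of this family of tasks rather than the exact parameters used by the agent or in clinical studies.

```python
# Minimal sketch of a dot pattern expectancy (DPX / AX-CPT-style) task:
# a response is a "target" only when probe X follows cue A, so the cue must
# be maintained in memory across the delay. Proportions are illustrative.
import random

TRIAL_MIX = {("A", "X"): 0.68, ("A", "Y"): 0.12, ("B", "X"): 0.12, ("B", "Y"): 0.08}

def make_trials(n, seed=0):
    rng = random.Random(seed)
    pairs, weights = zip(*TRIAL_MIX.items())
    return rng.choices(pairs, weights=weights, k=n)

def correct_response(cue, probe):
    # Context integration: the same probe X requires different responses
    # depending on which cue is being held in memory.
    return "target" if (cue, probe) == ("A", "X") else "nontarget"

for cue, probe in make_trials(5):
    print(cue, probe, "->", correct_response(cue, probe))
```

Degraded maintenance of the cue shows up as characteristic error patterns, for example incorrectly responding "target" on B-X trials, which is the kind of context integration failure the agent's memory network is meant to capture.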


Author(s):  
Farran Briggs

Many mammals, including humans, rely primarily on vision to sense the environment. While a large proportion of the brain is devoted to vision in highly visual animals, there are not enough neurons in the visual system to support a neuron-per-object look-up table. Instead, visual animals evolved ways to rapidly and dynamically encode an enormous diversity of visual information using minimal numbers of neurons (merely hundreds of millions of neurons and billions of connections!). In the mammalian visual system, a visual image is essentially broken down into simple elements that are reconstructed through a series of processing stages, most of which occur beneath consciousness. Importantly, visual information processing is not simply a serial progression along the hierarchy of visual brain structures (e.g., retina to visual thalamus to primary visual cortex to secondary visual cortex, etc.). Instead, connections within and between visual brain structures exist in all possible directions: feedforward, feedback, and lateral. Additionally, many mammalian visual systems are organized into parallel channels, presumably to enable efficient processing of information about different and important features in the visual environment (e.g., color, motion). The overall operations of the mammalian visual system are to (1) combine unique groups of feature detectors in order to generate object representations and (2) integrate visual sensory information with cognitive and contextual information from the rest of the brain. Together, these operations enable individuals to perceive, plan, and act within their environment.


2008 ◽  
Vol 20 (12) ◽  
pp. 2226-2237 ◽  
Author(s):  
Elissa Aminoff ◽  
Daniel L. Schacter ◽  
Moshe Bar

Everyday contextual settings create associations that later afford generating predictions about what objects to expect in our environment. The cortical network that takes advantage of such contextual information is proposed to connect the representation of associated objects such that seeing one object (bed) will activate the visual representations of other objects sharing the same context (pillow). Given this proposal, we hypothesized that the cortical activity elicited by seeing a strong contextual object would predict the occurrence of false memories whereby one erroneously “remembers” having seen a new object that is related to a previously presented object. To test this hypothesis, we used functional magnetic resonance imaging during encoding of contextually related objects, and later tested recognition memory. New objects that were contextually related to previously presented objects were more often falsely judged as “old” compared with new objects that were contextually unrelated to old objects. This phenomenon was reflected by activity in the cortical network mediating contextual processing, which provides a better understanding of how the brain represents and processes context.


2019 ◽  
Vol 14 (7) ◽  
pp. 709-718 ◽  
Author(s):  
Hannah U Nohlen ◽  
Frenk van Harreveld ◽  
William A Cunningham

Abstract In the current study, we used functional magnetic resonance imaging to investigate how the brain facilitates social judgments despite evaluatively conflicting information. Participants learned consistent (positive or negative) and ambivalent (positive and negative) person information and were then asked to provide binary judgments of these targets in situations that either resolved conflict by prioritizing a subset of information or not. Self-report, decision time and brain data confirm that integrating contextual information into our evaluations of objects or people allows for nuanced (social) evaluations. The same mixed trait information elicited or failed to elicit evaluative conflict dependent on the situation. Crucially, we provide data suggesting that negative judgments are easier and may be considered the ‘default’ action when experiencing evaluative conflict: weaker activation in dorsolateral prefrontal cortex during trials of evaluative conflict was related to a greater likelihood of unfavorable judgments, and greater activation was related to more favorable judgments. Since negative outcome consequences are arguably more detrimental and salient, this finding supports the idea that additional regulation and a more active selection process are necessary to override an initial negative response to evaluatively conflicting information.


2012 ◽  
Vol 24 (9) ◽  
pp. 1941-1959 ◽  
Author(s):  
Chun-Yu Tse ◽  
Kathy A. Low ◽  
Monica Fabiani ◽  
Gabriele Gratton

The significance of stimuli is linked not only to their nature but also to the sequential structure in which they are embedded, which gives rise to contingency rules. Humans have an extraordinary ability to extract and exploit these rules, as exemplified by the role of grammar and syntax in language. To study the brain representations of contingency rules, we recorded ERPs and the event-related optical signal (EROS, which uses near-infrared light to measure the optical changes associated with neuronal responses). We used sequences of high- and low-frequency tones varying according to three contingency rules, which were orthogonally manipulated and differed in processing requirements: a Single Repetition rule required only template matching, a Local Probability rule required relating a stimulus to its context, and a Global Probability rule could be derived through template matching or with reference to the global sequence context. ERP activity at 200–300 msec was related to the Single Repetition and Global Probability rules (reflecting access to representations based on template matching), whereas longer-latency activity (300–450 msec) was related to the Local Probability and Global Probability rules (reflecting access to representations incorporating contextual information). EROS responses with corresponding latencies indicated that the earlier activity involved the superior temporal gyrus, whereas later responses involved a fronto-parietal network. This suggests that the brain can simultaneously hold different models of stimulus contingencies at different levels of the information processing system according to their processing requirements, as indicated by the latency and location of the corresponding brain activity.
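The three rules are described only in abstract terms; one plausible way to operationalize them for a two-tone sequence is sketched below. The window length and the specific rule definitions are illustrative assumptions, not the parameters of the actual stimulus sequences used in the experiment.

```python
# Hypothetical operationalization of the three contingency rules for a
# sequence of high ('H') and low ('L') tones; the local window size and the
# rule definitions are illustrative guesses, not the study's design.
from collections import Counter

def classify(seq, window=5):
    """For each tone, evaluate it under three rules:
    repeat  - does it repeat the immediately preceding tone? (template matching)
    local   - how frequent is it within the last few tones? (local context)
    global_ - how frequent is it in the sequence as a whole? (global context)"""
    overall = Counter(seq)
    results = []
    for i, tone in enumerate(seq):
        repeat = i > 0 and tone == seq[i - 1]
        recent = seq[max(0, i - window):i]
        local = recent.count(tone) / len(recent) if recent else None
        global_ = overall[tone] / len(seq)
        results.append((tone, repeat,
                        round(local, 2) if local is not None else None,
                        round(global_, 2)))
    return results

for row in classify(list("HHLHHHHLHH")):
    print(row)
```

Under this reading, each tone can be expected or unexpected according to the three rules independently, which illustrates how, in principle, the rules could be manipulated orthogonally as described in the abstract.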


2021 ◽  
Author(s):  
Michiko Kawai ◽  
Yuichi Abe ◽  
Masato Yumoto ◽  
Masaya Kubota

Abstract Landau–Kleffner syndrome (LKS) is a rare neurological disorder characterized by acquired aphasia. LKS presents with distinctive electroencephalography (EEG) findings, including diffuse continuous spike and wave complexes (CSW), particularly during sleep. There has been little research on the mechanisms of the aphasia, its origin within the brain, and how it recovers. We diagnosed LKS in a 4-year-old female with an epileptogenic zone located primarily in the right (nondominant) superior temporal gyrus (STG). In the course of her illness, she showed early signs of recovery from motor aphasia but was slow to regain language comprehension and to recover from hearing loss. We suggest that our patient's brain imaging findings and the disparity between her recovery from expressive and receptive aphasia are consistent with the dual-stream model of speech processing, in which the nondominant hemisphere also plays a significant role in language comprehension. Unlike aphasia in adults, right-hemisphere disorders have been reported to cause delays in language comprehension and gesture in early childhood. During language acquisition, understanding what words mean requires integrating visual, auditory, and contextual information, and the right hemisphere is thought to play the predominant role in this integration.

