The rapid emergence of auditory object representations in cortex reflect central acoustic attributes

2019 ◽  
Author(s):  
Mattson Ogg ◽  
Thomas A. Carlson ◽  
L. Robert Slevc

Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography (MEG), we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the MEG recordings. We show that sound tokens can be decoded from brain activity beginning 90 milliseconds after stimulus onset, with peak decoding performance occurring at 155 milliseconds post-stimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
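The time-resolved decoding approach described in this abstract can be sketched as follows. This is a minimal, hypothetical illustration using scikit-learn on simulated data, not the authors' actual pipeline: the trial, sensor, and time-bin counts, the two-category labels, and the classifier choice are all assumptions made for the example.

```python
# Minimal sketch of time-resolved multivariate pattern analysis (MVPA):
# train a classifier at each time point and track when stimulus information
# becomes decodable. All dimensions and data are simulated, not real MEG.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 72, 20, 50          # hypothetical sizes
labels = np.repeat([0, 1], n_trials // 2)          # two sound categories

# Simulated sensor data: a category difference appears from time bin 10 on,
# mimicking the emergence of a neural representation after stimulus onset
data = rng.normal(size=(n_trials, n_sensors, n_times))
data[labels == 1, :, 10:] += 1.0

# Cross-validated decoding accuracy at each time point; above-chance
# accuracy marks when the categories become distinguishable in the signal
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    data[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
```

Before the simulated effect begins, accuracy hovers around chance (0.5); afterwards it rises sharply. Reading the onset and peak of such an accuracy curve is the logic behind latency estimates like the 90 ms onset and 155 ms peak reported above.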


2013 ◽  
Vol 26 (5) ◽  
pp. 483-502 ◽  
Author(s):  
Antonia Thelen ◽  
Micah M. Murray

This review article summarizes evidence that multisensory experiences at one point in time have long-lasting effects on subsequent unisensory visual and auditory object recognition. The efficacy of single-trial exposure to task-irrelevant multisensory events is its ability to modulate memory performance and brain activity to unisensory components of these events presented later in time. Object recognition (either visual or auditory) is enhanced if the initial multisensory experience had been semantically congruent and can be impaired if this multisensory pairing was either semantically incongruent or entailed meaningless information in the task-irrelevant modality, when compared to objects encountered exclusively in a unisensory context. Processes active during encoding cannot straightforwardly explain these effects; performance on all initial presentations was indistinguishable despite leading to opposing effects with stimulus repetitions. Brain responses to unisensory stimulus repetitions differ during early processing stages (∼100 ms post-stimulus onset) according to whether or not they had been initially paired in a multisensory context. Moreover, the network exhibiting differential responses varies according to whether memory performance is enhanced or impaired. The collective findings we review indicate that multisensory associations formed via single-trial learning exert influences on later unisensory processing to promote distinct object representations that manifest as differentiable brain networks whose activity is correlated with memory performance. These influences occur incidentally, despite many intervening stimuli, and are distinguishable from the encoding/learning processes during the formation of the multisensory associations. The consequences of multisensory interactions thus persist over time to impact memory retrieval and object discrimination.


2004 ◽  
Vol 16 (3) ◽  
pp. 503-522 ◽  
Author(s):  
Matthias M. Müller ◽  
Andreas Keil

In the present study, subjects selectively attended to the color of checkerboards in a feature-based attention paradigm. Induced gamma band responses (GBRs), the induced alpha band, and the event-related potential (ERP) were analyzed to uncover neuronal dynamics during selective feature processing. Replicating previous ERP findings, the selection negativity (SN) with a latency of about 160 msec was extracted. Furthermore, and similarly to previous EEG studies, a gamma band peak in a time window between 290 and 380 msec was found. This peak had its major energy in the 55- to 70-Hz range and was significantly larger for the attended color. Contrary to previous human induced gamma band studies, a much earlier 40- to 50-Hz peak in a time window between 160 and 220 msec after stimulus onset and, thus, concurrent with the SN was prominent, with significantly more energy for attended as opposed to unattended color. The induced alpha band (9.8–11.7 Hz), on the other hand, exhibited a marked suppression for attended color in a time window between 450 and 600 msec after stimulus onset. A comparison of the time course of the 40- to 50-Hz and 55- to 70-Hz induced GBRs, the induced alpha band, and the ERP revealed temporal coincidences for changes in the morphology of these brain responses. Despite these similarities in the time domain, the cortical source configuration was found to discriminate between induced GBRs and the SN. Our results suggest that large-scale synchronous high-frequency brain activity as measured in the human GBR plays a specific role in attentive processing of stimulus features.


2017 ◽  
Author(s):  
Radoslaw M. Cichy ◽  
Nikolaus Kriegeskorte ◽  
Kamila M. Jozwik ◽  
Jasper J.F. van den Bosch ◽  
Ian Charest

Abstract Vision involves complex neuronal dynamics that link the sensory stream to behaviour. To capture the richness and complexity of the visual world and the behaviour it entails, we used an ecologically valid task with a rich set of real-world object images. We investigated how human brain activity, resolved in space with functional MRI and in time with magnetoencephalography, links the sensory stream to behavioural responses. We found that behaviour-related brain activity emerged rapidly in the ventral visual pathway within 200 ms of stimulus onset. The link between stimuli, brain activity, and behaviour could not be accounted for by either category membership or visual features (as provided by an artificial deep neural network model). Our results identify behaviourally relevant brain activity during object vision, and suggest that object representations guiding behaviour are complex and can be explained neither by visual features nor by semantic categories alone. Our findings support the view that visual representations in the ventral visual stream need to be understood in terms of their relevance to behaviour, and highlight the importance of complex behavioural assessment for human brain mapping.


2015 ◽  
Vol 29 (4) ◽  
pp. 135-146 ◽  
Author(s):  
Miroslaw Wyczesany ◽  
Szczepan J. Grzybowski ◽  
Jan Kaiser

Abstract. In this study, the neural basis of emotional reactivity was investigated. Reactivity was operationalized as the impact of emotional pictures on the self-reported ongoing affective state, and was used to divide the subjects into high- and low-responder groups. Independent sources of brain activity were identified, localized with the DIPFIT method, and clustered across subjects to analyse the visual evoked potentials to affective pictures. Four of the identified clusters revealed effects of reactivity. The earliest two started about 120 ms from the stimulus onset and were located in the occipital lobe and the right temporoparietal junction. Another two, with a latency of 200 ms, were found in the orbitofrontal and the right dorsolateral cortices. Additionally, differences in pre-stimulus alpha level over the visual cortex were observed between the groups. The attentional modulation of perceptual processes is proposed as an early source of emotional reactivity, which forms an automatic mechanism of affective control. The role of top-down processes in affective appraisal and, finally, the experience of ongoing emotional states is also discussed.


2012 ◽  
Vol 24 (2) ◽  
pp. 521-529 ◽  
Author(s):  
Frank Oppermann ◽  
Uwe Hassler ◽  
Jörg D. Jescheniak ◽  
Thomas Gruber

The human cognitive system is highly efficient in extracting information from our visual environment. This efficiency is based on acquired knowledge that guides our attention toward relevant events and promotes the recognition of individual objects as they appear in visual scenes. The experience-based representation of such knowledge contains not only information about the individual objects but also about relations between them, such as the typical context in which individual objects co-occur. The present EEG study aimed at exploring the availability of such relational knowledge in the time course of visual scene processing, using oscillatory evoked gamma-band responses as a neural correlate for a currently activated cortical stimulus representation. Participants decided whether two simultaneously presented objects were conceptually coherent (e.g., mouse–cheese) or not (e.g., crown–mushroom). We obtained increased evoked gamma-band responses for coherent scenes compared with incoherent scenes beginning as early as 70 msec after stimulus onset within a distributed cortical network, including the right temporal, the right frontal, and the bilateral occipital cortex. This finding provides empirical evidence for the functional importance of evoked oscillatory activity in high-level vision beyond the visual cortex and, thus, gives new insights into the functional relevance of neuronal interactions. It also indicates the very early availability of experience-based knowledge that might be regarded as a fundamental mechanism for the rapid extraction of the gist of a scene.


2004 ◽  
Vol 92 (5) ◽  
pp. 3030-3042 ◽  
Author(s):  
Jay Hegdé ◽  
David C. Van Essen

The firing rate of visual cortical neurons typically changes substantially during a sustained visual stimulus. To assess whether, and to what extent, the information about shape conveyed by neurons in visual area V2 changes over the course of the response, we recorded the responses of V2 neurons in awake, fixating monkeys while presenting a diverse set of static shape stimuli within the classical receptive field. We analyzed the time course of various measures of responsiveness and stimulus-related response modulation at the level of individual cells and of the population. For a majority of V2 cells, the response modulation was maximal during the initial transient response (40–80 ms after stimulus onset). During the same period, the population response was relatively correlated, in that V2 cells tended to respond similarly to specific subsets of stimuli. Over the ensuing 80–100 ms, the signal-to-noise ratio of individual cells generally declined, but to a lesser degree than the evoked-response rate during the corresponding time bins, and the response profiles became decorrelated for many individual cells. Concomitantly, the population response became substantially decorrelated. Our results indicate that the information about stimulus shape evolves dynamically and relatively rapidly in V2 during static visual stimulation in ways that may contribute to form discrimination.


2021 ◽  
Vol 2021 (2) ◽  
Author(s):  
Shira Baror ◽  
Biyu J He

Abstract Flipping through social media feeds, viewing exhibitions in a museum, or walking through the botanical gardens, people consistently choose to engage with and disengage from visual content. Yet, in most laboratory settings, the visual stimuli, their presentation duration, and the task at hand are all controlled by the researcher. Such settings largely overlook the spontaneous nature of human visual experience, in which perception takes place independently from specific task constraints and its time course is determined by the observer as a self-governing agent. Currently, much remains unknown about how spontaneous perceptual experiences unfold in the brain. Are all perceptual categories extracted during spontaneous perception? Does spontaneous perception inherently involve volition? Is spontaneous perception segmented into discrete episodes? How do different neural networks interact over time during spontaneous perception? These questions are imperative to understand our conscious visual experience in daily life. In this article we propose a framework for spontaneous perception. We first define spontaneous perception as a task-free and self-paced experience. We propose that spontaneous perception is guided by four organizing principles that grant it temporal and spatial structures. These principles include coarse-to-fine processing, continuity and segmentation, agency and volition, and associative processing. We provide key suggestions illustrating how these principles may interact with one another in guiding the multifaceted experience of spontaneous perception. We point to testable predictions derived from this framework, including (but not limited to) the roles of the default-mode network and slow cortical potentials in underlying spontaneous perception. We conclude by suggesting several outstanding questions for future research, extending the relevance of this framework to consciousness and spontaneous brain activity. 
In conclusion, the spontaneous perception framework proposed herein integrates components in human perception and cognition, which have been traditionally studied in isolation, and opens the door to understand how visual perception unfolds in its most natural context.


Author(s):  
Solveig Moldrheim

Most teachers have experienced various forms of prejudice expressed in the classroom. When one hears attitudes or opinions that go against school and society's values, it is not always easy to know how to respond appropriately and wisely.

Educators have a social responsibility both towards the individual and the community. Individuals are the learning subjects, but the context for learning is group-based. Teachers' social responsibility extends both to individuals who have expressed prejudice against a particular group and to those who identify themselves with that group. In addition, educators are responsible for the group-based learning arena to which all the individuals belong. Beyond this, schools are expected to contribute to a democratic society. Preventing prejudice expressed in the classroom will not only ensure a safer environment for the pupils; it will also contribute to society as a whole by promoting democratic values. In other words, there are several reasons why schools should work to prevent prejudice.

Many people have antipathies or prejudices against groups of people. However, some groups are more often faced with prejudice than others. A prerequisite for the development of prejudice is the formation of categories. People are able to suppress their prejudices. Prejudices are not created in a vacuum; they are social stances that must be understood in the context of the specific human environment. Studies show that if a person has prejudices against Jews, for example, that person tends to be more disposed to have prejudices against other groups as well, such as gays, Muslims, and immigrants. This disposition is called "group-focused enmity."

When a child is between eight and twelve years old, the child starts to check and correct its perception of the world. Before it reaches this stage, the child's comments about out-groups mainly stem from other people's instructions. Studies from the USA have shown that negativity towards people with a different skin color decreases from around the age of 10 compared to when the child was younger.

An individual's attitudes are formed on the basis of that person's overall experience. Although formed individually, experience often takes place in social interaction with other individuals. In social settings, people find their significant others, that is, individuals they may mirror and adjust to. Such individuals may include parents, siblings, friends, and teachers. The school is therefore a very important arena for promoting positive attitudes.

Albert Einstein allegedly claimed that "It is harder to crack a prejudice than an atom." Even if we take hold of prejudices and actively seek to fight them, doing so requires time and energy. The process of changing a person's attitude is significantly longer than the process of developing a person's academic knowledge and skills.

It is often said that "prejudice must be fought with knowledge." Prejudice consists of both beliefs and attitudes. It is important that educators have access to constructive meeting arenas, books, games, and movies that can make room for empathy. It is essential to find learning resources, initiatives, and approaches that promote values such as empathy and community, and then to create positive experiences for the pupils.


2019 ◽  
Vol 31 (10) ◽  
pp. 1563-1572 ◽  
Author(s):  
Clayton Hickey ◽  
Daniele Pollicino ◽  
Giacomo Bertazzoli ◽  
Ludwig Barbaro

People are quicker to detect examples of real-world object categories in natural scenes than is predicted by classic attention theories. One explanation for this puzzle suggests that experience renders the visual system sensitive to midlevel features diagnosing target presence. These are detected without the need for spatial attention, much as occurs for targets defined by low-level features like color or orientation. The alternative is that naturalistic search relies on spatial attention but is highly efficient because global scene information can be used to quickly reject nontarget objects and locations. Here, we use ERPs to differentiate between these possibilities. Results show that hallmark evidence of ultrafast target detection in frontal brain activity is preceded by an index of spatially specific distractor suppression in visual cortex. Naturalistic search for heterogeneous targets therefore appears to rely on spatial operations that act on neural object representations, as predicted by classic attention theory. People appear able to rapidly reject nontarget objects and locations, consistent with the idea that global scene information is used to constrain naturalistic search and increase search efficiency.

