auditory object
Recently Published Documents

TOTAL DOCUMENTS: 92 (FIVE YEARS: 12)
H-INDEX: 22 (FIVE YEARS: 2)

2021
Author(s): Polina Iamshchinina, Agnessa Karapetian, Daniel Kaiser, Radoslaw Martin Cichy

Humans can effortlessly categorize objects, whether they are conveyed through visual images or spoken words. To resolve the neural correlates of object categorization, studies have so far primarily focused on the visual modality. It is therefore still unclear how the brain extracts categorical information from auditory signals. In the current study we used EEG (N=47) and time-resolved multivariate pattern analysis to investigate (1) the time course with which object category information emerges in the auditory modality and (2) how the representational transition from individual object identification to category representation compares between the auditory and visual modalities. Our results show (1) that auditory object category representations can be reliably extracted from EEG signals and (2) that a similar representational transition occurs in the visual and auditory modalities, where an initial representation at the individual-object level is followed by a subsequent representation of the objects' category membership. Altogether, our results suggest an analogous hierarchy of information processing across sensory channels. However, we did not find evidence for a shared supra-modal code, suggesting that the contents of the different sensory hierarchies are ultimately modality-unique.
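The time-resolved decoding approach described in this abstract can be illustrated with a small Python sketch: a classifier is cross-validated on the multichannel EEG pattern at every time point, and the resulting accuracy curve indicates when category information becomes decodable. The epoch dimensions, labels, and classifier choice below are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of time-resolved multivariate pattern analysis (MVPA):
# at every time point, a classifier is cross-validated on the spatial
# pattern of EEG amplitudes to test when category information emerges.
# The data shapes and labels below are placeholders, not the study's data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 120   # hypothetical epoch dimensions
epochs = rng.standard_normal((n_trials, n_channels, n_times))
category = rng.integers(0, 2, size=n_trials)   # e.g., two object categories

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

# Decode category separately at each time point (5-fold cross-validation).
accuracy = np.array([
    cross_val_score(clf, epochs[:, :, t], category, cv=5).mean()
    for t in range(n_times)
])

# Time points where decoding exceeds chance suggest when category
# information becomes available in the EEG signal.
print("peak decoding accuracy:", accuracy.max(),
      "at time index", accuracy.argmax())
```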


Author(s): Pradeep Dheerendra, Nicolas Barascud, Sukhbinder Kumar, Tobias Overath, Timothy D. Griffiths

2021, pp. 1-8
Author(s): Alexander A. Aabedi, Sofia Kakaizada, Jacob S. Young, EunSeon Ahn, Daniel H. Weissman, ...

OBJECTIVE Intraoperative tasks for awake language mapping are typically selected based on the language tracts that will likely be encountered during tumor resection. However, diminished attention and arousal secondary to perioperative sedatives may reduce a task's usefulness for identifying eloquent cortex. For instance, accuracy in performing select language tasks may be high preoperatively but decline in the operating room. In the present study, the authors sought to identify language tasks that can be performed with high accuracy in both situational contexts, so that the neurosurgical team can be confident that speech errors committed during awake language mapping result from direct cortical stimulation of eloquent cortex rather than from poor performance in general.

METHODS We administered five language tasks to 44 patients: picture naming (PN), text reading (TR), auditory object naming (AN), repetition of 4-syllable words (4SYL), and production of syntactically intact sentences (SYNTAX). Performance was assessed using the 4-point scale of the Quick Aphasia Battery 24 hours preoperatively and intraoperatively. We next determined whether accuracy on each task was higher preoperatively than intraoperatively. We also determined whether 1) intraoperative accuracy on a given task predicted intraoperative performance on the other tasks and 2) low preoperative accuracy on a task predicted a decrease in accuracy intraoperatively.

RESULTS Relative to preoperative accuracy, intraoperative accuracy declined on PN (3.90 vs 3.82, p = 0.0001), 4SYL (3.96 vs 3.91, p = 0.0006), and SYNTAX (3.85 vs 3.67, p = 0.0001) but not on TR (3.96 vs 3.94, p = 0.13) or AN (3.70 vs 3.58, p = 0.058). Intraoperative accuracy on PN and AN independently predicted intraoperative accuracy on the remaining language tasks (p < 0.001 and p < 0.01, respectively). Finally, low preoperative accuracy on SYNTAX predicted a decrease in accuracy on this task intraoperatively (R² = 0.36, p = 0.00002).

CONCLUSIONS While TR lacks sensitivity in identifying language deficits at baseline, accuracy on TR is stable across testing settings. Baseline accuracy on the other four of our five language tasks was not predictive of intraoperative performance, signifying the need to repeat language tests prior to stimulation mapping to confirm reliability.
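For illustration, the two statistical questions in this abstract (does accuracy drop from the preoperative to the intraoperative test, and does low baseline accuracy predict a larger intraoperative decline?) can be sketched as follows. The scores are invented placeholders on the 4-point scale, and the specific tests (paired t-test, ordinary least-squares regression) are assumptions rather than the authors' reported analysis.

```python
# Sketch of the two analyses described above: (1) paired comparison of
# preoperative vs. intraoperative accuracy for one task and (2) regression
# of the intraoperative change on baseline accuracy. The arrays are
# invented placeholder scores on the 4-point scale, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients = 44
pre_syntax = np.clip(rng.normal(3.85, 0.15, n_patients), 0, 4)
intra_syntax = np.clip(pre_syntax - rng.normal(0.18, 0.20, n_patients), 0, 4)

# (1) Did accuracy decline from the preoperative to the intraoperative test?
t_stat, p_paired = stats.ttest_rel(pre_syntax, intra_syntax)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f}")

# (2) Does low baseline accuracy predict a larger intraoperative decline?
decline = pre_syntax - intra_syntax
slope, intercept, r, p_reg, se = stats.linregress(pre_syntax, decline)
print(f"regression: R^2 = {r**2:.2f}, p = {p_reg:.5f}")
```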


Author(s): Josef P. Rauschecker

When one talks about hearing, some may first imagine the auricle (or external ear), which is the only visible part of the auditory system in humans and other mammals. Its shape and size vary among people, but it does not tell us much about a person's ability to hear (except perhaps their ability to localize sounds in space, where the shape of the auricle plays a certain role). Most of what is used for hearing is inside the head, particularly in the brain. The inner ear transforms mechanical vibrations into electrical signals; the auditory nerve then sends these signals into the brainstem, where intricate preprocessing occurs. Although auditory brainstem mechanisms are an important part of central auditory processing, it is the processing taking place in the cerebral cortex (with the thalamus as the mediator) that enables auditory perception and cognition. Human speech and the appreciation of music can hardly be imagined without a complex cortical network of specialized regions, each contributing different aspects of auditory cognitive abilities. During the evolution of these abilities in higher vertebrates, especially birds and mammals, the cortex played a crucial role, so a great deal of what is referred to as central auditory processing happens there. Whether it is recognizing one's mother's voice, listening to Pavarotti sing or Yo-Yo Ma play the cello, or hearing or reading Shakespeare's sonnets, all of it evokes electrical vibrations in the auditory cortex, but it does not end there. Large parts of the frontal and parietal cortex receive auditory signals originating in auditory cortex, forming processing streams for auditory object recognition and auditory-motor control, before these signals are channeled into other parts of the brain for comprehension and enjoyment.


2021, Vol 12
Author(s): Gauthier Berthomieu, Vincent Koehl, Mathieu Paquier

Loudness is the magnitude of the auditory sensation that a listener experiences when exposed to a sound. Several sound attributes are reported to affect loudness, such as the sound pressure level at the listener's ears and the spectral content. In addition to these physical attributes of the stimulus, some subjective attributes also appear to affect loudness. When presented with a sound, a listener interacts with an auditory object and can focus on several aspects of the latter. Loudness appears to differ depending on how listeners apprehend this object, notably whether they focus on the sound that reaches their ears or on the sound that is produced by the source. The way listeners focus on the auditory object may depend on the stimulus itself. For instance, they might be more likely to focus on the sound emitted by the source if the latter is visible. The instructions given by the experimenters can also explicitly direct the listener's focus toward the sound reaching the ears or the sound emitted by the source. The present review aims to understand how listeners focus on the auditory object depending on the stimuli and instructions they are provided with, and to describe how loudness depends on this focus.


Author(s): Vasiliki Tsourtou

In the present cross-sectional experimental study we investigated infants' early ability to intermodally detect the numerosity of visual-auditory object-like and social stimuli. We assumed that presentation of face-voice stimuli would distract infants' attention from detection of the numerical invariant. Seventy-eight infants (aged 5, 7, and 9 months) participated in four experimental conditions (simultaneously projected pairs of identical objects, non-identical objects, objects projected together with a familiar face, and objects projected together with an unfamiliar face). Visual stimuli in each trial varied in numerosity (1-2 / 1-3 / 2-3) and were accompanied by piano sounds or voice sounds also varying in numerosity (one, two, or three sounds in La tonality). By means of the preferential looking technique, we measured infants' fixation of attention on the visual stimulus that numerically matched the sound. When object-like stimuli were projected, infants, except 5-month-old boys, tended to intermodally detect the numerical invariant. Shape similarity of the objects facilitated infants' intermodal detection of numerosity. When socially salient stimuli were co-presented with object-like stimuli, infants preferred to look at the face, ignoring the numerosity of the auditory stimulus. Neither sound quality (piano vs. voice) nor familiarity of the face (mother's face vs. stranger woman's face) affected infants' perception. Although intermodal detection of perceptual cues is a primary function of both face and number perception, each of these perceptual systems seems to follow a different developmental path.
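As a rough sketch of how a preferential-looking measure of this kind might be quantified, the proportion of looking time directed at the numerically matching display can be compared against the 0.5 chance level. The looking times, group size, and test below are hypothetical and are not taken from the study.

```python
# Sketch of a preferential-looking analysis: the proportion of looking time
# directed at the visual display that numerically matches the sound is
# compared against the 0.5 chance level. The looking times are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_infants = 26                                   # hypothetical group size
look_match = rng.gamma(5.0, 1.0, n_infants)      # seconds on matching display
look_nonmatch = rng.gamma(4.0, 1.0, n_infants)   # seconds on non-matching display

preference = look_match / (look_match + look_nonmatch)

# One-sample t-test against chance (0.5): values above chance indicate
# intermodal detection of the numerical invariant.
t_stat, p_value = stats.ttest_1samp(preference, 0.5)
print(f"mean preference = {preference.mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```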


2020, Vol 32 (1), pp. 111-123
Author(s): Mattson Ogg, Thomas A. Carlson, L. Robert Slevc

Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
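The representational similarity analysis mentioned here can be sketched as follows: a neural representational dissimilarity matrix (RDM) over the 36 tokens is correlated with a model RDM built from acoustic features. All arrays in the sketch are random placeholders, and the distance metrics are assumptions rather than the authors' exact choices.

```python
# Sketch of representational similarity analysis (RSA): a neural
# representational dissimilarity matrix (RDM) over the sound tokens is
# compared (Spearman correlation of condensed RDMs) with a model RDM
# built from acoustic features. All arrays below are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_tokens, n_sensors = 36, 160

# Placeholder sensor patterns per token (e.g., averaged around the decoding peak).
neural_patterns = rng.standard_normal((n_tokens, n_sensors))
neural_rdm = pdist(neural_patterns, metric="correlation")   # condensed RDM

# Placeholder acoustic feature vectors (e.g., harmonicity, spectrotemporal
# modulation statistics) used to build a model RDM.
acoustic_features = rng.standard_normal((n_tokens, 10))
model_rdm = pdist(acoustic_features, metric="euclidean")

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"neural-model RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
```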


2020, Vol 44 (4), pp. 60-76
Author(s): Stefan Riedel, Franz Zotter

Beamforming on the icosahedral loudspeaker (IKO), a compact, spherical loudspeaker array, was recently established and investigated as an instrument to produce auditory sculptures (i.e., 3-D sonic imagery) in electroacoustic music. Sound beams in the horizontal plane most effectively and expressively produce auditory objects via lateral reflections on sufficiently close walls and baffles. Can there be 3-D-printable arrays at drastically reduced cost and transducer count, but with similarly strong directivity in the horizontal plane? To find out, we adopt mixed-order Ambisonics schemes to control fewer, and predominantly horizontal, beam patterns, and we propose the 3|9|3 array as a suitable design, with beamforming crossing over to Ambisonics panning at high frequencies. Analytic models and measurements on hardware prototypes permit a comparison between the new design and the IKO regarding beamforming capacity. Moreover, we evaluate our 15-channel 3|9|3 prototype in listening experiments to find out whether the sculptural qualities and auditory object trajectories it produces are comparable to those of the 20-channel IKO.
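As a rough illustration of the kind of horizontal directivity such a design emphasizes, the sketch below evaluates an order-limited circular-harmonic beam pattern with max-rE style weights. The order, weighting, and steering angle are illustrative assumptions and do not describe the 3|9|3 prototype itself.

```python
# Minimal sketch of a horizontal (circular-harmonic) beam pattern of order N,
# the kind of predominantly horizontal directivity that mixed-order schemes
# emphasize. Weights and order are illustrative, not the 3|9|3 design itself.
import numpy as np

N = 3                               # hypothetical horizontal order
steer = 0.0                         # beam steering azimuth in radians
azimuth = np.linspace(-np.pi, np.pi, 361)

# max-rE style per-order weights for a 2-D (circular harmonic) beam.
weights = np.cos(np.arange(N + 1) * np.pi / (2 * N + 2))

# Beam pattern: weighted sum of cosines of the angular difference.
pattern = weights[0] + 2.0 * sum(
    weights[m] * np.cos(m * (azimuth - steer)) for m in range(1, N + 1)
)
pattern /= np.abs(pattern).max()

# Compare the response toward the steering direction (index 180, azimuth 0)
# with the response toward the rear (index 0, azimuth -pi).
print("front/back level difference (dB):",
      20 * np.log10(abs(pattern[180]) / abs(pattern[0])))
```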


2019
Author(s): Mattson Ogg, Thomas A. Carlson, L. Robert Slevc

Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography (MEG), we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the MEG recordings. We show that sound tokens can be decoded from brain activity beginning 90 milliseconds after stimulus onset with peak decoding performance occurring at 155 milliseconds post-stimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.

