auditory processes
Recently Published Documents


TOTAL DOCUMENTS

60
(FIVE YEARS 12)

H-INDEX

14
(FIVE YEARS 2)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jacques Pesnot Lerousseau ◽  
Gabriel Arnold ◽  
Malika Auvray

Abstract Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, over the last decade the idea that it reflects a mixture of both has emerged. In order to investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device which translates visual images into sounds. In addition, participants’ auditory abilities and their phenomenologies were measured. Our study revealed that, after training, when asked to identify sounds, processes shared with vision were involved, as participants’ performance in sound identification was influenced by the simultaneously presented visual distractors. In addition, participants’ performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.


Author(s):  
Stephen Grossberg

Visual and auditory processes represent sensory information, but do not evaluate its importance for survival or success. Interactions between perceptual/cognitive and evaluative reinforcement/emotional/motivational mechanisms accomplish this. Cognitive-emotional resonances support conscious feelings, knowing their source, and controlling motivation and responses to acquire valued goals. Also explained is how emotions may affect behavior without being conscious, and how learning adaptively times actions to achieve desired goals. Breakdowns in cognitive-emotional resonances can cause symptoms of mental disorders such as depression, autism, schizophrenia, and ADHD; when this happens, affective meanings fail to organize behavior. Historic trends in the understanding of cognition and emotion are summarized, including work of Chomsky and Skinner. Brain circuits of conditioned reinforcer learning and incentive motivational learning are modeled, including the inverted-U in conditioning as a function of interstimulus interval, secondary conditioning, and attentional blocking and unblocking. How humans and animals act as minimal adaptive predictors is explained using the CogEM model’s interactions between sensory cortices, amygdala, and orbitofrontal cortex. Cognitive-emotional properties solve phylogenetically ancient Synchronization and Persistence Problems using circuits that are conserved between mollusks and humans. Avalanche command circuits for learning arbitrary sequences of sensory-motor acts, dating back to crustacea, increase their sensitivity to environmental feedback as they morph over phylogeny into mammalian cognitive and emotional circuits. Antagonistic rebounds drive affective extinction. READ circuits model how life-long learning occurs without associative saturation or passive forgetting. Affective memories of opponent emotions like fear vs. relief can then persist until they are disconfirmed by environmental feedback.


Author(s):  
Marni Novick ◽  
Jay R. Lucker

Abstract Background Audiologists may choose to evaluate auditory temporal processing in assessing auditory processing abilities. Some may decide to use measures of nonverbal stimuli such as tonal or noise gap detection. Others may decide to use verbal measures such as time compressed sentences (TCS). Many may choose to use both. Purpose Since people typically come to audiologists for auditory processing testing complaining of problems processing verbal stimuli, the question arises whether measures of nonverbal stimuli provide evidence regarding a person's abilities to process verbal stimuli. That is, are there significant correlations between measures of verbal stimuli and nonverbal stimuli that are used to evaluate auditory temporal processing? Research Design The present investigation is an exploratory study using file review of 104 people seen for routine auditory processing evaluations by the authors. Study Sample A file review was completed based on data from 104 people seen for auditory processing evaluations. Data Collection and Analyses The data from these 104 files were used to evaluate whether there are any correlations between verbal and nonverbal measures of auditory temporal processing. The verbal measure used was the TCS subtest of the SCAN-3, while the nonverbal measures included the gap detection screening from the SCAN-3 as well as the gaps-in-noise measures. Results from these tests were compared to determine whether any significant correlations were found based on results from Pearson product-moment correlational analyses. Results None of the nonverbal measures were found to have a significant correlation with the TCS test findings based on the Pearson correlations used to analyze the data. Conclusion Results indicate that there are no significant correlations (relationships) between measures of auditory temporal processing using nonverbal stimuli versus verbal stimuli based on the tests used in the present investigation. These findings lead to the conclusion that tests using nonverbal stimuli measure different auditory processes than the measure of verbal stimuli used in the present investigation. Since people typically come complaining about understanding verbal input, it is concluded that audiologists should use some verbal measure of auditory temporal processing in their auditory processing test battery.


Author(s):  
Monika Lewandowska ◽  
Rafał Milner ◽  
Małgorzata Ganc ◽  
Elżbieta Włodarczyk ◽  
Joanna Dołżycka ◽  
...  

Abstract There are discrepancies in the literature regarding the course of central auditory processes (CAP) maturation in typically developing children and adolescents. The purpose of the study was to provide an overview of age-related improvement in CAP in Polish primary and secondary school students aged 7–16 years. 180 children/adolescents, subdivided into 9 age categories, and 20 adults (aged 18–24 years) performed the Dichotic Digit Test (DDT), Duration Pattern Test (DPT), Frequency Pattern Test (FPT), Gap Detection Test (GDT) and adaptive Speech-in-Noise (aSpN) tests. The 12-year-olds were retested after a week. We found age effects only for the DDT, DPT and FPT. In the right ear DDT the 7-year-olds performed more poorly than all groups ≥12. In the left ear DDT both 7- and 8-year-olds achieved fewer correct responses compared with the 13-, 14-, and 15-year-olds and with the adults. The right ear advantage was greater in the 7-year-olds than in the 15-year-olds and the adult group. At the age of 7 there were lower DPT and FPT scores than in all participants ≥13, whereas the 8-year-olds obtained fewer correct responses in the FPT than all age categories ≥12. Almost all groups (except for the 7-year-olds) performed better in the DPT than the FPT. The test-retest reliability for all tests was satisfactory. The study demonstrated that different CAP have their own patterns of improvement with age and that some of them are specific to the Polish population. The psychoacoustic battery may be useful in screening for CAP disorders in Poland.


2021 ◽  
Author(s):  
Jacques Pesnot Lerousseau ◽  
Gabriel Arnold ◽  
Malika Auvray

Abstract Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, over the last decade the idea that it reflects a mixture of both has emerged. In order to investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device which translates visual images into sounds. In addition, participants’ auditory abilities and their phenomenologies were measured. Our study revealed that, after training, when asked to identify sounds, processes shared with vision were involved, as participants’ performance in sound identification was influenced by the simultaneously presented visual distractors. In addition, participants’ performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies. Highlights: Trained people spontaneously use processes shared with vision when hearing sounds from the device. Processes with conversion devices find roots both in vision and audition. Training with a visual-to-auditory conversion device induces perceptual plasticity.


2020 ◽  
Vol 10 (8) ◽  
pp. 531
Author(s):  
Yao Wang ◽  
Limeng Shi ◽  
Gaoyuan Dong ◽  
Zuoying Zhang ◽  
Ruijuan Chen

Transcranial electrical stimulation (tES) can adjust the membrane potential by applying a weak current on the scalp to change the related nerve activity. In recent years, tES has proven its value in studying the neural processes involved in human behavior. The study of central auditory processes focuses on the analysis of behavioral phenomena, including sound localization, auditory pattern recognition, and auditory discrimination. To our knowledge, studies on the application of tES in the field of hearing and the electrophysiological effects are limited. Therefore, we reviewed the neuromodulatory effect of tES on auditory processing, behavior, and cognitive function and have summarized the physiological effects of tES on the auditory cortex.


Author(s):  
Hui Wang ◽  
Hanbo Zhao ◽  
Keping Sun ◽  
Xiaobin Huang ◽  
Longru Jin ◽  
...  

Abstract High-frequency hearing is important for the survival of both echolocating bats and whales, but our understanding of its genetic basis is scattered and segmented. In this study, we combined RNA-Seq and comparative genomic analyses to obtain insights into the comprehensive gene expression profile of the cochlea and the adaptive evolution of hearing-related genes. A total of 144 genes were found to have been under positive selection in various species of echolocating bats and toothed whales, 34 of which were identified to be related to hearing behavior or auditory processes. Subsequently, multiple physiological processes associated with those genes were found to have adaptively evolved in echolocating bats and toothed whales, including cochlear bony development, antioxidant activity, ion balance, and homeostatic processes, along with signal transduction. In addition, abundant convergent/parallel genes and sites were detected between different pairs of echolocator species; however, no specific hearing-related physiological pathways were enriched by them and almost all of the convergent/parallel signals were selectively neutral, as previously reported. Notably, two adaptive parallel evolved sites in TECPR2 were shown to have been under positive selection, indicating their functional importance for the evolution of echolocation and high-frequency hearing in laryngeal echolocating bats. This study deepens our understanding of the genetic bases underlying high-frequency hearing in the cochlea of echolocating bats and toothed whales.


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Emily B. J. Coffey ◽  
Trent Nicol ◽  
Travis White-Schwoch ◽  
Bharath Chandrasekaran ◽  
Jennifer Krizman ◽  
...  

Abstract The auditory frequency-following response (FFR) is a non-invasive index of the fidelity of sound encoding in the brain, and is used to study the integrity, plasticity, and behavioral relevance of the neural encoding of sound. In this Perspective, we review recent evidence suggesting that, in humans, the FFR arises from multiple cortical and subcortical sources, not just subcortically as previously believed, and we illustrate how the FFR to complex sounds can enhance the wider field of auditory neuroscience. Far from being of use only to study basic auditory processes, the FFR is an uncommonly multifaceted response yielding a wealth of information, with much yet to be tapped.


2019 ◽  
Author(s):  
D Lesenfants ◽  
T Francart

Abstract Many active neuroimaging paradigms rely on the assumption that the participant sustains attention to a task. However, in practice, there will be momentary distractions, potentially influencing the results. We investigated the effect of focal attention, objectively quantified using a measure of brain signal entropy, on cortical tracking of the speech envelope. The latter is a measure of neural processing of naturalistic speech. We let participants listen to 44 minutes of natural speech, while their electroencephalogram was recorded, and quantified both entropy and cortical envelope tracking. Focal attention affected the later brain responses to speech, between 100 and 300 ms latency. By only taking into account periods with higher attention, the measured cortical speech tracking improved by 47%. This illustrates the impact of the participant’s active engagement in the modeling of the brain-speech response and the importance of accounting for it. Our results suggest a cortico-cortical loop that initiates during the early stages of auditory processing, then propagates through the parieto-occipital and frontal areas, and finally impacts the later-latency auditory processes in a top-down fashion. The proposed framework could be transposed to other active electrophysiological paradigms (visual, somatosensory, etc.) and help to control the impact of participants’ engagement on the results.
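The analysis strategy in this abstract — restricting the envelope-tracking estimate to high-attention periods — can be sketched as follows. All numbers, variable names, and the attention-tracking relationship below are hypothetical illustrations, not the authors' data or pipeline; the sketch only shows the selection logic, using per-segment tracking scores and an attention index.

```python
import random
import statistics

random.seed(1)

# Hypothetical per-segment data: a cortical tracking score (e.g., the
# correlation between reconstructed and actual speech envelope) and an
# attention index (e.g., derived from brain signal entropy) for each segment.
# For illustration we assume tracking rises with attention, as the paper reports.
n_segments = 200
attention = [random.random() for _ in range(n_segments)]
tracking = [0.10 + 0.10 * a + random.gauss(0, 0.03) for a in attention]

overall = statistics.mean(tracking)

# Keep only segments above median attention, mimicking the strategy of
# computing tracking from high-attention periods only.
med = statistics.median(attention)
attended = [t for t, a in zip(tracking, attention) if a > med]
gated = statistics.mean(attended)

print(f"mean tracking, all segments:      {overall:.3f}")
print(f"mean tracking, attended segments: {gated:.3f}")
```

Under the assumed positive attention-tracking relationship, the attention-gated estimate exceeds the ungated one, which is the qualitative effect behind the reported 47% improvement; the real pipeline would compute tracking from EEG via envelope reconstruction (e.g., regularized linear decoders) rather than from simulated scores.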

