Auditory input: Recently Published Documents

Total documents: 174 (last five years: 53)
H-index: 28 (last five years: 3)

Cognition (2022), Vol. 221, 104982
Authors: Stefan E. Huber, Markus Martini, Pierre Sachse

2021
Authors: Kelsey Klein, Elizabeth Walker, Bob McMurray

Objective: The objective of this study was to characterize the dynamics of real-time lexical access, including lexical competition among phonologically similar words, and semantic activation in school-age children with hearing aids (HAs) and children with cochlear implants (CIs). We hypothesized that developing spoken language via degraded auditory input would lead children with HAs or CIs to adapt their approach to spoken word recognition, especially by slowing down lexical access.

Design: Participants were children ages 9-12 years old with normal hearing (NH), HAs, or CIs. Participants completed a Visual World Paradigm task in which they heard a spoken word and selected the matching picture from four options. Competitor items were either phonologically similar, semantically similar, or unrelated to the target word. As the target word unfolded, children’s fixations to the target word, cohort competitor, rhyme competitor, semantically related item, and unrelated item were recorded as indices of ongoing lexical and semantic activation.

Results: Children with HAs and children with CIs showed slower fixations to the target, reduced fixations to the cohort, and increased fixations to the rhyme, relative to children with NH. This wait-and-see profile was more pronounced in the children with CIs than in the children with HAs. Children with HAs and children with CIs also showed delayed fixations to the semantically related item, though this delay was attributable to their delay in activating words in general, not to a distinct semantic source.

Conclusions: Children with HAs and children with CIs showed qualitatively similar patterns of real-time spoken word recognition. Findings suggest that developing spoken language via degraded auditory input causes long-term cognitive adaptations in how listeners recognize spoken words, regardless of the type of hearing device used. Delayed lexical activation directly led to delayed semantic activation in children with HAs and CIs. This delay in semantic processing may impact these children’s ability to understand connected speech in everyday life.
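To make the analysis concrete, here is a minimal sketch of how Visual World Paradigm fixations can be reduced to the activation curves described above. The per-sample label format, the interest-area names, and the function itself are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np

# Hypothetical interest-area labels for the four-picture display,
# following the conditions named in the abstract.
ROLES = ["target", "cohort", "rhyme", "semantic", "unrelated"]

def fixation_proportions(trials, n_samples):
    """Proportion of trials fixating each interest area at each
    eye-tracker sample after target-word onset.

    `trials` is a list of per-trial label sequences, one label per
    sample (an assumed data format for this example).
    """
    counts = {role: np.zeros(n_samples) for role in ROLES}
    for labels in trials:
        for t, label in enumerate(labels[:n_samples]):
            if label in counts:
                counts[label][t] += 1.0
    return {role: c / len(trials) for role, c in counts.items()}

# A slower rise of the target curve and a prolonged rhyme curve in
# the HA/CI groups, relative to NH controls, would index the
# "wait-and-see" profile described above.
```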


2021, Vol. 15
Authors: Fabian Kiepe, Nils Kraus, Guido Hesselmann

Self-generated auditory input is perceived as less loud than the same sounds generated externally. This phenomenon, called sensory attenuation (SA), has been studied for decades and is often explained by motor-based forward models. Recent developments in SA research, however, challenge these models. We review the current state of knowledge regarding the significance of SA and its role in human behavior and functioning. Focusing on behavioral and electrophysiological results in the auditory domain, we provide an overview of the characteristics and limitations of existing SA paradigms and highlight the problem of isolating SA from other predictive mechanisms. Finally, we explore different hypotheses that attempt to explain the heterogeneous empirical findings, and the impact of the predictive coding framework on this research area.
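As a toy illustration of the motor-based forward-model account mentioned above (a sketch of the general idea, not a model from the reviewed literature), a prediction derived from the motor command can be used to damp the matching portion of the incoming signal:

```python
import numpy as np

def perceived_intensity(actual, predicted, attenuation=0.5):
    """Toy forward-model account of sensory attenuation: the part of
    the input that matches the motor system's prediction is damped
    before perception. `attenuation` is an illustrative free
    parameter, not an empirically estimated value."""
    matched = np.minimum(actual, predicted)   # predicted component
    residual = actual - matched               # unpredicted component
    return (1.0 - attenuation) * matched + residual

print(perceived_intensity(actual=1.0, predicted=0.0))  # external sound -> 1.0
print(perceived_intensity(actual=1.0, predicted=1.0))  # self-generated -> 0.5
# The same physical input yields a weaker percept when it was
# predicted, which is the signature of SA.
```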


PLoS ONE (2021), Vol. 16 (10), e0258322
Authors: Mareike Brych, Supriya Murali, Barbara Händel

The blink rate increases when a person engages in conversation compared with quiet rest. Since various factors have been suggested to explain this increase, the present series of studies tested the influence of different motor activities, cognitive processes, and auditory input on blink behavior while minimizing visual stimulation and social influences. Our results suggest that neither cognitive demands without verbalization, nor isolated lip, jaw, or tongue movements, nor auditory input during vocalization or listening influence blinking behavior. In three experiments, we provide evidence that complex facial movements during unvoiced speaking are the driving factor behind the increase in blinking. When the complexity of the motor output increased further, as during voiced speech, the blink rate rose even more. Similarly, complex facial movements without cognitive demands, such as sucking on a lollipop, increased the blink rate. Such purely motor-related influences on blinking advise caution, particularly when blink rates assessed during patient interviews are used as a neurological indicator.
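For illustration, a blink rate of the kind compared across these conditions could be computed from a normalized eyelid-aperture trace as sketched below; the threshold and refractory period are assumed values, not the study's detection parameters.

```python
import numpy as np

def blink_rate(aperture, fs, closed_thresh=0.2, min_gap_s=0.1):
    """Estimate blinks per minute from a normalized eyelid-aperture
    trace (1 = fully open, 0 = closed) sampled at `fs` Hz."""
    closed = aperture < closed_thresh
    # Rising edges of the 'closed' state mark blink onsets.
    onsets = np.flatnonzero(np.diff(closed.astype(int)) == 1)
    # Merge onsets closer together than the refractory period,
    # so one long closure is not counted twice.
    if onsets.size:
        keep = np.r_[True, np.diff(onsets) > min_gap_s * fs]
        onsets = onsets[keep]
    minutes = len(aperture) / fs / 60.0
    return onsets.size / minutes
```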


Authors: Ella Z. Lattenkamp, Meike Linnenschmidt, Eva Mardus, Sonja C. Vernes, Lutz Wiegrebe, ...

Human vocal development and speech learning require acoustic feedback, and humans who are born deaf do not acquire a normal adult speech capacity. Most other mammals display a largely innate vocal repertoire. Like humans, bats are thought to be among the few taxa capable of vocal learning, as they can acquire new vocalizations by modifying their vocalizations according to auditory experience. We investigated the effect of acoustic deafening on the vocal development of the pale spear-nosed bat. Three juvenile pale spear-nosed bats were deafened, and their vocal development was studied in comparison with an age-matched hearing control group. The results show that during development the deafened bats increased their vocal activity, and their vocalizations were substantially altered, being much shorter, higher in pitch, and more aperiodic than the vocalizations of the control animals. The pale spear-nosed bat thus relies on auditory feedback for vocal development, and in the absence of auditory input, species-atypical vocalizations are acquired. This work serves as a basis for further research using the pale spear-nosed bat as a mammalian model for vocal learning, and contributes to comparative studies on hearing impairment across species. This article is part of the theme issue ‘Vocal learning in animals and humans’.
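The acoustic measures reported above (duration, pitch, periodicity) could be extracted along the following lines; the frequency bounds and the choice of librosa's pYIN pitch tracker are assumptions for this sketch, not the study's method.

```python
import numpy as np
import librosa

def call_features(wav_path, fmin=5000.0, fmax=40000.0):
    """Duration, median fundamental frequency, and voiced fraction
    for one recorded call. The frequency bounds are placeholders
    for a bat's vocal range, not values from the study."""
    y, sr = librosa.load(wav_path, sr=None)  # keep native sample rate
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    duration_s = len(y) / sr
    median_f0 = np.nanmedian(f0)        # pitch (NaN frames are unvoiced)
    voiced_fraction = np.mean(voiced)   # lower values ~ more aperiodic
    return duration_s, median_f0, voiced_fraction

# Shorter durations, higher median F0, and lower voiced fractions in
# the deafened animals would correspond to the reported alterations.
```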


2021, Vol. 62 (3), pp. 340-343
Author: Alison M. Mahoney

Because sensory theatre productions are designed with neurodiverse audiences in mind, practitioners are first and foremost concerned with accessibility at all levels for their audience members, incorporating multiple senses throughout a performance to allow a variety of entry points for audiences that may have wildly divergent—and often competing—access needs. One-to-one interaction between performers and audience members results in highly flexible performances that respond to physical and auditory input from individual audience members, through which performers curate customized multisensory experiences that communicate the production's theatrical world to its audience. Given this reliance on close-up interaction, the circumstances surrounding the COVID-19 pandemic have posed a particular challenge for sensory theatre makers. In in-person sensory theatre, performers focus on neurodivergent audience members, with parents and paid carers often taking a (literal) back seat, but remotely delivered sensory theatre during COVID-19 hinges on the carer's facilitation of sensory engagement curated by sensory theatre practitioners. Oily Cart, a pioneering London-based sensory theatre company, responded to COVID-19 restrictions with a season of work presented in various formats in audiences’ homes, and their production Space to Be marked a shift in the company's audience engagement to include an emphasis on the carer's experience. Using this production as a case study, I argue that the pivotal role adopted by carers during the pandemic has the potential to shape future in-person productions, moving practitioners toward a more holistic, neurodiverse audience experience that challenges a disabled–nondisabled binary by embracing carers’ experiences alongside those of neurodivergent audience members.


2021, Vol. 15
Authors: Rui Guo, Yang Li, Jiao Liu, Shusheng Gong, Ke Liu

Hearing is one of the most important senses for survival, and its loss is an independent risk factor for dementia. Hearing loss (HL) can lead to communication difficulties, social isolation, and cognitive dysfunction. The hippocampus is a brain region deeply involved in the formation of learning and memory, and it is critical not only for declarative memory but also for social memory. However, whether HL affects learning and memory remains poorly understood. This study aimed to identify the relationship between HL and hippocampus-associated cognitive function. Mice whose auditory input was completely eliminated before the onset of hearing served as the animal model. They were first examined via auditory brainstem response (ABR) to confirm the elimination of hearing, and behavioral tests were used to assess social memory capacity. We found significant impairment of social memory in mice with HL compared with controls (p < 0.05); however, no significant differences were seen in novel object recognition, the Morris water maze (MWM), or locomotion in the open field (p > 0.05). Therefore, our study is the first to demonstrate that hearing input is required for the formation of social memory and that hearing stimuli play an important role in the development of normal cognitive ability.
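Social-memory tests of this kind are commonly scored with a discrimination index contrasting investigation times for a novel versus a familiar conspecific. The sketch below shows that standard formula, as an assumed analysis rather than the paper's exact one.

```python
def discrimination_index(t_novel, t_familiar):
    """Relative preference for investigating a novel conspecific over
    a familiar one, in [-1, 1]. Positive values indicate intact
    social memory; values near zero, as reported for the deafened
    mice, suggest impairment. (A standard behavioral measure, not
    necessarily the paper's exact scoring.)"""
    return (t_novel - t_familiar) / (t_novel + t_familiar)

# e.g. a control mouse spending 120 s with the novel and 60 s with
# the familiar conspecific: discrimination_index(120.0, 60.0) ~ 0.33
```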


2021, Vol. 15
Authors: Viorica Marian, Sayuri Hayakawa, Scott R. Schroeder

How we perceive and learn about our environment is influenced by our prior experiences and existing representations of the world. Top-down cognitive processes, such as attention and expectations, can alter how we process sensory stimuli, both within a modality (e.g., effects of auditory experience on auditory perception) and across modalities (e.g., effects of visual feedback on sound localization). Here, we demonstrate that experience with different types of auditory input (spoken words vs. environmental sounds) modulates how humans remember concurrently presented visual objects. Participants viewed a series of line drawings (e.g., a picture of a cat) displayed in one of four quadrants while listening to a word or sound that was congruent (e.g., “cat” or <meow>), incongruent (e.g., “motorcycle” or <vroom–vroom>), or neutral (e.g., a meaningless pseudoword or a tonal beep) relative to the picture. Following the encoding phase, participants were presented with the original drawings plus new drawings and asked to indicate whether each one was “old” or “new.” If a drawing was designated as “old,” participants then reported where it had been displayed. We find that words and sounds both elicit more accurate memory for what objects were previously seen, but only congruent environmental sounds enhance memory for where objects were positioned, even though the auditory stimuli were not meaningful spatial cues to the objects’ locations on the screen. Given that during real-world listening conditions, environmental sounds, but not words, reliably originate from the location of their referents, listening to sounds may attune the visual dorsal pathway to facilitate attention and memory for objects’ locations. We propose that audio-visual associations in the environment and in our previous experience jointly contribute to visual memory, strengthening visual memory through exposure to auditory input.
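Old/new recognition performance in such designs is often summarized with the signal-detection measure d'. A minimal sketch of that standard computation follows (an assumed analysis, not necessarily the authors' reported one).

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity for an old/new recognition test.
    Uses the log-linear correction so rates of 0 or 1 do not give
    infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g. d_prime(40, 10, 5, 45) quantifies "what" memory; the quadrant
# reports would be scored separately to quantify "where" memory.
```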


2021, Vol. 12 (1)
Authors: L. Godenzini, D. Alwis, R. Guzulaitis, S. Honnuraiah, G. J. Stuart, ...

The capacity of the brain to encode multiple types of sensory input is key to survival. Yet, how neurons integrate information from multiple sensory pathways and to what extent this influences behavior is largely unknown. Using two-photon Ca2+ imaging, optogenetics and electrophysiology in vivo and in vitro, we report the influence of auditory input on sensory encoding in the somatosensory cortex and show its impact on goal-directed behavior. Monosynaptic input from the auditory cortex enhanced dendritic and somatic encoding of tactile stimulation in layer 2/3 (L2/3), but not layer 5 (L5), pyramidal neurons in forepaw somatosensory cortex (S1). During a tactile-based goal-directed task, auditory input increased dendritic activity and reduced reaction time, which was abolished by photoinhibition of auditory cortex projections to forepaw S1. Taken together, these results indicate that dendrites of L2/3 pyramidal neurons encode multisensory information, leading to enhanced neuronal output and reduced response latency during goal-directed behavior.
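For context, dendritic and somatic fluorescence signals from two-photon Ca2+ imaging are conventionally expressed as ΔF/F relative to a pre-stimulus baseline. A minimal sketch, with an assumed baseline window rather than the paper's parameters:

```python
import numpy as np

def delta_f_over_f(trace, fs, baseline_s=1.0):
    """ΔF/F for a fluorescence trace sampled at `fs` Hz, using the
    mean of a pre-stimulus window as the baseline F0. The window
    length is an illustrative choice, not taken from the paper."""
    f0 = np.mean(trace[: int(baseline_s * fs)])
    return (trace - f0) / f0

# Larger ΔF/F transients in L2/3 dendrites on sound-plus-touch trials
# than on touch-only trials would reflect the multisensory
# enhancement described above.
```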

