listening behavior
Recently Published Documents

Total documents: 90 (last five years: 23)
H-index: 15 (last five years: 2)

2021 ◽  
Author(s):  
Takahiro Hiraga ◽  
Yasufumi Yamada ◽  
Ryo Kobayashi

Bats perceive the three-dimensional (3D) environment by emitting ultrasound pulses from their nose or mouth and receiving echoes through both ears. To detect the position of a target object, it is necessary to know the distance and direction of the target. Certain bat species synchronize the movement of their pinnae with pulse emission, and it is this behavior that enables 3D direction detection. However, the significance of bats’ ear motions remains unclear. In this study, we construct a model of an active listening system including the motion of the ears, and conduct mathematical investigations to clarify the importance of ear motion in 3D direction detection. The theory suggests that only certain ear motions, namely three-axis rotation, accomplish accurate and robust 3D direction detection. Our theoretical analysis also strongly supports the behavior whereby bats move their pinnae in the antiphase mode. In addition, we provide the conditions for ear motions to ensure accurate and robust direction detection, suggesting that simply shaped hearing directionality and well-selected, uncomplicated ear motions are sufficient to achieve precise and robust 3D direction detection. Our findings and mathematical approach have the potential to be used in the design of active sensing systems in various engineering fields.
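The core geometric idea can be sketched in a few lines: rotating a directional hearing pattern about different axes changes the received echo intensity in a direction-dependent way, and that variation is what a direction estimator could exploit. This is a toy illustration, not the authors' model; the cardioid directivity, the target direction, and the 20° rotation angle are all hypothetical choices.

```python
import numpy as np

def rotation(axis, theta):
    """Rotation matrix about the x, y, or z axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def ear_gain(ear_axis, target_dir):
    """Toy cardioid directivity: loudest when the ear points at the target."""
    return 0.5 * (1.0 + ear_axis @ target_dir)

# Hypothetical target direction (unit vector) and a forward-pointing ear axis.
target = np.array([0.6, 0.48, 0.64])
ear = np.array([0.0, 0.0, 1.0])

# Rotating the pinna about each of the three axes yields a set of
# direction-dependent intensity readings from which a 3D direction
# estimate could in principle be recovered.
readings = [ear_gain(rotation(ax, np.deg2rad(20)) @ ear, target)
            for ax in ("x", "y", "z")]
```

Note that a rotation about z leaves this forward-pointing ear axis unchanged, so that reading adds no new information; this mirrors the paper's point that only well-chosen rotation axes make direction detection accurate and robust.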


2021 ◽  
pp. 62-91
Author(s):  
Sylvia Sierra

This chapter presents a qualitative analysis of examples from five everyday conversations in which Millennial friends in their late twenties display their understanding, engagement with, and appreciation of intertextual media references through different displays of listening. In addition to analyzing how participants engage as listeners with media references, the chapter analyzes playback data with participants, which provides insights into their listening behavior. The listening displays analyzed in this study are minimal responses, repetition of a media text, laughter and smiling, and participation in a play frame. This chapter shows that, through engaging in a play frame based on media references, participants construct a shared group identity around their shared experience and knowledge of media references. The chapter advances our understanding of displays of shared cultural knowledge in intertextual processes and shows how engagement with media references can be used to construct shared group identities.


Author(s):  
Erin R. O'Neill ◽  
John D. Basile ◽  
Peggy Nelson

Purpose: The goal of this study was to assess the listening behavior and social engagement of cochlear implant (CI) users and normal-hearing (NH) adults in daily life and to relate these actions to objective hearing outcomes.

Method: Ecological momentary assessments (EMAs) collected using a smartphone app were used to probe patterns of listening behavior in CI users and age-matched NH adults and to detect differences in social engagement and listening behavior in daily life. Participants completed very short surveys every 2 hr to provide snapshots of typical, everyday listening and socializing, as well as longer, reflective surveys at the end of the day to assess listening strategies and coping behavior. Speech perception testing, with accompanying ratings of task difficulty, was also performed in a lab setting to uncover possible correlations between objective and subjective listening behavior.

Results: Comparisons between speech intelligibility testing and EMA responses showed that poorer-performing CI users spent more time at home and less time conversing with others than higher-performing CI users and their NH peers. Perception of listening difficulty also differed markedly between CI users and NH listeners, with CI users reporting little difficulty despite poor speech perception performance. However, both CI users and NH listeners spent most of their time in listening environments they considered “not difficult.” CI users also reported using several compensatory listening strategies, such as visual cues, whereas NH listeners did not.

Conclusion: Overall, the data indicate systematic differences in how individual CI users and NH adults navigate and manipulate listening and social environments in everyday life.


PLoS Biology ◽  
2021 ◽  
Vol 19 (10) ◽  
pp. e3001410
Author(s):  
Mohsen Alavash ◽  
Sarah Tune ◽  
Jonas Obleser

In multi-talker situations, individuals adapt behaviorally to the listening challenge mostly with ease, but how do brain neural networks shape this adaptation? We here establish a long-sought link between large-scale neural communications in electrophysiology and behavioral success in the control of attention in difficult listening situations. In an age-varying sample of N = 154 individuals, we find that connectivity between intrinsic neural oscillations extracted from source-reconstructed electroencephalography is regulated according to the listener’s goal during a challenging dual-talker task. These dynamics occur as spatially organized modulations in power-envelope correlations of alpha and low-beta neural oscillations during approximately 2-s intervals most critical for listening behavior, relative to resting-state baseline. First, left frontoparietal low-beta connectivity (16 to 24 Hz) increased during anticipation and processing of a spatial-attention cue before speech presentation. Second, posterior alpha connectivity (7 to 11 Hz) decreased during comprehension of competing speech, particularly around target-word presentation. Connectivity dynamics of these networks were predictive of individual differences in the speed and accuracy of target-word identification, respectively, but proved unconfounded by changes in neural oscillatory activity strength. Successful adaptation to a listening challenge thus latches onto 2 distinct yet complementary neural systems: a beta-tuned frontoparietal network enabling flexible adaptation to an attentive listening state and an alpha-tuned posterior network supporting attention to speech.
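A minimal sketch of the power-envelope correlation measure described above, assuming a standard band-pass filter plus a Hilbert amplitude envelope; the paper's actual pipeline (source reconstruction, trial structure, and so on) is considerably more involved. Two signals sharing a common drive serve as stand-ins for a pair of cortical sources.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_envelope(x, fs, lo, hi):
    """Band-pass filter, then take the Hilbert amplitude envelope."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x)))

rng = np.random.default_rng(0)
fs, n = 250, 5000                        # 250 Hz sampling, a 20-s segment
shared = rng.standard_normal(n)          # common drive -> correlated envelopes
sig_a = shared + 0.5 * rng.standard_normal(n)
sig_b = shared + 0.5 * rng.standard_normal(n)

env_a = band_envelope(sig_a, fs, 7, 11)  # alpha band (7-11 Hz)
env_b = band_envelope(sig_b, fs, 7, 11)
r = np.corrcoef(env_a, env_b)[0, 1]      # power-envelope correlation
```

In the study, such correlations are computed between many source pairs and contrasted against a resting-state baseline; this sketch only shows the per-pair measure.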


Author(s):  
Natalie Alexa Roese ◽  
Julia Merrill

The current study investigated how music was used during the COVID-19 pandemic and how personal factors affected music-listening behavior. During the shutdown in spring 2020 in Germany, 539 participants took part in an online survey, reporting on functions of music listening, attributes of the music listened to, and active engagement with music, retrospectively both before and during the pandemic. In addition to these implicit questions, participants were asked to describe the changes they explicitly noticed in their handling of music during COVID-19, their current worries, and their new everyday life during the pandemic, as well as personality traits and stress reactivity. A logistic regression model was fitted, showing that people reduced their active engagement with music during the lockdown, while the functions of killing time and overcoming loneliness became more important, reflecting the need for distraction and for filling the silence. Before the lockdown, music was listened to for motor synchronization and enhanced well-being, reflecting how people lost both their musical and activity routines during the lockdown. The importance of in-person engagement with music in people’s lives became particularly evident in the connection between worries about further restrictions and the need for live music.
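The kind of logistic regression reported above can be sketched as follows. Everything here is simulated, not the study's data: the predictor names loosely mirror the survey constructs, and the coefficients used to generate the outcome are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400

# Hypothetical survey scores, loosely mirroring the study's constructs:
active = rng.normal(size=n)       # active engagement with music
kill_time = rng.normal(size=n)    # listening to kill time / fill the silence
motor_sync = rng.normal(size=n)   # listening for motor synchronization

# Simulated outcome: 1 = a lockdown-period report, 0 = pre-lockdown.
# Lockdown reports get lower engagement/motor-sync and higher kill-time scores.
logit = -1.0 * active + 1.2 * kill_time - 0.8 * motor_sync
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([active, kill_time, motor_sync])
model = LogisticRegression().fit(X, y)
coefs = model.coef_[0]   # fitted signs should match the simulated effects
```

The sign pattern of the fitted coefficients is what carries the substantive interpretation: negative weights on engagement and motor synchronization, a positive weight on killing time, matching the direction of change the study describes.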


2021 ◽  
Author(s):  
Lawrence Spear ◽  
Ashlee Milton ◽  
Garrett Allen ◽  
Amifa Raj ◽  
Michael Green ◽  
...  

2021 ◽  
Author(s):  
Vanessa C. Irsik ◽  
Ingrid Johnsrude ◽  
Bjorn Herrmann

Fluctuating background masking sounds facilitate speech intelligibility by providing speech ‘glimpses’ (masking release). Older adults benefit less from glimpses, but masking release is typically investigated using isolated sentences. Recent work indicates that naturalistic speech (spoken stories) may qualitatively alter speech-in-noise listening. Moreover, neural sensitivity to different amplitude envelope profiles (ramped vs. damped) changes with age, but whether this impacts speech listening is unknown. In three experiments, we investigate how masking release in younger and older adults differs for masked disconnected sentences and stories, and how intelligibility varies with masker temporal profile. Intelligibility was generally greater for damped compared to ramped maskers for both age groups and speech types. Masking release was reduced in older relative to younger adults for disconnected sentences (Experiment 1), and stories with a randomized sentence order (Experiment 3). When listening to stories with a coherent narrative, older adults demonstrated equal (Experiment 3) or greater (Experiment 2) masking release compared to younger adults. Reduced masking release previously observed in older adults does not appear to generalize to sounds with an engaging, connected narrative: this reinforces the idea that the listening materials qualitatively change listening behavior and that standard intelligibility paradigms may underestimate speech-listening abilities in older adults.
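The ramped vs. damped masker profiles can be sketched as a pair of time-reversed exponential envelopes imposed on a noise carrier. This is a generic construction for illustration only; the duration, time constant, and sampling rate here are assumptions, not the study's stimulus parameters.

```python
import numpy as np

fs = 16000                                    # assumed sampling rate
t = np.linspace(0, 0.5, int(fs * 0.5), endpoint=False)

# Damped: sharp onset, exponential decay. Ramped: its time reversal,
# i.e. gradual onset, sharp offset. The 100-ms time constant is assumed.
damped = np.exp(-t / 0.1)
ramped = damped[::-1]

rng = np.random.default_rng(2)
noise = rng.standard_normal(t.size)

# Imposing each profile on the same noise carrier gives two maskers
# whose envelopes have identical energy but opposite temporal asymmetry.
damped_masker = damped * noise
ramped_masker = ramped * noise
```

Because the two envelopes are exact mirror images, any intelligibility difference between the resulting maskers can be attributed to temporal asymmetry rather than overall level.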


2021 ◽  
Vol 8 ◽  
Author(s):  
Catharine Oertel ◽  
Patrik Jonell ◽  
Dimosthenis Kontogiorgos ◽  
Kenneth Funes Mora ◽  
Jean-Marc Odobez ◽  
...  

Listening to one another is essential to human-human interaction. In fact, we humans spend a substantial part of our day listening to other people, in private as well as in work settings. Attentive listening serves to gather information for oneself, but at the same time it signals to the speaker that he/she is being heard. To deduce whether our interlocutor is listening to us, we rely on reading his/her nonverbal cues, much as we use nonverbal cues to signal our own attention. Such signaling becomes more complex when we move from dyadic to multi-party interactions. Understanding how humans use nonverbal cues in a multi-party listening context not only increases our understanding of human-human communication but also aids the development of successful human-robot interactions. This paper brings together previous analyses of listener behavior in human-human multi-party interaction and provides novel insights into gaze patterns between the listeners in particular. We investigate whether the gaze patterns and feedback behavior observed in human-human dialogue are also beneficial for the perception of a robot in multi-party human-robot interaction. To answer this question, we implement an attentive listening system that generates multi-modal listening behavior based on our human-human analysis. We compare our system to a baseline system that does not differentiate between listener types in its behavior generation, and we evaluate it in terms of participants’ perception of the robot and their behavior, as well as the perception of third-party observers.


2021 ◽  
Vol 12 ◽  
Author(s):  
Ilana Harris ◽  
Ian Cross

Musical Group Interaction (MGI) has been found to promote prosocial tendencies, including empathy, across various populations. However, experimental study of the effects of everyday forms of musical engagement on prosocial tendencies is lacking, as is evidence on whether key aspects, such as the physical co-presence of MGI participants, are necessary to enhance prosocial tendencies. We developed an experimental procedure to study online engagement with collaborative playlists and to investigate the socio-cognitive components of prosocial tendencies expected to increase as a consequence of engagement. We aimed to determine whether the mere perceived presence of a partner during playlist-making could elicit observable correlates of the social processing implicated in both MGI and prosocial behaviors more generally, and to identify the potential roles of demographic, musical, and inter-individual differences. Preliminary results suggest that, for younger individuals, some of the social processes involved in joint music-making and implicated in empathic processes are likely to be elicited even by an assumption of virtual co-presence. In addition, individual differences in styles of listening behavior may mediate the effects of mere perceived partner presence on recognition memory.


2021 ◽  
Author(s):  
Mohsen Alavash ◽  
Sarah Tune ◽  
Jonas Obleser

Abstract: In multi-talker situations, individuals adapt behaviorally to the listening challenge mostly with ease, but how do brain neural networks shape this adaptation? We here establish a long-sought link between large-scale neural communications in electrophysiology and behavioral success in the control of attention in challenging listening situations. In an age-varying sample of N = 154 individuals, we find that connectivity between intrinsic neural oscillations extracted from source-reconstructed electroencephalography is top-down regulated during a challenging dual-talker listening task. These dynamics emerge as spatially organized modulations in power-envelope correlations of alpha and low-beta neural oscillations during ~2-s intervals most critical for listening behavior, relative to resting-state baseline. First, left frontoparietal low-beta connectivity (16-24 Hz) increased during anticipation and processing of a spatial-attention cue before speech presentation. Second, posterior alpha connectivity (7-11 Hz) decreased during comprehension of competing speech, particularly around target-word presentation. Connectivity dynamics of these networks were predictive of individual differences in the speed and accuracy of target-word identification, respectively, but proved unconfounded by changes in neural oscillatory activity strength. Successful adaptation to a listening challenge thus latches onto two distinct yet complementary neural systems: a beta-tuned frontoparietal network enabling flexible adaptation to an attentive listening state and an alpha-tuned posterior network supporting attention to speech.

Significance Statement: Attending to relevant information during listening is key to human communication. How does this adaptive behavior rely upon neural communications? We here follow up on the long-standing conjecture that large-scale brain network dynamics constrain our successful adaptation to cognitive challenges. We provide evidence in support of two intrinsic, frequency-specific neural networks that underlie distinct behavioral aspects of successful listening: a beta-tuned frontoparietal network enabling flexible adaptation to an attentive listening state, and an alpha-tuned posterior cortical network supporting attention to speech. These findings shed light on how large-scale neural communication dynamics underlie attentive listening and open new opportunities for brain network-based interventions in hearing loss and its neurocognitive consequences.

