Linguistic Information in Auditory Dynamic Events Contributes to the Detection of Fine, Not Coarse Event Boundaries

2017
Author(s): Frank Papenmeier, Annika Maurer, Markus Huff

Background. Human observers segment dynamic information into discrete events. That is, although sensory information is continuous, comprehenders perceive boundaries between two meaningful units of information. In narrative comprehension, comprehenders use linguistic, non-linguistic, and physical cues for this event boundary perception. Yet it remains an open question, both theoretically and empirically, how linguistic and non-linguistic cues contribute to this process. The current study explores how linguistic cues contribute to participants' ability to segment continuous auditory information into discrete, hierarchically structured events. Methods. Native speakers of German and non-native speakers, who neither spoke nor understood German, segmented a German audio drama into coarse and fine events. Whereas native participants could make use of linguistic, non-linguistic, and physical cues for segmentation, non-native participants could use only non-linguistic and physical cues. We analyzed segmentation behavior in terms of the ability to identify coarse and fine event boundaries and the resulting hierarchical structure. Results. Non-native listeners identified essentially the same coarse event boundaries as native listeners but missed some of the fine event boundaries identified by the native listeners. Interestingly, hierarchical event perception (as measured with hierarchical alignment and enclosure) was comparable for native and non-native participants. Discussion. In summary, linguistic cues contributed particularly to the identification of certain fine event boundaries. The results are discussed with regard to current theories of event cognition.
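For readers unfamiliar with how hierarchical segmentation is quantified, the sketch below computes one simple alignment index: the proportion of coarse boundaries that coincide (within a tolerance) with some fine boundary. The function name, the ±1 s tolerance, and the timestamps are illustrative assumptions; they are not the specific hierarchical alignment and enclosure measures reported in the paper.

```python
import numpy as np

def boundary_alignment(coarse_times, fine_times, tolerance=1.0):
    """Share of coarse boundaries lying within `tolerance` seconds of a fine
    boundary (a simple alignment index; the paper's exact hierarchical-
    alignment and enclosure measures may be defined differently)."""
    coarse = np.asarray(coarse_times, dtype=float)
    fine = np.asarray(fine_times, dtype=float)
    hits = [np.min(np.abs(fine - c)) <= tolerance for c in coarse]
    return float(np.mean(hits))

# Hypothetical boundary timestamps (in seconds) from one listener:
coarse_boundaries = [30.2, 61.5, 118.9]
fine_boundaries = [14.8, 30.0, 45.1, 62.0, 90.3, 119.5]
print(boundary_alignment(coarse_boundaries, fine_boundaries))  # -> 1.0 here
```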

2009
Vol 26 (5), pp. 415-425
Author(s): Janeen D. Loehr, Caroline Palmer

The current study examined how auditory and kinematic information influenced pianists' ability to synchronize musical sequences with a metronome. Pianists performed melodies in which quarter-note beats were subdivided by intervening eighth notes that resulted from auditory information (heard tones), motor production (produced tones), both, or neither. Temporal accuracy of performance was compared with finger trajectories recorded with motion capture. Asynchronies were larger when motor or auditory sensory information occurred between beats; auditory information yielded the largest asynchronies. Pianists were sensitive to the timing of the sensory information; information that occurred earlier relative to the midpoint between metronome beats was associated with larger asynchronies on the following beat. Finger motion was influenced only by motor production between beats and indicated the influence of other fingers' motion. These findings demonstrate that synchronization accuracy in music performance is influenced by both the timing and modality of sensory information that occurs between beats.


2021
Author(s): Hongmi Lee, Janice Chen

Human life consists of a multitude of diverse and interconnected events. However, extant research has focused on how humans segment and remember discrete events from continuous input, with far less attention given to how the structure of connections between events impacts memory. We conducted an fMRI study in which subjects watched and recalled a series of realistic audiovisual narratives. By transforming narratives into networks of events, we found that more central events, those with stronger semantic or causal connections to other events, were better remembered. During encoding, central events evoked larger hippocampal event boundary responses associated with memory consolidation. During recall, high centrality predicted stronger activation in cortical areas involved in episodic recollection and more similar neural representations across individuals. Together, these results suggest that when humans encode and retrieve complex real-world experiences, the reliability and accessibility of memory representations are shaped by their location within a network of events.
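To make the idea of "event centrality" concrete, here is a minimal sketch that treats events as nodes in a weighted graph and scores each node by its summed connection strength. The similarity values, the networkx-based implementation, and weighted degree as the centrality measure are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
import networkx as nx

# Hypothetical pairwise connection strengths (e.g., semantic similarity of
# event descriptions) for five narrative events; not data from the study.
strength = np.array([
    [0.0, 0.6, 0.2, 0.5, 0.1],
    [0.6, 0.0, 0.4, 0.3, 0.2],
    [0.2, 0.4, 0.0, 0.3, 0.6],
    [0.5, 0.3, 0.3, 0.0, 0.4],
    [0.1, 0.2, 0.6, 0.4, 0.0],
])

# Build a weighted event graph and use weighted degree ("strength") as a
# simple centrality proxy; the paper's own centrality measure may differ.
G = nx.from_numpy_array(strength)
centrality = dict(G.degree(weight="weight"))
print(centrality)  # larger values = events more strongly connected to the rest
```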


2007
Vol 98 (4), pp. 2399-2413
Author(s): Vivian M. Ciaramitaro, Giedrius T. Buračas, Geoffrey M. Boynton

Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus, compared with attending to an auditory stimulus. The opposite was true in more central visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), and the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or the same region of space.


2008
Vol 275 (1643), pp. 1645-1651
Author(s): George Mather

Fast-moving sports such as tennis require both players and match officials to make rapid, accurate perceptual decisions about dynamic events in the visual world. Disagreements arise regularly, leading to disputes about decisions such as line calls. A number of factors must contribute to these disputes, including lapses in concentration, bias and gamesmanship. Fundamental uncertainty or variability in the sensory information supporting decisions must also play a role. Modern technological innovations now provide detailed and accurate physical information that can be compared against the decisions of players and officials. The present paper uses this psychophysical data to assess the significance of perceptual limitations as a contributor to real-world decisions in professional tennis. A detailed analysis is presented of a large body of data on line-call challenges in professional tennis tournaments over the last 2 years. Results reveal that the vast majority of challenges can be explained in a direct, highly predictable manner by a simple model of uncertainty in perceptual information processing. Both players and line judges are remarkably accurate at judging ball bounce position, with a positional uncertainty of less than 40 mm. Line judges are more reliable than players. Judgements are more difficult for balls bouncing near base and service lines than those bouncing near side and centre lines. There is no evidence for significant errors in localization due to image motion.
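As a rough illustration of how a simple perceptual-uncertainty model yields testable predictions, the sketch below computes the probability of a wrong call from the ball's distance to the line, assuming zero-mean Gaussian positional noise. The 40 mm figure comes from the abstract; the cumulative-Gaussian form and the function name are standard modelling assumptions, not necessarily the paper's exact model.

```python
from math import erf, sqrt

def p_wrong_call(distance_mm, sigma_mm=40.0):
    """Probability that the perceived bounce position falls on the wrong side
    of the line, given the true distance from the line and Gaussian positional
    noise with standard deviation sigma_mm (illustrative assumption)."""
    # P(error) = 1 - Phi(d / sigma), with Phi the standard normal CDF.
    z = distance_mm / sigma_mm
    phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return 1.0 - phi

print(p_wrong_call(20))   # ball 20 mm from the line: error rate ~0.31
print(p_wrong_call(100))  # ball 100 mm from the line: error rate ~0.006
```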


2020
Vol 24 (1), pp. 317-339
Author(s): Candido, Ricardo, Bruna

The purpose of the present study was to contribute to current documented evidence of the challenges imposed by inflectional morphology in second language acquisition. We conducted two speeded acceptability judgment tasks with Brazilian Portuguese-English bilinguals with different linguistic profiles. We analyzed their behavior with respect to grammatical and ungrammatical sentences in English involving inflectional morphology. Our results suggested that the bilingual speakers differed from English native speakers only with respect to the sentences with missing inflectional morphemes regardless of proficiency level and immersion status. We understand these findings as an indication that difficulty with functional morphology involves perceptual salience and possibly learned attention to linguistic cues.


2021
Author(s): Lynn J Lohnas, Karl Healey, Lila Davachi

Although life unfolds continuously, experiences are generally perceived and remembered as discrete events. Accumulating evidence suggests that event boundaries disrupt temporal representations and weaken memory associations. However, less is known about the consequences of event boundaries on temporal representations during retrieval, especially when temporal information is not tested explicitly. Using a neural measure of temporal context extracted from scalp electroencephalography, we found reduced temporal context similarity between studied items separated by an event boundary when compared to items from the same event. Further, while participants free recalled list items, neural activity reflected reinstatement of temporal context representations from study, including temporal disruption. A computational model of episodic memory, the Context Maintenance and Retrieval model (CMR; Polyn, Norman & Kahana, 2009), predicted these results and made novel predictions regarding the influence of temporal disruption on recall order. These findings highlight the impact of event structure on memory organization via temporal representations.
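To make "temporal context similarity" concrete, the toy sketch below implements a CMR-style drifting context vector and shows why items separated by an event boundary (modeled here as an extra, stronger drift step) end up with less similar contexts than items within the same event. The dimensionality, beta values, and boundary implementation are illustrative assumptions, not the fitted model from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

def drift(context, item_input, beta=0.4):
    """CMR-style update: mix the old context with the current item's input
    and renormalize to unit length (simplified)."""
    return unit(np.sqrt(1 - beta**2) * context + beta * item_input)

def cosine(a, b):
    return float(a @ b)  # both vectors are unit length

# Toy list of six items with random unit-length input vectors; an "event
# boundary" between items 3 and 4 is modeled as one extra, stronger drift.
dim, n_items = 50, 6
inputs = [unit(v) for v in rng.standard_normal((n_items, dim))]
context, contexts = inputs[0], []
for i, v in enumerate(inputs):
    if i == 3:  # boundary: context shifts more strongly
        context = drift(context, unit(rng.standard_normal(dim)), beta=0.9)
    context = drift(context, v)
    contexts.append(context)

print(cosine(contexts[1], contexts[2]))  # within-event pair: higher similarity
print(cosine(contexts[2], contexts[3]))  # across-boundary pair: lower similarity
```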


Author(s): Anita Senthinathan, Scott Adams, Allyson D. Page, Mandar Jog

Purpose. Hypophonia (low speech intensity) is the most common speech symptom experienced by individuals with Parkinson's disease (IWPD). Previous research suggests that, in IWPD, there may be abnormal integration of sensory information for motor production of speech intensity. In the current study, intensity of auditory feedback was systematically manipulated (altered in both positive and negative directions) during sensorimotor conditions that are known to modulate speech intensity in everyday contexts in order to better understand the role of auditory feedback for speech intensity regulation. Method. Twenty-six IWPD and 24 neurologically healthy controls were asked to complete the following tasks: converse with the experimenter, start vowel production, and read sentences at a comfortable loudness, while hearing their own speech intensity randomly altered. Altered intensity feedback conditions included 5-, 10-, and 15-dB reductions and increases in the feedback intensity. Speech tasks were completed in no noise and in background noise. Results. IWPD displayed a reduced response to the altered intensity feedback compared to control participants. This reduced response was most apparent when participants were speaking in background noise. Specific task-based differences in responses were observed such that the reduced response by IWPD was most pronounced during the conversation task. Conclusions. The current study suggests that IWPD have abnormal processing of auditory information for speech intensity regulation, and this disruption particularly impacts their ability to regulate speech intensity in the context of speech tasks with clear communicative goals (i.e., conversational speech) and speaking in background noise.


2021
Vol 0 (0)
Author(s): Mónica Domínguez, Mireia Farrús, Leo Wanner

The correspondence between the communicative intention of a speaker in terms of Information Structure and the way this speaker reflects communicative aspects by means of prosody has been a fruitful field of study in Linguistics. However, text-to-speech applications still lack the variability and richness found in human speech in terms of how humans display their communication skills. Some attempts were made in the past to model one aspect of Information Structure, namely thematicity, for its application to intonation generation in text-to-speech technologies. Yet, these applications suffer from two limitations: (i) they draw upon a small number of made-up simple question-answer pairs rather than on real (spoken or written) corpus material; and (ii) they do not explore whether any other interpretation would better suit a wider range of textual genres beyond dialogs. In this paper, two different interpretations of thematicity in the field of speech technologies are examined: the state-of-the-art binary (and flat) theme-rheme, and the hierarchical thematicity defined by Igor Mel’čuk within the Meaning-Text Theory. The outcome of the experiments on a corpus of native speakers of US English suggests that the latter interpretation of thematicity has a versatile implementation potential for text-to-speech applications of the Information Structure–prosody interface.


2020
Author(s): Sakshi Bhatia, Samar Husain

The effective use of preverbal linguistic cues to make successful clause-final verbal predictions, as well as robust prediction maintenance, has been argued to be a cross-linguistic generalization for SOV languages such as German and Japanese. In this paper, we show that native speakers of Hindi (an SOV language) falter in maintaining clause-final verbal predictions in the presence of a center-embedded relative clause with a non-canonical word order. The fallibility of the parser is illustrated by the formation of a grammatically illicit locally coherent parse as well as by poor comprehension accuracy. Our investigations suggest that while plausibility is essential, the presence of overt agreement features might not be necessary for forming a locally coherent parse in Hindi. The work highlights how top-down processing and bottom-up information interact during sentence comprehension in SOV languages: comprehension suffers with increased complexity of the preverbal linguistic environment.


2017
Author(s): Markus Huff, Annika Maurer, Irina Rebecca Brich, Anne Pagenkopf, Florian Wickelmaier, ...

Humans segment the continuous stream of sensory information into distinct events at points of change. Between two events, humans perceive an event boundary. Current theories propose that changes in the sensory information trigger updating processes of the current event model. Increased encoding effort ultimately leads to a memory benefit at event boundaries. Evidence from reading time studies (increased reading times with an increasing amount of change) suggests that updating of event models is incremental. We present results from five experiments that studied event processing (including memory formation processes and reading times) using an audio drama as well as a transcript thereof as stimulus material. Experiments 1a and 1b replicated the event boundary advantage effect for memory. In contrast to recent evidence from studies using visual stimulus material, Experiments 2a and 2b found no support for incremental updating of recognition memory with normally sighted and blind participants. In Experiment 3, we replicated Experiment 2a using a written transcript of the audio drama as stimulus material, allowing us to disentangle encoding and retrieval processes. Our results indicate incremental updating processes at encoding (as measured with reading times). At the same time, we again found recognition performance to be unaffected by the amount of change. We discuss these findings in the light of current event cognition theories.
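The contrast between incremental and global updating can be illustrated with a tiny regression sketch: under incremental updating, reading times should rise with the number of changed situational dimensions, whereas all-or-none updating predicts little additional cost beyond the first change. The numbers below are made-up illustrative values, not data from these experiments.

```python
import numpy as np

# Hypothetical per-sentence data: number of changed situational dimensions
# (e.g., character, location, time) and self-paced reading time in ms.
n_changes  = np.array([0, 0, 1, 1, 2, 2, 3, 3])
reading_ms = np.array([310, 295, 340, 355, 380, 405, 430, 445])

# Incremental updating predicts a positive slope across the whole range:
# each additional changed dimension adds extra reading time.
slope, intercept = np.polyfit(n_changes, reading_ms, 1)
print(f"{slope:.1f} ms per changed dimension (intercept {intercept:.0f} ms)")
```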

