Subdividing the Beat: Auditory and Motor Contributions to Synchronization

2009
Vol 26 (5)
pp. 415-425
Author(s):
Janeen D. Loehr
Caroline Palmer

The current study examined how auditory and kinematic information influenced pianists' ability to synchronize musical sequences with a metronome. Pianists performed melodies in which quarter-note beats were subdivided by intervening eighth notes that resulted from auditory information (heard tones), motor production (produced tones), both, or neither. Temporal accuracy of performance was compared with finger trajectories recorded with motion capture. Asynchronies were larger when motor or auditory sensory information occurred between beats; auditory information yielded the largest asynchronies. Pianists were sensitive to the timing of the sensory information; information that occurred earlier relative to the midpoint between metronome beats was associated with larger asynchronies on the following beat. Finger motion was influenced only by motor production between beats and indicated the influence of other fingers' motion. These findings demonstrate that synchronization accuracy in music performance is influenced by both the timing and modality of sensory information that occurs between beats.
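
To make the asynchrony measure concrete, here is a minimal Python sketch (not the authors' analysis code) of how beat asynchronies and subdivision offsets relative to the inter-beat midpoint might be computed from onset times; all onset values below are hypothetical.

```python
import numpy as np

def beat_asynchronies(beat_onsets, metronome_onsets):
    """Signed asynchrony: produced beat onset minus metronome click (s)."""
    return np.asarray(beat_onsets) - np.asarray(metronome_onsets)

def subdivision_offsets(eighth_onsets, metronome_onsets):
    """Offset of each intervening eighth note from the midpoint between
    its surrounding metronome beats (negative = earlier than midpoint)."""
    m = np.asarray(metronome_onsets)
    midpoints = (m[:-1] + m[1:]) / 2.0
    return np.asarray(eighth_onsets) - midpoints

# Hypothetical onsets: 500-ms beats (120 BPM) with small timing noise.
rng = np.random.default_rng(0)
metronome = np.arange(0.0, 4.0, 0.5)                        # 8 clicks
beats = metronome + rng.normal(0.0, 0.01, metronome.size)   # produced beats
eighths = (metronome[:-1] + metronome[1:]) / 2.0 + rng.normal(0.0, 0.01, metronome.size - 1)

asyn = beat_asynchronies(beats, metronome)
offsets = subdivision_offsets(eighths, metronome)
# Relate each subdivision's offset to the asynchrony on the following beat:
r = np.corrcoef(offsets, asyn[1:])[0, 1]
print(f"mean asynchrony: {asyn.mean() * 1000:.1f} ms, offset-asynchrony r = {r:.2f}")
```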

2009
Vol 26 (5)
pp. 439-449
Author(s):
Caroline Palmer
Erik Koopmans
Janeen D. Loehr
Christine Carter

Sensory information available when musicians' fingers arrive on instrument keys contributes to temporal accuracy in piano performance (Goebl & Palmer, 2008). The hypothesis that timing accuracy is related to sensory (tactile) information available at finger-key contact was extended to clarinetists' finger movements during key depressions and releases that, together with breathing, determine the timing of tone onsets. Skilled clarinetists performed melodies at different tempi in a synchronization task while their movements were recorded with motion capture. Finger accelerations indicated consistent kinematic landmarks when fingers made initial contact with or released the key surface. Performances that contained more kinematic landmarks had reduced timing error, and the magnitude of finger accelerations at key contact and release was positively correlated with temporal accuracy during the subsequent keystroke. These findings suggest that sensory information available at finger-key contact enhances the temporal accuracy of music performance.
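
As an illustration of landmark detection, the following sketch derives acceleration from a motion-capture position trace and flags acceleration peaks as candidate finger-key contact/release landmarks. The sampling rate, threshold, and simulated trace are assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def acceleration(position, fs):
    """Second derivative of a 1-D position trace (mm) sampled at fs Hz."""
    return np.gradient(np.gradient(position, 1.0 / fs), 1.0 / fs)

def kinematic_landmarks(position, fs, min_height):
    """Indices of |acceleration| peaks, treated here as candidate
    finger-key contact/release landmarks."""
    acc = acceleration(position, fs)
    peaks, _ = find_peaks(np.abs(acc), height=min_height)
    return peaks, acc[peaks]

# Hypothetical 250-Hz trace: a brief dip of the fingertip onto the key.
fs = 250
t = np.arange(0.0, 1.0, 1.0 / fs)
pos = -5.0 * np.exp(-((t - 0.5) ** 2) / 0.002)  # 5-mm key depression near 0.5 s
peaks, magnitudes = kinematic_landmarks(pos, fs, min_height=500.0)
print("landmark times (s):", t[peaks])
print("peak |acceleration| (mm/s^2):", np.abs(magnitudes))
```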


2014
Vol 15 (4)
Author(s):
Jonathan Sinclair
Hayley Vincent
Paul John Taylor
Jack Hebron
Howard Thomas Hurst
...  

Purpose. Cycling has been shown to be associated with a high incidence of chronic pathologies, and foot orthoses are frequently used by cyclists in order to reduce the incidence of chronic injuries. The aim of the current investigation was to examine the influence of different varus orthotic inclines on the three-dimensional kinematics of the lower extremities during the pedal cycle. Methods. Kinematic information was obtained from ten male cyclists using an eight-camera optoelectronic 3-D motion capture system operating at 250 Hz. Participants cycled with and without orthotic intervention at three different cadences (70, 90, and 110 RPM). The orthotic device was adjustable, and four wedge conditions (0 mm [no orthotic], 1.5 mm, 3.0 mm, and 4.5 mm) were examined. Two-way repeated-measures ANOVAs were used to compare the kinematic parameters as a function of orthotic inclination and cadence. Participants were also asked to rate their comfort when cycling with each of the four orthotic conditions on a 10-point Likert scale. Results. The kinematic analysis indicated that the orthotic device had no significant influence at any of the three cadences. Analysis of subjective preferences showed a clear preference for the 0 mm (no orthotic) condition. Conclusions. This study suggests that foot orthoses do not provide any protection from the skeletal malalignment issues associated with the aetiology of chronic cycling injuries.
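
A minimal sketch of the two-way repeated-measures ANOVA described above, using statsmodels on simulated long-format data; the column names and the peak-eversion outcome are hypothetical stand-ins for the study's kinematic parameters.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
wedges = ["0.0 mm", "1.5 mm", "3.0 mm", "4.5 mm"]
cadences = [70, 90, 110]

# One observation per subject per wedge-by-cadence cell, as AnovaRM requires.
rows = [{"subject": s, "wedge": w, "cadence": c,
         "peak_eversion": rng.normal(8.0, 1.5)}   # simulated outcome (deg)
        for s in range(10) for w in wedges for c in cadences]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="peak_eversion", subject="subject",
              within=["wedge", "cadence"]).fit()
print(res)  # F and p values for wedge, cadence, and their interaction
```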


2017
Author(s):
Frank Papenmeier
Annika Maurer
Markus Huff

Background. Human observers segment dynamic information into discrete events. That is, although sensory information is continuous, comprehenders perceive boundaries between meaningful units of information. In narrative comprehension, comprehenders use linguistic, non-linguistic, and physical cues for this event boundary perception. Yet it remains an open question, both theoretically and empirically, how linguistic and non-linguistic cues contribute to this process. The current study explores how linguistic cues contribute to participants' ability to segment continuous auditory information into discrete, hierarchically structured events. Methods. Native speakers of German and non-native speakers, who neither spoke nor understood German, segmented a German audio drama into coarse and fine events. Whereas native participants could make use of linguistic, non-linguistic, and physical cues for segmentation, non-native participants could only use non-linguistic and physical cues. We analyzed segmentation behavior in terms of the ability to identify coarse and fine event boundaries and the resulting hierarchical structure. Results. Non-native listeners identified essentially the same coarse event boundaries as native listeners but missed some of the fine event boundaries identified by the native listeners. Interestingly, hierarchical event perception (as measured with hierarchical alignment and enclosure) was comparable for native and non-native participants. Discussion. In summary, linguistic cues contributed particularly to the identification of certain fine event boundaries. The results are discussed with regard to current theories of event cognition.
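
For readers unfamiliar with alignment measures, the following is a simplified sketch of one way hierarchical alignment can be quantified: the proportion of coarse boundaries that fall within a tolerance window of a fine boundary. It omits the chance correction used in the literature, and all boundary times are invented.

```python
import numpy as np

def alignment(coarse, fine, tol=1.0):
    """Fraction of coarse boundaries lying within tol seconds of any
    fine boundary (higher = coarse and fine grains better aligned)."""
    coarse, fine = np.asarray(coarse), np.asarray(fine)
    nearest = np.min(np.abs(coarse[:, None] - fine[None, :]), axis=1)
    return float(np.mean(nearest <= tol))

# Invented boundary times (seconds into the audio drama):
coarse_boundaries = [32.0, 75.5, 140.2]
fine_boundaries = [10.1, 31.8, 50.0, 76.0, 99.3, 139.5, 160.7]
print(f"alignment: {alignment(coarse_boundaries, fine_boundaries):.2f}")  # 1.00
```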


2007
Vol 98 (4)
pp. 2399-2413
Author(s):
Vivian M. Ciaramitaro
Giedrius T. Buračas
Geoffrey M. Boynton

Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus than when attending to an auditory stimulus. The opposite was true in the higher-order visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending to a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), as well as the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or the same region of space.
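
The abstract does not give the parameterization itself; the sketch below shows one plausible additive form, in which a baseline response is modulated by within-modality, spatial, and cross-modal spatial gain terms. The gain values are illustrative assumptions, not fitted parameters.

```python
def predicted_response(base, attended_modality, attended_side,
                       stim_modality, stim_side,
                       g_modal=0.3, g_spatial=0.2, g_xmodal_spatial=0.1):
    """Predicted response to a (possibly ignored) stimulus as a baseline
    plus attention gain terms."""
    r = base
    if attended_modality == stim_modality:
        r += g_modal                     # within-modality attention gain
        if attended_side == stim_side:
            r += g_spatial               # spatial attention gain
    elif attended_side == stim_side:
        r += g_xmodal_spatial            # cross-modal spatial attention gain
    return r

# Ignored left visual stimulus while attending an auditory stream on the
# same vs. the opposite side of space:
print(predicted_response(1.0, "auditory", "left", "visual", "left"))   # 1.1
print(predicted_response(1.0, "auditory", "right", "visual", "left"))  # 1.0
```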


2016
Vol 842
pp. 293-302
Author(s):
Phan Gia Hoang
H. Asif
Domenico Campolo

In recent years, robots have found extensive application in automating repetitive, well-defined, position-dependent tasks such as painting and material handling. However, continuous-contact tasks (such as finishing, deburring, and grinding) that require both position and force control are still carried out manually by skilled labor, mainly because it is difficult to program experienced users' skills into a robotic setup without clear knowledge of the underlying model used by the operators. In this paper we present a setup for capturing a human operator's dynamics using an instrumented hand-held tool and a motion capture system. We first present the design of the instrumented tool and then a method for reliably capturing its kinematics using redundant markers, removing the effects of marker occlusions and of gravity acting on the tool's mass. The kinematic information is used to derive the forces/torques at the tool's end effector.
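
A hedged sketch of the rigid-body step this implies: given the tool's mass properties and marker-derived kinematics, the Newton-Euler equations yield the gravity-compensated force and torque. The mass, inertia, and motion values below are placeholders, not the paper's measurements.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2

def tool_wrench(mass, inertia, lin_acc, ang_vel, ang_acc):
    """Net force (N) and torque (N*m) acting on a rigid tool, with the
    gravitational contribution of the tool's mass removed."""
    force = mass * (lin_acc - GRAVITY)                          # F = m(a - g)
    torque = inertia @ ang_acc + np.cross(ang_vel, inertia @ ang_vel)
    return force, torque

# Placeholder tool properties and marker-derived kinematics:
mass = 0.8                                # kg
inertia = np.diag([2e-3, 2e-3, 1e-3])     # kg*m^2, body-frame inertia tensor
lin_acc = np.array([0.10, 0.00, 0.05])    # m/s^2, from differentiated positions
ang_vel = np.array([0.00, 0.50, 0.00])    # rad/s
ang_acc = np.array([0.00, 0.10, 0.00])    # rad/s^2

force, torque = tool_wrench(mass, inertia, lin_acc, ang_vel, ang_acc)
print("force (N):", force, "torque (N*m):", torque)
```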


2009
Vol 26 (5)
pp. 475-488
Author(s):
Steven R. Livingstone
William Forde Thompson
Frank A. Russo

Facial expressions are used in music performance to communicate structural and emotional intentions. Exposure to emotional facial expressions also may lead to subtle facial movements that mirror those expressions. Seven participants were recorded with motion capture as they watched and imitated phrases of emotional singing. Four different participants were recorded using facial electromyography (EMG) while performing the same task. Participants saw and heard recordings of musical phrases sung with happy, sad, and neutral emotional connotations. They then imitated the target stimulus, paying close attention to the emotion expressed. Facial expressions were monitored during four epochs: (a) during the target; (b) prior to their imitation; (c) during their imitation; and (d) after their imitation. Expressive activity was observed in all epochs, implicating a role of facial expressions in the perception, planning, production, and post-production of emotional singing.
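
As a small illustration of the epoch-based analysis, this sketch averages rectified activity within named time windows of a continuous trace; the sampling rate, epoch boundaries, and simulated EMG are assumptions, not the study's recordings.

```python
import numpy as np

def epoch_means(signal, fs, epochs):
    """Mean rectified activity within each named (start, end) window in seconds."""
    return {name: float(np.mean(np.abs(signal[int(t0 * fs):int(t1 * fs)])))
            for name, (t0, t1) in epochs.items()}

fs = 1000                                                  # Hz
emg = np.random.default_rng(1).normal(0.0, 1.0, 12 * fs)   # 12 s of simulated EMG
epochs = {"target": (0, 3), "pre-imitation": (3, 5),
          "imitation": (5, 9), "post-imitation": (9, 12)}
print(epoch_means(emg, fs, epochs))
```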


Author(s):  
Anita Senthinathan
Scott Adams
Allyson D. Page
Mandar Jog

Purpose. Hypophonia (low speech intensity) is the most common speech symptom experienced by individuals with Parkinson's disease (IWPD). Previous research suggests that, in IWPD, there may be abnormal integration of sensory information for the motor production of speech intensity. In the current study, the intensity of auditory feedback was systematically manipulated (altered in both positive and negative directions) during sensorimotor conditions that are known to modulate speech intensity in everyday contexts, in order to better understand the role of auditory feedback in speech intensity regulation. Method. Twenty-six IWPD and 24 neurologically healthy controls were asked to complete the following tasks: converse with the experimenter, sustain vowel production, and read sentences at a comfortable loudness while hearing their own speech intensity randomly altered. Altered intensity feedback conditions included 5-, 10-, and 15-dB reductions and increases in the feedback intensity. Speech tasks were completed in no noise and in background noise. Results. IWPD displayed a reduced response to the altered intensity feedback compared to control participants. This reduced response was most apparent when participants were speaking in background noise. Specific task-based differences were observed, such that the reduced response by IWPD was most pronounced during the conversation task. Conclusions. The current study suggests that IWPD have abnormal processing of auditory information for speech intensity regulation, and this disruption particularly impacts their ability to regulate speech intensity in speech tasks with clear communicative goals (i.e., conversational speech) and when speaking in background noise.
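
One way the "response to altered feedback" could be quantified is as the slope of produced-intensity change against feedback shift, sign-flipped so that compensation is positive. The sketch below uses invented numbers and is not the authors' analysis.

```python
import numpy as np

def compensation_gain(feedback_shifts_db, intensity_changes_db):
    """Slope of produced-intensity change against feedback shift, negated so
    that speaking up when feedback is attenuated yields a positive gain."""
    slope = np.polyfit(feedback_shifts_db, intensity_changes_db, 1)[0]
    return -slope

shifts = np.array([-15, -10, -5, 5, 10, 15])             # dB feedback alterations
changes = np.array([3.1, 2.2, 1.0, -0.8, -1.9, -2.7])    # invented dB responses
print(f"compensation gain: {compensation_gain(shifts, changes):.2f} dB/dB")
```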

