Without it no music: beat induction as a fundamental musical trait

2012 · Vol. 1252(1) · pp. 85–91 · Author(s): Henkjan Honing

2002 · Vol. 66(1) · pp. 26–39 · Author(s): N. McAngus Todd, C. Lee, D. O'Boyle

1999 · Vol. 28(1) · pp. 5–28 · Author(s): Neil P. McAngus Todd, Donald J. O'Boyle, Christopher S. Lee

2000 · Vol. 18(1) · pp. 1–23 · Author(s): Carolyn Drake, Amandine Penel, Emmanuel Bigand

We investigate how the presence of performance microstructure (small variations in timing, intensity, and articulation) influences listeners' perception of musical excerpts, by measuring the way in which listeners synchronize with the excerpts. Musicians and nonmusicians tapped on a drum in synchrony with six musical excerpts, each presented in three versions: mechanical (synthesized from the score, without microstructure), accented (mechanical, with intensity accents), and expressive (performed by a concert pianist, with all types of microstructure). Participants' synchronizations with these excerpts were characterized in terms of three processes described in Mari Riess Jones's Dynamic Attending Theory: attunement (ease of synchronization), use of a referent level (spontaneous synchronization rate), and focal attending (range of synchronization levels). As predicted by beat induction models, synchronization was better with the temporally regular mechanical and accented versions than with the expressive versions. However, synchronization with expressive versions occurred at higher (slower) levels, within a narrower range of synchronization levels, and corresponded more frequently to the theoretically correct metrical hierarchy. We conclude that performance microstructure transmits a particular metrical interpretation to the listener and enables the perceptual organization of events over longer time spans. Compared with nonmusicians, musicians synchronized more accurately (heightened attunement), tapped more slowly (slower referent level), and used a wider range of hierarchical levels when instructed (enhanced focal attending), more often corresponding to the theoretically correct metrical hierarchy. We conclude that musicians perceptually organize events over longer time spans and have a more complete hierarchical representation of the music than do nonmusicians.
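To make these measures concrete, here is a minimal sketch of per-trial tapping statistics in Python. It is not from Drake, Penel, and Bigand; the function name, tap data, and beat grid are invented for illustration. Mean absolute asynchrony between taps and nominal beats is a rough proxy for attunement, and the mean inter-tap interval identifies the metrical level, hence the rate, at which the listener is synchronizing; focal attending would then correspond to the spread of such tapped levels across trials.

import numpy as np

def tapping_stats(tap_times, beat_times):
    # tap_times: times (s) of the participant's taps
    # beat_times: times (s) of the nominal beats at some metrical level
    taps = np.asarray(tap_times, dtype=float)
    beats = np.asarray(beat_times, dtype=float)
    # Match each tap to its nearest beat and measure the signed offset.
    nearest = np.argmin(np.abs(taps[:, None] - beats[None, :]), axis=1)
    asynchronies = taps - beats[nearest]
    # Mean |asynchrony| ~ attunement (lower = easier synchronization);
    # mean inter-tap interval ~ the level (rate) actually tapped.
    return np.mean(np.abs(asynchronies)), np.mean(np.diff(taps))

# Invented example: a 600 ms beat with roughly 20 ms of tap jitter.
rng = np.random.default_rng(0)
beats = np.arange(0.0, 12.0, 0.6)
taps = beats + rng.normal(0.0, 0.02, beats.size)
mean_async, mean_iti = tapping_stats(taps, beats)
print(f"mean |asynchrony| = {mean_async * 1000:.1f} ms, mean ITI = {mean_iti:.3f} s")

Nearest-beat matching assumes roughly one tap per beat; an analysis of real tapping data would also have to handle missed and extra taps, and taps produced at different metrical levels.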


2006 · Vol. 24(2) · pp. 177–188 · Author(s): Fabien Gouyon, Gerhard Widmer, Xavier Serra, Arthur Flexer

This article addresses the question of which acoustic features are most adequate for identifying beats computationally in acoustic music pieces. We consider many different features computed over consecutive short frames of the acoustic signal, including those currently promoted in the literature on beat induction from acoustic signals as well as several original features not previously reported in that literature. Feature sets are evaluated on their ability to provide reliable cues to the localization of beats, using a machine learning methodology and a large corpus of beat-annotated music pieces, in audio format, covering distinct music categories. Confirming common knowledge, energy is shown to be a highly relevant cue to beat induction, especially the temporal variation of energy in various frequency bands, with bands below 500 Hz and above 5 kHz being particularly relevant. Some of the new features proposed in this article are shown to outperform features currently promoted in the literature on beat induction from acoustic signals. We finally hypothesize that modeling beat induction may involve many different, complementary acoustic features, and that the process of selecting relevant features should depend partly on the acoustic properties of the particular signal under consideration.
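As an illustration of the kind of feature the abstract highlights (the temporal variation of energy within a frequency band), here is a minimal sketch in Python. It is not the authors' implementation: the function name, filter order, frame and hop sizes, and test signal are all invented, and only the 500 Hz band edge comes from the abstract.

import numpy as np
from scipy.signal import butter, sosfilt

def band_energy_flux(x, sr, band, frame=0.02, hop=0.01):
    # Band-limit the signal (band edges in Hz; a 0 or Nyquist edge
    # degrades gracefully to a low-/high-pass filter).
    low, high = band
    if low <= 0:
        sos = butter(4, high, btype="lowpass", fs=sr, output="sos")
    elif high >= sr / 2:
        sos = butter(4, low, btype="highpass", fs=sr, output="sos")
    else:
        sos = butter(4, [low, high], btype="bandpass", fs=sr, output="sos")
    y = sosfilt(sos, x)
    # Short-time energy on consecutive frames, then half-wave-rectified
    # frame-to-frame change: rises in band energy are candidate beat cues.
    n, h = int(frame * sr), int(hop * sr)
    energies = np.array([np.sum(y[i:i + n] ** 2)
                         for i in range(0, len(y) - n, h)])
    return np.maximum(np.diff(energies), 0.0)

# Invented test signal: clicks every 0.5 s (120 BPM) over weak noise.
sr = 22050
x = 0.05 * np.random.default_rng(1).standard_normal(4 * sr)
x[(np.arange(0.0, 4.0, 0.5) * sr).astype(int)] += 1.0
flux_low = band_energy_flux(x, sr, (0, 500))
print("strongest low-band flux frames:", np.sort(np.argsort(flux_low)[-8:]))

Half-wave rectification keeps only energy rises, since beats tend to coincide with note onsets; a full system in the spirit of the article would compute such flux curves for many bands and many other features, and let the learned evaluation stage select among them.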

