Representing groove: Rhythmic structure in interactive music performance

1997 ◽  
Vol 102 (5) ◽  
pp. 3182-3182
Author(s):  
Vijay Iyer ◽  
Jeff Bilmes ◽  
Matt Wright ◽  
David Wessel

2009 ◽  
Vol 33 (4) ◽  
pp. 69-82 ◽  
Author(s):  
Dan Overholt ◽  
John Thompson ◽  
Lance Putnam ◽  
Bo Bell ◽  
Jim Kleban ◽  
...  

2005 ◽  
Vol 23 (1) ◽  
pp. 79-85 ◽  
Author(s):  
Henkjan Honing

The relation between music and motion has been a topic of much theoretical and empirical research. An important contribution is made by a family of computational theories, the so-called kinematic models, which propose an explicit relation between the laws of physical motion in the real world and expressive timing in music performance. However, kinematic models predict that expressive timing is independent of (a) the number of events, (b) the rhythmic structure, and (c) the overall tempo of the performance: these factors have no effect on the predicted shape of a ritardando. Computer simulations of a number of rhythm perception models show, however, a large effect of these structural and temporal factors. These perception models are therefore proposed as a perception-based alternative to the kinematic approach.
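To make the independence claim concrete: the best-known kinematic model, Friberg and Sundberg's final-ritardando curve, predicts normalized tempo purely as a function of normalized score position. A minimal Python sketch (illustrative, not code from the paper; parameter names are ours):

```python
import numpy as np

def kinematic_ritardando(x, w=0.5, q=2.0):
    """Friberg & Sundberg's kinematic model of the final ritardando.

    x : normalized score position in [0, 1]
    w : final tempo as a fraction of the pre-ritardando tempo
    q : curvature parameter (q = 2 corresponds to constant deceleration,
        giving a square-root-shaped tempo curve)
    """
    return (1.0 + (w ** q - 1.0) * x) ** (1.0 / q)

# The curve depends only on normalized position x, so the model predicts
# the same ritardando shape no matter how many note events it spans, what
# their rhythmic structure is, or what the overall tempo is; this is
# exactly the independence the abstract criticizes.
positions = np.linspace(0.0, 1.0, 11)
print(np.round(kinematic_ritardando(positions), 3))
```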


2009 ◽  
Vol 14 (2) ◽  
pp. 197-207 ◽  
Author(s):  
Georg Essl ◽  
Michael Rohs

Mobile phones offer an attractive platform for interactive music performance. We provide a theoretical analysis of their sensor capabilities via a design space and show concrete examples of how different sensors can facilitate interactive performance on these devices. These sensors include cameras, microphones, accelerometers, magnetometers, and multitouch screens. The interactivity afforded by these sensors in turn informs aspects of live performance as well as composition, through persistence, scoring, and mapping to musical notes or abstract sounds.
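As an illustration of the kind of sensor-to-sound mapping the article discusses, here is a minimal sketch; the accelerometer-to-pitch mapping, the value ranges, and the function names are our own assumptions, not the authors' design:

```python
import math

A4_HZ = 440.0  # reference pitch

def tilt_to_note(ax, ay, az):
    """Map a hypothetical 3-axis accelerometer reading (in g) to
    (frequency, amplitude).

    Pitch follows the front-back tilt angle, quantized to semitones;
    loudness follows the overall motion energy. Both mappings are
    arbitrary design choices for illustration only.
    """
    pitch_angle = math.atan2(ax, az)               # tilt about one axis, radians
    semitones = round(12 * pitch_angle / math.pi)  # quantize to the chromatic scale
    freq = A4_HZ * 2 ** (semitones / 12)
    energy = min(1.0, math.sqrt(ax**2 + ay**2 + az**2))
    return freq, energy

# Example: a slight forward tilt raises the pitch by one semitone.
print(tilt_to_note(0.3, 0.0, 0.95))
```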


2020 ◽  
pp. 86-88
Author(s):  
Rafael Ramirez ◽  
Sergio Giraldo ◽  
Zacharias Vamvakousis

Active music listening is a way of engaging with music through active interaction rather than passive reception. In this paper we present an expressive brain-computer interactive music system for active music listening, which allows listeners to manipulate expressive parameters in music performances using their emotional state, as detected by a brain-computer interface. The proposed system is divided into two parts: a real-time system able to detect listeners' emotional state from their EEG data, and a real-time expressive music performance system capable of adapting the expressive parameters of the music to the detected emotional state. We comment on an application of our system as a music neurofeedback system to alleviate depression in elderly people.
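For concreteness, EEG emotion detection of this kind is often framed on the arousal-valence plane, with arousal estimated from the beta/alpha band-power ratio and valence from frontal alpha asymmetry (electrodes F3/F4). The sketch below assumes that common framing and precomputed band powers; the exact method and mappings in the paper may differ:

```python
def emotion_from_band_power(alpha_f3, beta_f3, alpha_f4, beta_f4):
    """Estimate (arousal, valence) from frontal EEG band powers.

    A common heuristic, assumed here rather than taken from the paper:
    arousal ~ beta/alpha ratio (more beta activity = higher arousal);
    valence ~ frontal alpha asymmetry (alpha power is inversely related
    to cortical activity, so higher F4 alpha suggests relatively greater
    left-frontal activity, associated with positive valence).
    """
    arousal = (beta_f3 + beta_f4) / (alpha_f3 + alpha_f4)
    valence = alpha_f4 - alpha_f3
    return arousal, valence

def expressive_params(arousal, valence, base_tempo=100.0, base_loudness=0.5):
    """Map detected emotion to expressive performance parameters.

    Illustrative mapping only: arousal scales tempo, valence shifts loudness.
    """
    tempo = base_tempo * (0.8 + 0.4 * min(arousal, 2.0) / 2.0)
    loudness = max(0.0, min(1.0, base_loudness + 0.2 * valence))
    return tempo, loudness

print(expressive_params(*emotion_from_band_power(4.0, 3.0, 5.0, 3.5)))
```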


Leonardo ◽  
2014 ◽  
Vol 47 (3) ◽  
pp. 260-261
Author(s):  
Roger T. Dean

Serial music, which is mainly non-tonal, superimposes compositional freedom onto an unusually rigorous process of pitch-sequence transformations based on 'tone rows': a row is usually a sequence of notes using each of the 12 chromatic pitches once. Compositional freedom comprises forming chords from the sequences and, in multi-strand music, simultaneously presenting different segments of the pitch sequences. The present project coded a real-time serial-music composer for automatic or interactive music performance. This Serial Keyboardist Collaborator can perform keyboard music that is impossible for a human to realize. Surprisingly, it was also useful in making more tonal music based on the same rigorous pitch-sequence generation.
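To make the row operations concrete, here is a minimal sketch (not the Serial Keyboardist Collaborator's code) of the standard serial transformations on a tone row represented as pitch classes 0-11:

```python
import random

def make_row():
    """Generate a tone row: each of the 12 chromatic pitch classes once."""
    row = list(range(12))
    random.shuffle(row)
    return row

def transpose(row, n):
    """Shift every pitch class up by n semitones, modulo the octave."""
    return [(p + n) % 12 for p in row]

def inversion(row):
    """Mirror each interval around the row's first pitch class."""
    first = row[0]
    return [(2 * first - p) % 12 for p in row]

def retrograde(row):
    """Play the row backwards."""
    return row[::-1]

row = make_row()
print("P0: ", row)
print("I0: ", inversion(row))
print("R0: ", retrograde(row))
print("RI0:", retrograde(inversion(row)))
print("P5: ", transpose(row, 5))
```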

