Effects of metrical dissonance and expertise on perceived emotion in Schumann’s Carnaval

2019 ◽  
pp. 102986491983670
Author(s):  
Jessica Sommer ◽  
Kimberly Simmons ◽  
Daphne Tan
Neuroreport ◽  
1997 ◽  
Vol 8 (3) ◽  
pp. 623-627 ◽  
Author(s):  
Hans Pihan ◽  
Eckart Altenmüller ◽  
Hermann Ackermann

2021 ◽  
pp. 276-302
Author(s):  
Mark Gotham

Metrical dissonance is a powerful tool for creating and manipulating musical tension. The relative extent of tension can be more or less acute depending (in part) on the type of dissonance used, and moving among those dissonance types can contribute to the shape of a musical work. This chapter sets out a model for quantifying relative dissonance that incorporates experimentally substantiated principles of cognitive science. A supplementary webpage provides an interactive guide for testing out these ideas, and a further online supplement provides mathematical formalizations for the principles discussed. We begin with a basic model of metre in which a metrical position’s weight is given simply by the number of pulse levels coinciding there. This alone enables a telling categorization of displacement dissonances for simple metres and a first sense of the relative differences between them. These arbitrary weighting ‘values’ are then refined on the basis of tempo and pulse salience. This provides a more subtle set of gradations that reflect the cognitive experience of metre somewhat better, while still retaining a clear sense of the simple principles that govern relative dissonance. Additionally, this chapter sees the model applied in a brief, illustrative analysis and in a preliminary extension to ‘mixed’ metres (5s, 7s, …). This sheds light on known problems such as the relative stability of mixed metres in different rotations, and suggests a new way of thinking about mixed metres’ relative susceptibility to metrical dissonance.
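As a rough illustration of the basic weight model described in the abstract (a position’s weight is simply the number of pulse levels coinciding there), the following Python sketch computes those weights for one bar. The 4/4 grid, the eighth-note resolution, and the function names are assumptions chosen for illustration; they are not taken from the chapter or its online supplement.

```python
# Minimal sketch of the basic weight model: a metrical position's weight
# is the number of pulse levels whose onsets coincide at that position.
# The bar, pulse levels and resolution below are illustrative only.

from fractions import Fraction

def metrical_weights(bar_length, pulse_levels, resolution):
    """Return the weight at each grid position of one bar.

    bar_length   -- length of the bar (e.g. Fraction(1) for one whole note)
    pulse_levels -- durations of the pulse levels, longest first
    resolution   -- duration of one grid step
    """
    positions = [resolution * i for i in range(int(bar_length / resolution))]
    weights = []
    for pos in positions:
        # Count how many pulse levels have an onset at this position.
        weights.append(sum(1 for level in pulse_levels if pos % level == 0))
    return weights

# 4/4 bar with whole-, half-, quarter- and eighth-note pulse levels.
levels = [Fraction(1), Fraction(1, 2), Fraction(1, 4), Fraction(1, 8)]
print(metrical_weights(Fraction(1), levels, Fraction(1, 8)))
# -> [4, 1, 2, 1, 3, 1, 2, 1]
```

Under this toy grid, displacement dissonances could then be compared by the weights of the positions onto which an accent layer is shifted, echoing the first categorization the abstract mentions; the tempo- and salience-based refinements described above are not modelled here.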


2020 ◽  
Vol 34 (02) ◽  
pp. 1342-1350 ◽  
Author(s):  
Uttaran Bhattacharya ◽  
Trisha Mittal ◽  
Rohan Chandra ◽  
Tanmay Randhavane ◽  
Aniket Bera ◽  
...  

We present a novel classifier network called STEP, which classifies perceived human emotion from gaits and is based on a Spatial Temporal Graph Convolutional Network (ST-GCN) architecture. Given an RGB video of an individual walking, our formulation implicitly exploits gait features to classify the perceived emotion of the human into one of four emotions: happy, sad, angry, or neutral. We train STEP on annotated real-world gait videos, augmented with annotated synthetic gaits generated using a novel generative network called STEP-Gen, built on an ST-GCN-based Conditional Variational Autoencoder (CVAE). We incorporate a novel push-pull regularization loss in the CVAE formulation of STEP-Gen to generate realistic gaits and improve the classification accuracy of STEP. We also release a novel dataset (E-Gait), which consists of 4,227 human gaits annotated with perceived emotions along with thousands of synthetic gaits. In practice, STEP learns affective features and achieves a classification accuracy of 88% on E-Gait, which is 14–30% more accurate than prior methods.
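To make the pipeline above concrete, here is a minimal, illustrative sketch (not the authors’ released STEP code) of an ST-GCN-style gait classifier over the four emotion classes. The joint count, channel sizes, skeleton adjacency, and classification head are assumptions, and the STEP-Gen CVAE with its push-pull regularization loss is omitted.

```python
# Illustrative sketch of an ST-GCN-style classifier for the four
# perceived-emotion classes named above. Joint count, channel sizes and
# the skeleton adjacency are placeholders, not the authors' settings.

import torch
import torch.nn as nn

class STGCNBlock(nn.Module):
    """One spatial graph convolution followed by a temporal convolution."""
    def __init__(self, in_ch, out_ch, adjacency):
        super().__init__()
        # Normalized skeleton adjacency, fixed (not learned) in this sketch.
        self.register_buffer("A", adjacency)
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                              # x: (batch, channels, frames, joints)
        x = self.spatial(x)
        x = torch.einsum("nctv,vw->nctw", x, self.A)   # mix joint features over the graph
        return self.relu(self.temporal(x))

class GaitEmotionClassifier(nn.Module):
    def __init__(self, adjacency, num_classes=4):
        super().__init__()
        self.block1 = STGCNBlock(3, 32, adjacency)     # 3 = (x, y, z) per joint
        self.block2 = STGCNBlock(32, 64, adjacency)
        self.head = nn.Linear(64, num_classes)         # happy / sad / angry / neutral

    def forward(self, x):
        x = self.block2(self.block1(x))
        x = x.mean(dim=[2, 3])                         # global average over time and joints
        return self.head(x)

# Example: a batch of 8 walks, 3 coordinates per joint, 100 frames, 16 joints.
A = torch.eye(16)                                      # placeholder adjacency
model = GaitEmotionClassifier(A)
logits = model(torch.randn(8, 3, 100, 16))
print(logits.shape)                                    # torch.Size([8, 4])
```

The one design point carried over from the abstract is the pairing of a spatial graph convolution over the skeleton with a temporal convolution over frames; everything else here is placeholder scaffolding.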


Author(s):  
Alice Baird ◽  
Emilia Parada-Cabaleiro ◽  
Cameron Fraser ◽  
Simone Hantke ◽  
Björn Schuller
