How are Music and Emotion Links Studied?

2019 ◽  
pp. 139-146
Author(s):  
Patrik N. Juslin

This chapter considers ways to establish links between musical features and specific emotions. The first step is usually to conduct an experiment in which listeners rate the emotional expression of different excerpts of music, either music from commercial recordings or pieces created specifically for the study. The next step is to extract the musical features associated with emotion categories. This can be done in four ways: analyzing the musical score of the pieces; relying on experts, such as music theorists and musicians, who rate various aspects of the musical structure; measuring acoustic parameters of the music (e.g. sound level, timing, frequency spectrum of the timbre) using dedicated computer software; and manipulating specific musical features in synthesized (computerized) performances to evaluate how they influence a listener's judgments of emotional expression.
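The third approach, measuring acoustic parameters with software, can be illustrated with a short sketch (illustrative only; the chapter does not prescribe a particular tool or library). Here "sound level" is computed as an RMS level in dB, and the frequency spectrum of the timbre is summarized by its spectral centroid:

```python
import numpy as np

def sound_level_db(signal):
    """Root-mean-square level of a signal, in dB relative to full scale."""
    rms = np.sqrt(np.mean(signal ** 2))
    return 20 * np.log10(rms)

def spectral_centroid(signal, sample_rate):
    """Centre of gravity of the magnitude spectrum, in Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Sanity check on a synthetic input: for a pure 440 Hz tone,
# the spectral centroid should sit at (almost exactly) 440 Hz.
sr = 44100
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
```

On real recordings these descriptors would be computed frame by frame over short windows rather than over the whole excerpt, and then related to the listeners' emotion ratings.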

2021 ◽  
Vol 21 (1) ◽  
pp. 369-394
Author(s):  
Ivan Simurra ◽  
Rodrigo Borges

We report a music-analysis study of Atmosphères (1961) by György Ligeti, combining symbolic information retrieved from the musical score with audio descriptors extracted from a recording of the piece. The piece was selected according to the following criteria: (a) it is a composition based on sound transformations associated with motion in the global timbre; (b) its creative concept makes direct reference to electronic music and to sound/timbre techniques from Renaissance music; and (c) its sonorities are explored by means of variations in timbre contrast. From the symbolic-analysis perspective, the timbre content of Atmosphères can be discussed in terms of the entanglement of the individual characteristics of the musical instruments. The computational method approaches the musical structure from an empirical perspective and is based on clustering techniques. We depart from previous studies by focusing on the novelty curve calculated from the spectral content extracted from the recording of the piece. Our findings indicate that the novelty curve can be associated with five specific clusters and, regarding the symbolic music analysis, that three leading musical features can be argued for: (a) instrumentation changes; (b) distinct locations of chromatic pitch sets; and (c) fluctuations in dynamic intensity.
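The paper's exact novelty computation is not reproduced here; as a rough sketch under that caveat, a spectral-flux novelty curve can be derived from a magnitude spectrogram by summing the frame-to-frame increases in energy across frequency bins:

```python
import numpy as np

def novelty_curve(spectrogram):
    """Spectral-flux novelty: half-wave-rectified frame-to-frame
    increase in magnitude, summed over frequency bins.
    spectrogram: 2-D array (frequency bins x time frames)."""
    diff = np.diff(spectrogram, axis=1)
    return np.sum(np.maximum(diff, 0.0), axis=0)

# Toy spectrogram: energy moves to a new band at frame 3,
# so the novelty curve should peak at that transition.
spec = np.zeros((4, 6))
spec[0, :3] = 1.0   # low band sounds in frames 0-2
spec[2, 3:] = 1.0   # a new band enters at frame 3
nov = novelty_curve(spec)
```

Peaks in such a curve mark moments where the spectral content changes abruptly, which is what makes it usable as input to the clustering step described above.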


2008 ◽  
Vol 26 (2) ◽  
pp. 103-119 ◽  
Author(s):  
Ginevra Castellano ◽  
Marcello Mortillaro ◽  
Antonio Camurri ◽  
Gualtiero Volpe ◽  
Klaus Scherer

EMOTIONAL EXPRESSION IN MUSIC PERFORMANCE includes important cues arising from the body movement of the musician. This movement is related both to the execution of the musical score and to the emotional intention conveyed. In this experiment, a pianist was asked to play the same excerpt with different emotionally expressive intentions. The aim was to verify whether different expressions could be distinguished on the basis of movement, by determining which motion cues were most sensitive to emotion. Analyses were performed with an automated system capable of detecting the temporal profiles of two motion cues: the quantity of motion of the upper body and the velocity of head movements. Results showed that both were sensitive to emotional expression, especially the velocity of head movements. Further, some features conveying information about the temporal dynamics of movement varied among expressive conditions, allowing emotion discrimination. These results are in line with recent theories that emphasize the dynamic nature of emotional expression.
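The automated system used in the study is not specified here; a minimal sketch of the quantity-of-motion idea, assuming binary silhouette frames (an assumption for illustration, not the authors' exact definition), is the fraction of pixels that change between consecutive video frames:

```python
import numpy as np

def quantity_of_motion(frames):
    """Approximate quantity of motion: the fraction of pixels that
    change between consecutive frames of a silhouette video.
    frames: 3-D array (time x height x width) of 0/1 silhouettes."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

# Toy clip: the silhouette appears in frame 1 (half the pixels flip)
# and disappears again in frame 2.
frames = np.zeros((3, 4, 4))
frames[1, :2, :] = 1.0
qom = quantity_of_motion(frames)
```

The resulting time series is the kind of temporal profile whose shape (peaks, decay) can then be compared across expressive conditions.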


2021 ◽  
Vol 23 (3) ◽  
pp. 6-19
Author(s):  
Dmitrii Ardashev ◽  
Aleksander Zhukov

Introduction. To assess the current state of the technological system (TS) during grinding, it is preferable to use indirect criteria. Such approaches, in contrast to direct measurement methods, can be applied without interrupting the production process. The main parameters used in the indirect assessment of the state of the cutting tool are the state of the workpiece (before and after processing), the thermal and electrical characteristics of the cutting zone, the vibroacoustic vibrations of the process, and force measurements. The work is devoted to the study of the acoustic parameters of grinding as a sufficiently informative and least resource-intensive characteristic. The relevance of developing methods for assessing the state of the TS on the basis of sound and topographic characteristics has many aspects, the main of which are applicability to grinding control, prediction of the state of the cutting tool, and planning of the operations of the technological process. The aim of the work is to develop a mathematical model of the dependence of the vibroacoustic parameters of the external circular plunge-cut grinding process on the macro-roughness of the ground sample. The development of such a model is a necessary step in designing a methodology for predicting the state of a tool. Accordingly, the subject of the work is represented by two parameters simultaneously: the sound level arising in the process of grinding, and the deviation of the surface shape of the ground samples from cylindricity. The research methods used to achieve the designated aim were the following: an experiment to study the sound phenomena accompanying external circular plunge-cut grinding; measurement of the macro-roughness of the surface of the processed samples using a coordinate measuring machine; and correlation and regression analysis to obtain mathematical dependencies. Results and discussion. 
Two particular multiple linear regression models are obtained that describe the effect of the infeed rate and the operating time of the grinding wheel on the sound level during grinding and on the deviations from cylindricity of the processed samples. On the basis of these particular models, a general model is developed that establishes the relationship between the sound characteristic and the macro-roughness index of the treated surface. It is shown that the sound characteristics (for example, the sound level) can be used as an indirect indicator of the current state of the TS, which makes it possible to assess the level of vibrations and, accordingly, to predict the quality of products.
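The particular models themselves are given in the paper; as an illustration of the method only (ordinary least squares on hypothetical, noise-free numbers, not the authors' measurements), a two-factor linear model of sound level versus infeed rate and wheel operating time can be fitted like this:

```python
import numpy as np

# Hypothetical data (not the authors' measurements): sound level in dB
# generated from an assumed model L = 60 + 8*infeed + 0.3*op_time.
infeed = np.array([0.2, 0.2, 0.6, 0.6, 1.0, 1.0])       # mm/min
op_time = np.array([5.0, 20.0, 5.0, 20.0, 5.0, 20.0])   # min
level = 60 + 8 * infeed + 0.3 * op_time

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones_like(infeed), infeed, op_time])
coeffs, *_ = np.linalg.lstsq(X, level, rcond=None)
# coeffs recovers the assumed intercept and slopes: [60, 8, 0.3]
```

With real, noisy measurements the fitted coefficients would come with confidence intervals from the correlation and regression analysis the abstract mentions.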


2012 ◽  
Vol 26 (5) ◽  
pp. 675.e5-675.e11 ◽  
Author(s):  
Marco A. Guzman ◽  
Jayme Dowdall ◽  
Adam D. Rubin ◽  
Ahmed Maki ◽  
Samuel Levin ◽  
...  

Author(s):  
Juan M. Barrigón Morillas ◽  
Valentín Gómez Escobar ◽  
Guillermo Rey Gozalo ◽  
Rosendo Vílchez-Gómez ◽  
Juan Antonio Méndez Sierra ◽  
...  

Different urban environments were analyzed acoustically using recordings obtained with binaural techniques of recording and reproduction. Measurements were carried out in different urban locations in Spain that possess a range of acoustical characteristics. The perception of pleasantness/unpleasantness as described by the inhabitants of these urban environments was examined in terms of its relationship with two psychoacoustic parameters (loudness and sharpness) as well as the traditional measures of equivalent sound level (dB and dBA). These relationships were analyzed using both a verbal and a numerical scale. Highly significant correlations were found between the perception of an environment and the acoustical parameters characterizing that environment.
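Of the acoustical measures mentioned, the equivalent sound level is the simplest to state: it is the energy average, not the arithmetic average, of short-term level readings. A minimal sketch (in unweighted dB; for dBA, A-weighting would be applied to the signal before the levels are computed):

```python
import numpy as np

def equivalent_level(levels_db):
    """Equivalent continuous sound level: the energy average of a
    series of short-term level readings, in dB."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10 * np.log10(np.mean(10 ** (levels_db / 10)))

# Loud episodes dominate: a half-and-half mix of 50 dB and 60 dB
# averages to about 57.4 dB, not 55 dB.
leq = equivalent_level([50.0, 60.0])
```

This dominance of loud episodes is one reason psychoacoustic measures such as loudness and sharpness can correlate with perceived pleasantness differently than the equivalent level does.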


2021 ◽  
Vol 12 ◽  
Author(s):  
Robert R. McCrae

Some accounts of the evolution of music suggest that it emerged from emotionally expressive vocalizations and serves as a necessary counterweight to the cognitive elaboration of language. Thus, emotional expression appears to be intrinsic to the creation and perception of music, and music ought to serve as a model for affect itself. Because music exists as patterns of changes in sound over time, affect should also be seen in patterns of changing feelings. Psychologists have given relatively little attention to these patterns. Results from statistical approaches to the analysis of affect dynamics have so far been modest. Two of the most significant treatments of temporal patterns in affect (sentics and vitality affects) have remained outside mainstream emotion research. Analysis of musical structure suggests three phenomena relevant to the temporal form of emotion: affect contours, volitional affects, and affect transitions. I discuss some implications for research on affect and for exploring the evolutionary origins of music and emotions.


1994 ◽  
Vol 108 (4) ◽  
pp. 325-328 ◽  
Author(s):  
F. Debruyne ◽  
P. Delaere ◽  
J. Wouters ◽  
P. Uwents

Abstract. In order to evaluate the vocal quality of tracheo-oesophageal and oesophageal speech, several objective acoustic parameters were measured in the acoustic waveform (fundamental frequency, waveform perturbation) and in the frequency spectrum (harmonic prominence, spectral slope). Twelve patients using tracheo-oesophageal speech (with the Provox® valve) and 12 patients who had used oesophageal speech for at least two months participated. The main results were that tracheo-oesophageal voices more often showed a detectable fundamental frequency, and that this fundamental frequency was fairly stable; there was also a tendency towards more clearly defined harmonics in tracheo-oesophageal speech. This suggests a more regular vibratory pattern in the pharyngo-oesophageal segment, due to the more efficient respiratory drive in tracheo-oesophageal speech. A better voice quality can therefore be expected, in addition to the longer phonation time and higher maximal intensity.
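Waveform perturbation is commonly quantified as jitter, the cycle-to-cycle variability of the fundamental period. A minimal sketch of local jitter as a percentage (the paper's exact perturbation measure may differ):

```python
import numpy as np

def jitter_percent(periods):
    """Local jitter: mean absolute difference between consecutive
    cycle lengths, as a percentage of the mean cycle length.
    periods: a sequence of measured glottal cycle durations."""
    periods = np.asarray(periods, dtype=float)
    return 100 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)
```

A perfectly stable voice (equal cycle lengths) gives 0% jitter; the irregular vibration typical of oesophageal speech yields higher values.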


2017 ◽  
Vol 284 (1859) ◽  
pp. 20170990 ◽  
Author(s):  
Piera Filippi ◽  
Jenna V. Congdon ◽  
John Hoang ◽  
Daniel L. Bowling ◽  
Stephan A. Reber ◽  
...  

Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes—Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system.
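The fundamental frequency that listeners relied on can be estimated with a standard signal-processing sketch (illustrative only, not the study's analysis pipeline): take the lag of the highest autocorrelation peak within a plausible pitch range.

```python
import numpy as np

def estimate_f0(signal, sample_rate, fmin=50.0, fmax=1000.0):
    """Rough fundamental-frequency estimate: the lag of the highest
    autocorrelation peak within the plausible pitch range."""
    corr = np.correlate(signal, signal, mode='full')[len(signal) - 1:]
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag

# A 220 Hz tone sampled at 8 kHz should come back near 220 Hz
# (integer-lag quantization makes the estimate slightly coarse).
sr = 8000
t = np.arange(2000) / sr
f0 = estimate_f0(np.sin(2 * np.pi * 220 * t), sr)
```

For noisy animal vocalizations, robust pitch trackers refine this idea with interpolation and voicing decisions, but the autocorrelation lag is the underlying principle.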


Author(s):  
Mikhail Nikolaevich Pokusaev ◽  
Konstantin Evgenievich Khmelnitsky

The article deals with the results of experiments on measuring the noise of the Hangkai 4.0 outboard motor when various types of noise-insulating hoods are used. The tests were carried out in accordance with GOST ISO 14509-1-2015 "Small Vessels. Noise measurement of small motor pleasure craft. Part 1. Noise of a passing ship" at full speed, with an engine speed of 4500 rev/min and a motor power of 4 HP. Measurements were carried out with the vessel at a distance of 25 metres on both sides, with different hood options covering the outboard motor. The average noise level and its frequency spectrum were measured, and the effectiveness of the various types of hoods was evaluated. The experiment used the standard plastic hood of the Hangkai 4.0 engine, a noise-insulating hood (the plastic hood lined inside with automotive foil noise insulation), and the authors' combined noise-insulating hood, Kaponistr. A description and the structural elements of Kaponistr are presented; the hood design was patented as a utility model in 2019. The experiments showed that the external noise level of the Hangkai 4.0 outboard motor (without a hood) does not exceed 74.3 dBA but is close to the permissible value of 75 dBA, so during operation the motor needs at least its standard hood. The prevailing frequency range of the outboard motor noise is 300-2500 Hz. Each hood type (standard, sound-proof, combined) reduces the noise level of the outboard motor relative to operation without a hood. The greatest reduction in external noise is observed with the combined hood Kaponistr: at a frequency of 800 Hz the level drops by 19.4 dBA, or 27%. The following control and measuring devices were used in the research: the sound level meter, vibrometer and spectrum analyzer Ekofizika-110 (white); the acoustic calibrator AK-1000; and the Signal+3G Light software manufactured by PKF Digital Instruments, LLC.


2021 ◽  
Vol 12 ◽  
Author(s):  
Laura Bishop ◽  
Alexander Refsum Jensenius ◽  
Bruno Laeng

Music performance can be cognitively and physically demanding. These demands vary across the course of a performance as the content of the music changes. More demanding passages require performers to focus their attention more intensely, or expend greater "mental effort." To date, it remains unclear what effect different cognitive-motor demands have on performers' mental effort. It is likewise unclear how fluctuations in mental effort compare between performers and perceivers of the same music. We used pupillometry to examine the effects of different cognitive-motor demands on the mental effort used by performers and perceivers of classical string quartet music. We collected pupillometry, motion capture, and audio-video recordings of a string quartet as they performed a rehearsal and a concert (for a live audience) in our lab. We then collected pupillometry data from a remote sample of musically trained listeners, who heard the audio recordings (without video) that we captured during the concert. We used a modelling approach to assess the effects of performers' bodily effort (head and arm motion; sound level; performers' ratings of technical difficulty), musical complexity (performers' ratings of harmonic complexity; a score-based measure of harmonic tension), and expressive difficulty (performers' ratings of expressive difficulty) on performers' and listeners' pupil diameters. Our results show stimulating effects of bodily effort and expressive difficulty on performers' pupil diameters, and stimulating effects of expressive difficulty on listeners' pupil diameters. We also observed negative effects of musical complexity on both performers and listeners, and negative effects of performers' bodily effort on listeners, which we suggest may reflect the complex relationships that these features share with other aspects of musical structure.
Looking across the concert, we found that both of the quartet violinists (who exchanged places halfway through the concert) showed more dilated pupils during their turns as 1st violinist than when playing as 2nd violinist, suggesting that they experienced greater arousal when “leading” the quartet in the 1st violin role. This study shows how eye tracking and motion capture technologies can be used in combination in an ecological setting to investigate cognitive processing in music performance.

