musical tempo
Recently Published Documents


TOTAL DOCUMENTS

53
(FIVE YEARS 12)

H-INDEX

10
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Kameron Christopher

In this thesis I develop a robust system and method for predicting individuals’ emotional responses to musical stimuli. Music has a powerful effect on human emotion; however, the factors that create this emotional experience are poorly understood. Some of these factors are characteristics of the music itself: musical tempo, mode, harmony, and timbre, for example, are known to affect people's emotional responses. However, the same piece of music can produce different emotional responses in different people, so the ability to use music to induce emotion also depends on predicting the effect of individual differences, which may include people's moods, personalities, culture, and musical background, among others. While many of the factors that contribute to emotional experience have been examined, research in this domain is far from both a) identifying and understanding the many factors that affect an individual’s emotional response to music, and b) using this understanding to inform the selection of stimuli for emotion induction. This results in wide variance in emotion-induction results, failures to replicate emotion studies, and an inability to control for variables in research. The approach of this thesis is therefore to model the latent-variable contributions to an individual’s emotional experience of music through deep learning and modern recommender-system techniques. With each study in this work, I iteratively develop a more reliable and effective system for predicting personalised emotion responses to music, while simultaneously adopting and developing a strong, standardised methodology for stimulus selection.
The work introduces and validates a) electronic and loop-based music as reliable stimuli for inducing emotional responses, b) modern recommender systems and deep learning as methods for more reliably predicting individuals' emotion responses, and c) novel understandings of how musical features map to individuals' emotional responses. The culmination of this research is a personalised emotion prediction system that can better predict individuals' emotional responses to music and can select musical stimuli better catered to individual differences. This will allow researchers and practitioners to more reliably and effectively a) select musical stimuli for emotion induction, and b) induce and manipulate target emotional responses in individuals.
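As a minimal illustration of the recommender-system idea described above (not the thesis's actual implementation), the sketch below factorizes a toy listener-by-excerpt emotion-rating matrix into latent factors and uses them to predict unobserved ratings. All names, dimensions, and hyperparameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings matrix: rows = listeners, columns = musical excerpts,
# entries = felt-valence ratings in [-1, 1]; NaN = not yet rated.
R = np.array([
    [ 0.8,   0.6, np.nan, -0.5],
    [ 0.7, np.nan,  0.9,  -0.4],
    [np.nan, -0.6, -0.8,   0.7],
])
mask = ~np.isnan(R)
n_users, n_items, k = R.shape[0], R.shape[1], 2

# Each listener and each excerpt gets a k-dimensional latent factor.
U = 0.1 * rng.standard_normal((n_users, k))
V = 0.1 * rng.standard_normal((n_items, k))

lr, reg = 0.05, 0.01
for _ in range(2000):  # gradient descent on the observed entries only
    E = np.where(mask, np.nan_to_num(R) - U @ V.T, 0.0)
    dU = E @ V - reg * U
    dV = E.T @ U - reg * V
    U += lr * dU
    V += lr * dV

# Filled-in matrix: personalised predictions for the unrated excerpts.
pred = U @ V.T
print(np.round(pred, 2))
```

In the thesis's framing, the latent listener factors would absorb individual differences (mood, personality, musical background) without measuring them explicitly; a deep-learning variant would replace the dot product with a learned nonlinear interaction.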



2021 ◽  
Author(s):  
Kristin Weineck ◽  
Olivia Xin Wen ◽  
Molly J. Henry

Neural activity in the auditory system synchronizes to sound rhythms, and brain-environment synchronization is thought to be fundamental to successful auditory perception. Sound rhythms are often operationalized in terms of the sound's amplitude envelope. We hypothesized that, especially for music, the envelope might not best capture the complex spectrotemporal fluctuations that give rise to beat perception and synchronize neural activity. This study investigated 1) neural entrainment to different musical features, 2) tempo dependence of neural entrainment, and 3) dependence of entrainment on familiarity, enjoyment, and ease of beat perception. In this electroencephalography study, 37 human participants listened to tempo-modulated music (1 to 4 Hz). Independent of whether the analysis approach was based on temporal response functions (TRFs) or reliable components analysis (RCA), the spectral flux of the music, as opposed to the amplitude envelope, evoked the strongest neural entrainment. Moreover, music with slower beat rates, high familiarity, and easy-to-perceive beats elicited the strongest neural response. Based on the TRFs, we could decode not only the music's stimulation tempo but also the perceived beat rate, even when the two differed. Our results demonstrate the importance of accurately characterizing musical acoustics in the context of studying neural entrainment, and show the sensitivity of entrainment to musical tempo, familiarity, and beat salience.
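To make the envelope-versus-spectral-flux contrast concrete, here is a minimal NumPy sketch (not the study's analysis pipeline; the frame length, hop size, and toy signal are illustrative assumptions). A tone that changes pitch at constant loudness has a nearly flat amplitude envelope but clear flux peaks at each note change:

```python
import numpy as np

def amplitude_envelope(x, frame_len=1024, hop=512):
    """Frame-wise RMS: one coarse loudness value per frame."""
    frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def spectral_flux(x, frame_len=1024, hop=512):
    """Half-wave-rectified frame-to-frame change of the magnitude spectrum.

    Unlike the envelope, this responds to spectral change (e.g. a new note
    at constant loudness), the kind of fluctuation that can carry beat
    structure the envelope misses."""
    win = np.hanning(frame_len)
    frames = [x[i:i + frame_len] * win
              for i in range(0, len(x) - frame_len, hop)]
    mags = np.array([np.abs(np.fft.rfft(f)) for f in frames])
    return np.sum(np.maximum(np.diff(mags, axis=0), 0.0), axis=1)

# Toy signal: constant-amplitude tone whose pitch alternates every 0.5 s.
sr = 8000
t = np.arange(2 * sr) / sr
freq = np.where((t // 0.5) % 2 == 0, 440.0, 660.0)
x = np.sin(2 * np.pi * np.cumsum(freq) / sr)  # phase-continuous

env = amplitude_envelope(x)
flux = spectral_flux(x)
# The envelope is nearly flat; the flux spikes at each pitch change.
print(env.std() / env.mean())
print(np.median(flux), flux.max())
```

On this toy signal the relative variation of the envelope is tiny while the flux shows pronounced peaks, which is the intuition behind taking spectral flux rather than the envelope as the feature driving entrainment.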


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e10658
Author(s):  
Xinhong Jin ◽  
Yingzhi Lu ◽  
Bradley D. Hatfield ◽  
Xiaoyu Wang ◽  
Biye Wang ◽  
...  

Background Although the association between human temperament and preference has been studied previously, few investigations have examined cerebral cortical activation to assess the brain dynamics associated with the motivation to engage in performance. The present study adopted a personality and cognitive neuroscience approach to investigate whether participation in ballroom dancing is associated with a sensation-seeking temperament and elevated cerebral cortical arousal during freely chosen musical recall. Methods Preferred tempo, indicated by tapping speed during melodic recall, and a measure of fundamental disposition or temperament were assessed in 70 ballroom dancers and 71 nondancers. All participants completed a trait personality inventory (the Chen Huichang 60 Temperaments Inventory) to determine four primary types: choleric, sanguine, phlegmatic, and melancholic. Participants recalled their favorite musical piece and tapped along to it with their index finger for 40 beats on a computer keyboard. A subset of 59 participants (29 ballroom dancers and 30 nondancers) also repeated the tapping task while electroencephalographic (EEG) activity was recorded. Results The dancers were more extraverted, indicative of a heightened need for arousal; preferred a faster musical tempo; and exhibited elevated EEG beta power during the musical recall task relative to nondancers. Paradoxically, dancers also scored higher on introversion (the melancholic score) than nondancers, a finding that can be reconciled with interactional personality theory if one reasonably assumes that the dance performance environment is perceived as stimulating.
Conclusion The results are generally consistent with arousal theory and suggest that ballroom dancers seek elevated stimulation and therefore choose to engage with active, energetic rhythmic auditory stimulation, providing the nervous system with the stimulation required for the desired arousal. The results also suggest an underlying predisposition for engagement in ballroom dance and support the gravitational hypothesis, which proposes that personality traits and perception lead to the motivation to engage in specific forms of human performance.


2021 ◽  
Vol 4 ◽  
pp. 205920432098638
Author(s):  
David Hammerschmidt ◽  
Clemens Wöllner ◽  
Justin London ◽  
Birgitta Burger

Our perception of the duration of a piece of music is related to its tempo. When listening to music, absolute durations may seem longer as the tempo (the rate of an underlying pulse or beat) increases. Yet the perception of tempo itself is not absolute. In a study on perceived tempo, participants were able to distinguish between tempo-shifted versions of the same song (±5 beats per minute (BPM)), yet their tempo ratings did not match the actual BPM rates; this finding was termed the tempo anchoring effect (TAE). To gain further insight into the relation between duration and tempo perception in music, the present study investigated the effect of musical tempo on two different duration measures, to see whether there is an analog to the TAE in duration perception. Using a repeated-measures design, 32 participants (16 musicians) were randomly presented with instrumental excerpts of Disco songs at their original tempi and in tempo-shifted versions. The tasks were (a) to reproduce the absolute duration of each stimulus (14-20 s), (b) to estimate the absolute duration of each stimulus in seconds, and (c) to rate the perceived tempo. Results show that duration reproductions were longer at faster tempi, yet no such effect was found for duration estimations: lower-level reproductions were affected by tempo, but higher-level estimations were not. The tempo-shifted versions had no effect on either duration measure, suggesting that the duration-lengthening effect requires a tempo difference of at least 20 BPM, depending on the duration measure. Ratings of perceived tempo replicated the typical pattern of the TAE, but no analogous pattern appeared in the duration measures. The roles of spontaneous motor tempo and musical experience are discussed, and implications for future studies are given.


2020 ◽  
Author(s):  
Iran R Roman ◽  
Adrian S Roman ◽  
Edward W. Large

Music has a tempo (the frequency of the underlying beat) that musicians maintain throughout a performance, either on their own or paced by a metronome. Behavioral studies have found that each musician shows a spontaneous rate of movement, called spontaneous motor tempo (SMT), which can be measured when a musician spontaneously plays a simple melody. Data show that a musician's SMT systematically influences how actions align with the musical tempo. In this study we present a model that captures this phenomenon. To develop our model, we review previously published results from three musical performance settings: (1) solo performance with a pacing metronome at a tempo different from the SMT, (2) solo performance without a metronome at a spontaneous tempo faster or slower than the SMT, and (3) duet performance between musician pairs with matching and mismatching SMTs. In the first setting, the asynchrony between the pacing metronome and the musician's tempo grew as a function of the difference between the metronome tempo and the musician's SMT. In the second setting, musicians drifted away from the initial spontaneous tempo toward the SMT. In the third setting, the absolute asynchronies between performing musicians were smaller when their SMTs matched than when they did not. Based on these observations, we hypothesize that, while musicians can perform musical actions at a tempo different from their SMT, the SMT constantly acts as a pulling force. To test this hypothesis we developed a model: an oscillatory dynamical system with Hebbian and elastic tempo learning that simulates music performance. We simulate an individual's SMT with the dynamical system's natural frequency. Hebbian learning lets the system's frequency adapt to match the stimulus frequency, while an elasticity term pulls the learned frequency back toward the system's natural frequency. We used this model to simulate the three music performance settings, replicating the behavioral results. The model also generates predictions about musicians' performance that have not yet been tested. The present study offers a dynamical explanation of how an individual's SMT affects adaptive synchronization in realistic musical performance.
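A minimal sketch of this kind of model: an adaptive phase oscillator with Hebbian frequency learning plus an elastic pull toward its natural frequency. All parameter values and the specific equations here are illustrative assumptions, not the authors' published model:

```python
import numpy as np

def simulate(omega0, omega_stim, paced=True, T=30.0, dt=0.001,
             F=2.0, lam_hebb=2.0, lam_elastic=0.5):
    """Phase oscillator whose frequency adapts Hebbian-style toward a
    stimulus while an elastic term pulls it back toward its natural
    frequency omega0 (modelling the SMT). Forward-Euler integration."""
    theta = 0.0    # oscillator phase
    theta_s = 0.0  # stimulus (metronome) phase
    # Paced: start at the SMT; unpaced: start at a "wrong" spontaneous tempo.
    omega = omega0 if paced else omega_stim
    for _ in range(int(T / dt)):
        force = F * np.sin(theta_s - theta) if paced else 0.0
        theta += dt * (omega + force)
        # Hebbian adaptation toward the stimulus + elastic pull toward SMT
        omega += dt * (lam_hebb * force - lam_elastic * (omega - omega0))
        theta_s += dt * omega_stim
    return omega

smt = 2.0 * np.pi * 2.0    # natural frequency: a 2.0 Hz SMT
stim = 2.0 * np.pi * 2.5   # metronome / initial spontaneous tempo: 2.5 Hz

# Setting (1): paced by a metronome faster than the SMT, the learned
# frequency is drawn toward the metronome but the elastic pull keeps it
# short of a full match, leaving a systematic asynchrony.
print(simulate(smt, stim, paced=True) / (2 * np.pi))

# Setting (2): performing unpaced at a spontaneous tempo above the SMT,
# the frequency drifts back toward the SMT.
print(simulate(smt, stim, paced=False) / (2 * np.pi))
```

Under these assumed parameters the paced run settles between the metronome rate and the SMT, and the unpaced run relaxes back to the SMT, qualitatively matching settings (1) and (2) above.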


Author(s):  
Hufschmitt Aline ◽  
Cardon Stephane ◽  
Jacopin Eric
Keyword(s):  

Author(s):  
Qi Meng ◽  
Jiani Jiang ◽  
Fangfang Liu ◽  
Xiaoduo Xu

The acoustic environment is one of the factors influencing emotion; however, existing research has mainly focused on the effects of noise on emotion and on music therapy, while the acoustic and psychological effects of music on interactive behaviour have been neglected. Therefore, this study aimed to investigate the effects of music on communicating emotion, including evaluation of the music and d-values of pleasure, arousal, and dominance (PAD), in terms of sound pressure level (SPL), musical emotion, and tempo. Based on acoustic-environment measurements and a questionnaire survey of 52 participants in a normal classroom in Harbin, China, the following results were found. First, SPL was significantly correlated with the musical evaluation of communication: average evaluation scores decreased sharply from 1.31 to −2.13 when SPL rose from 50 dBA to 60 dBA, while they varied only between 0.88 and 1.31 from 40 dBA to 50 dBA. Arousal increased with musical SPL in the negative-evaluation group. Second, musical emotions had significant effects on the musical evaluation of communication, with joyful-sounding music having the strongest effect; in general, joyful- and stirring-sounding music enhanced pleasure and arousal efficiently. Third, musical tempo had a significant effect on musical evaluation and communicating emotion: faster music enhanced arousal and pleasure efficiently. Finally, in terms of social characteristics, familiarity, gender combination, and number of participants affected communicating emotion; for instance, in the positive-evaluation group, dominance was much higher in single-gender groups. This study shows that music factors such as SPL, musical emotion, and tempo can be used to enhance communicating emotion.


2020 ◽  
pp. 030573562090477
Author(s):  
Jorge A Aburto-Corona ◽  
J A de Paz ◽  
José Moncada-Jiménez ◽  
Bryan Montero-Herrera ◽  
Luis M Gómez-Miranda

The purpose of this study was to determine the effect of musical tempo on heart rate (HR), rating of perceived exertion (RPE), and distance run (DR) during a treadmill aerobic test in young male and female adults. Participants ran on the treadmill while listening to music at 140 beats per minute (bpm; M140), at 120 bpm (M120), or without music (NM). No significant sex differences were found in HR (M140 = 172.6 ± 12.7, M120 = 171.9 ± 11.1, NM = 170.1 ± 12.2 bpm, p = .312), RPE (M140 = 7.5 ± 1.4, M120 = 7.6 ± 1.3, NM = 7.6 ± 1.2, p = .931), or DR (M140 = 4,791.4 ± 2,681.1, M120 = 4,900.0 ± 2,916.9, NM = 4,356.1 ± 2,571.2 m, p = .715). A difference in the effect of tempo on HR was found between the M140 and NM conditions (172.6 ± 12.7 vs. 170.1 ± 12.2 bpm, p = .044, η2 = 0.32). In conclusion, musical tempo does not affect performance, physiological, or perceptual variables in young adults exercising on a treadmill at a constant speed.

