musical excerpts
Recently Published Documents

TOTAL DOCUMENTS: 108 (FIVE YEARS: 39)
H-INDEX: 20 (FIVE YEARS: 1)

PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0261151
Author(s):  
Jonna K. Vuoskoski ◽  
Janis H. Zickfeld ◽  
Vinoo Alluri ◽  
Vishnu Moorthigari ◽  
Beate Seibt

The experience often described as feeling moved, understood chiefly as a social-relational emotion with social bonding functions, has gained significant research interest in recent years. Although listening to music often evokes what people describe as feeling moved, very little is known about the appraisals or musical features contributing to the experience. In the present study, we investigated experiences of feeling moved in response to music using a continuous rating paradigm. A total of 415 US participants completed an online experiment where they listened to seven moving musical excerpts and rated their experience while listening. Each excerpt was randomly coupled with one of seven rating scales (perceived sadness, perceived joy, feeling moved or touched, sense of connection, perceived beauty, warmth [in the chest], or chills) for each participant. The results revealed that musically evoked experiences of feeling moved are associated with a similar pattern of appraisals, physiological sensations, and trait correlations as feeling moved by videos depicting social scenarios (found in previous studies). Feeling moved or touched by both sadly and joyfully moving music was associated with experiencing a sense of connection and perceiving joy in the music, while perceived sadness was associated with feeling moved or touched only in the case of sadly moving music. Acoustic features related to arousal contributed to feeling moved only in the case of joyfully moving music. Finally, trait empathic concern was positively associated with feeling moved or touched by music. These findings support the role of social cognitive and empathic processes in music listening, and highlight the social-relational aspects of feeling moved or touched by music.
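
As a rough illustration of the rating design described above (not the authors' actual materials or code), the sketch below randomly couples each of the seven excerpts with one of the seven rating scales for each participant, so that every scale is used exactly once per participant; the excerpt labels and the seeding scheme are placeholders.

```python
import random

# Placeholder excerpt labels; the actual excerpts are listed in the paper's materials.
EXCERPTS = [f"excerpt_{i}" for i in range(1, 8)]
SCALES = [
    "perceived sadness", "perceived joy", "feeling moved or touched",
    "sense of connection", "perceived beauty", "warmth in the chest", "chills",
]

def assign_scales(participant_id: int, seed: int = 0) -> dict:
    """Randomly couple each excerpt with one rating scale for a given participant."""
    rng = random.Random(seed + participant_id)  # reproducible per-participant shuffle
    shuffled = SCALES[:]
    rng.shuffle(shuffled)
    return dict(zip(EXCERPTS, shuffled))

if __name__ == "__main__":
    for pid in range(3):
        print(pid, assign_scales(pid))
```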


2021 ◽  
Author(s):  
Gladys Jiamin Heng ◽  
Quek Hiok Chai ◽  
SH Annabel Chen

Learning mechanisms have been postulated to be one of the primary reasons why different individuals have similar or different emotional responses to music. While existing studies have largely examined mechanisms related to learning in terms of cultural familiarity or recognition, few studies have conceptualized it in terms of an individual’s level of familiarity with musical style, which could be a better reflection of an individual’s composite musical experiences. Therefore, the current study aimed to bridge this research gap by investigating the electrophysiological correlates of the effects of familiarity with musical style on music-evoked emotions. Forty-nine non-musicians listened to 12 musical excerpts of a familiar musical style (Japanese animation soundtracks) and eight musical excerpts of an unfamiliar musical style (Greek Laïkó music) with their eyes closed while electroencephalography was recorded. Participants rated their felt emotions after each musical excerpt was played. Behavioral ratings showed that music of the familiar musical style was felt to be significantly more pleasant than music of the unfamiliar musical style, while no significant differences in arousal were observed. In terms of brain activity, music of the unfamiliar musical style elicited higher (1) theta power in all brain regions (including the frontal midline), (2) alpha power in the frontal region, and (3) beta power in fronto-temporo-occipital regions compared to the familiar musical style. This is interpreted to reflect the need for greater attentional resources when listening to music of an unfamiliar style, whose syntax and structure are less familiar to listeners than those of a familiar style. In addition, classification analysis showed that unfamiliar and familiar musical styles can be distinguished with 67.86% accuracy. Thus, clinicians should consider the musical profile of the client when choosing an appropriate selection of music for the treatment plan, so as to achieve better efficacy.
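
As a minimal sketch of the kind of band-power measure reported above (not the authors' pipeline), the snippet below estimates theta, alpha, and beta power from a preprocessed EEG array using Welch's method; the band boundaries, array shape, and sampling rate are conventional assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy.signal import welch

# Conventional frequency bands (Hz); the paper's exact definitions may differ.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power(eeg: np.ndarray, fs: float) -> dict:
    """Mean power per band, averaged over channels.

    eeg: array of shape (n_channels, n_samples) of preprocessed EEG.
    fs:  sampling rate in Hz.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        # Integrate the PSD over the band, then average across channels.
        powers[name] = np.trapz(psd[:, mask], freqs[mask], axis=-1).mean()
    return powers

# Example with synthetic data (32 channels, 60 s at 250 Hz).
rng = np.random.default_rng(0)
fake_eeg = rng.standard_normal((32, 60 * 250))
print(band_power(fake_eeg, fs=250.0))
```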


2021 ◽  
pp. 027623742110594
Author(s):  
Diana Omigie ◽  
Jessica Ricci

Music offers a useful opportunity to consider the factors contributing to the experience of curiosity in the context of dynamically changing stimuli. Here, we tested the hypothesis that the perception of change in music triggers curiosity as to how the heard music will unfold. Participants were presented with unfamiliar musical excerpts and asked to provide continuous ratings of their subjective experience of curiosity and calm, and their perception of change, as the music unfolded. As hypothesized, we found that for all musical pieces, the perceptual experience of change Granger-caused feelings of curiosity but not feelings of calm. Our results suggest music is a powerful tool with which to examine the factors contributing to curiosity induction. Accordingly, we outline ways in which extensions to the approach taken here may be useful: both in elucidating our information-seeking drive more generally, and in elucidating the manifestation of this drive during music listening.
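
As a rough illustration of the Granger-causality analysis mentioned above (not the authors' own analysis), the sketch below tests whether one synthetic continuous rating series predicts another using statsmodels; the series, sampling, and lag range are assumptions made for the example.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic stand-ins for continuous ratings over one excerpt (360 samples).
rng = np.random.default_rng(1)
n = 360
change = rng.standard_normal(n).cumsum()                   # perceived change
curiosity = np.roll(change, 2) + rng.standard_normal(n)    # crudely lags change

# The function tests whether the second column Granger-causes the first.
data = pd.DataFrame({"curiosity": curiosity, "change": change})
res = grangercausalitytests(data[["curiosity", "change"]].values, maxlag=4)

# p-value of the SSR-based F test at lag 2 (results are keyed by lag).
print(res[2][0]["ssr_ftest"][1])
```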


2021 ◽  
pp. 030573562110506
Author(s):  
Clémence Nineuil ◽  
Delphine Dellacherie ◽  
Séverine Samson

The aim of this study was to obtain French affective norms for the film music stimulus set (FMSS). This data set consists of a relatively homogeneous series of musical stimuli made up of film music excerpts known to trigger strong emotions. The 97 musical excerpts were judged by 194 native French participants using a simplified normative procedure in order to obtain valence and arousal judgments. This normalization will (1) provide researchers with a standardized set of affectively rated music to be used with a French population, (2) enable the investigation of individual listeners’ differing emotional judgments, and (3) allow exploration of how cultural differences affect the ratings of musical stimuli. Our results, in line with those obtained in Finland and Spain, demonstrated the FMSS to be robust and interculturally valid within Western Europe. Age, sex, education, and musical training were not found to have any effects on emotional judgments. In conclusion, this study provides the scientific community with a standardized stimulus set of musical excerpts whose emotional valence and arousal have been validated with a sample of the French population.


2021 ◽  
Vol 39 (2) ◽  
pp. 145-159
Author(s):  
Laure-Hélène Canette ◽  
Philippe Lalitte ◽  
Barbara Tillmann ◽  
Emmanuel Bigand

Conceptual priming studies have shown that listening to musical primes triggers semantic activation. Using a free semantic evocation task, the present study further investigated (1) how rhythmic vs. textural structures affect the number of words evoked after a musical sequence, and (2) whether both features also affect the content of the semantic activation. Rhythmic sequences were composed of various percussion sounds with a strong underlying beat and metrical structure. Textural sound sequences consisted of blended timbres and sound sources evolving over time without an identifiable pulse. Participants were asked to verbalize the concepts evoked by the musical sequences. We measured the number of words and lemmas produced after listening to musical sequences of each condition, and we analyzed whether specific concepts were associated with each sequence type. Results showed that more words and lemmas were produced for textural sound sequences than for rhythmic sequences and that some concepts were specifically associated with each musical condition. Our findings suggest that listening to musical excerpts emphasizing different features influences semantic activation in different ways and to different extents. This may be instantiated via cognitive mechanisms triggered by the acoustic characteristics of the excerpts as well as by the perceived emotions.
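
As a small sketch of the word and lemma counts described above (not the authors' procedure), the snippet below tokenizes a verbal response with spaCy and returns the number of word tokens and distinct lemmas; the model name and the example response are assumptions.

```python
import spacy

# Requires a spaCy model, e.g. `python -m spacy download fr_core_news_sm`;
# a French model is assumed here since the study was run in French.
nlp = spacy.load("fr_core_news_sm")

def count_words_and_lemmas(response: str) -> tuple[int, int]:
    """Return (number of word tokens, number of distinct lemmas) for one verbal response."""
    doc = nlp(response)
    words = [tok for tok in doc if tok.is_alpha]        # keep alphabetic tokens only
    lemmas = {tok.lemma_.lower() for tok in words}      # collapse inflected forms
    return len(words), len(lemmas)

print(count_words_and_lemmas("forêt sombre pluie vent les arbres dansent"))
```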


2021 ◽  
pp. 1-14
Author(s):  
Joël Macoir ◽  
Marie-Pier Tremblay ◽  
Maximiliano A. Wilson ◽  
Robert Laforce ◽  
Carol Hudon

Background: The role of semantic knowledge in emotion recognition remains poorly understood. The semantic variant of primary progressive aphasia (svPPA) is a degenerative disorder characterized by progressive loss of semantic knowledge, while other cognitive abilities remain spared, at least in the early stages of the disease. The syndrome is therefore a reliable clinical model of semantic impairment that allows testing the propositions made in theoretical models of emotion recognition. Objective: The main goal of this study was to investigate the role of semantic memory in the recognition of basic emotions conveyed by music in individuals with svPPA. Methods: The performance of 9 individuals with svPPA was compared to that of 32 control participants in tasks designed to investigate the ability (a) to differentiate between familiar and non-familiar musical excerpts, (b) to associate semantic concepts with musical excerpts, and (c) to recognize basic emotions conveyed by music. Results: Individuals with svPPA showed preserved abilities to recognize familiar musical excerpts but impaired performance on the two other tasks. Moreover, recognition of basic emotions and association of musical excerpts with semantic concepts were significantly better for familiar than for non-familiar musical excerpts in participants with svPPA. Conclusion: The results of this study have important implications for theoretical models of emotion recognition and music processing. They suggest that impairment of semantic memory in svPPA affects the activation of both emotions and factual knowledge from music, and that this impairment is modulated by familiarity with musical tunes.


2021 ◽  
Vol 5 (11) ◽  
pp. 68
Author(s):  
Alessandro Ansani ◽  
Marco Marini ◽  
Luca Mallia ◽  
Isabella Poggi

One of the most tangible effects of music is its ability to alter our perception of time. Research on waiting times and on time estimation of musical excerpts has documented these effects. Nevertheless, contrasting results exist regarding the influence of several musical features on time perception. Regarding emotional valence and arousal, there is some evidence that positive-affect music fosters time underestimation, whereas negative-affect music leads to overestimation; results concerning arousal, by contrast, remain mixed. Furthermore, to the best of our knowledge, a systematic investigation has not yet been conducted within the audiovisual domain, wherein music might improve the interaction between the user and the audiovisual media by shaping the recipients’ time perception. Through the current between-subjects online experiment (n = 565), we sought to analyze the influence that four soundtracks (happy, relaxing, sad, scary), differing in valence and arousal, exerted on the time estimation of a short movie, as compared to a no-music condition. The results reveal that (1) the mere presence of music led to time overestimation compared with the absence of music, and (2) the soundtracks perceived as more arousing (i.e., happy and scary) led to time overestimation. The findings are discussed in terms of psychological and phenomenological models of time perception.


2021 ◽  
Vol 5 (03) ◽  
pp. E81-E90
Author(s):  
Analina Emmanouil ◽  
Elissavet Rousanoglou ◽  
Anastasia Georgaki ◽  
Konstantinos D. Boudolos

A musical accompaniment is often used in movement coordination and stability exercise modalities, although it is considered to conflict with their fundamental principle of a preferred movement pace. This study examined whether the rhythmic strength of musical excerpts used in movement coordination and exercise modalities allows the preferred spatio-temporal pattern of movement to be maintained. Voluntary and spontaneous body sway (70 s) were tested (N = 20 young women) in a non-musical (preferred) condition and in two rhythmic strength (RS) musical conditions (higher: HrRS, lower: LrRS). The center of pressure trajectory was used to derive the spatio-temporal characteristics of body sway (Kistler forceplate, 100 Hz). Statistics included paired t-tests between each musical condition and the non-musical one, as well as between musical conditions (p ≤ 0.05). Results indicated no significant difference between the musical and the non-musical conditions (p > 0.05). The HrRS condition differed significantly from LrRS only in voluntary body sway, with increased sway duration (p = 0.03), center of pressure path (p = 0.04), and velocity (p = 0.01). The findings provide evidence-based support for the rhythmic strength recommendations in movement coordination and stability exercise modalities. The HrRS-to-LrRS differences in voluntary body sway most likely indicate that low-frequency musical features, rather than tempo and pulse clarity alone, are also important.
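
As a minimal sketch of the measures and test reported above (not the authors' analysis code), the snippet below computes center-of-pressure path length and mean velocity from a forceplate trace and runs a paired t-test across participants; the example values and array shapes are assumptions.

```python
import numpy as np
from scipy.stats import ttest_rel

FS = 100.0  # forceplate sampling rate in Hz, as reported in the abstract

def cop_path_and_velocity(cop_xy: np.ndarray, fs: float = FS) -> tuple[float, float]:
    """Total center-of-pressure path length (input units) and mean velocity.

    cop_xy: array of shape (n_samples, 2) with medio-lateral and antero-posterior COP.
    """
    steps = np.linalg.norm(np.diff(cop_xy, axis=0), axis=1)  # per-sample displacement
    path = steps.sum()
    velocity = path / ((len(cop_xy) - 1) / fs)               # path over elapsed time
    return path, velocity

# Paired comparison of a sway measure between two conditions (synthetic example values).
rng = np.random.default_rng(2)
hr_rs = rng.normal(120, 10, size=20)   # e.g. COP path per participant, HrRS condition
lr_rs = rng.normal(112, 10, size=20)   # e.g. COP path per participant, LrRS condition
t, p = ttest_rel(hr_rs, lr_rs)
print(f"t = {t:.2f}, p = {p:.3f}")
```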


2021 ◽  
Vol 12 ◽  
Author(s):  
Xin Wang ◽  
Yujia Wei ◽  
Lena Heng ◽  
Stephen McAdams

Timbre is one of the psychophysical cues that has a great impact on affect perception, although it has not been the subject of much cross-cultural research. Our aim was to investigate the influence of timbre on the perception of affect conveyed by Western and Chinese classical music using a cross-cultural approach. Four listener groups (Western musicians, Western nonmusicians, Chinese musicians, and Chinese nonmusicians; 40 per group) were presented with 48 musical excerpts, which included two musical excerpts (one piece of Chinese and one piece of Western classical music) per affect quadrant of the valence-arousal space, representing angry, happy, peaceful, and sad emotions, each played on six different instruments (erhu, dizi, pipa, violin, flute, and guitar). Participants reported ratings of valence, tension arousal, energy arousal, preference, and familiarity on continuous scales ranging from 1 to 9. ANOVA reveals that participants’ cultural backgrounds have a greater impact on affect perception than their musical backgrounds, and that musicians distinguish more clearly between a perceived measure (valence) and a felt measure (preference) than nonmusicians do. We applied linear partial least squares regression to explore the relation between affect perception and acoustic features. The results show that the important acoustic features for valence and energy arousal are similar, relating mostly to spectral variation, the shape of the temporal envelope, and the dynamic range. The important acoustic features for tension arousal describe the shape of the spectral envelope, noisiness, and the shape of the temporal envelope. The similarity of perceived affect ratings across instruments is likely explained by shared acoustic features arising from the physical characteristics of the specific instruments and the performing techniques used.
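
As a rough illustration of the partial least squares step described above (not the authors' model), the sketch below fits a scikit-learn PLSRegression relating acoustic features to affect ratings; the feature count, rating dimensions, and synthetic data are assumptions made for the example.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: 48 excerpts x 10 acoustic descriptors (e.g. spectral variation,
# temporal-envelope shape, dynamic range) and 3 rating dimensions
# (valence, tension arousal, energy arousal). Shapes are assumptions.
rng = np.random.default_rng(3)
X = rng.standard_normal((48, 10))                              # acoustic features
Y = X[:, :3] @ rng.standard_normal((3, 3)) + 0.5 * rng.standard_normal((48, 3))

pls = PLSRegression(n_components=2)
pls.fit(X, Y)

# Feature weights on the latent components indicate which acoustic
# descriptors share the most variance with the affect ratings.
print(pls.x_weights_.shape)                      # (10, 2)
print(cross_val_score(pls, X, Y, cv=5).mean())   # predictive R^2 under cross-validation
```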


2021 ◽  
Vol 11 (19) ◽  
pp. 8833
Author(s):  
Alfredo Raglio ◽  
Paola Baiardi ◽  
Giuseppe Vizzari ◽  
Marcello Imbriani ◽  
Mauro Castelli ◽  
...  

This study assessed the short-term effects of conventional (i.e., human-composed) and algorithmic music on the relaxation level. It also investigated whether algorithmic compositions are perceived as music and are distinguishable from human-composed music. Three hundred twenty healthy volunteers were recruited and randomly allocated to two groups where they listened to either their preferred music or algorithmic music. Another 179 healthy subjects were allocated to four listening groups that respectively listened to: music composed and performed by a human; music composed by a human and performed by a machine; music composed by a machine and performed by a human; and music composed and performed by a machine. In the first experiment, participants underwent one of the two music listening conditions (preferred or algorithmic music) in a comfortable state. In the second one, participants were asked to evaluate, through an online questionnaire, the musical excerpts they listened to. The Visual Analogue Scale was used to evaluate their relaxation levels before and after the music listening experience. Other outcomes were evaluated through the responses to the questionnaire. The relaxation level obtained with the music created by the algorithms is comparable to the one achieved with preferred music. Statistical analysis shows that the relaxation level is not affected by the composer, the performer, or the existence of musical training. On the other hand, the perceived effect is related to the performer. Finally, music composed by an algorithm and performed by a human is not distinguishable from that composed by a human.

