musical excerpt
Recently Published Documents

TOTAL DOCUMENTS: 14 (five years: 3)
H-INDEX: 6 (five years: 1)

2021 ◽ Vol 39 (1) ◽ pp. 63-82 ◽ Author(s): Jason Stoessel, Kristal Spreadborough, Inés Antón-Méndez

Historical listening has long been a topic of interest for musicologists, yet little attention has been given to the systematic study of historical listening practices before the common practice era (c. 1700–present). In the first study of its kind, this research compared a model of medieval perceptions of “sweetness”, based on the writings of medieval music theorists, with modern-day listeners’ aesthetic responses. Responses were collected through two experiments. In an implicit associations experiment, participants were primed with a more or less consonant musical excerpt, then presented with a sweet or bitter target word, or a non-word, on which to make lexical decisions. In the explicit associations experiment, participants rated the perceived sweetness of short musical excerpts, which varied in consonance and sound quality (male voice, female voice, organ), on a three-point Likert scale. The results from these experiments were compared to predictions from a medieval perception model to investigate whether early and modern listeners have similar aesthetic responses. Results from the implicit association test were not consistent with the model’s predictions; results from the explicit associations experiment were. These findings indicate that the metaphor of sweetness may be useful for comparing the aesthetic responses of medieval and modern listeners.


2021 ◽ Author(s): Karli M. Nave, Erin Hannon, Joel S. Snyder

Synchronization of movement to music is a seemingly universal human capacity that depends on sustained beat perception. Previous research shows that the frequency of the beat can be observed in the neural activity of the listener. However, the extent to which these neural responses reflect concurrent, conscious perception of musical beat versus stimulus-driven activity is a matter of debate. We investigated whether this kind of periodic brain activity, measured using electroencephalography (EEG), reflects perception of beat, by holding the stimulus constant while manipulating the listener’s perception. Listeners with minimal music training heard a musical excerpt that strongly supported one of two beat patterns (context), followed by a rhythm consistent with either beat pattern (ambiguous phase). During the final phase, listeners indicated whether or not a superimposed drum matched the perceived beat (probe phase). Participants were more likely to indicate that the probe matched the music when that probe matched the original context, suggesting an ability to maintain the beat percept through the ambiguous phase. Likewise, we observed that the spectral amplitude during the ambiguous phase was higher at frequencies corresponding to the beat of the preceding context, and the EEG amplitude at the beat-related frequency predicted performance on the beat induction task on a single-trial basis. Together, these findings provide evidence that auditory cortical activity reflects conscious perception of musical beat and not just stimulus features or effortful attention.
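The frequency-tagging analysis described above can be sketched in a few lines: take the amplitude spectrum of the EEG recorded during the ambiguous phase and read off the bin at the beat-related frequency. The following is a minimal sketch with a simulated signal; the function names, sampling rate, and beat frequency are illustrative assumptions, not the authors' analysis code:

```python
import numpy as np

def spectral_amplitude_at(signal, fs, freq):
    """Amplitude of the FFT bin closest to `freq` (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Simulated "neural" signal: an oscillation at a 2 Hz beat rate plus noise.
fs = 250.0                      # sampling rate (Hz), typical for EEG
t = np.arange(0, 20, 1 / fs)    # one 20 s trial
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.standard_normal(t.size)

beat_amp = spectral_amplitude_at(eeg, fs, 2.0)   # beat-related frequency
other_amp = spectral_amplitude_at(eeg, fs, 3.1)  # unrelated control frequency
print(beat_amp > other_amp)  # True: the spectrum peaks at the imagined beat rate
```

In the study, the single-trial amplitude at the beat-related bin is what predicted behavioral performance; the control-frequency comparison here simply illustrates how a beat-specific spectral peak is identified.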


Electronics ◽ 2020 ◽ Vol 9 (12) ◽ pp. 2016 ◽ Author(s): Aleksandra Dorochowicz, Adam Kurowski, Bożena Kostek

The purpose of this research is two-fold: (a) to explore the relationship between listeners’ personality trait (extraversion vs. introversion) and their preferred music genres, and (b) to predict the personality trait of potential listeners from a musical excerpt by employing several classification algorithms. We assume that this may help match songs to a listener’s personality in social music networks. First, an Internet survey was built in which respondents identified themselves as extraverts or introverts according to given definitions. Their task was to listen to music excerpts belonging to several music genres and choose the ones they liked. Next, the music samples were parameterized. Two parameterization schemes were employed for that purpose: low-level MIRtoolbox parameters (MIRTbx) and parameters extracted automatically from the musical excerpts by a variational autoencoder neural network. Personality type was predicted with four baseline algorithms: support vector machine (SVM), k-nearest neighbors (k-NN), random forest (RF), and naïve Bayes (NB). The best results were obtained by the SVM classifier. These analyses led to the conclusion that musical excerpt features derived from the autoencoder were, in general, more likely to carry useful information about the listeners’ personality than the low-level parameters derived from signal analysis. We also found that training the autoencoders on sets of musical pieces containing genres other than those employed in the subjective tests did not affect the accuracy of the classifiers predicting the survey participants’ personalities.
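Of the four baselines, k-NN is the simplest to sketch. In the toy example below, the feature vectors stand in for the excerpt parameters (e.g., autoencoder embeddings); the data, dimensionality, and cluster locations are invented purely for illustration:

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    preds = []
    for x in test_X:
        dists = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(dists)[:k]]
        # Labels are 0 (introvert) / 1 (extravert), so the vote is a mean.
        preds.append(int(nearest.mean() >= 0.5))
    return np.array(preds)

# Toy stand-ins for parameterized excerpts liked by each group.
rng = np.random.default_rng(1)
introverts = rng.normal(0.0, 0.3, size=(20, 4))
extraverts = rng.normal(1.0, 0.3, size=(20, 4))
X = np.vstack([introverts, extraverts])
y = np.array([0] * 20 + [1] * 20)

test = np.array([[0.05, -0.1, 0.0, 0.1],   # near the introvert cluster
                 [1.10, 0.90, 1.0, 0.95]]) # near the extravert cluster
print(knn_predict(X, y, test))  # → [0 1]
```

The paper's stronger SVM baseline follows the same fit/predict pattern on the same feature matrices, only with a different decision rule.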


2018 ◽ Vol 35 (4) ◽ pp. 437-453 ◽ Author(s): Madeline Huberth, Takako Fujioka

Some movements that musicians make are non-essential to playing their instrument, yet express their intentions and interpretations of music’s structure. One such interpretive choice is whether to emphasize short melodic groupings or to integrate them into a longer phrase. This study aimed to characterize the head motions associated with either interpretation by having cellists play two versions of a musical excerpt: 1) with short groupings specified, and 2) with long groupings specified. Cellists were filmed by two video cameras (front and right-side perspectives), and the positions of their forehead and cheek were analyzed in the respective two-dimensional spaces. We hypothesized that the amount and frequency of movement would change according to the intended grouping. The results show that, overall, participants’ heads moved more frequently when intending short groupings than when intending long groupings. However, the extent of the change in motion varied across sections of the excerpt. Performers may invest more effort to emphasize the intended interpretation when a given local pitch structure more easily affords alternative interpretations. Our results illustrate that performers can embody melodic groupings based on their intended interpretation.
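One crude way to quantify "how frequently a head moves" from a tracked position trace is to count thresholded local maxima. The sketch below uses synthetic sinusoidal traces; the frame rate, threshold, and motion profiles are assumptions for illustration, not the study's actual analysis:

```python
import numpy as np

def count_movement_peaks(position, threshold):
    """Count local maxima in a 1-D position trace that exceed `threshold`
    (a crude proxy for how often a movement excursion occurs)."""
    p = np.asarray(position)
    interior = p[1:-1]
    peaks = (interior > p[:-2]) & (interior > p[2:]) & (interior > threshold)
    return int(peaks.sum())

fs = 30.0                      # video frame rate (assumed)
t = np.arange(0, 10, 1 / fs)   # a 10 s passage
# Faster nodding for short groupings vs. slower, phrase-level motion.
short_grouping = np.sin(2 * np.pi * 1.00 * t + 0.1)  # small phase offset
long_grouping = np.sin(2 * np.pi * 0.25 * t + 0.1)   # avoids sampling symmetry

print(count_movement_peaks(short_grouping, 0.5),
      count_movement_peaks(long_grouping, 0.5))  # → 10 3
```

A real analysis would first smooth the marker trajectories and pick a threshold relative to each performer's baseline motion.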


2017 ◽ Vol 41 (S1) ◽ pp. s780-s780 ◽ Author(s): S. Paiva, F. Rezende, J. Moreira

Several studies indicate that music has soothing effects and is effective for reducing stress and anxiety in coronary patients. The effects of stress on the cardiovascular system have also been demonstrated. However, the meanings patients assign to music used during hemodynamic procedures are unknown, as are the meanings of the experience of the procedures themselves. The aim of this research is to understand the senses and feelings music evokes in patients undergoing hemodynamic procedures, to identify and interpret the fantasies and emotions related to them, and to study the possibility of deploying in hospitals the “Musical Method for Hemodynamic Procedures” being developed by the author. The research is based on a clinical-qualitative methodology, with sampling by theoretical saturation. Semi-structured interviews were used to obtain data, which were submitted to content analysis. The subjects were patients undergoing hemodynamic procedures at hospital SEMPER, Brazil. Within the experience of listening to music while undergoing catheterisation, 100% of the patients claimed they had overcome the experience of stress and felt calm, tranquillity, peace, and happiness. Some patients described the music as a companion, as something that diverts their attention from fear, transporting them to an imaginary place, to another dimension. Episodic memory, the capacity to recognize a musical excerpt for which the spatiotemporal context of a former encounter can be recalled, was also important, with surprising results for patients who underwent catheterisation in the presence of music and, later, angioplasty without it.

Disclosure of interest: The authors have not supplied their declaration of competing interest.


2013 ◽ Vol 27 (3) ◽ pp. 142-148 ◽ Author(s): Konstantinos Trochidis, Emmanuel Bigand

The combined interactions of mode and tempo on emotional responses to music were investigated using both self-reports and electroencephalogram (EEG) activity. A musical excerpt was performed in three different modes and tempi, and participants rated the emotional content of the resulting nine stimuli while their EEG activity was recorded. Musical mode influenced the valence of emotion: the major mode was evaluated as happier and more serene than the minor and Locrian modes. In frontal EEG activity, the major mode was associated with increased alpha activation in the left hemisphere compared with the minor and Locrian modes, which, in turn, induced increased activation in the right hemisphere. Tempo modulated the arousal value of emotion: faster tempi were associated with stronger feelings of happiness and anger, an effect accompanied in the EEG by increased frontal activation in the left hemisphere, whereas slow tempi induced decreased frontal activation in the left hemisphere. Some interactive effects were found between mode and tempo: an increase in tempo modulated emotion differently depending on the mode of the piece.
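The frontal-hemisphere comparisons above are typically computed as an alpha-band asymmetry index, log(right alpha power) minus log(left alpha power); because alpha is inversely related to cortical activation, a positive index indicates relatively greater left-hemisphere activation. A minimal sketch on simulated channels (the sampling rate, band limits, and signals are illustrative assumptions):

```python
import numpy as np

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power in the alpha band from a simple periodogram."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def frontal_asymmetry(left, right, fs):
    """ln(right alpha) - ln(left alpha); positive => relatively greater
    left-hemisphere activation (alpha is inversely related to activation)."""
    return np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs))

fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
# Simulated frontal channels: more 10 Hz alpha on the right than the left.
left = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
right = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
print(frontal_asymmetry(left, right, fs) > 0)  # True
```

In practice the power estimates would come from artifact-cleaned, windowed segments rather than a raw periodogram, but the asymmetry index itself is this simple log ratio.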


2010 ◽ Vol 22 (8) ◽ pp. 1754-1769 ◽ Author(s): Jérôme Daltrozzo, Barbara Tillmann, Hervé Platel, Daniele Schön

We tested whether the emergence of familiarity with a melody may trigger or co-occur with the processing of the concept(s) conveyed by emotional responses to, or semantic associations with, the melody. With this objective, we recorded ERPs while participants were presented with highly familiar and less familiar melodies in a gating paradigm. The ERPs time-locked to the tone of the melody called the “familiarity emergence point” showed a larger fronto-central negativity for highly familiar compared with less familiar melodies between 200 and 500 msec, with a peak latency around 400 msec. This latency and the sensitivity to the degree of familiarity/conceptual information suggest that this component was an N400, a marker of conceptual processing. Our data suggest that the feeling of familiarity evoked by a musical excerpt could be accompanied by other processing mechanisms at the conceptual level. Coupling the gating paradigm with ERP analyses might become a new avenue for investigating the neurocognitive basis of implicit musical knowledge.
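The core ERP measure here is simple: average the trials time-locked to the event of interest, then take the mean amplitude in the 200–500 ms window. The sketch below simulates it with an artificial negative deflection around 400 ms; the epoch shapes, noise levels, and dip parameters are invented for illustration:

```python
import numpy as np

def erp_window_mean(epochs, fs, window=(0.2, 0.5)):
    """Grand-average time-locked epochs (trials x samples), then return the
    mean amplitude in `window` (seconds after the time-locking event)."""
    erp = epochs.mean(axis=0)                          # average across trials
    start, stop = int(window[0] * fs), int(window[1] * fs)
    return erp[start:stop].mean()

fs = 500.0
t = np.arange(0, 0.8, 1 / fs)   # 800 ms epoch from the time-locking point
# A Gaussian negative deflection around 400 ms mimics an N400.
n400 = -2.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
rng = np.random.default_rng(3)
familiar = np.array([n400 + 0.3 * rng.standard_normal(t.size) for _ in range(30)])
less_familiar = np.array([0.3 * rng.standard_normal(t.size) for _ in range(30)])

# Familiar melodies show a larger (more negative) 200-500 ms mean amplitude.
print(erp_window_mean(familiar, fs) < erp_window_mean(less_familiar, fs))  # True
```

The study's time-locking point is the per-melody "familiarity emergence point" from the gating procedure, so in real data each epoch is aligned to a different tone within its melody.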


2006 ◽ Vol 21 (1) ◽ pp. 3-9 ◽ Author(s): B. G. Wristen, M. C. Jung, A. K. G. Wismer, M. S. Hallbeck

This pilot study examined whether the use of a 7/8 keyboard contributed to the physical ease of small-handed pianists compared with the conventional piano keyboard. A secondary research question concerned the progression of physical ease in pianists making the transition from one keyboard to the other. For the purposes of this study, a hand span of 8 inches or less defined a “small-handed” pianist. The goal was to measure muscle loading and hand span during performance of a specified musical excerpt. For data collection, each of the two participants was connected to an 8-channel electromyography system via surface electrodes attached to the upper back/shoulder, parts of the hand and arm, and the masseter muscle of the jaw. Subjects were also fitted with electrogoniometers to capture how the span from the first metacarpophalangeal (MCP) joint to the fifth MCP joint changes according to performance demands, as well as wrist flexion/extension and radial/ulnar deviation. We found that the small-handed pianists preferred the smaller keyboard and were able to transition between it and the conventional keyboard. The maximal hand-span angle while playing a difficult piece was about 5° smaller radially and 10° smaller ulnarly on the 7/8 keyboard, leading to perceived ease and better performance as rated by the pianists.


2006 ◽ Vol 21 (1) ◽ pp. 10-16 ◽ Author(s): Brenda Wristen, Sharon Evans, Nicholas Stergiou

This study examined whether differences exist in the motions pianists employ when sight-reading versus performing repertoire, and whether these differences can be quantified using high-speed motion capture technology. A secondary question was whether an improvement in the efficiency of motion could be observed between two sight-reading trials of the same musical excerpt. This case study employed one subject and a six-camera digital infrared system to capture the motion of the pianist playing two trials of a repertoire piece and two trials of a sight-reading excerpt. Angular displacements and velocities were calculated for the bilateral shoulder, elbow, wrist, and index finger joints. The findings demonstrate the usefulness of high-speed motion capture for analyzing pianists’ motions during performance, showing that the subject’s motions were less efficient in sight-reading tasks than in repertoire tasks.
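Computing a joint angle from motion-capture markers reduces to the angle between two segment vectors at the joint, with angular velocity obtained by finite differences over the sampled angle series. A minimal sketch (the marker names and coordinates are hypothetical, not the study's data):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at marker b, formed by segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def angular_velocity(angles, fs):
    """Finite-difference angular velocity (deg/s) of a sampled angle series."""
    return np.gradient(np.asarray(angles), 1.0 / fs)

# Hypothetical shoulder-elbow-wrist markers forming a right angle at the elbow
# (coordinates in meters, one frame of a capture).
shoulder = np.array([0.0, 0.0, 0.0])
elbow = np.array([0.0, -0.3, 0.0])
wrist = np.array([0.25, -0.3, 0.0])
print(round(joint_angle(shoulder, elbow, wrist)))  # → 90
```

Applied frame by frame to each tracked joint, this yields the angular displacement and velocity traces that were compared between the sight-reading and repertoire trials.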

