music perception
Recently Published Documents

TOTAL DOCUMENTS: 636 (FIVE YEARS: 164)
H-INDEX: 50 (FIVE YEARS: 4)

2022, Vol 2022, pp. 1-9
Author(s): Zhuo Wang, Zhenjiang Zhao, Lujia Wei

Implementations based on embedded multisensors have become a major breakthrough in cochlear implant research, as they can effectively reduce the sense of difference that an external device imposes on users and minimize the related derived problems. This study explores the impact of cultural differences on timbre perception: it evaluates the correlation between cultural background and music perception teaching for normal-hearing listeners with embedded multisensors, assesses their ability to discriminate musical timbre, and analyses the correlation between cultural differences and timbre perception, providing a basis both for evaluating the music perception of normal-hearing people with embedded multisensors and for designing and developing evaluation tools. Adults with normal hearing from different cultures, matched for musical experience, were tested with music-evaluation software on their ability to identify different musical instruments and the number of instruments playing, and the recognition accuracy of both tests was recorded. The results show that instrument-recognition accuracy in the mother-tongue group was 15% higher than in the foreign-language group; the average recognition rates for oboe, trumpet, and xylophone were lower in the foreign-language group than in the mother-tongue group; and among the wind instruments, recognition of the oboe and trumpet was low in both groups, though comparatively high within the foreign-language group.
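The reported accuracies reduce to simple proportions over trial records. A minimal sketch of computing per-group, per-instrument recognition rates — with hypothetical trial data, not the study's — might look like:

```python
# Sketch (not the authors' code): per-group, per-instrument recognition
# accuracy from hypothetical trial records of (group, instrument, correct).
from collections import defaultdict

def recognition_rates(trials):
    """Return {(group, instrument): proportion of correct identifications}."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, instrument, ok in trials:
        total[(group, instrument)] += 1
        correct[(group, instrument)] += int(ok)
    return {key: correct[key] / total[key] for key in total}

# Hypothetical responses; real data would come from the evaluation software.
trials = [
    ("mother_tongue", "oboe", True), ("mother_tongue", "oboe", False),
    ("foreign", "oboe", False), ("foreign", "oboe", False),
    ("mother_tongue", "trumpet", True), ("foreign", "trumpet", True),
]
rates = recognition_rates(trials)
```

Group-level accuracy differences (such as the reported 15% gap) then follow by averaging these rates within each group.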


2022, Vol 15
Author(s): Johanna M. Rimmele, Pius Kern, Christina Lubinus, Klaus Frieler, David Poeppel, ...

Musical training enhances auditory-motor cortex coupling, which in turn facilitates music and speech perception. How tightly the temporal processing of music and speech is intertwined is a topic of current research. We investigated the relationship between musical sophistication (Goldsmiths Musical Sophistication index, Gold-MSI) and spontaneous speech-to-speech synchronization behavior as an indirect measure of speech auditory-motor cortex coupling strength. In a group of participants (n = 196), we tested whether the outcome of the spontaneous speech-to-speech synchronization test (SSS-test) can be inferred from self-reported musical sophistication. Participants were classified as high (HIGHs) or low (LOWs) synchronizers according to the SSS-test. HIGHs scored higher than LOWs on all Gold-MSI subscales (General Score, Active Engagement, Musical Perception, Musical Training, Singing Skills) except the Emotional Attachment scale. More specifically, compared to a previously reported German-speaking sample, HIGHs scored higher overall and LOWs lower. Compared to an estimated distribution of the English-speaking general population, our sample scored lower overall, with the scores of LOWs differing significantly from the normal distribution, falling around the 30th percentile. While HIGHs reported musical training more often than LOWs, the distribution of training instruments did not vary across groups. Importantly, even after the highly correlated Gold-MSI subscores were decorrelated, the Musical Perception and Musical Training subscales in particular allowed inference of speech-to-speech synchronization behavior. Differential effects of musical perception and training were observed, with training predicting audio-motor synchronization in both groups, but perception only in the HIGHs.
Our findings suggest that speech auditory-motor cortex coupling strength can be inferred from the training and perceptual aspects of musical sophistication, pointing to shared mechanisms involved in speech and music perception.
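The decorrelation of subscores mentioned above can be illustrated by residualization: regressing one subscale on another and keeping the residual as the decorrelated part. This is a generic sketch with simulated scores, not the authors' exact procedure:

```python
# Generic decorrelation sketch (simulated data, not the Gold-MSI dataset):
# residualize a "perception" score against a correlated "training" score.
import numpy as np

rng = np.random.default_rng(0)
n = 196  # matches the sample size reported above
training = rng.normal(size=n)                               # simulated Musical Training
perception = 0.7 * training + 0.5 * rng.normal(size=n)      # correlated Musical Perception

# Least-squares regression of perception on [intercept, training];
# the residual is, by construction, uncorrelated with training.
X = np.column_stack([np.ones(n), training])
beta, *_ = np.linalg.lstsq(X, perception, rcond=None)
residual = perception - X @ beta

r_before = np.corrcoef(training, perception)[0, 1]
r_after = np.corrcoef(training, residual)[0, 1]
```

After this step, any remaining predictive power of the residualized subscale reflects information not already carried by the other.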


Author(s): Ana Llorens

Research on intonation has mainly sought classificatory and/or expressive explanations for performers’ strategies. In the fields of music psychology and music perception, such explanations have been explored in terms of interval direction, size, or type; in the field of performance analysis, to which this article belongs, investigation of intonation has been not only scarce but also limited to short excerpts. In this context, this article explores Pau Casals’ intonational practice in his recording of Bach’s E flat major prelude for solo cello. On the basis of exact empirical measurements, it places this practice alongside the cellist’s conscious, theoretical recommendations apropos what he called “expressive” string intonation, showing that the interpretation of the latter is not straightforward. It also proposes several reference points and tuning systems that could serve as models for Casals’ practice and looks for explanations beyond simple interval classification. In this manner, it ultimately proposes a structural function for intonation, in partnership with tempo and dynamics. Similarly, it understands Casals’ intonational practice not as a choice between but as a compromise among multiple options: tuning systems (mostly equal temperament and Pythagorean tuning), reference points (the fundamental note of the chord and the immediately preceding tone), the nature of the compositional materials (harmonic and melodic), and, most importantly, structure and expression.
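The empirical measurements behind such an analysis reduce to interval arithmetic in cents. As an illustration — with assumed frequencies, not Casals' measured data — one can compute how far a performed pitch deviates from equal-tempered versus Pythagorean reference pitches:

```python
# Illustrative tuning arithmetic (assumed frequencies, not Casals' data).
import math

def cents(f, f_ref):
    """Interval from f_ref up to f in cents (1200 cents per octave)."""
    return 1200.0 * math.log2(f / f_ref)

tonic = 311.13  # E flat 4 in Hz (A4 = 440), taken here as the reference

# Major third above the tonic in two tuning systems:
equal_third = tonic * 2 ** (4 / 12)   # equal temperament: exactly 400 cents
pyth_third = tonic * 81 / 64          # Pythagorean ditone: about 408 cents

performed = 392.0  # hypothetical measured G, Hz
dev_equal = cents(performed, equal_third)  # deviation from equal temperament
dev_pyth = cents(performed, pyth_third)    # deviation from Pythagorean tuning
```

Comparing such deviations across many notes is what allows one to ask which reference points and tuning systems best model a performer's practice.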


2021, Vol 2021, pp. 1-7
Author(s): Jing Xue

To improve the classification accuracy and reliability of emotional state assessment and to support music therapy, this paper proposes an EEG analysis method based on the wavelet transform under music-perception stimulation. Using data from the multichannel standard emotion database (DEAP), the α, β, and θ rhythms are extracted from the frontal (F3 and F4), temporal (T7 and T8), and central (C3 and C4) channels with the wavelet transform. Empirical mode decomposition (EMD) is performed on each extracted EEG rhythm to obtain its intrinsic mode function (IMF) components, from which average-energy and amplitude-difference eigenvalues are then extracted; that is, each rhythm wave contributes three average-energy features and two amplitude-difference features, so that the EEG feature information is fully captured. Finally, emotional state evaluation is realized with a support vector machine classifier. The results show that the classification accuracy among no emotion, positive emotion, and negative emotion can exceed 90%. For the pairwise classification problems among the four selected emotions, the accuracy obtained with this feature-extraction method is higher than that of general feature-extraction methods, reaching about 70%. Changes in EEG α-wave power were closely correlated with the polarity and intensity of emotion; α-wave power varied significantly between “happiness and fear,” “pleasure and fear,” and “fear and sadness.” The method has good application prospects in psychological and physiological research on emotional perception as well as in practical applications.
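The rhythm-band energy features described above can be sketched in a few lines. This toy version uses Butterworth bandpass filters as a stand-in for the paper's wavelet decomposition, with conventional band limits and an assumed sampling rate, and computes only the average-energy feature on a synthetic channel:

```python
# Rough sketch of rhythm-band average-energy features. Bandpass filtering
# stands in for the paper's wavelet transform; band limits are the
# conventional ones, and the 256 Hz sampling rate is assumed.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_energies(eeg, fs=FS):
    """Average energy of each rhythm band of a 1-D EEG channel."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg)          # zero-phase bandpass
        out[name] = float(np.mean(filtered ** 2))  # average energy
    return out

# Synthetic channel dominated by a 10 Hz (alpha-band) oscillation plus noise.
t = np.arange(0, 4, 1 / FS)
channel = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
energies = band_energies(channel)
```

In the full pipeline these per-band energies (together with the IMF-based amplitude-difference features) would form the feature vector fed to the SVM classifier.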


2021, Vol 15
Author(s): Xiulin Wang, Wenya Liu, Xiaoyu Wang, Zhen Mu, Jing Xu, ...

Ongoing electroencephalography (EEG) signals are recorded as a mixture of stimulus-elicited EEG, spontaneous EEG, and noise, which poses a huge challenge to current data-analysis techniques, especially when different groups of participants are expected to share common or highly correlated brain activities alongside some individual dynamics. In this study, we propose a data-driven shared and unshared feature extraction framework based on nonnegative and coupled tensor factorization, which aims to conduct group-level analysis of EEG signals from major depressive disorder (MDD) patients and healthy controls (HC) during free music listening. Constrained tensor factorization not only preserves the multilinear structure of the data but also accounts for the common and individual components between the datasets. The proposed framework, combined with music information retrieval, correlation analysis, and hierarchical clustering, facilitated the simultaneous extraction of shared and unshared spatio-temporal-spectral feature patterns between and within the MDD and HC groups. We obtained two feature patterns shared between the MDD and HC groups, and three individual feature patterns in total from the HC and MDD groups. The results showed that the MDD and HC groups exhibited similar brain dynamics when listening to music, but MDD patients also showed changes in brain oscillatory network characteristics during music perception. These changes may provide a basis for the clinical diagnosis and treatment of MDD patients.
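The nonnegative factorization at the core of the framework above can be illustrated in its simplest 2-D form: classic multiplicative-update NMF on a matrix. The paper uses a coupled tensor variant; this toy sketch only shows the principle of recovering nonnegative components:

```python
# Toy sketch of nonnegative factorization: Lee-Seung multiplicative-update
# NMF on a matrix. The paper's method is a coupled *tensor* factorization;
# this 2-D version only illustrates the underlying principle.
import numpy as np

def nmf(V, rank, iters=1000, seed=0):
    """Factor V ≈ W @ H with W, H >= 0 via multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    eps = 1e-12  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update keeps H nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update keeps W nonnegative
    return W, H

# Exactly rank-4 nonnegative data, so a good factorization should exist.
rng = np.random.default_rng(42)
V = rng.random((20, 4)) @ rng.random((4, 30))
W, H = nmf(V, rank=4)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the coupled-tensor setting, some factor matrices are shared across the MDD and HC datasets while others are left free, which is what separates shared from individual feature patterns.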


2021, pp. 196-214
Author(s): Rolf Inge Godøy

We may typically experience music as continuous streams of sound and associated body motion, yet we may also perceive music as sequences of more discontinuous events, or as strings of chunks with multimodal sensations of sound and body motion, chunks that can be called ‘sound-motion objects’. The focus of this chapter is on how such sound-motion objects emerge at intermittent points in time called ‘musical instants’, and how musical instants are necessary for perceiving salient features in music such as timbre, pitch, texture, contour, and overall stylistic and affective features. The emergence of musical instants is understood as arising from the combined constraints of musical instruments, sound-producing body motion, and music perception, suggesting that an understanding of musical instants may have practical applications in music-making.


2021, pp. 161-165
Author(s): Daniel J. Levitin, Lindsay A. Fleming

Although much is known about the brain mechanisms underlying music perception and cognition, much work remains in understanding aesthetic responses to music: Why does music make us feel the way we do? Why does it make us feel anything? In the article under discussion, the authors suggest that the brain’s own endogenous opioids mediate musical emotion, on the basis of naltrexone-induced musical anhedonia. They conclude that endogenous opioids are critical to experiencing both positive and negative emotions in music and that music engages the same reward pathways as food, drugs, and sexual pleasure. Their findings add to the growing body of evidence for the evolutionary biological substrates of music.


Author(s): Thanh Phuong Anh Truong, Briana Applewhite, Annie Heiderscheit, Hubertus Himmerich

Obsessive-compulsive disorder (OCD) is a severe psychiatric disorder that can be associated with music-related symptoms. Music may also be used as an adjunct treatment for OCD. Following the PRISMA guidelines, we performed a systematic literature review exploring the relationship between music and OCD using three online databases: PubMed, the Web of Science, and PsycINFO. The search terms were “obsessive compulsive disorder”, “OCD”, “music”, and “music therapy”. A total of 27 articles were included (n = 650 patients/study participants) and grouped into three categories. The first category comprised case reports of musical obsessions in patients with OCD. Most of these patients were treated with selective serotonin reuptake inhibitors (SSRIs) or a combination of an SSRI and another pharmacological or non-pharmacological treatment, with variable success. Studies on the music perception of people with OCD or obsessive-compulsive personality traits formed the second category. Such people seem to be more sensitive to tense music and were found to have an increased desire for harmony in music. Three small studies on music therapy in people with OCD constituted the third category. These studies suggest that patients with OCD might benefit from music therapy that includes listening to music.


2021, Vol 16 (5), pp. 2261-2276
Author(s): Demet Aydınlı Gurler

The purpose of this research is to examine the pop music metaphors developed by high school students. The study used the phenomenology model, one of the qualitative research methods. A total of 650 students from a public school in Mamak District, Ankara Province, participated during the spring semester of the 2020/2021 academic year. The data were collected through a form containing the prompt ‘Pop music is like... because...’ and were analysed through content analysis. The 324 distinct metaphors created by the students were classified into 13 categories based on their shared characteristics; students’ gender and whether or not they play an instrument were considered when categorising. According to the findings, high school students most often described pop music as an energising/friendly genre. The study also demonstrates that metaphors can be an effective tool for eliciting students’ perceptions of pop music.

Keywords: high school students, metaphor, music, perception of pop music.

