Perception of Rhythmic Similarity is Asymmetrical, and Is Influenced by Musical Training, Expressive Performance, and Musical Context

2017
Vol 5 (3-4)
pp. 211-227
Author(s):
Daniel Cameron
Keith Potter
Geraint Wiggins
Marcus Pearce

Rhythm is an essential part of the structure, behaviour, and aesthetics of music. However, the cognitive processing that underlies the perception of musical rhythm is not fully understood. In this study, we tested whether rhythm perception is influenced by three factors: musical training, the presence of expressive performance cues in human-performed music, and the broader musical context. We compared musicians' and nonmusicians' similarity ratings for pairs of rhythms taken from Steve Reich's Clapping Music. The rhythms were heard both in isolation and in musical context, and both with and without expressive performance cues. The results revealed that rhythm perception is influenced by the experimental conditions: rhythms heard in musical context were rated as less similar than those heard in isolation; musicians' ratings were unaffected by expressive performance, but nonmusicians rated expressively performed rhythms as less similar than those with exact timing; and expressively performed rhythms were rated as less similar than rhythms with exact timing when heard in isolation but not when heard in musical context. The results also showed asymmetrical perception: the order in which two rhythms were heard influenced their perceived similarity. Analyses suggest that this asymmetry was driven by the internal coherence of rhythms, as measured by the normalized Pairwise Variability Index (nPVI). As predicted, rhythms were perceived as less similar when the first rhythm in a pair had greater coherence (lower nPVI) than the second rhythm, compared to when the rhythms were heard in the opposite order.
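
The nPVI invoked above has a standard closed form: the mean, over successive pairs of durations, of the absolute difference divided by the pair's mean, scaled by 100. Below is a minimal sketch of that calculation; the function name and example durations are illustrative and not taken from the study.

```python
def npvi(durations):
    """Normalized Pairwise Variability Index of a sequence of inter-onset intervals."""
    if len(durations) < 2:
        raise ValueError("nPVI needs at least two durations")
    pairs = zip(durations[:-1], durations[1:])
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

# A perfectly even rhythm scores 0; alternating long-short durations score higher.
print(npvi([1, 1, 1, 1]))   # 0.0
print(npvi([2, 1, 2, 1]))   # ~66.7
```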

2022
Vol 8 (1)
pp. 153-170
Author(s):  
David Temperley

This review presents a highly selective survey of connections between music and language. I begin by considering some fundamental differences between music and language and some nonspecific similarities that may arise out of more general characteristics of human cognition and communication. I then discuss an important, specific interaction between music and language: the connection between linguistic stress and musical meter. Next, I consider several possible connections that have been widely studied but remain controversial: cross-cultural correlations between linguistic and musical rhythm, effects of musical training on linguistic abilities, and connections in cognitive processing between music and linguistic syntax. Finally, I discuss some parallels regarding the use of repetition in music and language, which until now has been a little-explored topic.


2018
Author(s):  
Brett Myers
Chloe Vaughan
Uma Soman
Scott Blain
Kylie Korsnack
...  

A sizeable literature has shown that perception of prosodic elements bolsters speech comprehension across developmental stages; recent work also suggests that variance in musical aptitude predicts individual differences in prosody perception in adults. The current study investigates brain and behavioral methods of assessing prosody perception and tests the relationship with musical rhythm perception in 35 school-aged children (age range: 5;5 to 8;0 years, M = 6;7 years, SD = 10 months; 18 females). We applied stimulus reconstruction, a technique for analyzing EEG data by fitting a temporal response function that maps the neural response back to the sensory stimulus. In doing so, we obtain a measure of neural encoding of the speech envelope in passive listening to continuous narratives. We also present a behavioral prosody assessment that requires holistic judgments of filtered speech. The results from these typically developing children revealed that individual differences in stimulus reconstruction in the delta band, indexing neural synchrony to the speech envelope, are significantly related to individual differences in behavioral measurement of prosody perception. In addition, both of these measures are moderately to strongly correlated with musical rhythm perception skills. Results support a domain-general mechanism for cognitive processing of speech and music.
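
Stimulus reconstruction of the kind described above is commonly implemented as a backward temporal response function: a regularized linear mapping from time-lagged EEG channels back to the speech envelope, scored by the correlation between reconstructed and actual envelopes. The sketch below illustrates the idea under assumed variable names, lag range, and ridge parameter; it is not the study's pipeline, which would also cross-validate across data segments.

```python
import numpy as np

def reconstruction_accuracy(eeg, envelope, lags, lam=1.0):
    """eeg: (n_samples, n_channels); envelope: (n_samples,); lags: sample offsets."""
    n, c = eeg.shape
    # Design matrix of time-lagged copies of every channel (zero-padded at the edges).
    X = np.zeros((n, c * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0
        elif lag < 0:
            shifted[lag:] = 0
        X[:, i * c:(i + 1) * c] = shifted
    # Ridge solution: w = (X'X + lam*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)
    reconstructed = X @ w
    # Accuracy as the correlation between actual and reconstructed envelope.
    return np.corrcoef(reconstructed, envelope)[0, 1]
```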


2021
pp. 1-7
Author(s):  
Vasudha Hande
Shantala Hegde

BACKGROUND: A specific learning disability comes with a cluster of deficits in the neurocognitive domain. Phonological processing deficits are at the core of different types of specific learning disabilities. In addition to difficulties in phonological processing and cognitive deficits, children with specific learning disability (SLD) are also known to have deficits in more innate, non-language-based skills such as musical rhythm processing. OBJECTIVES: This paper reviews studies in the area of musical rhythm perception in children with SLD. An attempt was made to shed light on the beneficial effects of music and rhythm-based interventions and their underlying mechanisms. METHODS: A hypothesis-driven review of research in the domain of rhythm deficits and rhythm-based intervention in children with SLD was carried out. RESULTS: A summary of the reviewed literature highlights that music and language processing have shared neural underpinnings. In addition to difficulties in language processing and other neurocognitive deficits, children with SLD are known to have deficits in music and rhythm perception. This is explained against the background of deficits in auditory, perceptuo-motor, and timing skills. Attempts have been made in the field to understand the effect of music training on children's auditory processing and language development. Music and rhythm-based intervention emerges as a powerful method for targeting language processing and other neurocognitive functions, and the need for future studies in this direction is underscored. CONCLUSIONS: Suggestions for future research on music-based interventions are discussed.


2021
Author(s):  
Iris Mencke
David Ricardo Quiroga-Martinez
Diana Omigie
Franz Schwarzacher
Niels T Haumann
...  

Predictive models in the brain rely on the continuous extraction of regularities from the environment. These models are thought to be updated by novel information, as reflected in prediction error responses such as the mismatch negativity (MMN). However, although in real life individuals often face situations in which uncertainty prevails, it remains unclear whether and how predictive models emerge in high-uncertainty contexts. Recent research suggests that uncertainty affects the magnitude of MMN responses in the context of music listening. However, musical predictions are typically studied with MMN stimulation paradigms based on Western tonal music, which are characterized by relatively high predictability. Hence, we developed an MMN paradigm to investigate how the high uncertainty of atonal music modulates predictive processes as indexed by the MMN and behavior. Using MEG in a group of 20 subjects without musical training, we demonstrate that the magnetic MMN in response to pitch, intensity, timbre, and location deviants is evoked in both tonal and atonal melodies, with no significant differences between conditions. In contrast, in a separate behavioral experiment involving 39 non-musicians, participants detected pitch deviants more accurately and rated confidence higher in the tonal than in the atonal musical context. These results indicate that contextual tonal uncertainty modulates processing stages in which conscious awareness is involved, although deviants robustly elicit low-level pre-attentive responses such as the MMN. The achievement of robust MMN responses, despite high tonal uncertainty, is relevant for future studies comparing groups of listeners' MMN responses to increasingly ecological music stimuli.
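
As a point of reference for how the MMN discussed above is indexed, the response is conventionally quantified as a difference wave: the average response to deviants minus the average response to standards, summarized within the expected latency window. The sketch below shows that subtraction for epoched data; the array shapes, function names, and latency window are assumptions for illustration, not the study's MEG pipeline.

```python
import numpy as np

def mismatch_wave(standard_epochs, deviant_epochs):
    """Each input: (n_trials, n_channels, n_times). Returns the deviant-minus-standard average."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def mmn_amplitude(diff_wave, times, window=(0.10, 0.25)):
    """Mean difference-wave amplitude per channel within a latency window (in seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    return diff_wave[:, mask].mean(axis=1)
```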


2019
Vol 61 (5)
pp. 502-517
Author(s):  
Mirahmad Amirshahi
Samira Jafari Dizicheh
Rick T Wilson

Companies frequently place out-of-home advertisements in locations hoping their brand becomes associated with that environment's favorable attributes. However, prior research using U.S. subjects suggested that these associative benefits do not actually transfer onto the advertised brand. We faithfully replicate this earlier research using a non-Western sample and find that culturally based communication and cognitive processing models may explain the lack of results in the earlier study and the affirmative results in our study. Three experimental conditions are used: single exposure, multiple exposures, and high involvement. We find that a billboard's external environment does influence brand evaluations but only in the single-exposure condition. A possible explanation for why results were not evident in the multiple-exposure and high-involvement conditions may be related to the amount of message elaboration across study conditions.


2001
Vol 4 (3)
pp. 249-274
Author(s):  
Brian M. Friel
Shelia M. Kennison

We investigated 563 German–English nouns for the purposes of identifying cognates, false cognates, and non-cognates. Two techniques for identifying cognates were used and compared: (i) De Groot and Nas's (1991) similarity-rating technique and (ii) a translation-elicitation task similar to that of Kroll and Stewart (1994). The results obtained with English-speaking participants produced 112 cognates, 94 false cognates, and 357 non-cognates, and indicated that the two techniques yielded similar findings. Rated similarity of German–English translation pairs and translation accuracy were positively correlated. We also investigated whether the presence of German-specific characters and the availability of German pronunciation information influenced similarity ratings and translation accuracy. Ratings for translation pairs in which the German word contained a language-specific character were lower, and those words were translated less accurately. Participants provided with pronunciation information rated German–English translation pairs as more similar and translated German words correctly more often than participants who did not receive pronunciation information. We also report the relationships among word frequency, rated imageability, and the performance measures. The resulting database of information is intended to be a resource for researchers interested in cognitive processing in German–English bilinguals.
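
The reported positive relationship between rated similarity and translation accuracy is an item-level correlation that can be computed directly from norms of this kind. A minimal sketch with hypothetical values, not the published data:

```python
from scipy import stats

# Hypothetical per-item values: mean similarity rating (1-7 scale) and
# proportion of correct translations for a handful of word pairs.
similarity = [6.8, 2.1, 5.4, 1.3, 4.7]
accuracy = [0.95, 0.20, 0.70, 0.10, 0.60]

r, p = stats.pearsonr(similarity, accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")
```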


2008
Vol 22 (4)
pp. 175-184
Author(s):  
Wolfgang Skrandies
Nicole Reuther

We aimed to elucidate the relationship between odor, taste, color, and food stimuli; subjects were studied either with questionnaires or in electrophysiological experiments. First, a total of 144 word pairs were rated by 660 subjects, who determined whether the first stimulus (an odor or taste word) matched the second one (a color or food word). In an electrophysiological experiment, EEG was recorded from 30 electrodes in 24 healthy adults while clearly matching, or nonmatching, word pairs were presented on a monitor. Evoked potentials were computed for different stimulus classes (matching or nonmatching combinations of odor or taste and color or food words). Six components were identified and compared between conditions. For most components, field strength (global field power, GFP) was lower for nonmatching than for matching word pairs. In addition to late effects, electrical brain activity was influenced by experimental conditions as early as 100 ms latency. Most effects were observed in the time range between 100 and 250 ms. Our data show how color and food words are differently affected when paired with odor or taste words. Complex interactions between stimulus modality (taste/odor) and different target words (color/food) occurred depending on whether the pairs were seen by the subjects as appropriate or inappropriate. Topographical effects indicated that different neural populations were activated in different conditions. Most interestingly, many cognitive effects occurred quite early (on the order of 100 ms) after stimulus presentation, and our results suggest rapid cognitive processing of information on odor, taste, color, and food items. This is an important prerequisite for the preconscious and fast choice of food items in everyday behavior, and the data confirm earlier findings on rapid and preconscious semantic processing in the visual cortex.
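
Global field power, the field-strength measure compared between matching and nonmatching pairs above, is conventionally computed as the standard deviation across electrodes at each time point of the average-referenced evoked potential. A minimal sketch, with assumed array shapes:

```python
import numpy as np

def global_field_power(erp):
    """erp: (n_channels, n_times) average-referenced evoked potential.

    Returns one GFP value per time point: the spatial standard deviation
    of the potential field across all electrodes at that instant.
    """
    return erp.std(axis=0)
```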


2019
Author(s):  
Rafael Román-Caballero
Elisa Martín-Arévalo
Juan Lupiáñez

Previous literature has shown cognitive improvements related to musical training. Attention is one of the functions in which musicians exhibit improvements compared to non-musicians. However, previous studies show inconsistent results regarding certain attentional processes, suggesting that benefits associated with musical training appear only in some processes. The aim of the present study was to investigate attentional and vigilance abilities in expert musicians with a fine-grained measure: the ANTI-Vea (ANT for Interactions and Vigilance − executive and arousal components; Luna, Marino, Roca, & Lupiáñez, 2018). This task measures Posner and Petersen's three networks (alerting, orienting, and executive control) along with two different components of vigilance (executive and arousal vigilance). Using propensity-score matching, 49 adult musicians (18-35 years old) were matched on an extensive set of confounds with a control group of 49 non-musicians. Musicians showed advantages in processing speed and in the two components of vigilance. Interestingly, improvements were related to characteristics of musical experience, in particular years of practice and years of lessons. One possible explanation for these results is that musical training can specifically enhance some aspects of attention, although our correlational design does not allow us to rule out other possibilities, such as the presence of cognitive differences prior to the onset of training. Moreover, the advantages were observed in an extra-musical context, which suggests that musical training could transfer its benefits to cognitive processes loosely related to musical skills. The absence of effects in executive control, frequently reported in previous literature, is discussed in light of our extensive control of confounds.
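
Propensity-score matching of the kind used above is typically implemented by estimating each participant's probability of group membership from the confound set and then pairing across groups on that score. The sketch below uses logistic regression and greedy nearest-neighbour matching without replacement; the variable names are hypothetical and the study's exact matching procedure may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_groups(confounds, is_musician):
    """confounds: (n, k) array of covariates; is_musician: length-n boolean array.

    Returns a list of (musician_index, control_index) pairs matched on
    the estimated propensity score.
    """
    is_musician = np.asarray(is_musician, dtype=bool)
    # Propensity score: estimated probability of being a musician given the confounds.
    model = LogisticRegression(max_iter=1000).fit(confounds, is_musician)
    scores = model.predict_proba(confounds)[:, 1]

    musicians = np.where(is_musician)[0]
    controls = list(np.where(~is_musician)[0])
    pairs = []
    for m in musicians:
        # Greedy nearest neighbour on the propensity score, without replacement.
        j = min(controls, key=lambda c: abs(scores[c] - scores[m]))
        pairs.append((m, j))
        controls.remove(j)
    return pairs
```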


2010
Vol 21 (01)
pp. 028-034
Author(s):  
Kate Gfeller
Dingfeng Jiang
Jacob J. Oleson
Virginia Driscoll
John F. Knutson

Background: An extensive body of literature indicates that cochlear implants (CIs) are effective in supporting speech perception in persons with severe to profound hearing losses who do not benefit to any great extent from conventional hearing aids. Adult CI recipients tend to show significant improvement in speech perception within 3 mo following implantation as a result of mere experience. Furthermore, CI recipients continue to show modest improvement as long as 5 yr postimplantation. In contrast, data taken from single testing protocols of music perception and appraisal indicate that CIs are less than ideal in transmitting important structural features of music, such as pitch, melody, and timbre. However, there is presently little information documenting changes in music perception or appraisal over extended time as a result of mere experience. Purpose: This study examined two basic questions: (1) Do adult CI recipients show significant improvement in perceptual acuity or appraisal of specific music listening tasks when tested in two consecutive years? (2) If there are tasks for which CI recipients show significant improvement with time, are there particular demographic variables that predict those CI recipients most likely to show improvement with extended CI use? Research Design: A longitudinal cohort study. Implant recipients return annually for visits to the clinic. Study Sample: The study included 209 adult cochlear implant recipients with at least 9 mo of implant experience before their first-year measurement. Data Collection and Analysis: Outcomes were measured on the patient's annual visit in two consecutive years. Paired t-tests were used to test for significant improvement from one year to the next. Those variables demonstrating significant improvement were subjected to regression analyses to detect the demographic variables useful in predicting that improvement. Results: There were no significant differences in music perception outcomes as a function of type of device or processing strategy used. Only familiar melody recognition (FMR) and recognition of melody excerpts with lyrics (MERT-L) showed significant improvement from one year to the next. After controlling for the baseline value, hearing aid use, months of use, music listening habits after implantation, and formal musical training in elementary school were significant predictors of FMR improvement. Bilateral CI use, formal musical training in high school and beyond, and a measure of sequential cognitive processing were significant predictors of MERT-L improvement. Conclusion: As a result of mere experience, these adult CI recipients demonstrated fairly consistent music perception and appraisal on measures gathered in two consecutive years. Gains made tended to be modest and can be associated with characteristics such as use of hearing aids, listening experiences, or bilateral use (in the case of lyrics). These results have implications for counseling of CI recipients with regard to realistic expectations and strategies for enhancing music perception and enjoyment.
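
The year-to-year comparison described above reduces to a paired t-test on each recipient's scores from two consecutive annual visits. A minimal sketch with placeholder scores (not study data):

```python
import numpy as np
from scipy import stats

# Hypothetical familiar-melody-recognition scores for the same five recipients
# at two consecutive annual visits.
year1 = np.array([42.0, 55.0, 61.0, 48.0, 70.0])
year2 = np.array([47.0, 58.0, 60.0, 53.0, 74.0])

t, p = stats.ttest_rel(year2, year1)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```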

