musical structure
Recently Published Documents


TOTAL DOCUMENTS: 404 (FIVE YEARS: 117)
H-INDEX: 27 (FIVE YEARS: 2)

2021
Author(s): Dana L Boebinger, Sam V Norman-Haignere, Josh H McDermott, Nancy G Kanwisher

Converging evidence suggests that neural populations within human non-primary auditory cortex respond selectively to music. These neural populations respond strongly to a wide range of music stimuli, and weakly to other natural sounds and to synthetic control stimuli matched to music in many acoustic properties, suggesting that they are driven by high-level musical features. What are these features? Here we used fMRI to test the extent to which musical structure in pitch and time contributes to music-selective neural responses. We used voxel decomposition to derive music-selective response components in each of 15 participants individually, and then measured the response of these components to synthetic music clips in which we selectively disrupted musical structure by scrambling the note pitches, the onset times, or both. Both types of scrambling produced lower responses than when melodic or rhythmic structure was intact. This effect was much stronger in the music-selective component than in the other response components, even those with substantial spatial overlap with the music component. We further found no evidence for any cortical regions sensitive to pitch but not time structure, or vice versa. Our results suggest that melodic and rhythmic processing are intertwined within auditory cortex.


2021 · Vol 5 (2) · pp. 249
Author(s): Joko Suprayitno, Ayub Prasetiyo

Indonesia has a wealth of folk songs as diverse as the ethnic groups spread from Sabang to Merauke. This invaluable cultural heritage not only needs to be preserved but also needs strategic steps so that it can develop and become more widely known. In this context, O Ina Ni Keke, a folk song from North Sulawesi, has entered the standard repertoire of orchestras worldwide. This study examines how the musical structure that Joko Suprayitno composed for a simple folk song such as O Ina Ni Keke transforms it into an interweaving of melody, harmony, texture, and other musical elements, and ultimately into a work performed by the Shanghai Philharmonic Orchestra at Simfonia Hall Jakarta during the Fundraising Concert for Palu & Donggala Tsunami Victims. The research uses a qualitative method with descriptive exposition. The analysis applies musicological theory to the formation of musical elements in the arrangement of O Ina Ni Keke, supported by written sources such as books on music composition and, as the primary data, the score of the arrangement. The study found the use of contrapuntal melodic variation, the placement of the main melody in almost all instruments (producing distinct sound characters in each), and the use of the pedal-point technique.


2021 · Vol 27 (4)
Author(s): Jocelyn Ho

The music of Tōru Takemitsu’s Rain Tree Sketch II (1994) entails a procession of discrete gestures that are delineated by moments of repose. The performer’s grasp of the piece lies in its physicality of movement: each gesture and in-between stillness are both heard and felt as an aggregate of velocities, directions, and intentions of the body. Drawing upon Carrie Noland’s concept of “vitality affects,” I take the performative gesture, encompassing both visually accessible movement and inwardly felt kinesthesia, as a starting point for the analysis of Rain Tree Sketch II. Concepts of effort and shape taken from Rudolf Laban’s dance theory provide a framework for creating a new methodology of enhanced trace-forms to analyze gesture and kinesthesia. The analysis of gestures reveals the coexistence of opposite effort qualities and shapes in an expanded corporeal space, resonating with Takemitsu’s ideal of reconciling contradictory sounds, as noted in his collection of essays Confronting Silence (1995). Husserl’s notions of retention and protention, viewed through the lens of embodiment, and Laban’s concepts of effort states and effort recovery are brought to bear on the still moments, showing the piece to have a throbbing, embodied rhythmic structural arc. This new methodology centering on gestural-kinesthetic details provides the tools to articulate structural sensations that are often overlooked but lie at the center of musical experience.


2021 · Vol 39 (2) · pp. 118-144
Author(s): Andrew Goldman, Peter M. C. Harrison, Tyreek Jackson, Marcus T. Pearce

Electroencephalographic responses to unexpected musical events allow researchers to test listeners’ internal models of syntax. One major challenge is dissociating cognitive syntactic violations—based on the abstract identity of a particular musical structure—from unexpected acoustic features. Despite careful controls in past studies, recent work by Bigand, Delbé, Poulin-Charronnat, Leman, and Tillmann (2014) has argued that ERP findings attributed to cognitive surprisal cannot be unequivocally separated from sensory surprisal. Here we report a novel EEG paradigm that uses three auditory short-term memory models and one cognitive model to predict surprisal as indexed by several ERP components (ERAN, N5, P600, and P3a), directly comparing sensory and cognitive contributions. Our paradigm parameterizes a large set of stimuli rather than using categorically “high” and “low” surprisal conditions, addressing issues with past work in which participants may learn where to expect violations and may be biased by local context. The cognitive model (Harrison & Pearce, 2018) predicted higher P3a amplitudes, as did Leman’s (2000) model, indicating both sensory and cognitive contributions to expectation violation. However, no model predicted ERAN, N5, or P600 amplitudes, raising questions about whether traditional interpretations of these ERP components generalize to broader collections of stimuli or rather are limited to less naturalistic stimuli.


2021 · Vol 27 (4)
Author(s): Ben Duinker

This paper presents a comparative recording analysis of the seminal work for solo percussion Rebonds (Iannis Xenakis, 1989), in order to demonstrate how performances of a musical work can reveal—or even create—aspects of musical structure that score-centered analysis cannot illuminate. In doing so I engage with the following questions. What does a pluralistic, dynamic conception of structure look like for Rebonds? How do interpretive decisions recast performers as agents of musical structure? When performances diverge from the score in the omission of notes, the softening of accents, the insertion of dramatic tempo changes, or the altering of entire passages, do conventions that arise out of those performance practices become part of the structural fabric of the work? Are these conventions thus part of the Rebonds “text”?


2021 · Vol 12
Author(s): Robert R. McCrae

Some accounts of the evolution of music suggest that it emerged from emotionally expressive vocalizations and serves as a necessary counterweight to the cognitive elaboration of language. Thus, emotional expression appears to be intrinsic to the creation and perception of music, and music ought to serve as a model for affect itself. Because music exists as patterns of changes in sound over time, affect should also be seen in patterns of changing feelings. Psychologists have given relatively little attention to these patterns. Results from statistical approaches to the analysis of affect dynamics have so far been modest. Two of the most significant treatments of temporal patterns in affect (sentics and vitality affects) have remained outside mainstream emotion research. Analysis of musical structure suggests three phenomena relevant to the temporal form of emotion: affect contours, volitional affects, and affect transitions. I discuss some implications for research on affect and for exploring the evolutionary origins of music and emotions.


2021 · Vol 11 (1)
Author(s): Neta B. Maimon, Dominique Lamy, Zohar Eitan

Increasing evidence has uncovered associations between the cognition of abstract schemas and spatial perception. Here we examine such associations for Western musical syntax, tonality. Spatial metaphors are ubiquitous when describing tonality: stable, closural tones are considered to be spatially central and, as gravitational foci, spatially lower. We investigated whether listeners, musicians and nonmusicians, indeed associate tonal relationships with visuospatial dimensions, including spatial height, centrality, laterality, and size, implicitly or explicitly, and whether such mappings are consistent with established metaphors. In the explicit paradigm, participants heard a tonality-establishing prime followed by a probe tone and coupled each probe with a subjectively appropriate location (Exp.1) or size (Exp.4). The implicit paradigm used a version of the Implicit Association Test to examine associations of tonal stability with vertical position (Exp.2), lateral position (Exp.3), and size (Exp.5). Tonal stability was indeed associated with perceived physical space: the spatial distances between the locations associated with different scale-degrees significantly correlated with the tonal stability differences between these scale-degrees. However, inconsistently with musical discourse, stable tones were associated with leftward (instead of central) and higher (instead of lower) spatial positions. We speculate that these mappings are influenced by emotion, embodying the “good is up” metaphor, and by the spatial structure of music keyboards. Taken together, the results demonstrate a new type of cross-modal correspondence and a hitherto under-researched connotative function of musical structure. Importantly, the results suggest that the spatial mappings of an abstract domain may be independent of the spatial metaphors used to describe that domain.


2021 · Vol 7 · pp. e785
Author(s): Liang Xu, Zaoyi Sun, Xin Wen, Zhengxi Huang, Chi-ju Chao, ...

Melody and lyrics, reflecting two unique human cognitive abilities, are usually combined in music to convey emotions. Although psychologists and computer scientists have made considerable progress in revealing the association between musical structure and the perceived emotions of music, the features of lyrics are relatively less discussed. Using linguistic inquiry and word count (LIWC) technology to extract lyric features in 2,372 Chinese songs, this study investigated the effects of LIWC-based lyric features on the perceived arousal and valence of music. First, correlation analysis shows that, for example, the perceived arousal of music was positively correlated with the total number of lyric words and the mean number of words per sentence and was negatively correlated with the proportion of words related to the past and insight. The perceived valence of music was negatively correlated with the proportion of negative emotion words. Second, we used audio and lyric features as inputs to construct music emotion recognition (MER) models. The performance of random forest regressions reveals that, for the recognition models of perceived valence, adding lyric features can significantly improve the prediction effect of the model using audio features only; for the recognition models of perceived arousal, lyric features are almost useless. Finally, by calculating the feature importance to interpret the MER models, we observed that the audio features played a decisive role in the recognition models of both perceived arousal and perceived valence. Unlike the uselessness of the lyric features in the arousal recognition model, several lyric features, such as the usage frequency of words related to sadness, positive emotions, and tentativeness, played important roles in the valence recognition model.
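The modeling setup this abstract describes (regressing perceived valence on concatenated audio and lyric features with a random forest, then inspecting feature importances) can be sketched as follows. This is a minimal illustration, not the study's code: the data are synthetic and the feature names are hypothetical stand-ins for the paper's actual audio and LIWC-based lyric features.

```python
# Sketch of a random-forest MER model combining audio + lyric features,
# with feature-importance inspection. All data synthetic; feature names
# are illustrative, not the study's actual feature set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_songs = 500
feature_names = [
    "tempo", "rms_energy", "spectral_centroid",              # audio features
    "neg_emotion_words", "sad_words", "tentative_words",     # LIWC-style lyric features
]
X = rng.normal(size=(n_songs, len(feature_names)))
# Synthetic ground truth: valence driven by one audio and one lyric feature.
valence = 0.5 * X[:, 1] - 0.8 * X[:, 3] + rng.normal(scale=0.3, size=n_songs)

X_train, X_test, y_train, y_test = train_test_split(X, valence, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

r2 = model.score(X_test, y_test)  # R^2 on held-out songs
# Importances show which features the forest relied on most.
ranked = sorted(zip(feature_names, model.feature_importances_), key=lambda p: -p[1])
print(f"test R^2 = {r2:.2f}")
for name, imp in ranked:
    print(f"{name:>20s}: {imp:.3f}")
```

With real data, the abstract's comparison amounts to fitting this model twice, once with the audio columns only and once with audio plus lyric columns, and comparing held-out performance.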

