Decoding the Musical Message via the Structural Analogy between Verbal and Musical Language

2018 ◽  
Vol 18 (1) ◽  
pp. 151-160
Author(s):  
Rosina Caterina Filimon

Abstract The topic approached in this paper aims to identify the structural similarities between verbal and musical language and to highlight the process of decoding the musical message through the structural analogy between them. The process of musical perception and decoding involves physiological, psychological and aesthetic phenomena. Beyond the reception of sound waves, it implies the activation of complex cognitive processes whose aim is to decode the musical material at the cerebral level. Starting from the research methods of cognitive psychology, music researchers have redefined the process of musical perception in a series of papers in musical cognitive psychology. In the case of the analogy between language and music, deciphering the musical structure and its perception are due, according to researchers, to several common structural configurations. A significant model for the description of musical structure is Noam Chomsky’s generative-transformational model, which claimed that, at a deep level, all languages have the same syntactic structure, on account of innate anatomical and physiological structures that became specialized as a consequence of the universal nature of certain mechanisms of the human intellect. Studies inspired by Chomsky’s model, supported by sophisticated experimental devices, computerized analyses and algorithmic models, have identified the syntax of the musical message, as well as the rules and principles that underlie the listener’s processing of sound-related information; this syntax and these rules and principles show surprising similarities with those of verbal language. The musicologist Heinrich Schenker, 20 years ahead of Chomsky, considered that there is a parallel between the analysis of natural language and that of musical structure, and developed his own theory of the structure of music. Schenker’s structural analysis is based on the idea that tonal music is organized hierarchically, in a layering of structural levels.
Thus, spoken language and music are governed by common rules: phonology, syntax and semantics. Fred Lerdahl and Ray Jackendoff developed a musical grammar in which a set of generative rules is defined to explain the hierarchical structure of tonal music. The authors of this generative theory propose the hypothesis of a musical grammar based on two types of rules, which take into account the conscious and unconscious principles that govern the organization of musical perception. The structural analogy between verbal and musical language consists of several common elements. Among these are the hierarchical organization of both fields and their governance by the same rules – phonology, syntax, semantics – while, as a consequence of the universal nature of certain mechanisms of the human intellect, the decoding of the transmitted message is accomplished thanks to universal innate structures that are biologically inherited. Also, on the model of Chomsky’s linguistics, a musical grammar is configured, one governed by well-formedness rules and preference rules. Thus, a musical piece is not perceived as a stream of disordered sounds; it is deconstructed, processed and assimilated at the cerebral level by means of pre-existing cognitive schemas.
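The distinction between the two rule types can be sketched in code. The toy example below is my own illustration, not Lerdahl and Jackendoff's formalism: the note list, `gap_factor`, and the function name are hypothetical. It treats a grouping boundary as preferred, rather than required, after a relatively long temporal gap, in the spirit of a GTTM-style proximity preference rule:

```python
# Well-formedness rules constrain which groupings are admissible at all;
# preference rules score the admissible ones. Here a boundary is merely
# *preferred* after a note that precedes a relatively long temporal gap.

# Hypothetical melody: (pitch name, onset time in beats)
notes = [("C4", 0.0), ("D4", 0.5), ("E4", 1.0),   # long gap after E4
         ("G4", 2.5), ("F4", 3.0), ("E4", 3.5)]

def preferred_boundaries(notes, gap_factor=1.5):
    """Mark a boundary where an inter-onset interval exceeds
    gap_factor times the mean interval (a preference, not a law)."""
    iois = [notes[i + 1][1] - notes[i][1] for i in range(len(notes) - 1)]
    mean_ioi = sum(iois) / len(iois)
    return [i for i, ioi in enumerate(iois) if ioi > gap_factor * mean_ioi]

print(preferred_boundaries(notes))  # → [2]: boundary after the third note
```

The point of the sketch is that the output is a graded judgment about one admissible grouping among many, which is exactly what separates preference rules from well-formedness rules.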

2021 ◽  
Vol 11 (2) ◽  
pp. 159
Author(s):  
Almudena González ◽  
Manuel Santapau ◽  
Antoni Gamundí ◽  
Ernesto Pereda ◽  
Julián J. González

The present work aims to demonstrate the hypothesis that atonal music modifies the topological structure of electroencephalographic (EEG) connectivity networks relative to tonal music. To this end, monopolar EEG recordings were taken from musicians and non-musicians while they listened to tonal, atonal, and pink-noise sound excerpts. EEG functional connectivity (FC) among channels was assessed by a phase synchronization index, thresholded beforehand using a surrogate data test. The effects of the sounds on the topological structure of graph-based networks assembled from the EEG-FCs at different frequency bands were analyzed through graph metrics and network-based statistics (NBS). Normalized local and global efficiency measurements (NLE, NGE; normalized against random networks), which assess network information exchange, were able to discriminate the two musical styles irrespective of group and frequency band. During tonal audition, NLE and NGE values in the beta-band network approached those of a small-world network, while during atonal audition, and even more during noise, the structure moved away from small-world organization. These effects were attributed to the different timbre characteristics (the sounds’ spectral centroid and entropy) and the different musical structures. Topographic maps of node strength and NLE, together with the FC subnetworks obtained from the NBS, allowed the musical styles to be discriminated and verified the different strength, NLE, and FC of musicians compared with non-musicians.
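The normalized-efficiency comparison described above can be illustrated with standard graph tooling. The sketch below is a minimal illustration using `networkx`, with a synthetic Watts-Strogatz graph standing in for an EEG functional-connectivity network; it is not the authors' pipeline, and the graph sizes, random-graph model, and function name are my own assumptions:

```python
import random
import networkx as nx

def normalized_efficiencies(g, n_rand=20, seed=1):
    """Global and local efficiency of g, each divided by the mean value
    over size- and density-matched random graphs. Values of normalized
    global efficiency near 1 combined with normalized local efficiency
    well above 1 suggest small-world-like organization."""
    rng = random.Random(seed)
    n, m = g.number_of_nodes(), g.number_of_edges()
    ge_r, le_r = [], []
    for _ in range(n_rand):
        r = nx.gnm_random_graph(n, m, seed=rng.randint(0, 10**6))
        ge_r.append(nx.global_efficiency(r))
        le_r.append(nx.local_efficiency(r))
    nge = nx.global_efficiency(g) / (sum(ge_r) / n_rand)
    nle = nx.local_efficiency(g) / (sum(le_r) / n_rand)
    return nge, nle

# A Watts-Strogatz small-world graph as a stand-in "beta-band" network:
g = nx.watts_strogatz_graph(30, 4, 0.1, seed=0)
nge, nle = normalized_efficiencies(g)
print(f"NGE={nge:.2f}  NLE={nle:.2f}")  # NLE well above 1 here
```

With a genuine EEG-FC network, `g` would instead be built from the thresholded phase-synchronization matrix, one graph per subject, excerpt, and frequency band.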


Author(s):  
Elisabete M. de Sousa ◽  

The present essay presents the landmarks that punctuate the long dialogue between verbal language and musical language during the 19th century, by means of examples taken from the critical and theoretical writings of Hector Berlioz, Robert Schumann and Richard Wagner. In the search for the dramatic essence of music, this dialogue took different forms: the possibility of verbal language being translated into musical language, the pre-existence of a musical-poetic idea in any musical composition (eventually contributing to the appearance of program music), and, finally, the principles presiding over Wagner’s Gesamtkunstwerk. Special emphasis is given to Richard Wagner’s Parisian article De l'Ouverture (1841), as well as to its impact on Søren Kierkegaard.


Author(s):  
Botond SZOCS

The paper aims to compare musical language with verbal language, creating a new perspective on music and natural language. The three categories of linguistics – phonology, syntax and semantics – are analyzed. Bernstein highlights the analogies between the linguistic categories and music, researching the same three components of linguistics in music. The possibility of applying transformational-grammar procedures to the musical text is studied. In the second part of the paper, the author investigates the method of analysis based on harmony and counterpoint, differentiating the several structural levels conceived by the theorist H. Schenker. Schenkerian analysis is a relatively recent arrival in the field of musical analysis, and its innovation is a structural vision of musical discourse.


2014 ◽  
Vol 20 (3) ◽  
Author(s):  
Jordan B. L. Smith ◽  
Isaac Schankler ◽  
Elaine Chew

Some important theories of music cognition, such as Lerdahl and Jackendoff’s (1983) A Generative Theory of Tonal Music, posit an archetypal listener with an ideal interpretation of musical structure, and many studies of the perception of structure focus on what different listeners have in common. However, previous experiments have revealed that listeners perceive musical structure differently, depending upon their music background and their familiarity with the piece. It is not known what other factors contribute to differences among listeners’ formal analyses, but understanding these factors may be essential to advancing our understanding of music perception. We present a case study of two listeners, with the goal of identifying the differences between their analyses and explaining why these differences arose. The two listeners analyzed the structure of three performances, a set of improvised duets. The duets were performed by one of the listeners and Mimi (Multimodal Interaction for Musical Improvisation), a software system for human-machine improvisation. The ambiguous structure of the human-machine improvisations, as well as the distinct perspectives of the listeners, ensured a rich set of differences as the basis of our study. We compare the structural analyses and argue that most of the disagreements between them are attributable to the fact that the listeners paid attention to different musical features. Following the chain of causation backwards, we identify three more fundamental sources of disagreement: differences in the commitments made at the outset of a piece regarding what constitutes a fundamental structural unit, differences in the information each listener had about the performances, and differences in the analytical expectations of the listeners.


Author(s):  
Lilly Husbands

Throughout his career, New York-based experimental filmmaker and animator Jeff Scher has created animated works that are in dialogue with the diary film tradition in avant-garde cinema. Scher uses his distinctive single-frame rotoscope and collage animation technique to investigate the selective nature of memory and to celebrate the moments that constitute everyday life. Scher’s animated trilogy, You Won’t Remember This (2007), You Won’t Remember This Either (2009), and You Might Remember This (2011), depicts a series of everyday moments in the early childhoods of his two sons Buster and Oscar. The trilogy is centred on the mnemonic phenomenon that is referred to in developmental and cognitive psychology as childhood amnesia, which has presented problems for the philosophy of memory since John Locke first investigated the roles of memory and consciousness in the constitution of the identity of the self. Scher’s three portraits invite spectators to reflect on the mnemonic imbalance that is specific to this particular temporal situation—where the parent is able to remember what the child will ultimately forget—in both a distilled and heightened way. This paper investigates the ways in which the rotoscope collage technique employed by Scher in the You Won’t Remember This trilogy not only endows the works with a special capacity to emphasise the universal nature of childhood amnesia but also, conversely, resembles the phenomenological experience of remembering itself.


Music ◽  
2019 ◽  
Author(s):  
Thomas Christensen

Tonality is a ubiquitous term in musical discourse as indispensable as it is obfuscating. Typically, the term tonality (and more generally, “tonal music”) references the pitch-centric “common-practice” language of the transposable major and minor key system within which most classical music has been composed in the West from at least the mid-17th century through the early 20th century. Many theorists have highlighted certain empirical features of melody or harmony as being particularly characteristic or even essential to the tonal system (e.g., the content and structure of the diatonic scale, hierarchies of scale degrees and chord functions, or the cadence in defining or stabilizing tonal centers). At the same time, many theorists have emphasized the psychological power of tonal music for evoking strong affective responses from listeners by arousing strong expectations of tonal behavior that may be realized, delayed, or even thwarted. Clearly, then, any study of tonality needs to take into account the varying and often conflicting ways the concept is understood and used by given writers. But the concept of tonality has also been useful to musicologists for constructing evolutionary models of musical development while also describing—and contrasting—other musical styles and historical languages of music that do not always follow the norms of Western “common-practice” music. Particularly important in this regard is the chromatic language of many late-19th- and early-20th-century composers that is thought to have extended, deviated from, or even negated normative tonal syntax. 
Here Wagner’s use of chromaticism and extended modulation is usually cited as the progenitor of this process, one that is seen by many of these same observers to have led in the 20th century to the gradual dissolution of classical tonality in favor of a non-hierarchic kind of pitch organization, termed by neologisms such as “suspended tonality,” “post-tonality,” and perhaps most conventionally, “atonality.” Of course, tonality did not pass away; it continued to thrive as a common musical language through the 20th century, particularly in popular music idioms, even as it evolved into numerous dialects and hybrid forms within our globalized and digitalized musical marketplace. Yet the persistence of this myth of tonal evolution and devolution in Western histories of music suggests how high the stakes are in defining the content and perimeters of tonality. Tonality seems to be simultaneously an object and an ideal that continues to exert unparalleled influence—and not a little anxiety—to this day.


2018 ◽  
Vol 1 ◽  
pp. 205920431878776
Author(s):  
Isabel Cecilia Martínez

This article investigates the perception of constituent linear structures in tonal musical pieces, using a divided-attention paradigm combined with a click-detection technique. Two experiments were run to test whether the boundary of a linear constituent appears as a focal point in the perception of musical structure. In Experiment 1, musicians and non-musicians listened to open foreground prolongations in phrases with clicks located at different points of their constituent structures. Significant differences in response times were found that depended on click position relative to the boundary; participants were faster in detecting clicks at constituent boundaries and slower for clicks located before boundaries, with no effect of rhythmic factors. Experiment 2 used the same experimental design to explore the perception of open linear foreground prolongations, with the assumption that an effect of branching (left to right, or vice versa) could orient attention differently to the boundary region. Results were similar to those of Experiment 1. Overall, the evidence supports the idea that linear constituency is a significant feature of the perception of tonal musical structure. Dominant events become cognitive reference points to which the focus of attention is allocated, and the subordinate, dependent events associated with them orient expectations of continuation and/or closure.


2017 ◽  
Vol 41 (S1) ◽  
pp. s304-s304
Author(s):  
M.T. Sindelar ◽  
C. Meini

Since birth, infants are active and communicative partners engaged in protoconversations with caretakers. Motherese, the simplified language adults spontaneously use with infants, has a musical structure. We believe that, for developmental and evolutionary reasons, music is a preferential tool for favoring communication and promoting group identity. We carried out a musical experience with a group of autistic (ASD) children aged 5 to 7 years. Each child participated at their school with 10 typically developing classmates and their teachers. Our ASD children love music and enjoy playing and singing. With music, they overcome some communicative and social difficulties. Their bodily posture changed with music, facilitating joint attention and improvement of verbal language. When singing, children learned linguistic skills: they improved their pronunciation of vowels and understood how a question and an answer differ in melodic contour. Taking into account the unique sensory-motor profile of each ASD child, we proposed rhythmic music with high proprioceptive input for under-reactive children, and smooth, calming music for avoidant and easily overwhelmed children, in order to improve intentionality and enlarge circles of communication. A combination of semi-structured and spontaneous activity is the main component of our approach, which has both therapeutic and educational impacts. In the musical group, all the ASD children appeared to be more attentive, motivated, better performing and able to teach their acquired skills to their peers. Typical peers interacted more with the children with ASD through music. We consider this very helpful for the inclusion of ASD children in a school setting.
Disclosure of interest: The authors have not supplied their declaration of competing interest.


Author(s):  
Arianna Autieri

The central aim of this paper is to show the similarities between some stylistic features of A Portrait of the Artist as a Young Man and musical code. A second purpose is to verify how these musical features are echoed in “Sirens”. In order to describe the common properties of language and music and to define how their acoustic and rhythmic similarities are relevant in written texts, the paper first draws on the theories of the Science of Rhythm – a non-academic discipline that influenced many modernist writings and also studied the common rhythmic features of music and language. After detailing a musical method for the analysis of the linguistic texture of written prose, I focus on the first chapter of A Portrait. There, I identify the musical characteristics of the novel’s style through a comparison between some Joycean scholars’ theories on music in A Portrait and the principles of the Science of Rhythm. Finally, a few examples of the musical language of “Sirens” provide a benchmark for a comparison with A Portrait.


2004 ◽  
Vol 21 (4) ◽  
pp. 457-498 ◽  
Author(s):  
Stephen McAdams

Recent work on "musical forces" asserts that experienced listeners of tonal music not only talk about music in terms used to describe physical motion, but actually experience musical motion as if it were shaped by quantifiable analogues of physical gravity, magnetism, and inertia. This article presents a theory of melodic expectation based on that assertion, describes two computer models of aspects of that theory, and finds strong support for that theory in comparisons of the behavior of those models with the behavior of participants in several experiments. The following summary statement of the theory is explained and illustrated in the article: Experienced listeners of tonal music expect completions in which the musical forces of gravity, magnetism, and inertia control operations on alphabets in hierarchies of embellishment whose stepwise displacements of auralized traces create simple closed shapes. A "single-level" computer program models the operation of these musical forces on a single level of musical structure. Given a melodic beginning in a certain key, the model not only produces almost the same responses as experimental participants, but it also rates them in a similar way; the computer model gives higher ratings to responses that participants sing more often. In fact, the completions generated by this model match note-for-note the entire completions sung by participants in several psychological studies as often as the completions of any one of those participants matches those of the other participants. A "multilevel" computer program models the operation of these musical forces on multiple hierarchical levels. When the multilevel model is given a melodic beginning and a hierarchical description of its embellishment structure (i.e., a Schenkerian analysis of it), the model produces responses that reflect the operation of musical forces on all the levels of that hierarchical structure. 
Statistical analyses of the results of a number of experiments test hypotheses arising from the computer models' algorithm (S. Larson, 1993a) for the interaction of musical forces, as well as from a similar algorithm of F. Lerdahl (1996). Further statistical analysis contrasts the explanatory power of the theory of musical forces with that of E. Narmour's (1990, 1992) implication-realization model. The striking agreement between computer-generated responses and experimental results suggests that the theory captures some important aspects of melodic expectation. Furthermore, the fact that these data can be modeled well by the interaction of constantly acting but contextually determined musical forces supports the idea that we experience musical motions metaphorically, in terms of our experience of physical motions.
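The interaction of the three forces can be made concrete with a toy scoring function. The sketch below is my own schematic illustration, not the article's single-level algorithm: the scale-degree encoding, the weights in `expectation`, and the simplified force formulas (for example, an inverse-square-style magnetism) are all assumptions for demonstration only:

```python
# Toy sketch of "musical forces" as numeric scores over candidate
# continuations. Pitches are diatonic step indices in C major
# (0=C, 1=D, 2=E, 3=F, 4=G, 7=C'); stable tones are the tonic triad.

STABLE = {0, 2, 4, 7}

def gravity(prev, cand):
    """Gravity: descending continuations are favored."""
    return 1.0 if cand < prev else 0.0

def magnetism(cand):
    """Magnetism: attraction to the nearest stable tone, growing as
    the distance to it shrinks (an inverse-square-style toy formula)."""
    d = min(abs(cand - s) for s in STABLE)
    return 1.0 / (d * d + 1)

def inertia(prev2, prev, cand):
    """Inertia: continuing in the same melodic direction is favored."""
    return 1.0 if (cand - prev) * (prev - prev2) > 0 else 0.0

def expectation(prev2, prev, cand, w=(0.3, 0.4, 0.3)):
    """Weighted sum of the three forces (weights are illustrative)."""
    return (w[0] * gravity(prev, cand)
            + w[1] * magnetism(cand)
            + w[2] * inertia(prev2, prev, cand))

# After a descending beginning G-F (indices 4, 3), rank candidates:
candidates = [0, 1, 2, 3, 5]  # C, D, E, F, A
ranked = sorted(candidates, key=lambda c: -expectation(4, 3, c))
print(ranked)  # stable, descending continuations (C, E) rank first
```

Even this crude version reproduces the qualitative behavior the abstract describes: stable, descending, direction-continuing candidates receive the highest expectation scores, while repetitions and reversals toward unstable tones rank last.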

