Which Melodic Universals Emerge from Repeated Signaling Games? A Note on Lumaca and Baggio (2017)

2018 ◽  
Vol 24 (2) ◽  
pp. 149-153 ◽  
Author(s):  
Andrea Ravignani ◽  
Tessa Verhoef

Music is a peculiar human behavior, yet we still know little about why and how music emerged. For centuries, the study of music has been the sole prerogative of the humanities. Lately, however, music is being increasingly investigated by psychologists, neuroscientists, biologists, and computer scientists. One approach to studying the origins of music is to empirically test hypotheses about the mechanisms behind this structured behavior. Recent lab experiments show how musical rhythm and melody can emerge via the process of cultural transmission. In particular, Lumaca and Baggio (2017) tested the emergence of a sound system at the boundary between music and language. In this study, participants were given random signal-meaning pairs; when participants negotiated their meaning and played a “game of telephone” with them, these pairs became more structured and systematic. Over time, the small biases introduced in each artificial transmission step accumulated, displaying quantitative trends, including the emergence, over the course of artificial human generations, of features resembling properties of language and music. In this Note, we highlight the importance of Lumaca and Baggio's experiment, place it in the broader literature on the evolution of language and music, and suggest refinements for future experiments. We conclude that, while psychological evidence for the emergence of proto-musical features is accumulating, complementary work is needed: mathematical modeling and computer simulations should be used to test the internal consistency of experimentally generated hypotheses and to make new predictions.

Author(s):  
Anthony Brandt ◽  
L. Robert Slevc ◽  
Molly Gebrian

Language and music are readily distinguished by adults, but there is growing evidence that infants first experience speech as a special type of music. By listening to the phonemic inventory and prosodic patterns of their caregivers’ speech, infants learn how their native language is composed, later bootstrapping referential meaning onto this musical framework. Our current understanding of infants’ sensitivities to the musical features of speech, the co-development of musical and linguistic abilities, and shared developmental disorders supports the view that music and language are deeply entangled in the infant brain and that modularity emerges over the course of development. This early entanglement of music and language is crucial to the cultural transmission of language and to children’s ability to learn any of the world’s tongues.


2017 ◽  
Author(s):  
Andrea Ravignani ◽  
Tessa Verhoef

Language and music are peculiar human behaviors. We spend a large portion of our lives speaking, reading, processing speech, performing music, and listening to tunes. At the same time, we still know very little about why and how these structured behaviors emerged in our species. For the particular case of music, the mystery is even greater than for language: music is a widespread human behavior which does not seem to confer any evolutionary advantage. A possible approach to studying the origins of music is to hypothesize and empirically test the mechanisms behind this structured behavior [1, 2]. For language, potential mechanisms were first tested in silico [3, 4], showing how random signal-meaning pairs become more structured and systematic when artificial agents play a ‘game of telephone’ with them. These results were replicated with human participants evolving a language-like system [5], confirming the importance of computer simulations in testing hypotheses on the cultural evolution of human behavior. Finally, recent work applied this approach to musical rhythm [6], showing that musical structures can indeed emerge via cultural transmission. A recent paper in Artificial Life adopted this methodological approach, testing the emergence of a sound system at the boundary between music and language [7]. As in communication systems found in humans, other animals, and in silico experiments, a meaning space was paired with a signal space. The meaning space was a set of pictures showing different facial emotional expressions; the signal space was a set of 5-note patterns. Crucially, the experimenters randomly paired meanings to signals, which were in turn randomly structured. These random pairings of emotional expressions and note sequences were then used in signaling games, in which pairs of participants used note sequences to communicate emotional states.
The resulting signal-meaning pairs, with all their human-introduced variations, were then used in new signaling games with new participants. Over time, the small biases introduced in each artificial transmission step accumulated, displaying quantitative trends. In particular, the authors found the emergence, over the course of artificial human generations, of features resembling some properties of language and music.
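The transmission-chain dynamic described above, in which small per-generation copying biases accumulate into structure, can be sketched in a toy simulation. This is a minimal sketch and not the authors' actual paradigm: the 5-note alphabet, the emotion labels, the reuse bias in `transmit`, and the redundancy measure are all illustrative assumptions, chosen only to show how biases compound over artificial generations.

```python
import random

random.seed(0)

NOTES = list(range(5))                              # a hypothetical 5-note scale
MEANINGS = ["happy", "sad", "angry", "surprised"]   # hypothetical meaning space
GENERATIONS = 20

def transmit(signal, bias=0.2):
    """Copy a 5-note signal with occasional errors that reuse notes already
    present in the signal (a stand-in for a learner's regularizing bias)."""
    out = []
    for note in signal:
        if random.random() < bias:
            out.append(random.choice(signal))  # error: reuse a note from the signal
        else:
            out.append(note)                   # faithful copy
    return out

def redundancy(lexicon):
    """Mean number of repeated notes per signal (higher = more internal structure)."""
    return sum(5 - len(set(s)) for s in lexicon.values()) / len(lexicon)

# Generation 0: random signal-meaning pairings, as in the experiment's seed lexicon.
lexicon = {m: [random.choice(NOTES) for _ in range(5)] for m in MEANINGS}
history = [redundancy(lexicon)]
for _ in range(GENERATIONS):
    lexicon = {m: transmit(s) for m, s in lexicon.items()}
    history.append(redundancy(lexicon))

print(f"redundancy: gen 0 = {history[0]:.2f}, gen {GENERATIONS} = {history[-1]:.2f}")
```

Because copying errors can only reuse notes already present in a signal, each signal's note inventory can only shrink, so redundancy never decreases across generations in this sketch. Richer biases (for example, toward reusing material across meanings) would be needed to mimic the compositional, language-like regularities the experiment reports.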




Author(s):  
Julia Fritz ◽  
Gesine Dreisbach

The idea that conflicts are aversive signals has recently gained strong support from both physiological and psychological evidence. However, the time course of the aversive signal has not been subject to direct investigation. In the present study, participants had to judge the valence of neutral German words after being primed with conflict or non-conflict Stroop stimuli in three experiments with varying stimulus onset asynchrony (SOA: 200 ms, 400 ms, 800 ms) and varying prime presentation time. Conflict priming effects (i.e., increased frequencies of negative judgments after conflict as compared to non-conflict primes) were found for SOAs of 200 ms and 400 ms, but were absent (or even reversed) with an SOA of 800 ms. These results imply that the aversiveness of conflicts is evaluated automatically at short SOAs but is actively counteracted with prolonged prime presentation.


2008 ◽  
Vol 25 (4) ◽  
pp. 357-368 ◽  
Author(s):  
Aniruddh D. Patel ◽  
Meredith Wong ◽  
Jessica Foxton ◽  
Aliette Lochy ◽  
Isabelle Peretz

To what extent do music and language share neural mechanisms for processing pitch patterns? Musical tone-deafness (amusia) provides important evidence on this question. Amusics have problems with musical melody perception, yet early work suggested that they had no problems with the perception of speech intonation (Ayotte, Peretz, & Hyde, 2002). However, here we show that about 30% of amusics from independent studies (British and French-Canadian) have difficulty discriminating a statement from a question on the basis of a final pitch fall or rise. This suggests that pitch direction perception deficits in amusia (known from previous psychophysical work) can extend to speech. For British amusics, the direction deficit is related to the rate of change of the final pitch glide in statements/questions, with increased discrimination difficulty when rates are relatively slow. These findings suggest that amusia provides a useful window on the neural relations between melodic processing in language and music.


2002 ◽  
Vol 8 (4) ◽  
pp. 311-339 ◽  
Author(s):  
Steve Munroe ◽  
Angelo Cangelosi

The Baldwin effect has been explicitly used by Pinker and Bloom as an explanation of the origins of language and the evolution of a language acquisition device. This article presents new simulations of an artificial life model for the evolution of compositional languages. It specifically addresses the role of cultural variation and of learning costs in the Baldwin effect for the evolution of language. Results show that when a high cost is associated with language learning, agents gradually assimilate in their genome some explicit features (e.g., lexical properties) of the specific language they are exposed to. When the structure of the language is allowed to vary through cultural transmission, Baldwinian processes cause, instead, the assimilation of a predisposition to learn, rather than any structural properties associated with a specific language. The analysis of the mechanisms underlying such a predisposition in terms of categorical perception supports Deacon's hypothesis regarding the Baldwinian inheritance of general underlying cognitive capabilities that serve language acquisition. This is in opposition to the thesis that argues for assimilation of structural properties needed for the specification of a full-blown language acquisition device.
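The Baldwinian logic discussed in this abstract can be illustrated with a minimal Hinton-and-Nowlan-style simulation, a classic toy model of genetic assimilation and not the article's own compositional-language model: plastic alleles let agents learn a target phenotype, selection rewards faster learners, and correct fixed alleles are gradually assimilated. All parameter values below are illustrative assumptions.

```python
import math
import random

random.seed(1)

L, POP, GENS, TRIALS = 20, 1000, 30, 1000  # genome length, population, generations, learning trials

def random_genome():
    # alleles: '1' fixed-correct, '0' fixed-wrong, '?' plastic (learnable)
    return [random.choice("01??") for _ in range(L)]   # '?' drawn with probability 0.5

def fitness(genome):
    if "0" in genome:
        return 1.0                       # a wrong fixed allele can never be learned away
    unknowns = genome.count("?")
    p = 0.5 ** unknowns                  # chance of guessing all plastic alleles in one trial
    u = 1.0 - random.random()            # u in (0, 1]
    first_success = 0 if p == 1.0 else int(math.log(u) / math.log(1.0 - p))
    if first_success >= TRIALS:
        return 1.0                       # never learned the target within the trial budget
    # earlier learning leaves more trials unspent and yields higher fitness (a learning cost)
    return 1.0 + 19.0 * (TRIALS - first_success) / TRIALS

pop = [random_genome() for _ in range(POP)]
for _ in range(GENS):
    weights = [fitness(g) for g in pop]
    parents = random.choices(pop, weights=weights, k=2 * POP)
    children = []
    for mom, dad in zip(parents[::2], parents[1::2]):
        cut = random.randrange(L)        # one-point crossover, no mutation
        children.append(mom[:cut] + dad[cut:])
    pop = children

plastic = sum(g.count("?") for g in pop) / (POP * L)
print(f"fraction of plastic alleles after {GENS} generations: {plastic:.2f}")
```

In this sketch, selection first eliminates genomes carrying wrong fixed alleles and then favors genomes with fewer plastic positions, mirroring the abstract's point that a high learning cost drives assimilation of explicit features; removing the cost (e.g., making the fitness bonus independent of learning speed) leaves plasticity in place, akin to the assimilation of a mere predisposition to learn.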


2021 ◽  
pp. 1-17
Author(s):  
Avital Sternin ◽  
Lucy M. McGarry ◽  
Adrian M. Owen ◽  
Jessica A. Grahn

We investigated how familiarity alters music and language processing in the brain. We used fMRI to measure brain responses before and after participants were familiarized with novel music and language stimuli. To manipulate the presence of language and music in the stimuli, there were four conditions: (1) whole music (music and words together), (2) instrumental music (no words), (3) a cappella music (sung words, no instruments), and (4) spoken words. To manipulate participants' familiarity with the stimuli, we used novel stimuli and a familiarization paradigm designed to mimic “natural” exposure, while controlling for autobiographical memory confounds. Participants completed two fMRI scans that were separated by a stimulus training period. Behaviorally, participants learned the stimuli over the training period. However, there were no significant neural differences between the familiar and unfamiliar stimuli in either univariate or multivariate analyses. There were differences in neural activity in frontal and temporal regions based on the presence of language in the stimuli, and these differences replicated across the two scanning sessions. These results indicate that the way we engage with music is important for creating a memory of that music, and these aspects, over and above familiarity on its own, may be responsible for the robust nature of musical memory in the presence of neurodegenerative disorders such as Alzheimer's disease.


Author(s):  
Mirdza Paipare

Speech and language development is a topical problem today. This research was conducted to find out how music could help address this problem and how closely music and language processes are connected in the brain. Language and music share many similarities in terms of neurology, music therapy, communication, and psychology, and their interactions are successfully exploited in these fields. The aim of this research was to gather data and a scientific basis for music's ability to improve speech and language. The main findings indicate that this topic has been studied for decades and still has not lost its significance. Music can influence language processes and lessen disorders because it is neurologically and psychologically close to spoken language.

