musical sequence
Recently Published Documents


TOTAL DOCUMENTS: 22 (FIVE YEARS: 8)
H-INDEX: 4 (FIVE YEARS: 0)

2021, pp. 1-12
Author(s): Nahum Rangel, Salvador Godoy-Calderon, Hiram Calvo

Artificial music tutors are needed to assist a performer during practice whenever a human tutor is not available. For these artificial tutors to be intelligent and fulfill the role of a music tutor, they must be able to identify errors made by the performer while playing a musical sequence. This task is not trivial, since all musical activities are considered open-ended domains. Not only is there no unique correct way of performing a musical sequence, but the tutor's analysis must also consider the performer's development level, the difficulty of the performed musical sequence, and many other variables. This paper describes ongoing research that uses cascading connected layers of symbolic processing as the core of a module able to identify and characterize errors in human performance, overcoming the complexity of the studied open-ended domain.
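The abstract does not specify the cascading symbolic layers themselves, but the core error-identification task it describes can be illustrated with a minimal sketch: aligning a performed note sequence against a reference score by edit distance and reporting wrong, extra, and missed notes. All names and values below are hypothetical, not the authors' method.

```python
# Illustrative sketch only (not the paper's cascading symbolic
# architecture): align a performed note sequence against a reference
# with edit distance, then report insertions, deletions, and
# substitutions as candidate performance errors.

def align_errors(reference, performed):
    """Dynamic-programming alignment; returns (distance, error ops)."""
    m, n = len(reference), len(performed)
    # dp[i][j] = edit distance between reference[:i] and performed[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == performed[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # missed note
                           dp[i][j - 1] + 1,          # extra note
                           dp[i - 1][j - 1] + cost)   # wrong note
    # Backtrack to recover the error operations
    ops, i, j = [], m, n
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1]
                and reference[i - 1] == performed[j - 1]):
            i, j = i - 1, j - 1                       # correct note
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            ops.append(("wrong_note", reference[i - 1], performed[j - 1]))
            i, j = i - 1, j - 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            ops.append(("extra_note", performed[j - 1]))
            j -= 1
        else:
            ops.append(("missed_note", reference[i - 1]))
            i -= 1
    return dp[m][n], list(reversed(ops))

# MIDI pitches: reference C-D-E-F vs. a performance with one wrong note
dist, errors = align_errors([60, 62, 64, 65], [60, 62, 63, 65])
```

A real tutor would weight such errors by the performer's level and the piece's difficulty, as the abstract notes; this sketch shows only the note-level identification step.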


2021, Vol 39 (2), pp. 145-159
Author(s): Laure-Hélène Canette, Philippe Lalitte, Barbara Tillmann, Emmanuel Bigand

Conceptual priming studies have shown that listening to musical primes triggers semantic activation. The present study further investigated, with a free semantic evocation task, 1) how rhythmic vs. textural structures affect the number of words evoked after a musical sequence, and 2) whether both features also affect the content of the semantic activation. Rhythmic sequences were composed of various percussion sounds with a strong underlying beat and metrical structure. Textural sound sequences consisted of blended timbres and sound sources evolving over time without an identifiable pulse. Participants were asked to verbalize the concepts evoked by the musical sequences. We measured the number of words and lemmas produced after listening to musical sequences of each condition, and we analyzed whether specific concepts were associated with each sequence type. Results showed that more words and lemmas were produced for textural sound sequences than for rhythmic sequences and that some concepts were specifically associated with each musical condition. Our findings suggest that listening to musical excerpts emphasizing different features influences semantic activation in different ways and to different extents. This might be instantiated via cognitive mechanisms triggered by the acoustic characteristics of the excerpts as well as the perceived emotions.


2021
Author(s): Minju Kim, Adena Schachner

Listening to music activates representations of movement and social agents. Why? We ask whether high-level causal reasoning about how music was generated can lead people to link musical sounds with animate agents. To test this, we asked whether people (N=60) make flexible inferences about whether an agent caused musical sounds, integrating information from the sounds' timing and from the visual context in which they were produced. Using a 2x2 within-subject design, we found evidence of causal reasoning: in a context where producing a musical sequence would require self-propelled movement, people inferred that an agent had been present causing the sounds. When the context provided an alternative possible explanation, this 'explained away' the agent, reducing the tendency to infer an agent was present for the same acoustic stimuli. People can use causal reasoning to infer whether an agent produced musical sounds, suggesting that high-level cognition can link music with social concepts.


Author(s): Olena Afonina

The purpose of the article is to analyze the musical fabric of the ballet "Lady of the Camellias" (Kyiv). The methodology is based on general scientific methods and approaches to the set tasks: comparative analysis helped to reveal originality in the selection of music, while musicological analysis showed the expediency of shaping the musical line in accordance with the ballet's dramaturgy. The scientific novelty lies in the analysis of the music of the ballet "Lady of the Camellias". Conclusions. The ballet's drama is revealed by music organically embodied in the choreography. The musical basis of the ballet is subordinated to the plot's semantic conflict and drama. The selection of music for the ballet is varied, although certain patterns can be noticed in its metro-rhythmic and emotional organization. The salon atmosphere of the 19th century is conveyed by the music of L. Beethoven, which serves as a kind of refrain. Stylistically similar musical fragments, and even repetitions of selected works, make it possible to trace the gradual intensification of the dramatic action. The music of other composers (J. Brahms, J. Pachelbel, I. Stravinsky, G. Fauré, E. Elgar) helps to reveal emotionally tense moments.


Author(s): Gena R. Greher

In the activity outlined in this chapter, students will explore some basic functions of the Scratch programming language to learn coding through music. As an introductory exercise, using a pre-programmed musical puzzle, students will sequence existing melodic phrases as well as program two missing phrases to complete the musical sequence. They will explore the internal MIDI functions, play note blocks, and control blocks of Scratch. By engaging in this activity, students can learn critical listening skills through structural/macro analysis, such as chunking, structural dictation, form, pattern recognition, and the ability to hear the slight differences that occur in similar musical phrases.
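Scratch itself is a visual block language, so the puzzle described above cannot be reproduced verbatim in text; as a rough analog, the sketch below represents phrases as lists of (MIDI note, beats) pairs, mirroring "play note" blocks, and assembles them in a chosen order. The phrases are invented for illustration, not taken from the chapter.

```python
# A text-based analog of the Scratch melody puzzle: phrases are lists
# of (MIDI note, beats) pairs, and students order them (including a
# "missing" phrase they program themselves) to complete the melody.

# Hypothetical phrases for illustration -- not the chapter's material.
PHRASES = {
    "A": [(60, 1), (62, 1), (64, 2)],   # given phrase
    "B": [(64, 1), (62, 1), (60, 2)],   # given phrase
    "C": [(65, 1), (64, 1), (62, 2)],   # "missing" phrase to program
}

def assemble(order, phrases=PHRASES):
    """Concatenate phrases in the chosen order into one sequence."""
    sequence = []
    for name in order:
        sequence.extend(phrases[name])
    return sequence

def total_beats(sequence):
    """Sum the durations, as a quick structural check on the result."""
    return sum(beats for _, beats in sequence)

melody = assemble(["A", "C", "B"])
```

Comparing candidate orderings by ear, as the activity intends, exercises exactly the chunking and pattern-recognition skills the chapter names.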


2019, pp. 030573561987580
Author(s): Daniel Andre Ignacio, David R Gerkens, Erick Ryan Aguinaldo, Dina Arch, Ruben Barajas

The present study used the affective priming paradigm to understand interference and facilitation effects of cross-modal emotional interactions; specifically, the ability of five-chord progressions to affect the processing efficiency of visual targets. Twenty five-chord progressions were selected based on the degree to which they fulfilled participants' automatically formulated expectations of how each musical sequence should sound. The current study is an extension of previous research that revealed the influence of music-like stimuli on the identification of valence in emotional words. The potential of music-like stimuli to affect emotional processing, as measured by the efficiency of valence categorization, was assessed across two experiments. Experiment 1 presented word targets, whereas Experiment 2 presented facial expressions. The processing of words and faces primed with affectively congruent chord progressions was facilitated, whereas the processing of words primed with affectively incongruent chord progressions was not. Incongruent pairings with faces engendered interference effects, and the second experiment revealed a predictive relationship between behavioral processing speed and self-ratings of anxiety. The processing of word targets was compared to facial expressions in the presence and absence of music. The results suggest that short musical sequences influence individuals' emotional processing, which could inform intervention research into how to attenuate potential attention biases.


2019
Author(s): Cláudio Gomes, Josue Da Silva, Marco Leal, Thiago Nascimento

At every moment, innumerable emotions can shape, and raise questions about, everyday attitudes. These emotions can hinder or stimulate different goals. Whether at school, at home, or in social life, the environment drives this ongoing process. The musician is also subject to these emotions and incorporates them into his compositions for various reasons. Thus, musical composition has innumerable sources, for example, academic training, experiences, influences, and perceptions of the musical scene. This work develops the mAchine learning Algorithm Applied to emotions in melodies (3A). The 3A recognizes the musician's melodies in real time and generates an accompaniment melody. As input, the 3A uses MIDI data from a synthesizer to generate accompanying MIDI output or a sound file via the ChucK programming language. Initially, this work uses the Gregorian modes for each compositional intention. If the musician changes the mode or key, the 3A adapts to continue the musical sequence. Currently, the 3A uses artificial neural networks to predict and adapt melodies. The system started from mathematical series for the formation of melodies, which present interesting results for both mathematicians and musicians.
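The abstract does not detail the 3A's neural model or its ChucK output, but its modal ingredient can be sketched under stated assumptions: a Gregorian mode's scale is derived by rotating the diatonic step pattern, and an in-mode melody note is harmonized with the diatonic third above it. Function names here are invented for illustration.

```python
# Hedged sketch of one ingredient of a mode-aware accompanist:
# derive a Gregorian mode's scale and pick an in-mode harmony note.
# This is NOT the 3A system itself, whose core is a neural network.

# Diatonic step pattern in semitones, rotated per mode (Ionian = 0)
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]
MODES = {"ionian": 0, "dorian": 1, "phrygian": 2, "lydian": 3,
         "mixolydian": 4, "aeolian": 5, "locrian": 6}

def mode_scale(tonic, mode):
    """MIDI pitches of one octave of the given mode from `tonic`."""
    rot = MODES[mode]
    steps = MAJOR_STEPS[rot:] + MAJOR_STEPS[:rot]
    scale, pitch = [tonic], tonic
    for step in steps[:-1]:      # last step returns to the octave
        pitch += step
        scale.append(pitch)
    return scale

def accompany(note, tonic, mode):
    """Scale tone a diatonic third above `note` (wraps in the octave).

    Assumes `note` is already in the mode; a real system would also
    handle out-of-mode notes and mode/key changes, as the 3A does.
    """
    scale = mode_scale(tonic, mode)
    degree = scale.index(note)
    return scale[(degree + 2) % 7]

# D dorian: D E F G A B C; the diatonic third above D (62) is F (65)
```

Swapping the rotation index when the performer changes mode gives a crude version of the adaptation the abstract mentions; the 3A replaces this lookup with learned prediction.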


2019, Vol 48 (6), pp. 836-845
Author(s): Lisa Thorpe, Margaret Cousins, Ros Bramwell

The phoneme monitoring task is a musical priming paradigm demonstrating that both musicians and non-musicians have gained implicit understanding of prevalent harmonic structures. Little research has focused on implicit music learning in musicians and non-musicians. The current study aimed to investigate whether the phoneme monitoring task would identify any implicit memory differences between musicians and non-musicians. It focuses on both implicit knowledge of musical structure and implicit memory for specific musical sequences. Thirty-two musicians and non-musicians (19 female and 13 male) were asked to listen to a seven-chord sequence and decide as quickly as possible whether the final chord ended on the syllable /di/ or /du/. Overall, musicians were faster at the task, though non-musicians made more gains through the blocks of trials. Implicit memory for musical sequence was evident in both musicians and non-musicians. Both groups of participants reacted more quickly to sequences that they had heard more than once but showed no explicit knowledge of the familiar sequences.


2017
Author(s): Haley Elisabeth Kragness, Laurel Trainor

Previous work has shown that musicians tend to slow down as they approach phrase boundaries (phrase-final lengthening). In the present experiments, we used a paradigm from the action perception literature, the dwell time paradigm (Hard, Recchia, & Tversky, 2011), to investigate whether participants engage in phrase boundary lengthening when self-pacing through musical sequences. When participants used a key press to produce each successive chord of Bach chorales, they dwelled longer on boundary chords than non-boundary chords in both the original chorales and atonal manipulations of the chorales. When a novel musical sequence was composed that controlled for metrical and melodic contour cues to boundaries, the dwell time difference between boundaries and non-boundaries was greater in the tonal condition than in the atonal condition. Furthermore, similar results were found for a group of non-musicians, suggesting that phrase-final lengthening in musical production is not dependent on musical training and can be evoked by harmonic cues.
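The dwell-time measure described above reduces to a simple comparison: given per-chord dwell times from a self-paced run-through and the indices of phrase-boundary chords, compare the mean dwell on boundary chords against the rest. The values below are invented for illustration, not the study's data.

```python
# Sketch of the dwell-time analysis: each key press advances one
# chord; phrase-final lengthening shows up as longer dwell times on
# boundary chords than on non-boundary chords.

def boundary_lengthening(dwell_ms, boundaries):
    """Mean dwell time (ms) on boundary vs. non-boundary chords."""
    at = [t for i, t in enumerate(dwell_ms) if i in boundaries]
    off = [t for i, t in enumerate(dwell_ms) if i not in boundaries]
    return sum(at) / len(at), sum(off) / len(off)

# Eight chords; chords 3 and 7 end phrases and are dwelled on longer
dwell = [410, 395, 400, 640, 405, 398, 402, 700]
bound_mean, other_mean = boundary_lengthening(dwell, {3, 7})
```

Running the same comparison on tonal vs. atonal versions of a sequence, as the experiments do, would show whether harmonic cues alone enlarge the boundary/non-boundary gap.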

