music patterns
Recently Published Documents


TOTAL DOCUMENTS: 16 (last five years: 5)

H-INDEX: 6 (last five years: 1)

2020 ◽  
Vol 9 (2) ◽  
Author(s):  
Abdullah Ali ◽  
Jeanette Azzaretto ◽  
Katie Moltz

The rise of popular music on a global scale has prompted researchers to predict standard features for creating the next hit songs. Previous studies have explored various acoustical/audio features and their relation to top-charting songs, but have failed to include the artists' voices in determining popular music patterns. As a result, this study used trend analysis to find consistent patterns over the selected period (1980-2019) by analyzing five distinct vocal features: vowel corruption, pitch, intensity, number of pulses, and voicing. Upon analysis, a general increase in vowel corruption and a formant difference in vowels were observed. A stagnant level of intensity and extreme variation in pitch were also noted. Overall, this study was one of the first to find accurate trends by including vocal features in hit-song prediction research. Among its various implications is the introduction of a new area of study regarding singing in contemporary music.
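As an illustration of the kind of trend analysis described above (not the authors' code), the sketch below fits a linear trend to yearly averages of each vocal feature; the CSV file and column names are hypothetical placeholders.

```python
# Minimal sketch of a per-feature trend analysis over 1980-2019.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

FEATURES = ["vowel_corruption", "pitch", "intensity", "num_pulses", "voicing"]

songs = pd.read_csv("hit_songs_1980_2019.csv")      # one row per charting song
yearly = songs.groupby("year")[FEATURES].mean()     # yearly mean of each feature

for feature in FEATURES:
    # Slope of the yearly mean over time: sign and p-value indicate
    # whether the feature rose, fell, or stayed flat across the period.
    slope, intercept, r, p, se = stats.linregress(yearly.index, yearly[feature])
    print(f"{feature:17s} slope={slope:+.4f}/year  r={r:+.2f}  p={p:.3g}")
```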


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Ana Filipa Teixeira Borges ◽  
Mona Irrmischer ◽  
Thomas Brockmeier ◽  
Dirk J. A. Smit ◽  
Huibert D. Mansvelder ◽  
...  

The pleasure of music listening regulates daily behaviour and promotes rehabilitation in healthcare. Human behaviour emerges from the modulation of spontaneous, timely coordinated neuronal networks. Too little is known about the physical properties and neurophysiological underpinnings of music to understand its perception and health benefits, or to deploy personalized or standardized music therapy. Prior studies revealed how macroscopic neuronal and music patterns scale with frequency according to a 1/f^α relationship, where α is the scaling exponent. Here, we examine how this hallmark of music and neuronal dynamics relates to pleasure. Using electroencephalography, electrocardiography and behavioural data in healthy subjects, we show that music listening decreases the scaling exponent of neuronal activity and, in temporal areas, this change is linked to pleasure. Default-state scaling exponents of the most pleased individuals were higher and approached those found in music loudness fluctuations. Furthermore, the scaling in selective regions and timescales and the average heart rate were largely proportional to the scaling of the melody. The scaling behaviours of heartbeat and neuronal fluctuations were associated during music listening. Our results point to a 1/f resonance between brain and music and a temporal rescaling of neuronal activity in the temporal cortex as mechanisms underlying music appreciation.
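For readers unfamiliar with the 1/f^α measure, the sketch below estimates a scaling exponent from a signal (for example a music loudness envelope or a single EEG channel) by fitting log power against log frequency. It is a generic spectral estimator, not the analysis pipeline used in the paper.

```python
# Generic estimate of the 1/f^alpha scaling exponent of a signal
# (e.g., a loudness envelope or one EEG channel); illustrative only.
import numpy as np
from scipy.signal import welch

def scaling_exponent(x, fs, fmin=0.1, fmax=10.0):
    """Fit log-power vs. log-frequency; alpha is the negative slope."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 4096))
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return -slope                     # PSD ~ 1/f^alpha  =>  alpha = -slope

# Sanity check with synthetic data: white noise has alpha near 0.
rng = np.random.default_rng(0)
print(scaling_exponent(rng.standard_normal(60_000), fs=250.0))
```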


2019 ◽  
Author(s):  
Andre Du Bois ◽  
Rodrigo Ribeiro

HMusic is a domain-specific language based on music patterns that can be used to write music and perform live coding. The main abstractions provided by the language are patterns and tracks. Code written in HMusic looks like the patterns and multi-tracks available in music sequencers, drum machines and DAWs. HMusic provides primitives to design and combine patterns, generating new patterns. The objective of this paper is to extend the original design of HMusic to allow effects on tracks. We describe new abstractions for adding effects to individual tracks and to groups of tracks, and how they influence the combinators for track composition and multiplication. HMusic allows the live coding of music and, as it is embedded in the Haskell functional programming language, programmers can write functions to manipulate effects on the fly. The current implementation of the language is compiled into Sonic Pi [1], and we describe how the compiler's back-end was modified to support the new abstractions for effects. HMusic can be downloaded from [2].
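The sketch below is a loose Python analogue of the pattern/track idea, shown only to make the abstractions concrete; it is not HMusic's actual API (HMusic itself is embedded in Haskell and compiles to Sonic Pi).

```python
# Loose Python analogue of step patterns, tracks, and per-track effects;
# NOT HMusic's actual API, just an illustration of the abstractions.
from dataclasses import dataclass, field

Pattern = list[str]                    # a step pattern, e.g. ["x", ".", "x", "."]

def sequence(a: Pattern, b: Pattern) -> Pattern:
    """Play pattern a, then pattern b (pattern composition)."""
    return a + b

def repeat(p: Pattern, n: int) -> Pattern:
    """Repeat a pattern n times (pattern multiplication)."""
    return p * n

@dataclass
class Track:
    sample: str                        # which sound the track triggers
    pattern: Pattern
    effects: list[str] = field(default_factory=list)   # e.g. ["reverb", "echo"]

kick = Track("bd", repeat(["x", ".", ".", "."], 4))
snare = Track("sn", repeat([".", ".", "x", "."], 4), effects=["reverb"])
multi_track = [kick, snare]            # tracks layered in parallel, sequencer-style
```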


2017 ◽  
Vol 29 (1) ◽  
pp. 137-145 ◽  
Author(s):  
Tito Pradhono Tomo ◽  
Alexander Schmitz ◽  
Guillermo Enriquez ◽  
Shuji Hashimoto ◽  
...  

[Figure: Wayang robot] This paper proposes a way to protect endangered wayang puppet theater, an intangible cultural heritage of Indonesia, by turning a robot into a puppeteer's successor. We developed a seven degrees-of-freedom (DOF) manipulator to actuate the sticks attached to the wayang puppet's body and hands. The robot can imitate 8 distinct manipulations of a human puppeteer. Furthermore, we developed gamelan music pattern recognition, working towards a robot that can perform based on the gamelan music. In the offline experiment, we extracted energy (time domain), spectral rolloff, 13 Mel-frequency cepstral coefficients (MFCCs), and the harmonic ratio from 5 s long clips, every 0.025 s, with a window length of 1 s, for a total of 2576 features. Two classifiers, a 3-layer feed-forward neural network (FNN) and a multi-class support vector machine (SVM), were compared. The SVM classifier outperformed the FNN classifier with a recognition rate of 96.4% in identifying the three different gamelan music patterns.
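The sketch below illustrates the general feature-extraction-plus-SVM approach with librosa and scikit-learn; the exact window sizes, feature set and classifier settings reported in the paper may differ, and the file paths are placeholders.

```python
# Illustrative MFCC/rolloff/energy extraction followed by an SVM classifier;
# a loose sketch of the approach, not the authors' exact configuration.
import librosa
import numpy as np
from sklearn.svm import SVC

def clip_features(path, sr=22050, frame_s=1.0, hop_s=0.025):
    """One flat feature vector per fixed-length (e.g. 5 s) gamelan clip."""
    y, sr = librosa.load(path, sr=sr, duration=5.0)
    n_fft, hop = int(frame_s * sr), int(hop_s * sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=n_fft, hop_length=hop)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, n_fft=n_fft, hop_length=hop)
    rms = librosa.feature.rms(y=y, frame_length=n_fft, hop_length=hop)
    return np.concatenate([mfcc, rolloff, rms]).flatten()

# X_train/X_test: stacked clip feature vectors; y_*: gamelan pattern labels.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# print("accuracy:", clf.score(X_test, y_test))
```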


2015 ◽  
Vol 193 (4) ◽  
pp. 1159-1162 ◽  
Author(s):  
Ishai Ross ◽  
Paul Womble ◽  
Jun Ye ◽  
Susan Linsell ◽  
James E. Montie ◽  
...  

2012 ◽  
Vol 31 (2) ◽  
pp. 171-185 ◽  
Author(s):  
Moritz Lehne ◽  
Martin Rohrmeier ◽  
Donald Gollmann ◽  
Stefan Koelsch

In tonal music, patterns of tension and resolution form one of the core principles evoking emotions. The experience of musical tension and resolution depends on various features of the music (e.g., dynamics, agogics, melody, and harmony); however, the relative contribution of different features to the experience of tension is less clear. To investigate the influence of different features on subjectively experienced musical tension, we compared continuous ratings of felt musical tension for original and modified versions of two piano pieces by Mendelssohn and Mozart. Modifications included versions without dynamics and without agogics as well as versions in which the music was reduced to its melodic, harmonic, or outer voice components. Additionally, we compared tension ratings with a loudness model. Tension ratings for versions without dynamics, versions without agogics and without dynamics, and outer voice reductions correlated highly with ratings for the original versions for both pieces. Tension rating correlations between melodic or harmonic reductions and original versions, as well as between the loudness model and original ratings, differed between pieces and appeared to depend on the relative importance of the feature in the respective piece. In addition, qualitative analyses suggested that felt tension and resolution depend on phrase structure, local harmonic implications, and global syntactic structures of the pieces. Altogether, results indicate that discarding expressive features such as dynamics and agogics largely preserves tension-resolution patterns of the music, whereas the contributions of harmonic and melodic structure depend on the way in which they are employed in the composition.
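As a concrete illustration of the comparison described above, the sketch below correlates continuous tension ratings for an original and a modified version of a piece; it assumes ratings averaged across listeners on a common time grid and is not the authors' analysis code.

```python
# Correlate continuous tension ratings for an original and a modified version;
# assumes both series are averaged across listeners on a common time grid.
import numpy as np
from scipy.stats import pearsonr

def rating_correlation(original: np.ndarray, modified: np.ndarray) -> float:
    assert original.shape == modified.shape, "ratings must share a time grid"
    r, _ = pearsonr(original, modified)
    return r

# Toy example: a version that merely rescales the tension curve still
# correlates perfectly with the original.
t = np.linspace(0.0, 1.0, 500)
original_curve = np.sin(2 * np.pi * 2 * t) + t      # synthetic tension curve
rescaled_curve = 0.5 * original_curve + 0.1         # synthetic "modified" version
print(rating_correlation(original_curve, rescaled_curve))   # -> 1.0
```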


2011 ◽  
Vol 13 (53) ◽  
pp. 310 ◽  
Author(s):  
MC Bohlin ◽  
SE Widén ◽  
E Sorbring ◽  
SI Erlandsson
