musical note
Recently Published Documents


TOTAL DOCUMENTS

88
(FIVE YEARS 25)

H-INDEX

9
(FIVE YEARS 1)

2022 ◽  
Vol 9 ◽  
Author(s):  
Carl Hopkins ◽  
Saúl Maté-Cid ◽  
Robert Fulford ◽  
Gary Seiffert ◽  
Jane Ginsborg ◽  
...  

Performing music or singing together gives people great pleasure. But for those who are deaf or hard of hearing, it is not always possible to listen to other musicians while trying to sing or play an instrument, and it can be particularly difficult to perceive different musical pitches with a hearing aid or other hearing-assistance device. However, the human body can transmit musical sounds to the brain when vibrations are applied to the skin; in other words, we can feel music. Our research has identified a safe way for deaf people to perceive musical notes through the skin of their hands and feet. This approach allows people to identify a musical note as higher or lower in pitch than other notes, and it helps musicians play music together.


2022 ◽  
pp. 107754632110421
Author(s):  
Beena Limkar ◽  
Gautam Chandekar

Dynamic analysis of the Sitar, an Indian string instrument, is important for a better understanding of the instrument's behavior during performance. The Sitar has a complex geometry, and most of its components have anisotropic material properties, which makes numerical modal analysis challenging. Considering this, an experimental approach, operational modal analysis (OMA), is applied to the Sitar to extract its natural frequencies using the Stochastic Subspace Identification method. The hammer or shaker excitation required for conventional experimental modal analysis (EMA) is severely limited here, since the harder hammer tips and high-magnitude forces it calls for cannot be used on such a delicate instrument. Nevertheless, to validate the OMA results, EMA is performed with extreme care using an instrumented hammer with a soft tip and a very low excitation force; the PolyMAX algorithm is used for the EMA. Most of the correlated OMA and EMA modes are observed to lie in the audible frequency range, with a maximum absolute error of 2.14% between the corresponding frequencies. All of the modes obtained by OMA are significant, since string excitation closely simulates a real-life performing situation, and most of these modes map to musical note frequencies. Considering the detrimental effect of the excitation required for EMA, OMA is the recommended method for extracting the modal characteristics of the Sitar.
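As a rough illustration of the output-only idea behind OMA, the sketch below estimates dominant natural frequencies from a single response record by Welch PSD peak picking. This is only a simplified frequency-domain stand-in for the Stochastic Subspace Identification method the study actually uses, and the signal and parameter values are invented for the example.

```python
import numpy as np
from scipy.signal import welch, find_peaks

def estimate_natural_frequencies(response, fs, n_peaks=5):
    """Estimate dominant natural frequencies (Hz) from an
    output-only vibration record via PSD peak picking."""
    freqs, psd = welch(response, fs=fs, nperseg=4096)
    # Keep the most prominent spectral peaks as candidate modes.
    peaks, props = find_peaks(psd, prominence=psd.max() * 0.01)
    top = np.argsort(props["prominences"])[::-1][:n_peaks]
    return np.sort(freqs[peaks[top]])

# Synthetic two-mode "string" response sampled at 2 kHz.
fs = 2000
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 330 * t)
x += 0.1 * np.random.randn(t.size)          # measurement noise
print(estimate_natural_frequencies(x, fs, n_peaks=2))  # ~[110. 330.]
```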


2021 ◽  
pp. 030573562110316
Author(s):  
Elena Saiz-Clar ◽  
Miguel Ángel Serrano ◽  
José Manuel Reales

The relationship between parameters extracted from musical stimuli and emotional response has traditionally been approached using physical measures extracted from the time or frequency domain. Among time-domain measures, a musical onset is defined as the moment at which a musical instrument or human voice issues a musical note. The sequence of onsets in the performance of a specific musical score creates what is known as the onset curve (OC). How the structure of the OC influences people's emotional judgments is not known. To this end, we applied principal component analysis to a complete set of variables extracted from the OC to capture their statistical structure. We found a trifactorial structure related to the activation and valence dimensions of emotional judgment, and this structure was cross-validated using different participants and stimuli. We therefore propose the factorial scores of the OC as a reliable and relevant piece of information for predicting emotional judgments of music.
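As a concrete illustration of this pipeline, the sketch below detects onsets with librosa, summarizes each recording's onset curve with a few inter-onset statistics, and looks for a low-dimensional factor structure with PCA. The study's actual variable set and stimuli are not reproduced here; the descriptors and file names are illustrative assumptions.

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA

def onset_curve_features(path):
    """A few simple descriptors of a recording's onset curve (OC).
    These inter-onset statistics are illustrative stand-ins, not
    the study's published variable set."""
    y, sr = librosa.load(path, mono=True)
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    ioi = np.diff(onsets)                    # inter-onset intervals (s)
    return [len(onsets) / (len(y) / sr),     # onset rate (events/s)
            ioi.mean(), ioi.std(), ioi.min(), ioi.max()]

# Hypothetical stimulus corpus; stack one feature row per recording.
paths = [f"stimulus{i:02d}.wav" for i in range(1, 41)]
X = np.array([onset_curve_features(p) for p in paths])
pca = PCA(n_components=3).fit(X)             # look for 3 factors
print(pca.explained_variance_ratio_)
```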


Author(s):  
Mina Mounir ◽  
Peter Karsmakers ◽  
Toon van Waterschoot

Abstract If music is the language of the universe, musical note onsets may be the syllables of this language. Not only do note onsets define the temporal pattern of a musical piece, but their time-frequency characteristics also contain rich information about the identity of the musical instrument producing the notes. Note onset detection (NOD) is a basic component of many music information retrieval tasks and has attracted significant interest in audio signal processing research. In this paper, we propose an NOD method based on a novel feature coined Normalized Identification of Note Onset based on Spectral Sparsity (NINOS2). The NINOS2 feature can be thought of as a spectral sparsity measure, aiming to exploit the difference in spectral sparsity between the different parts of a musical note. This spectral structure is revealed when focusing on low-magnitude spectral components that are traditionally filtered out when computing note onset features. We present an extensive set of NOD simulation results covering a wide range of instruments, playing styles, and mixing options. The proposed algorithm consistently outperforms the baseline Logarithmic Spectral Flux (LSF) feature for the most difficult group of instruments, the sustained-string instruments, and also shows better performance in challenging scenarios including polyphonic music and vibrato performances.
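A minimal sketch of a spectral-sparsity onset detection function in the spirit of NINOS2 is given below: per STFT frame it keeps only the lowest-magnitude bins and measures their inverse sparsity with an l2/l4-norm ratio. This is one plausible reading of the feature, not the paper's exact definition or normalization, and the parameter values are assumptions.

```python
import numpy as np
import librosa

def ninos2_like(y, sr, gamma=0.94, n_fft=2048, hop=512):
    """Frame-wise inverse-sparsity onset feature inspired by NINOS2.
    Per frame: keep the gamma fraction of *lowest*-magnitude bins,
    then measure inverse sparsity as an l2/l4-norm ratio. The
    paper's exact formula may differ."""
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))
    S = np.sort(S, axis=0)                  # ascending per frame
    J = int(gamma * S.shape[0])             # low-magnitude bins kept
    X = S[:J, :]
    l2 = np.linalg.norm(X, axis=0)
    l4 = (X ** 4).sum(axis=0) ** 0.25 + 1e-12
    return l2 / (J ** 0.25 * l4)            # detection function

# Peaks of the detection function indicate candidate note onsets.
y, sr = librosa.load(librosa.ex("trumpet"))
odf = ninos2_like(y, sr)
peaks = librosa.util.peak_pick(odf, pre_max=3, post_max=3,
                               pre_avg=3, post_avg=5,
                               delta=0.05, wait=10)
```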


2021 ◽  
pp. 030573562110133
Author(s):  
Lucas Lörch

Chunking is defined as information compression by means of encoding meaningful units. To advance the understanding of chunking in musical memory, the present study tested characteristics of melodic sequences that might enable a parsimonious memory representation, namely the presence of a clear tonal context and of melodic cells with clear labels. Musical note symbols, which formed either triads (Experiment 1) or cadences (Experiment 2), were presented visually and sequentially to musically experienced participants for immediate serial recall. The melodic sequences were varied on the within-participant factors of list length (long vs. short list) and tonal structure (chunking-supportive vs. chunking-obstructive). Chunking-supportive sequences contained tones from a single diatonic key that formed melodic cells with a clear label, such as "C major triad". Transitional errors showed that participants grouped notes into melodic cells. Mixed logistic regression modeling revealed that recall was more accurate for chunking-supportive sequences and that this advantage was more pronounced for more experienced participants in the long list length condition of Experiment 2. The findings suggest that a clear tonal context and melodic cells with clear labels benefit chunking in melodic processing, but that the subtleties of the process are additionally influenced by the type, size, and number of melodic cells.


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Marcin Szeliga ◽  
Zsolt Kasztovszky ◽  
Grzegorz Osipowicz ◽  
Veronika Szilágyi

Abstract The inflow of Carpathian obsidian into the areas on the northern side of the Carpathians and the Sudetes is confirmed as early as the Palaeolithic. However, its greatest intensity occurred in the Early Neolithic, i.e. in the late 6th and the first half of the 5th millennium BC. During that period, the phenomenon was closely related to the development of the Danubian cultural groups in the upper Vistula river basin, especially the Linear Pottery culture (LBK) and the Malice culture. The continuous presence of products made of this raw material in these areas is documented from the classical (musical-note) phase of the LBK onward, constituting some of the clearest evidence of permanent and intense intercultural contacts with communities of the northern Carpathian Basin, a phenomenon repeatedly emphasized in the literature. One of the most numerous LBK obsidian inventories in the upper Vistula river basin was obtained at site 6 in Tominy, in southern Poland, in the non-loess zone of the northern foreground of the Sandomierz Upland. This collection, together with its non-destructive elemental analysis using Prompt Gamma Activation Analysis (PGAA) and a traceological analysis, is the subject of this article. The results supplement the published data to a significant extent, while partially verifying and updating the current state of knowledge on the basic issues related to the Early Neolithic inflow of obsidian into the areas north of the Carpathians: primarily the origin of the raw material, the scale of its processing, its distribution routes, and the range of its use by LBK communities.


Author(s):  
Ansam Nazar Younis ◽  
Fawzia Mahmood Ramo

Music is a universal language that requires no interpreter: feelings and sensibilities are shared regardless of peoples and languages. The proposed system consists of two main stages. The first stage extracts salient properties using linear discriminant analysis (LDA); it is carried out after a preprocessing step that applies various procedures to remove the musical staff lines. The second stage performs recognition using the bat algorithm, one of the metaheuristic algorithms, after modifying it to obtain better discrimination results. The proposed system is supported by a parallel implementation of the Developed Bat Algorithm (DBA), which increases execution speed significantly. The method was applied to 1250 different images of musical notes. The proposed system was implemented in MATLAB R2016a on a Windows 10 computer (Intel® Core™ i5-7200U CPU @ 2.50 GHz, 2.70 GHz).
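For orientation, the sketch below implements the canonical bat algorithm (Yang, 2010) for a generic minimization problem. The authors' Developed Bat Algorithm (DBA) and its parallel implementation add modifications that are not reproduced here, and all parameter values are assumptions.

```python
import numpy as np

def bat_algorithm(objective, dim, bounds, n_bats=20, n_iter=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9):
    """Minimal canonical bat algorithm for minimizing `objective`."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_bats, dim))   # bat positions
    v = np.zeros((n_bats, dim))              # velocities
    loud = np.ones(n_bats)                   # loudness A_i
    rate = np.zeros(n_bats)                  # pulse emission rate r_i
    fit = np.apply_along_axis(objective, 1, x)
    best = x[fit.argmin()].copy()
    for t in range(n_iter):
        freq = f_min + (f_max - f_min) * rng.random(n_bats)
        v += (x - best) * freq[:, None]
        cand = np.clip(x + v, lo, hi)
        # Local random walk around the current best solution.
        walk = rng.random(n_bats) > rate
        cand[walk] = np.clip(
            best + 0.01 * loud.mean()
            * rng.standard_normal((walk.sum(), dim)), lo, hi)
        cand_fit = np.apply_along_axis(objective, 1, cand)
        accept = (cand_fit < fit) & (rng.random(n_bats) < loud)
        x[accept], fit[accept] = cand[accept], cand_fit[accept]
        loud[accept] *= alpha                        # grow quieter
        rate[accept] = 1 - np.exp(-gamma * (t + 1))  # pulse more often
        best = x[fit.argmin()].copy()
    return best, fit.min()

# Usage: minimize the sphere function in 5 dimensions.
best, val = bat_algorithm(lambda z: np.sum(z**2), dim=5, bounds=(-5, 5))
```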


2020 ◽  
pp. 555-560
Author(s):  
Hariyanto Hariyanto ◽  
Suyanto Suyanto

Music is basically sound arranged in such a way as to produce a harmonious and rhythmic result. The basis of music is the tone, a natural sound with a distinct frequency for each voice; each constant sound represents a tone, and tones can also be combined into a chord. Humans are capable of producing a sound or imitating a tone from other human beings, but they are naturally unable to transcribe it into musical notation without musical instruments. This research addresses a model of Hum-to-Chord (H2C) conversion using a Chroma Feature (CF) to extract the characteristics and a Hidden Markov Model (HMM) to classify them. Ten-fold cross-validation shows that the best model uses 55 chroma coefficients and an HMM with a codebook of 16, giving an average accuracy of 94.83%. Examination on a 30% testing set shows that the best model reaches an accuracy of up to 97.78%. Most errors come from chords in high and low octaves, since these are unstable. Compared to a similar model called musical note classification (MNC), the proposed H2C model performs better in terms of both accuracy and complexity.
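A minimal sketch of the chroma-plus-HMM idea is shown below, assuming librosa for feature extraction and hmmlearn for modeling. It uses a Gaussian HMM as a stand-in for the paper's codebook-based (vector-quantized) HMM, and the training and query file names are hypothetical.

```python
import numpy as np
import librosa
from hmmlearn import hmm

def chroma_frames(path):
    """12-bin chroma vectors, one row per frame, for a recording."""
    y, sr = librosa.load(path, mono=True)
    return librosa.feature.chroma_stft(y=y, sr=sr).T  # (frames, 12)

# Train one HMM per chord class on labeled hum recordings, then
# classify a new hum by the highest log-likelihood.
train = {"C": ["hum_C_01.wav", "hum_C_02.wav"],   # hypothetical files
         "G": ["hum_G_01.wav", "hum_G_02.wav"]}
models = {}
for chord, paths in train.items():
    feats = [chroma_frames(p) for p in paths]
    X = np.vstack(feats)
    lengths = [f.shape[0] for f in feats]
    m = hmm.GaussianHMM(n_components=4, covariance_type="diag",
                        n_iter=50, random_state=0)
    m.fit(X, lengths)
    models[chord] = m

query = chroma_frames("hum_query.wav")
print(max(models, key=lambda c: models[c].score(query)))
```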

