music signal
Recently Published Documents


TOTAL DOCUMENTS: 102 (FIVE YEARS: 35)
H-INDEX: 6 (FIVE YEARS: 3)

Electronics, 2021, Vol 10 (24), pp. 3077
Author(s): Alexander Lerch, Peter Knees

Over the past two decades, the utilization of machine learning in audio and music signal processing has dramatically increased [...]


2021, Vol 2021, pp. 1-12
Author(s): Tianzhuo Gong, Sibing Sun

The digitization, analysis, and processing of music signals are at the core of digital music technology. Music signal processing is generally preceded by a preprocessing stage, which usually includes antialiasing filtering, digitization, preemphasis, windowing, and framing. Songs distributed on the Internet in the common WAV and MP3 formats have already been digitized and do not need that step. Preprocessing affects the effectiveness and reliability of feature parameter extraction from music signals, and because a music signal is a kind of audio signal, speech processing techniques also apply to it. In the study of adaptive wave equation inversion, traditional full-wave equation inversion uses the minimum mean square error between real and simulated data as the objective function, and the gradient direction is determined by cross-correlating the back-propagated residual wavefield with the second time derivative of the forward-simulated wavefield. When the initial model is far from the true model, cycle skipping inevitably appears. This paper instead uses adaptive wave equation inversion, which adopts a penalty-function approach and introduces a Wiener filter to establish a dual objective function for the phase difference that arises in the inversion. The paper derives the formulas for the adjoint source, the gradient, and the iteration step length, and uses the conjugate gradient method to iteratively reduce the phase difference. On a test function group and a library of recorded music signals, extensive simulation experiments and comparative music signal recognition experiments on the extracted features verify the time-frequency analysis performance of the wave equation inversion and its improvement over the standard decomposition algorithm. Features extracted by wave equation inversion achieve a higher recognition rate than features extracted with the standard decomposition algorithm, confirming that wave equation inversion has better decomposition ability.
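The preprocessing chain listed above (pre-emphasis, framing, windowing) is standard; as a rough illustration only, and not code from the paper, a NumPy sketch of these three steps might look like the following (frame length, hop size, and pre-emphasis coefficient are illustrative values):

import numpy as np

def preprocess(signal, frame_len=1024, hop_len=512, alpha=0.97):
    # assumes len(signal) >= frame_len
    # Pre-emphasis: y[n] = x[n] - alpha * x[n-1] boosts high frequencies
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Framing: split into overlapping frames of frame_len samples
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop_len)
    frames = np.stack([emphasized[i * hop_len : i * hop_len + frame_len]
                       for i in range(n_frames)])
    # Windowing: apply a Hamming window to each frame to reduce spectral leakage
    return frames * np.hamming(frame_len)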


2021, Vol 2021, pp. 1-11
Author(s): Qiang Li

This paper proposes a new music signal recognition model built on a partial differential equation (PDE) music signal smoothing model. Experimental results show that the proposed model combines the advantages of both underlying models: it removes noise while enhancing the music signal. The paper also studies music signal recognition based on a nonlinear diffusion model: by distinguishing the flat regions of the music signal from its boundary regions, a new diffusion coefficient equation is obtained that combines the two behaviours, and the corresponding PDE is discretized with the finite difference method and solved numerically. Applying PDEs to music signal processing is a relatively new topic; because a PDE can model the music signal accurately, it resolves many otherwise complicated problems in music signal processing. The group shift Fourier transform (GSFT) is then used to convert the PDE into a system of linear homogeneous differential equations, a series expansion yields the solution of that system, and the inverse GSFT recovers the time-dependent solution of the probability density function of the noise frequency-modulated interference signal. The paper applies the mathematical method of stochastic differentiation to this key problem and studies the use of stochastic differentiation theory in radar interference signal processing and music signal processing. Finally, stochastic differentiation is applied to the filtering of music signals: based on the inherent self-similarity of the music signal and the completeness and stability of the empirical mode decomposition (EMD) algorithm, a new EMD-based music signal filtering algorithm using stochastic differentiation is proposed. The improved anisotropic diffusion method preserves and enhances boundaries while smoothing the music signal, and filtering results on real music signals show that the algorithm is effective.
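As a generic illustration of the boundary-preserving smoothing described here, the sketch below implements a Perona-Malik-type nonlinear diffusion discretized with explicit finite differences; it is not the paper's specific diffusion coefficient or GSFT machinery, and the parameter values are arbitrary:

import numpy as np

def nonlinear_diffusion(signal, n_iter=50, kappa=0.1, dt=0.2):
    # Solves u_t = div(c(|grad u|) * grad u); c is small near boundaries (large
    # gradients), so edges are preserved while flat regions are smoothed.
    u = signal.astype(float).copy()
    for _ in range(n_iter):
        grad_fwd = np.append(u[1:] - u[:-1], 0.0)          # forward difference
        grad_bwd = np.append(0.0, u[1:] - u[:-1])          # backward difference
        c_fwd = 1.0 / (1.0 + (grad_fwd / kappa) ** 2)      # Perona-Malik coefficient
        c_bwd = 1.0 / (1.0 + (grad_bwd / kappa) ** 2)
        u += dt * (c_fwd * grad_fwd - c_bwd * grad_bwd)    # explicit update step
    return u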


2021, Vol 2021, pp. 1-12
Author(s): Wei Jiang, Dong Sun

The digitization and analysis of music signals are at the core of digital music technology. This paper studies music signal feature recognition based on a mathematical equation inversion method, with the aim of designing a method that helps music learners with study and composition. The paper first studies the modeling, analysis, and processing of the music signal: combining the four elements of musical sound, it analyzes and extracts the characteristic parameters of notes and establishes mathematical models of the single-note signal and the score signal. A single-note recognition algorithm extracts the Mel-frequency cepstral coefficients of the signal and improves the dynamic time warping (DTW) algorithm to recognize single notes. Building on the single-note algorithm, a note segmentation method based on the energy-entropy ratio splits the score into sequences of single notes, realizing score recognition. The paper then studies a music synthesis algorithm and performs simulations. Through comparative experiments, the benchmark model demonstrates the positive contribution of pitch features to recognition and explores how many harmonics should be attended to when recognizing different instruments. The attention-network-based classification model draws on the properties of human auditory attention to improve the recognition scores of the leading instruments and the overall recognition accuracy across all instruments. The two-stage classification model is divided into a first-stage classifier and a second-stage classifier; the second stage consists of three residual networks trained separately to identify strings, winds, and percussion. This method achieves the highest recognition score and overall accuracy.
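The note-matching step relies on dynamic time warping; a minimal, generic DTW sketch (not the paper's improved variant, and with placeholder feature sequences) is:

import numpy as np

def dtw_distance(test, ref):
    # test, ref: feature sequences of shape (T, D), e.g. per-frame MFCC vectors
    n, m = len(test), len(ref)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(test[i - 1] - ref[j - 1])    # local distance
            cost[i, j] = d + min(cost[i - 1, j],            # insertion
                                 cost[i, j - 1],            # deletion
                                 cost[i - 1, j - 1])        # match
    return cost[n, m]

A test note would then be labelled with the reference template that yields the smallest DTW distance.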


2021, pp. 50-72
Author(s): Victor Lazzarini

This chapter introduces and explores some basic aspects of audio and music signal processing. It first looks at analogue signals, developing the concepts of frequency, phase, and amplitude in some detail, supported by mathematics. Simple manipulations of signals are discussed, and their effects on sound waveforms are shown. The key concept of discrete signals and the discretisation process involved in sampling are then introduced. The chapter concludes with the definition of digital audio.
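As a simple companion to the chapter's core quantities, the following sketch (with arbitrary example values, not taken from the chapter) samples the analogue sinusoid x(t) = A sin(2*pi*f*t + phi) at a rate fs to obtain a discrete signal:

import numpy as np

A, f, phi = 0.5, 440.0, np.pi / 4    # amplitude, frequency (Hz), phase (radians)
fs = 48000                           # sample rate (samples per second)
n = np.arange(fs)                    # one second of sample indices
x = A * np.sin(2 * np.pi * f * n / fs + phi)   # discrete signal x[n]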


2021
Author(s): V. N. Aditya Datta Chivukula, Sri Keshava Reddy Adupala

Machine learning techniques have become a vital part of ongoing research across technical areas, and in recent years machine learning has found many impressive practical applications. This paper asks whether we should always rely on deep learning techniques, or whether simple statistical machine learning algorithms can outperform simple deep learning algorithms when the application is well understood and the data is processed in a way that raises the algorithm's performance by a notable amount. The paper argues that data pre-processing is more important than the choice of algorithm. It discusses functions involving trigonometric, logarithmic, and exponential terms, as well as purely trigonometric functions, and finally presents a regression analysis on music signals.
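As an illustration of the kind of comparison described (the data and feature set below are invented for the example, not taken from the paper), a plain least-squares regression on trigonometric, logarithmic, and exponential features can be written as:

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.1, 10.0, 500)
y = 2.0 * np.sin(t) + 0.5 * np.log(t) + rng.normal(0.0, 0.1, t.size)   # toy target signal

# design matrix of engineered features
X = np.column_stack([np.sin(t), np.cos(t), np.log(t), np.exp(-t), np.ones_like(t)])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares fit
y_hat = X @ coeffs
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))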


Signals, 2021, Vol 2 (3), pp. 508-526
Author(s): Ryoto Ishizuka, Ryo Nishikimi, Kazuyoshi Yoshii

This paper describes an automatic drum transcription (ADT) method that directly estimates a tatum-level drum score from a music signal in contrast to most conventional ADT methods that estimate the frame-level onset probabilities of drums. To estimate a tatum-level score, we propose a deep transcription model that consists of a frame-level encoder for extracting the latent features from a music signal and a tatum-level decoder for estimating a drum score from the latent features pooled at the tatum level. To capture the global repetitive structure of drum scores, which is difficult to learn with a recurrent neural network (RNN), we introduce a self-attention mechanism with tatum-synchronous positional encoding into the decoder. To mitigate the difficulty of training the self-attention-based model from an insufficient amount of paired data and to improve the musical naturalness of the estimated scores, we propose a regularized training method that uses a global structure-aware masked language (score) model with a self-attention mechanism pretrained from an extensive collection of drum scores. The experimental results showed that the proposed regularized model outperformed the conventional RNN-based model in terms of the tatum-level error rate and the frame-level F-measure, even when only a limited amount of paired data was available so that the non-regularized model underperformed the RNN-based model.
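A hedged sketch of the tatum-level pooling idea described in the abstract, aggregating frame-level encoder features within each tatum interval before decoding; the pooling operator, tensor shapes, and function name are assumptions, not the authors' implementation:

import numpy as np

def pool_to_tatums(frame_features, tatum_frames):
    # frame_features: (T_frames, D) latent features from the frame-level encoder
    # tatum_frames: strictly increasing frame indices of tatum boundaries
    pooled = []
    for start, end in zip(tatum_frames[:-1], tatum_frames[1:]):
        pooled.append(frame_features[start:end].max(axis=0))   # pool within the tatum
    return np.stack(pooled)                                     # (T_tatums, D)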


Complexity, 2021, Vol 2021, pp. 1-11
Author(s): Hong Kai

Because music feature recognition is made difficult by the complex and varied music theory knowledge involved, we designed a music feature recognition system based on Internet of Things (IoT) technology. The physical sensing layer of the system places sound sensors at different locations to collect the original music signals and uses a digital signal processor to analyse and process them. The network transmission layer transmits the processed music signals to the music signal database in the system's application layer. The music feature analysis module of the application layer uses a dynamic time warping algorithm to obtain the maximum similarity between the test template and the reference templates, realizing feature recognition of the music signal and determining the music style and music emotion corresponding to the recognized features. The experimental results show that the system operates stably, captures high-quality music signals, and correctly identifies music style features and emotion features. The results of this study can meet the needs of computer-assisted composition and of music researchers analysing large amounts of music data, and can be further transferred to deep learning for music, human-computer interactive music creation, application-oriented music creation, and other fields.

