Singer Identification
Recently Published Documents

TOTAL DOCUMENTS: 55 (five years: 19)
H-INDEX: 10 (five years: 1)

Songs are compositions that combine the singing voice with the sounds of different instruments, and different human emotions can be evoked by playing an appropriate song. An autocorrelation algorithm is used here for singer identification. In the first experiment, three singers with three Hindi songs (vocal) form the data set, tempo is used as the musical feature, and autocorrelation is applied across the three singers. Using the Bartlett test, we found the most significant autocorrelation values of those songs for the three singers. In the second experiment, three singers with one Hindi song (vocal) form the data set, RMS is used as the musical feature, and autocorrelation is again applied across the three singers. Using the Bartlett test, we found the autocorrelation values of that song to be insignificant for the three singers. The first experiment identifies the singer for each song: each of the three singers passes the identification test, giving the most significant values for their own songs. The second experiment yields insignificant values, and insignificant autocorrelation values of the musical features do not support singer identification.
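As a rough illustration of this kind of analysis (a minimal sketch, not the authors' pipeline: the file name, lag count, and the use of frame-wise RMS as the feature are assumptions), the following computes a frame-wise feature, its sample autocorrelation, and Bartlett-style significance bands for each lag:

import numpy as np
import librosa

def sample_acf(x, max_lag):
    # Normalised sample autocorrelation for lags 0..max_lag
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")[len(x) - 1:]
    return full[:max_lag + 1] / full[0]

y, sr = librosa.load("singer_clip.wav", sr=22050)   # hypothetical vocal clip
feat = librosa.feature.rms(y=y)[0]                  # frame-wise RMS feature
acf = sample_acf(feat, max_lag=40)

# Bartlett's approximation: se(r_k) ~ sqrt((1 + 2 * sum_{i<k} r_i^2) / N)
N = len(feat)
cum = np.concatenate(([0.0], np.cumsum(acf[1:-1] ** 2)))   # sums of r_1^2 .. r_{k-1}^2
se = np.sqrt((1.0 + 2.0 * cum) / N)
significant = np.abs(acf[1:]) > 1.96 * se                  # ~95% band per lag
print("significant lags:", np.nonzero(significant)[0] + 1)

Lags whose autocorrelation falls outside the Bartlett band are the "significant values" referred to above; whether a given singer's songs produce such lags is what drives the identification decision.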


Author(s): Sangeetha Rajesh, N. J. Nalini

Singer identification is a challenging task in music information retrieval because the singing voice is mixed with instrumental accompaniment. Previous approaches focus on identifying singers from individual features extracted from music clips. The objective of this work is to combine Mel Frequency Cepstral Coefficients (MFCC) and Chroma DCT-reduced Pitch (CRP) features for a singer identification (SID) system using machine learning techniques. The proposed system has two main phases. In the feature extraction phase, MFCC, ΔMFCC, ΔΔMFCC and CRP features are extracted from the music clips. In the identification phase, the extracted features are used to train Bidirectional Long Short-Term Memory (BLSTM)-based Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN), which are then tested to identify different singer classes. Identification accuracy and Equal Error Rate (EER) are used as performance measures. The experiments further demonstrate the effectiveness of score-level fusion of the MFCC and CRP features in the singer identification system. The experimental results are also compared with a baseline system using Support Vector Machines (SVM).
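A minimal sketch of the feature and classifier side, assuming librosa for MFCC/delta extraction and Keras for the BLSTM (the network size, clip length, and number of singers below are illustrative; CRP extraction and the CNN branch are not reproduced):

import numpy as np
import librosa
import tensorflow as tf

def mfcc_stack(path, sr=22050, n_mfcc=13):
    # MFCC plus first- and second-order deltas, stacked frame-wise
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    d1 = librosa.feature.delta(mfcc)
    d2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, d1, d2]).T          # shape: (frames, 39)

def build_blstm(n_frames, n_feats, n_singers):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_frames, n_feats)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(n_singers, activation="softmax"),
    ])

model = build_blstm(n_frames=300, n_feats=39, n_singers=10)   # sizes are assumptions
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, ...) with X_train shaped (clips, 300, 39)

Score-level fusion can then be realised, for instance, by averaging the per-class softmax scores from an MFCC-based model and a CRP-based model before choosing the highest-scoring singer.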


2020, Vol. 17 (4), pp. 507-514
Author(s): Sidra Sajid, Ali Javed, Aun Irtaza

Speech and music segregation from a single channel is a challenging task due to background interference and the intermingled signals of the voice and music channels. It is of immense importance due to its utility in a wide range of applications such as music information retrieval, singer identification, and lyrics recognition and alignment. This paper presents an effective method for speech and music segregation. Exploiting the repeating nature of music, we first detect the local repeating structures in the signal using a locally defined window for each segment. After detecting the repeating structures, we extract them and perform separation using a soft time-frequency mask. We then apply an ideal binary mask to enhance speech and music intelligibility. We evaluated the proposed method on mixtures at -5 dB, 0 dB, and 5 dB from the Multimedia Information Retrieval 1000-clip (MIR-1K) dataset. Experimental results demonstrate that the proposed method for speech and music segregation outperforms existing state-of-the-art methods in terms of Global Normalized Signal-to-Distortion Ratio (GNSDR) values.
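To make the repeating-structure idea concrete, below is a deliberately simplified sketch (not the authors' implementation: the repeating period is fixed rather than detected with a local window, the ideal-binary-mask refinement is omitted, and the file name is hypothetical). It models the repeating part of the magnitude spectrogram with a per-bin median over one-period segments and turns it into a soft time-frequency mask:

import numpy as np
import librosa

y, sr = librosa.load("mixture.wav", sr=16000)        # hypothetical voice+music mixture
S = librosa.stft(y, n_fft=1024, hop_length=256)
V = np.abs(S)

period = 32                                           # repeating period in frames (assumed known)
n_seg = V.shape[1] // period
segs = V[:, :n_seg * period].reshape(V.shape[0], n_seg, period)
model = np.median(segs, axis=1)                       # per-bin repeating model over one period

W = np.tile(model, (1, n_seg))
W = np.minimum(W, V[:, :n_seg * period])              # repeating part cannot exceed the mixture
mask = W / (V[:, :n_seg * period] + 1e-8)             # soft mask for the repeating (music) part

music = librosa.istft(mask * S[:, :n_seg * period], hop_length=256)
voice = librosa.istft((1.0 - mask) * S[:, :n_seg * period], hop_length=256)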


Age-related changes to the vocal structure affect the singing ability of the singer. We present a longitudinal study of vocal ageing of a female professional playback singer with a singing span of more than six decades (covering singer ages from 19 to 80 years). The ageing analysis is performed on six vocal parameters: fundamental frequency (F0), vibrato, formants, and spectral features such as spectral roll-off and spectral centroid. Statistical variations in these vocal parameters over the entire singing span of the singer are discussed in the paper. Significant effects noted with the ageing voice were a decrease in F0, a reduced vocal range, a reduction in vibrato rate, an increase in vibrato extent, a decrease in the F2 and F4 formants, and rapid change in the spectral features. This investigation also studied the effect of ageing on singing voice quality through measurement of the singing power ratio (SPR); an increase in SPR measures was observed with the ageing voice. The study of the impact of vocal ageing on singer identification (SID) with longitudinal data is scarce. SID experiments performed with 350 a cappella songs covering the singer's entire singing span showed clearly that the change in acoustic parameters with ageing affects the performance of singer identification systems.
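As an illustration, several of these parameters can be estimated with standard audio tooling; everything below (file name, analysis settings, and the SPR definition as the level difference between the strongest spectral peak in 2-4 kHz and the strongest peak in 0-2 kHz) is an assumption rather than the study's exact procedure:

import numpy as np
import librosa

y, sr = librosa.load("song_excerpt.wav", sr=22050)    # hypothetical a cappella excerpt

f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C3"),
                        fmax=librosa.note_to_hz("C6"), sr=sr)
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.85)[0]

# Singing power ratio from the long-term average spectrum
S = np.abs(librosa.stft(y, n_fft=2048))
ltas = S.mean(axis=1)
freqs = librosa.fft_frequencies(sr=sr, n_fft=2048)
low = ltas[(freqs >= 0) & (freqs < 2000)].max()
high = ltas[(freqs >= 2000) & (freqs < 4000)].max()
spr_db = 20 * np.log10(high / low)

print(f"median F0 {np.nanmedian(f0):.1f} Hz, "
      f"mean centroid {centroid.mean():.0f} Hz, "
      f"mean roll-off {rolloff.mean():.0f} Hz, SPR {spr_db:.1f} dB")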

