audio retrieval
Recently Published Documents

TOTAL DOCUMENTS: 100 (FIVE YEARS: 13)
H-INDEX: 9 (FIVE YEARS: 1)
2021
Author(s): Andreea-Maria Oncescu, A. Sophia Koepke, João F. Henriques, Zeynep Akata, Samuel Albanie

Author(s): Petcharat Panyapanuwat, Suwatchai Kamonsantiroj, Luepol Pipanmaekaporn

Due to its efficiency in storage and search speed, binary hashing has become an attractive approach for searching large audio databases. However, most existing hashing-based methods follow a data-independent scheme in which random linear projections or simple arithmetic expressions are used to construct the hash functions. As a result, the binary codes do not preserve similarity and may degrade search performance. In this paper, an unsupervised similarity-preserving hashing method for content-based audio retrieval is proposed. Unlike data-independent hashing methods, we develop a deep network that learns compact binary codes from multiple hierarchical layers of nonlinear and linear transformations such that the similarity between samples is preserved. Independence and balance properties are included in the objective function and optimized to improve the codes. Experimental results on the Extended Ballroom dataset, comprising 3,000 musical excerpts across 8 genres, show that the proposed method significantly outperforms state-of-the-art data-independent methods in both effectiveness and efficiency.
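
The abstract outlines the training recipe (a deep encoder producing relaxed binary codes, with similarity-preserving, balance, and independence terms in the objective) but not an implementation. Below is a minimal PyTorch sketch of such a setup, not the authors' code: the layer sizes, the cosine-similarity target, and the loss weights `alpha` and `beta` are assumptions for illustration.

```python
# Minimal sketch (not the authors' code) of unsupervised similarity-preserving
# deep hashing: an encoder maps audio features to k relaxed bits, trained so
# that pairwise similarities are preserved and the bits are balanced and
# independent. Layer sizes, similarity target, and loss weights are assumptions.
import torch
import torch.nn as nn

class HashEncoder(nn.Module):
    def __init__(self, in_dim=128, code_bits=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, code_bits), nn.Tanh(),  # relaxed bits in (-1, 1)
        )

    def forward(self, x):
        return self.net(x)

def hashing_loss(features, codes, alpha=0.1, beta=0.1):
    # Similarity preservation: cosine similarity of the inputs should match
    # the scaled inner product of the relaxed codes.
    f = nn.functional.normalize(features, dim=1)
    s_input = f @ f.t()                                # target similarities in [-1, 1]
    s_code = codes @ codes.t() / codes.shape[1]
    sim_loss = ((s_input - s_code) ** 2).mean()
    # Balance: each bit should be +1/-1 about equally often across the batch.
    balance_loss = codes.mean(dim=0).pow(2).sum()
    # Independence: bit correlations should approach the identity matrix.
    c = codes.t() @ codes / codes.shape[0]
    indep_loss = (c - torch.eye(codes.shape[1])).pow(2).sum()
    return sim_loss + alpha * balance_loss + beta * indep_loss

# Usage: one optimisation step on a random batch standing in for audio features.
model = HashEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 128)                               # placeholder audio features
loss = hashing_loss(x, model(x))
loss.backward()
optimizer.step()
binary_codes = torch.sign(model(x)).detach()           # final binary codes
```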


Electronics, 2020, Vol 9 (9), pp. 1483
Author(s): Maoshen Jia, Tianhao Li, Jing Wang

With the rapid growth of audio data, there is increasing demand for audio retrieval that can quickly and accurately find the required information. Audio fingerprint retrieval is a popular choice because of its excellent performance. However, existing audio fingerprint retrieval methods produce a large amount of fingerprint data, which takes up storage space and slows down retrieval. To address this problem, this paper presents a novel audio fingerprinting method based on locally linear embedding (LLE) that yields smaller fingerprints and more efficient retrieval. The proposed fingerprint extraction divides the bands around each peak in the frequency domain into four groups of sub-regions and computes the energy of every sub-region. LLE is then performed on each group, and the audio fingerprint is encoded by comparing adjacent energies. To handle the distortion caused by linear speed changes, a matching strategy based on dynamic time warping (DTW) is adopted in the retrieval stage, which can compare two audio segments of different lengths. To evaluate retrieval performance, experiments are carried out with dimensionality reduction applied to a single group and to multiple groups. Both settings achieve high recall and precision rates and offer better retrieval efficiency with less data than some state-of-the-art methods.
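
Two steps in this pipeline lend themselves to a short illustration: encoding a reduced energy vector into bits by comparing adjacent values, and matching fingerprint sequences of unequal length with DTW. The sketch below is not the paper's implementation; the peak-based features are replaced by random placeholder data, and the dimensions, neighbour count, and per-frame Hamming cost are assumptions.

```python
# Sketch of (1) binary fingerprint encoding via adjacent-energy comparison and
# (2) DTW matching of fingerprint sequences with different lengths.
# The LLE step uses scikit-learn; the feature values are placeholders.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def encode_fingerprint(energies):
    """Bit i = 1 if energy[i+1] > energy[i] (comparison of adjacent energies)."""
    return (np.diff(energies, axis=-1) > 0).astype(np.uint8)

def dtw_distance(a, b):
    """DTW between two fingerprint sequences, using per-frame Hamming cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.count_nonzero(a[i - 1] != b[j - 1])   # Hamming distance of frames
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Toy example: per-frame sub-region energies around spectral peaks
# (random data standing in for the real peak-based features).
rng = np.random.default_rng(0)
frame_energies = rng.random((200, 16))             # 200 frames x 16 sub-region energies
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=8)
reduced = lle.fit_transform(frame_energies)        # smaller per-frame representation
query = encode_fingerprint(reduced[:80])           # shorter query clip
reference = encode_fingerprint(reduced)            # full reference track
print("DTW distance:", dtw_distance(query, reference))
```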


Author(s): Yue Song, Sha Tao, Yanzhao Ren, Xinliang Liu, Wanlin Gao

2020, Vol 56 (5), pp. 245-247
Author(s): Xueshuai Zhang, Ge Zhan, Wenchao Wang, Pengyuan Zhang, Yonghong Yan

2019, Vol 8 (02), pp. 24469-24472
Author(s): Thiruvengatanadhan R

Automatic audio classification is very useful in audio indexing, content-based audio retrieval, and online audio distribution. This paper deals with the speech/music classification problem, starting from a set of features extracted directly from the audio data. The accuracy of the classification relies on the strength of the features and the classification scheme. In this work, Perceptual Linear Prediction (PLP) features are extracted from the input signal. After feature extraction, classification is carried out using a Support Vector Machine (SVM) model. The proposed feature extraction and classification models result in better accuracy in speech/music classification.
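
As a rough illustration of the classification stage only, the sketch below trains an RBF-kernel SVM on per-file feature vectors; it is not the author's pipeline. Common Python audio libraries do not ship a PLP extractor, so a random feature matrix stands in for the real PLP vectors, and the speech/music labels are dummies.

```python
# Sketch of the speech/music classification stage: an SVM trained on per-file
# acoustic feature vectors. PLP extraction itself is not shown; the random
# feature matrix below is a placeholder for real PLP features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_files, n_plp = 200, 13
X = rng.normal(size=(n_files, n_plp))            # placeholder for mean PLP vectors
y = rng.integers(0, 2, size=n_files)             # 0 = speech, 1 = music (dummy labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")    # RBF-kernel SVM classifier
clf.fit(X_train, y_train)
print("speech/music accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```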

