spatial pyramid matching
Recently Published Documents


TOTAL DOCUMENTS: 79 (five years: 18)

H-INDEX: 11 (five years: 2)

2022 ◽  
Vol 70 (3) ◽  
pp. 5039-5058
Author(s):  
Khuram Nawaz Khayam ◽  
Zahid Mehmood ◽  
Hassan Nazeer Chaudhry ◽  
Muhammad Usman Ashraf ◽  
Usman Tariq ◽  
...  

Author(s):  
Y. Yang ◽  
D. Zhu ◽  
F. Ren ◽  
C. Cheng

Abstract. Remote sensing earth observation images have a wide range of applications in areas such as urban planning, agriculture, and environmental monitoring. While industry has benefited from the availability of high-resolution earth observation images in recent years, interpreting such images has become more challenging than ever. Among the many machine-learning-based methods that have succeeded in remote sensing scene classification, spatial pyramid matching using sparse coding (ScSPM) is a classical model that has achieved promising classification accuracy on many benchmark data sets. ScSPM is a three-stage algorithm, composed of dictionary learning, sparse representation and classification. It is generally believed that the dictionary learning stage, although unsupervised, should use the same data set as the classification stage to get good results. However, recent studies in transfer learning suggest that it may be a better strategy to train the dictionary on a larger data set different from the one to be classified. In our work, we propose an algorithm that combines ScSPM with self-taught learning, a transfer learning framework that trains a dictionary on an unlabeled data set and uses it for multiple classification tasks. In the experiments, we learn the dictionary on the Caltech-101 data set and classify two remote sensing scene image data sets: the UC Merced Land Use data set and the Changping data set. Experimental results show that the classification accuracy of the proposed method is comparable to that of ScSPM. Our work thus provides a new way to reduce the resource cost of learning a remote sensing scene image classifier.
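The ScSPM pipeline described in the abstract (a pre-learned dictionary, sparse coding of local descriptors, then max pooling over a spatial pyramid) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dictionary here is random unit-norm atoms standing in for one learned on an unlabeled set (as in self-taught learning), sparse codes are computed with greedy matching pursuit rather than a full L1 solver, and all sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(x, D, n_nonzero=3):
    """Greedy matching pursuit: approximate a sparse code of x over dictionary D."""
    code = np.zeros(D.shape[1])
    residual = x.astype(float).copy()
    for _ in range(n_nonzero):
        corr = D.T @ residual               # correlation with each atom
        k = np.argmax(np.abs(corr))         # pick the best-matching atom
        code[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return code

def spm_max_pool(codes, positions, levels=(1, 2)):
    """Max-pool sparse codes over a spatial pyramid (here 1x1 and 2x2 grids)."""
    feats = []
    for g in levels:
        for i in range(g):
            for j in range(g):
                in_cell = ((positions[:, 0] * g).astype(int) == i) & \
                          ((positions[:, 1] * g).astype(int) == j)
                cell = codes[in_cell]
                feats.append(cell.max(axis=0) if len(cell)
                             else np.zeros(codes.shape[1]))
    return np.concatenate(feats)

# Stand-in "self-taught" dictionary: 16 unit-norm atoms over 8-d descriptors.
D = rng.normal(size=(8, 16))
D /= np.linalg.norm(D, axis=0)

# 50 toy local descriptors with normalized (x, y) positions in [0, 1).
descs = rng.normal(size=(50, 8))
pos = rng.random(size=(50, 2))

codes = np.array([sparse_code(d, D) for d in descs])
image_feature = spm_max_pool(codes, pos)
# Pyramid has 1 + 4 = 5 cells, each pooled to a 16-d code: an 80-d image feature.
print(image_feature.shape)  # (80,)
```

The resulting pooled vector is what would be fed to a linear classifier in the third ScSPM stage; swapping the dictionary for one trained on a different, larger unlabeled data set is the only change the transfer-learning variant requires.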


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 166071-166082
Author(s):  
Shiyuan Chen ◽  
Xiaojiang Li ◽  
Shaoquan Chi ◽  
Zhiliang Li ◽  
Mao Yuxing

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 22463-22472 ◽  
Author(s):  
Priyabrata Karmakar ◽  
Shyh Wei Teng ◽  
Guojun Lu ◽  
Dengsheng Zhang

2019 ◽  
Vol 73 (1) ◽  
pp. 37-55 ◽  
Author(s):  
B. Anbarasu ◽  
G. Anitha

In this paper, a new scene recognition visual descriptor called the Enhanced Scale Invariant Feature Transform-based Sparse coding Spatial Pyramid Matching (Enhanced SIFT-ScSPM) descriptor is proposed by combining a Bag of Words (BOW)-based visual descriptor (SIFT-ScSPM) and Gist-based descriptors (Enhanced Gist and Enhanced multichannel Gist (Enhanced mGist)). Indoor scene classification is carried out by multi-class linear and non-linear Support Vector Machine (SVM) classifiers. The feature extraction methodology and a critical review of several visual descriptors used for indoor scene recognition are discussed from an experimental perspective. An empirical study is conducted on the Massachusetts Institute of Technology (MIT) 67 indoor scene classification data set to assess the classification accuracy of state-of-the-art visual descriptors and the proposed Enhanced mGist, Speeded Up Robust Features-Spatial Pyramid Matching (SURF-SPM) and Enhanced SIFT-ScSPM visual descriptors. Experimental results show that the proposed Enhanced SIFT-ScSPM visual descriptor achieves a higher classification rate, precision, recall and area under the Receiver Operating Characteristic (ROC) curve than the state-of-the-art and the proposed Enhanced mGist and SURF-SPM visual descriptors.
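The descriptor-combination and classification steps in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's method: the per-sample "SIFT-ScSPM" and "Gist" features are random toy vectors, and a one-vs-rest perceptron stands in for the multi-class linear SVM the authors actually use (both learn linear decision boundaries, but the SVM adds margin maximization).

```python
import numpy as np

rng = np.random.default_rng(1)

def combine(scspm_feat, gist_feat):
    """L2-normalize each descriptor, then concatenate into one scene feature."""
    a = scspm_feat / (np.linalg.norm(scspm_feat) + 1e-8)
    b = gist_feat / (np.linalg.norm(gist_feat) + 1e-8)
    return np.concatenate([a, b])

class OneVsRestLinear:
    """Minimal multi-class linear classifier (one-vs-rest perceptron updates)."""
    def __init__(self, n_classes, dim, lr=0.1, epochs=100):
        self.W = np.zeros((n_classes, dim))
        self.lr, self.epochs = lr, epochs

    def fit(self, X, y):
        for _ in range(self.epochs):
            for x, label in zip(X, y):
                pred = np.argmax(self.W @ x)
                if pred != label:               # update only on mistakes
                    self.W[label] += self.lr * x
                    self.W[pred] -= self.lr * x
        return self

    def predict(self, X):
        return np.argmax(X @ self.W.T, axis=1)

# Toy data: 3 scene classes with class-specific 8-d "ScSPM" and 4-d "Gist" centers.
y = np.repeat(np.arange(3), 20)
c_s = rng.normal(size=(3, 8)) * 5
c_g = rng.normal(size=(3, 4)) * 5
X = np.array([combine(c_s[k] + rng.normal(scale=0.3, size=8),
                      c_g[k] + rng.normal(scale=0.3, size=4)) for k in y])

clf = OneVsRestLinear(n_classes=3, dim=12).fit(X, y)
acc = (clf.predict(X) == y).mean()
print(acc)
```

Normalizing each descriptor before concatenation keeps one modality from dominating the combined feature purely through scale, which is a common reason to fuse descriptors this way rather than by raw concatenation.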

