The Scene Classification Method Based on Difference Vector in DCT Domain

Author(s):  
Ce Li ◽  
Ming Li ◽  
Limei Xiao ◽  
Beijie Ren
2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
Baoyu Dong ◽  
Guang Ren

A new scene classification method is proposed based on the combination of local Gabor features with a spatial pyramid matching model. First, new local Gabor feature descriptors are extracted from dense sampling patches of scene images. These local feature descriptors are embedded into a bag-of-visual-words (BOVW) model, which is combined with a spatial pyramid matching framework. The new local Gabor feature descriptors have sufficient discriminative power for the dense regions of scene images. Compact feature vectors for the scene images are then obtained by K-means clustering and visual-word statistics. Second, in order to decrease classification time and improve accuracy, an improved kernel principal component analysis (KPCA) method is applied to reduce the dimensionality of the pyramid histogram of visual words (PHOW). The principal components with larger interclass separability are retained in the feature vectors, which are then classified by a linear support vector machine (SVM). The proposed method is evaluated on three commonly used scene datasets. Experimental results demonstrate the effectiveness of the method.
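A minimal sketch of the pipeline described above (dense Gabor descriptors, a K-means visual vocabulary, visual-word histograms, KPCA dimensionality reduction, and a linear SVM). The library choices (scikit-image, SciPy, scikit-learn) and all parameter values such as patch size, filter frequencies, and the number of visual words are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from skimage.filters import gabor_kernel
from scipy.ndimage import convolve
from sklearn.cluster import KMeans
from sklearn.decomposition import KernelPCA
from sklearn.svm import LinearSVC

def dense_gabor_descriptors(image, patch=16, step=8):
    """Extract a Gabor response vector for every dense patch of a grayscale image."""
    kernels = [np.real(gabor_kernel(frequency=f, theta=t))
               for f in (0.1, 0.2, 0.3)
               for t in np.arange(0, np.pi, np.pi / 4)]
    responses = [convolve(image, k, mode='reflect') for k in kernels]
    descriptors = []
    for y in range(0, image.shape[0] - patch + 1, step):
        for x in range(0, image.shape[1] - patch + 1, step):
            descriptors.append([r[y:y + patch, x:x + patch].mean() for r in responses])
    return np.asarray(descriptors)

def bovw_histograms(all_descriptors, per_image_descriptors, n_words=200):
    """Build a visual vocabulary with K-means and encode each image as a word histogram."""
    vocabulary = KMeans(n_clusters=n_words, n_init=5).fit(all_descriptors)
    histograms = [np.bincount(vocabulary.predict(d), minlength=n_words)
                  for d in per_image_descriptors]
    return np.asarray(histograms, dtype=float)

# histograms, labels = ...  # word histograms and class labels of the training images
# reduced = KernelPCA(n_components=64, kernel='rbf').fit_transform(histograms)
# classifier = LinearSVC().fit(reduced, labels)
```

Note that this sketch omits the spatial pyramid levels and the improved KPCA component selection by interclass separability; it only illustrates the overall flow of the method.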


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Shaopeng Liu ◽  
Guohui Tian

Indoor scene classification plays a vital part in the environment cognition of service robots. With the development of deep learning, fine-tuning a convolutional neural network (CNN) on the target dataset has become a popular way to solve classification problems. However, this approach yields unsatisfactory indoor scene classification results because it overfits when the scene training data are insufficient. To solve this problem, this paper proposes an indoor scene classification method that uses the CNN features of scene images to generate scene category features and classifies scenes with a novel feature matching algorithm. The feature matching algorithm further improves classification speed, and the method avoids overfitting even when the training data are limited. The method was evaluated on two benchmark scene datasets, the Scene 15 dataset and the MIT 67 dataset, achieving 96.49% and 81.69% accuracy, respectively. The experimental results show that our method is superior to other scene classification methods in terms of accuracy, speed, and robustness. To further evaluate the method, test experiments on unseen scene images from the SUN 397 dataset were conducted; the models trained on the two datasets obtained 94.34% and 79.80% test accuracy, respectively, confirming that the proposed method performs well in indoor scene classification.
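A minimal sketch of the category-feature idea described above: the CNN feature vectors of the training images are aggregated into one prototype per scene category, and a test image is assigned to the most similar prototype. The mean-vector aggregation and cosine-similarity matching rule are illustrative assumptions; the paper's own feature matching algorithm may differ.

```python
import numpy as np

def build_category_features(features, labels):
    """Average the CNN feature vectors of each scene category into one prototype vector."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(feature, prototypes):
    """Match a test feature against every category prototype by cosine similarity."""
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(prototypes, key=lambda c: cosine(feature, prototypes[c]))

# features: (N, D) array of CNN features, labels: (N,) array of scene category ids
# prototypes = build_category_features(features, labels)
# predicted_category = classify(test_feature, prototypes)
```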


2020 ◽  
Author(s):  
Wenmei Li ◽  
Juan Wang ◽  
Ziteng Wang ◽  
Yu Wang ◽  
Yan Jia ◽  
...  

The deep convolutional neural network (DeCNN) is considered one of the most promising techniques for classifying high spatial resolution remote sensing (HSRRS) scenes, owing to its powerful feature extraction capabilities. It is well known that large, high-quality labeled datasets are required to achieve good classification performance and prevent over-fitting during DeCNN training. However, the lack of such datasets often limits the application of DeCNN. To solve this problem, this paper proposes an HSRRS image scene classification method using a transfer learning and DeCNN (TL-DeCNN) model with few-shot HSRRS scene samples. Specifically, the convolutional-layer weights of three typical DeCNNs (VGG19, ResNet50, and InceptionV3) trained on ImageNet2015 are transferred to the TL-DeCNN, respectively. The TL-DeCNN then only needs to fine-tune its classification module on the few-shot HSRRS scene samples for a few epochs. Experimental results indicate that the proposed TL-DeCNN method clearly outperforms VGG19, ResNet50, and InceptionV3 trained directly on the few-shot samples, without over-fitting.
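A minimal sketch of this transfer-learning setup, using VGG19 as one of the three backbones: the ImageNet weights of the convolutional base are transferred and frozen, and only a new classification module is fine-tuned on the few-shot HSRRS samples. Keras is an assumed framework here, and the classifier layers, number of classes, and training settings are illustrative, not the authors' exact TL-DeCNN configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

NUM_CLASSES = 21  # number of HSRRS scene categories (assumption)

# Transfer the ImageNet convolutional weights and keep them fixed.
base = VGG19(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Only this classification module is fine-tuned on the few-shot samples.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# model.fit(few_shot_images, few_shot_labels, epochs=5, batch_size=16)
```

The same construction applies to the ResNet50 and InceptionV3 backbones by swapping the imported application model.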


Author(s):  
Ce Li ◽  
Ming Li ◽  
Meili Xiao ◽  
Zhijia Hu ◽  
Xiuxun Miao ◽  
...  
