Object-to-Scene: Learning to Transfer Object Knowledge to Indoor Scene Recognition

Author(s):  
Bo Miao ◽  
Liguang Zhou ◽  
Ajmal Saeed Mian ◽  
Tin Lun Lam ◽  
Yangsheng Xu
Author(s):  
Alejandra C. Hernandez ◽  
Clara Gomez ◽  
Erik Derner ◽  
Ramon Barber

2015 ◽  
Vol 112 ◽  
pp. 129-136 ◽  
Author(s):  
Jun Yu ◽  
Chaoqun Hong ◽  
Dapeng Tao ◽  
Meng Wang

Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3376 ◽  
Author(s):  
Wei Guo ◽  
Ran Wu ◽  
Yanhua Chen ◽  
Xinyan Zhu

With the rapid development of indoor localization in recent years, signals of opportunity have become a reliable and convenient source for indoor localization. A mobile device can not only capture images of the indoor environment in real time, but can also obtain one or more different types of signals of opportunity. Based on this, we design a convolutional neural network (CNN) model that concatenates features of image data and signals of opportunity for localization, using indoor scene datasets and simulating indoor location probability. Using transfer learning on the Inception V3 network model, feature information is added to assist scene recognition. The experimental results show that, for two different experimental scenes, the accuracies of the prediction results are 97.0% and 96.6% using the proposed model, compared to 69.0% and 81.2% obtained by overlapping the positioning information with the base map, and 73.3% and 77.7% obtained by the fine-tuned Inception V3 model. The accuracy of indoor scene recognition is improved; in particular, the error rate at the spatial connections between different scenes is decreased, and the recognition rate of similar scenes is increased.
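The fusion step described above — concatenating image features with signal-of-opportunity features before classification — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the dimensions (2048-d pooled Inception V3 features, 16-d signal features, 7 scene classes) and the linear softmax head are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(img_feat, sig_feat):
    """Concatenate per-sample image features with signal-of-opportunity features."""
    return np.concatenate([img_feat, sig_feat], axis=1)

def softmax_head(fused, W, b):
    """Toy linear classification head over the fused feature vector."""
    logits = fused @ W + b
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical dimensions: 2048-d image features, 16-d signal features, 7 classes.
img = rng.standard_normal((4, 2048))   # e.g. pooled CNN features per image
sig = rng.standard_normal((4, 16))     # e.g. signal-of-opportunity measurements
W = rng.standard_normal((2048 + 16, 7)) * 0.01
b = np.zeros(7)

probs = softmax_head(fuse(img, sig), W, b)  # (4, 7) class probabilities
```

In practice the classification head would be trained jointly with (or on top of) the transferred CNN; the sketch only shows where the two feature streams are joined.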


2019 ◽  
Vol 73 (1) ◽  
pp. 37-55 ◽  
Author(s):  
B. Anbarasu ◽  
G. Anitha

In this paper, a new scene recognition visual descriptor called the Enhanced Scale Invariant Feature Transform-based Sparse coding Spatial Pyramid Matching (Enhanced SIFT-ScSPM) descriptor is proposed by combining a Bag of Words (BOW)-based visual descriptor (SIFT-ScSPM) with Gist-based descriptors (Enhanced Gist and Enhanced multichannel Gist (Enhanced mGist)). Indoor scene classification is carried out by multi-class linear and non-linear Support Vector Machine (SVM) classifiers. The feature extraction methodology and a critical review of several visual descriptors used for indoor scene recognition are discussed from an experimental perspective. An empirical study is conducted on the Massachusetts Institute of Technology (MIT) 67 indoor scene classification data set, and the classification accuracy of state-of-the-art visual descriptors and the proposed Enhanced mGist, Speeded Up Robust Features-Spatial Pyramid Matching (SURF-SPM) and Enhanced SIFT-ScSPM visual descriptors is assessed. Experimental results show that the proposed Enhanced SIFT-ScSPM visual descriptor achieves a higher classification rate, precision, recall, and area under the Receiver Operating Characteristic (ROC) curve than the state-of-the-art descriptors and the proposed Enhanced mGist and SURF-SPM visual descriptors.
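The combination of a BOW-based descriptor with Gist-based descriptors can be sketched as a simple vector fusion before the SVM stage. This is an illustrative NumPy snippet, assuming each descriptor has already been computed as a fixed-length vector; the per-descriptor L2 normalisation before concatenation is an assumed (and common) fusion choice, not necessarily the one used in the paper.

```python
import numpy as np

def combine_descriptors(scspm_vec, mgist_vec):
    """Fuse a SIFT-ScSPM vector with an Enhanced mGist vector by
    L2-normalising each descriptor and concatenating them
    (hypothetical fusion scheme for illustration)."""
    def l2_normalise(v):
        norm = np.linalg.norm(v)
        return v / norm if norm > 0 else v
    return np.concatenate([l2_normalise(scspm_vec), l2_normalise(mgist_vec)])

# Toy usage: a 3-d ScSPM vector fused with a 2-d mGist vector -> 5-d vector.
fused = combine_descriptors(np.array([3.0, 4.0, 0.0]), np.array([1.0, 1.0]))
```

The fused vector would then be fed to a multi-class SVM (linear or kernel) for scene classification.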
