Assessing Individual VR Sickness through Deep Feature Fusion of VR Video and Physiological Response

Author(s):  
Sangmin Lee ◽  
Seongyeop Kim ◽  
Hak Gu Kim ◽  
Yong Man Ro
IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 26138-26146
Author(s):  
Xue Ni ◽  
Huali Wang ◽  
Fan Meng ◽  
Jing Hu ◽  
Changkai Tong
2021 ◽  
Vol 13 (2) ◽  
pp. 328
Author(s):  
Wenkai Liang ◽  
Yan Wu ◽  
Ming Li ◽  
Yice Cao ◽  
Xin Hu

The classification of high-resolution (HR) synthetic aperture radar (SAR) images is of great importance for SAR scene interpretation and application. However, the intricate spatial structural patterns and complex statistical nature of SAR images make classification challenging, especially when labeled SAR data are limited. This paper proposes a novel HR SAR image classification method using a multi-scale deep feature fusion network and a covariance pooling manifold network (MFFN-CPMN). MFFN-CPMN combines the advantages of local spatial features and global statistical properties and considers multi-feature information fusion of SAR images in representation learning. First, we propose a Gabor-filtering-based multi-scale feature fusion network (MFFN) to capture spatial patterns and obtain discriminative features of SAR images. The MFFN is a deep convolutional neural network (CNN). To make full use of the large amount of unlabeled data, the weights of each MFFN layer are optimized by an unsupervised denoising dual-sparse encoder. Moreover, the feature fusion strategy in MFFN effectively exploits the complementary information between different levels and different scales. Second, we utilize a covariance pooling manifold network to further extract the global second-order statistics of SAR images over the fused feature maps. The resulting covariance descriptor is more discriminative across various land covers. Finally, experimental results on four HR SAR images demonstrate the effectiveness of the proposed method, which achieves promising results compared with related algorithms.
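The covariance pooling step lends itself to a compact sketch. Below is a minimal NumPy illustration of second-order pooling over fused feature maps, followed by a log-Euclidean mapping, which is one standard way to flatten the SPD manifold before a classifier. The function names are hypothetical, and the plain matrix logarithm stands in for the paper's trained covariance pooling manifold network; this is an assumption for illustration, not the authors' exact CPMN.

import numpy as np

def covariance_pooling(feature_maps):
    # Second-order (covariance) pooling of CNN feature maps.
    # feature_maps: array of shape (C, H, W), i.e. C fused feature channels
    # over an H x W spatial grid. Returns the C x C covariance matrix that
    # serves as the global second-order descriptor.
    C, H, W = feature_maps.shape
    X = feature_maps.reshape(C, H * W)      # each column: one spatial position
    Xc = X - X.mean(axis=1, keepdims=True)  # center each channel
    return (Xc @ Xc.T) / (H * W - 1)        # sample covariance across positions

def log_euclidean(cov, eps=1e-6):
    # Matrix logarithm of the SPD covariance descriptor (log-Euclidean
    # metric): a common way to map it into Euclidean space. Stand-in for
    # the paper's manifold network, assumed here for illustration.
    w, V = np.linalg.eigh(cov + eps * np.eye(cov.shape[0]))
    return V @ np.diag(np.log(w)) @ V.T

# Example: pool a random 64-channel feature map into a 64 x 64 descriptor.
desc = log_euclidean(covariance_pooling(np.random.default_rng(0).random((64, 8, 8))))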


Author(s):  
Wenting Zhao ◽  
Yunhong Wang ◽  
Xunxun Chen ◽  
Yuanyan Tang ◽  
Qingjie Liu
IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 98555-98564
Author(s):  
Chao Wen ◽  
Zhan Li ◽  
Aiping Li ◽  
Jian Qu

Entropy ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. 622
Author(s):  
Xiaoyang Liu ◽  
Wei Jing ◽  
Mingxuan Zhou ◽  
Yuxing Li

Automatic coal-rock recognition is one of the critical technologies for intelligent coal mining and processing. Most existing coal-rock recognition methods suffer from defects such as unsatisfactory performance and low robustness. To solve these problems, and taking the distinctive visual features of coal and rock into consideration, this paper proposes the multi-scale feature fusion coal-rock recognition (MFFCRR) model, based on a multi-scale Completed Local Binary Pattern (CLBP) and a Convolutional Neural Network (CNN). Firstly, multi-scale CLBP features, which represent the texture information of the coal-rock image, are extracted from coal-rock image samples in the Texture Feature Extraction (TFE) sub-model. Secondly, high-level deep features, which represent the macroscopic information of the coal-rock image, are extracted from coal-rock image samples in the Deep Feature Extraction (DFE) sub-model. Both the texture and macroscopic information are quantified on the basis of information theory. Thirdly, the multi-scale feature vector is generated by fusing the multi-scale CLBP feature vector and the deep feature vector. Finally, the multi-scale feature vectors are input to a nearest neighbor classifier with the chi-square distance to realize coal-rock recognition. Experimental results show that the coal-rock image recognition accuracy of the proposed MFFCRR model reaches 97.9167%, an improvement of 2%–3% over state-of-the-art coal-rock recognition methods.
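As a rough illustration of the final classification stage described above, the sketch below implements a nearest neighbor rule with the chi-square distance over fused feature vectors. The helper names are hypothetical, and the chi-square form shown assumes non-negative, histogram-like feature vectors (natural for CLBP histograms); whether the paper's fused vectors are normalized this way is an assumption.

import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    # Chi-square distance between two non-negative feature vectors,
    # e.g. normalized multi-scale histogram descriptors.
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def nearest_neighbor_classify(train_feats, train_labels, query):
    # 1-NN rule: assign the label of the training vector closest to the
    # query under the chi-square distance.
    dists = [chi_square_distance(query, f) for f in train_feats]
    return train_labels[int(np.argmin(dists))]

# Example: classify one fused query vector against a small labeled set.
rng = np.random.default_rng(0)
train_feats = [rng.random(64) for _ in range(10)]
train_labels = ["coal"] * 5 + ["rock"] * 5
print(nearest_neighbor_classify(train_feats, train_labels, rng.random(64)))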

