An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing

2016 ◽  
Vol 74 ◽  
pp. 11-20 ◽  
Author(s):  
Qiong Zhang ◽  
Xavier Maldague
2022 ◽  
Vol 2022 ◽  
pp. 1-14
Author(s):  
Ruochen Liu ◽  
Han Wang ◽  
Jinwu Zhang ◽  
Shuangshuang Gu ◽  
Jianzhong Sun

Electrostatic monitoring is a unique and rapidly developing technique applied in the prognostics and health management of tribological systems, based on electrostatic charging and sensing phenomena. It offers considerable advantages in condition monitoring of tribo-contacts, with high sensitivity and resolution. Unfortunately, the monitoring result can be affected by switches in operating conditions, which reduce its accuracy. This paper presents a dynamic adaptive fusion approach, the moving window local outlier factor (MWLOF) based on electrostatic features, to overcome this influence. Life-cycle experiments on rolling bearings and a railcar gearbox were carried out on an electrostatic monitoring platform. The MWLOF method was used to extract and analyze the experimental data, combined with the Pauta criterion to judge wear faults quantitatively, and the results were compared with those of other feature extraction methods. It is verified that the proposed method can overcome the influence of changes in working conditions on the monitoring results, improve the monitoring sensitivity, and provide an accurate reference for friction and wear faults.
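
For illustration, the sketch below scores a one-dimensional electrostatic feature series with a moving-window local outlier factor and applies the Pauta (3σ) criterion to flag wear faults; the window size, neighbour count, and synthetic data are assumptions for demonstration, not values taken from the paper.

```python
# A minimal sketch of a moving-window local outlier factor (MWLOF) score
# combined with the Pauta (3-sigma) criterion on an electrostatic feature
# series; parameters below are illustrative choices, not the paper's values.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def mwlof_scores(signal, window=200, n_neighbors=20):
    """Slide a window over the feature series and score the newest sample
    against its local neighbourhood with LOF."""
    scores = np.full(len(signal), np.nan)
    for i in range(window, len(signal)):
        segment = signal[i - window:i + 1].reshape(-1, 1)
        lof = LocalOutlierFactor(n_neighbors=n_neighbors)
        lof.fit(segment)
        # LOF reports negative factors; negate so larger = more anomalous.
        scores[i] = -lof.negative_outlier_factor_[-1]
    return scores

def pauta_fault_flags(scores):
    """Pauta (3-sigma) criterion: flag samples whose MWLOF score exceeds
    the series mean by more than three standard deviations."""
    valid = scores[~np.isnan(scores)]
    mu, sigma = valid.mean(), valid.std()
    return scores > mu + 3 * sigma

# Usage with a synthetic electrostatic feature series:
rng = np.random.default_rng(0)
series = rng.normal(0.0, 1.0, 3000)
series[2500:] += np.linspace(0, 6, 500)   # simulated wear-induced drift
flags = pauta_fault_flags(mwlof_scores(series))
print("first flagged index:", int(np.argmax(flags)))
```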


Author(s):  
Wadii Boulila ◽  
Karim S. Ettabaa ◽  
Imed Riadh Farah ◽  
Basel Solaiman

2019 ◽  
Vol 11 (22) ◽  
pp. 2691 ◽  
Author(s):  
Gang He ◽  
Jiaping Zhong ◽  
Jie Lei ◽  
Yunsong Li ◽  
Weiying Xie

Hyperspectral (HS) imaging is conducive to better describing and understanding the subtle differences in the spectral characteristics of different materials, owing to the richer spectral information it provides compared with traditional imaging systems. However, it remains challenging to obtain high-resolution (HR) HS images in both the spectral and spatial domains. Different from previous methods, we first propose a spectral constrained adversarial autoencoder (SCAAE) to extract deep features of HS images and combine them with the panchromatic (PAN) image to competently represent the spatial information of HR HS images, which is more comprehensive and representative. In particular, based on the adversarial autoencoder (AAE) network, the SCAAE network is built with an added spectral constraint in the loss function so that spectral consistency and a higher quality of spatial information enhancement can be ensured. Then, an adaptive fusion approach with a simple feature selection rule is introduced to make full use of the spatial information contained in both the HS image and the PAN image. Specifically, the spatial information from the two different sensors is introduced into a convex optimization equation to obtain the fusion proportion of the two parts and estimate the generated HR HS image. By analyzing the results of experiments executed on the tested data sets with different methods, it can be found that, in CC, SAM, and RMSE, the performance of the proposed algorithm is improved by about 1.42%, 13.12%, and 29.26%, respectively, on average, which is preferable to the well-performing method HySure. Compared to the MRA-based method, the improvement of the proposed method in the above three indexes is 17.63%, 0.83%, and 11.02%, respectively. Moreover, the results are 0.87%, 22.11%, and 20.66% better, respectively, than the PCA-based method, which fully illustrates the superiority of the proposed method in spatial information preservation. All the experimental results demonstrate that the proposed method is superior to the state-of-the-art fusion methods in terms of subjective and objective evaluations.
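
As a reference for the three indexes quoted above, a minimal sketch of CC (per-band correlation coefficient), SAM (spectral angle mapper), and RMSE computed between a reference and a fused HS cube is given below; the cube shape and averaging scheme are assumptions, not the paper's exact evaluation protocol.

```python
# Quality indexes for comparing a fused HS cube against a reference,
# both of shape (H, W, bands); a sketch, not the paper's evaluation code.
import numpy as np

def cc(ref, fused):
    """Mean per-band correlation coefficient."""
    vals = []
    for b in range(ref.shape[-1]):
        r, f = ref[..., b].ravel(), fused[..., b].ravel()
        vals.append(np.corrcoef(r, f)[0, 1])
    return float(np.mean(vals))

def sam(ref, fused, eps=1e-12):
    """Mean spectral angle (radians) between per-pixel spectra."""
    r = ref.reshape(-1, ref.shape[-1])
    f = fused.reshape(-1, fused.shape[-1])
    cos = np.sum(r * f, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(f, axis=1) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def rmse(ref, fused):
    """Root-mean-square error over the whole cube."""
    return float(np.sqrt(np.mean((ref - fused) ** 2)))

# Usage with synthetic data:
rng = np.random.default_rng(1)
reference = rng.random((64, 64, 31))
estimate = reference + 0.01 * rng.standard_normal(reference.shape)
print(cc(reference, estimate), sam(reference, estimate), rmse(reference, estimate))
```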


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Shuai Hao ◽  
Beiyi An ◽  
Tian He ◽  
Xu Ma ◽  
Hu Wen ◽  
...  

2016 ◽  
Vol 24 (7) ◽  
pp. 1743-1753 ◽  
Author(s):  
WANG Xin (王昕) ◽ 
JI Tong-bo (吉桐伯) ◽ 
LIU Fu (刘富)

2020 ◽  
Vol 4 (3) ◽  
pp. 46
Author(s):  
Mohammad Faridul Haque Siddiqui ◽  
Ahmad Y. Javaid

The exigency of emotion recognition is pushing the envelope for meticulous strategies of discerning actual emotions through the use of superior multimodal techniques. This work presents a multimodal automatic emotion recognition (AER) framework capable of differentiating between expressed emotions with high accuracy. The contribution involves implementing an ensemble-based approach for AER through the fusion of visible images and infrared (IR) images with speech. The framework is implemented in two layers, where the first layer detects emotions using single modalities while the second layer combines the modalities and classifies emotions. Convolutional neural networks (CNNs) were used for feature extraction and classification. A hybrid fusion approach comprising early (feature-level) and late (decision-level) fusion was applied to combine the features and the decisions at different stages. The output of the CNN trained with voice samples of the RAVDESS database was combined with the image classifier's output using decision-level fusion to obtain the final decision. An accuracy of 86.36% and similar recall (0.86), precision (0.88), and F-measure (0.87) scores were obtained. A comparison with contemporary work confirmed the competitiveness of the framework, with the distinction of attaining this accuracy in wild backgrounds and light-invariant conditions.
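
A minimal sketch of the decision-level (late) fusion step is given below, assuming that the speech CNN and the image CNN each emit per-class probabilities over the eight RAVDESS emotion categories; the equal weighting and the mock probability vectors are illustrative assumptions, not the paper's exact configuration.

```python
# Decision-level (late) fusion of per-class probabilities from a speech CNN
# and an image CNN; weighting and inputs are illustrative assumptions.
import numpy as np

EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "fearful", "disgust", "surprised"]

def late_fusion(p_speech, p_image, w_speech=0.5):
    """Weighted average of per-class probabilities from the two modalities,
    returning the fused label and the fused probability vector."""
    p_speech = np.asarray(p_speech, dtype=float)
    p_image = np.asarray(p_image, dtype=float)
    fused = w_speech * p_speech + (1.0 - w_speech) * p_image
    return EMOTIONS[int(np.argmax(fused))], fused

# Usage with mock softmax outputs (e.g. from CNNs trained on RAVDESS audio
# and on visible/IR face images):
speech_probs = [0.05, 0.05, 0.55, 0.05, 0.10, 0.10, 0.05, 0.05]
image_probs  = [0.02, 0.03, 0.70, 0.05, 0.05, 0.05, 0.05, 0.05]
label, probs = late_fusion(speech_probs, image_probs)
print(label, probs.round(3))
```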


2011 ◽  
Author(s):  
Guang Yang ◽  
Yafeng Yin ◽  
Hong Man ◽  
Sachi Desai
