Heterogeneous multi-parameter feature-level fusion for multi-source power sensing terminals: fusion mode, fusion framework and scene verification

2021 ◽ Author(s): Yanzhi Sun, Yuming Liu, Feng Tian, Chen Cui, Chaoguang Li
2020 ◽ Vol 10 (8) ◽ pp. 2928 ◽ Author(s): Rui Zhang, Xinming Tang, Shucheng You, Kaifeng Duan, Haiyan Xiang, ...

Remote sensing data play an important role in classifying land use/land cover (LULC) information from sensors with different spectral, spatial and temporal resolutions. The fusion of an optical image and a synthetic aperture radar (SAR) image is significant for the study of LULC change and simulation in cloudy mountain areas. This paper proposes a novel feature-level fusion framework in which Landsat Operational Land Imager (OLI) images with different cloud covers and a fully polarized Advanced Land Observing Satellite-2 (ALOS-2) image are selected for LULC classification experiments. Taking the karst mountains of Chongqing as the study area, spectral, textural and spatial features are extracted from the optical and SAR images, supplemented by the normalized difference vegetation index (NDVI), elevation, slope and other relevant information. The fused feature image is then subjected to object-oriented multi-scale segmentation, and an improved support vector machine (SVM) model is used for classification. The results show that the proposed framework benefits from multi-source feature fusion, achieves high classification performance and is applicable in mountain areas. The overall accuracy (OA) exceeded 85%, with a Kappa coefficient of 0.845. For forest, gardenland, water and artificial surfaces, the fused image was classified more precisely than either single data source. In addition, the ALOS-2 data have a comparative advantage in extracting shrubland, water and artificial surfaces. This work aims to provide a reference for selecting suitable data and methods for LULC classification in cloudy mountain areas: under heavy cloud cover the fused image features should be preferred; during periods of low cloudiness the Landsat OLI data should be selected; and when no optical remote sensing data are available, the fully polarized ALOS-2 data are an appropriate substitute.
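The following is a minimal sketch of the feature-level fusion and SVM classification step described in the abstract, assuming the per-object optical (OLI), SAR (ALOS-2) and auxiliary (NDVI, elevation, slope) features have already been extracted and co-registered. The random arrays are placeholders for real features, and a plain RBF-kernel SVC from scikit-learn stands in for the paper's unspecified "improved" SVM.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

def fuse_features(optical_feats, sar_feats, aux_feats):
    """Feature-level fusion: concatenate per-object feature vectors."""
    return np.hstack([optical_feats, sar_feats, aux_feats])

# Hypothetical placeholders for the extracted features (not real data).
rng = np.random.default_rng(0)
n_objects = 1000
optical = rng.normal(size=(n_objects, 12))   # e.g. OLI band statistics + texture
sar = rng.normal(size=(n_objects, 6))        # e.g. polarimetric features
aux = rng.normal(size=(n_objects, 3))        # NDVI, elevation, slope
labels = rng.integers(0, 6, size=n_objects)  # LULC class labels

X = fuse_features(optical, sar, aux)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0)

# Standard RBF SVM as a stand-in classifier on the fused features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("OA:", accuracy_score(y_test, pred))
print("Kappa:", cohen_kappa_score(y_test, pred))
```

In practice the rows would correspond to the segments produced by the object-oriented multi-scale segmentation rather than to individual pixels.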


2010 ◽ Vol 2 (1) ◽ pp. 28-38 ◽ Author(s): K. Kannan, S. Arumuga Perumal, K. Arulmozhi

2021 ◽ Author(s): Zhibing Xie

Understanding human emotional states is indispensable for our daily interaction, and we can enjoy a more natural and friendly human-computer interaction (HCI) experience by fully utilizing users' affective states. In emotion recognition applications, multimodal information fusion is widely used to discover the relationships among multiple information sources and to make joint use of a number of channels, such as speech, facial expression, gesture and physiological signals. This thesis proposes a new framework for emotion recognition using information fusion based on the estimation of information entropy. Novel information-theoretic learning techniques are applied to feature-level fusion and score-level fusion. The most critical issues for feature-level fusion are feature transformation and dimensionality reduction. Existing methods depend on second-order statistics, which are optimal only for Gaussian-like distributions. By incorporating information-theoretic tools, a new feature-level fusion method based on kernel entropy component analysis is proposed. For score-level fusion, most previous methods focus on predefined rule-based approaches, which are usually heuristic. In this thesis, a connection between information fusion and the maximum correntropy criterion is established for effective score-level fusion. The feature-level and score-level fusion methods are then combined into a two-stage fusion platform. The proposed methods are applied to audiovisual emotion recognition, and their effectiveness is evaluated by experiments on two publicly available audiovisual emotion databases. The experimental results demonstrate that the proposed algorithms achieve improved performance in comparison with existing methods. The work of this thesis offers a promising direction for designing more advanced emotion recognition systems based on multimodal information fusion and has great significance for the development of intelligent human-computer interaction systems.
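As a concrete illustration of the feature-level fusion step, the following is a minimal sketch of kernel entropy component analysis (KECA) applied to concatenated audio-visual feature vectors. The Gaussian kernel width, the number of retained components and the random placeholder features are assumptions for the example, not values from the thesis; KECA here simply ranks the kernel principal axes by their contribution to the Renyi quadratic entropy estimate rather than by eigenvalue.

```python
import numpy as np
from scipy.spatial.distance import cdist

def keca(X, n_components=10, sigma=1.0):
    """Kernel Entropy Component Analysis (sketch).

    Projects the training data onto the kernel principal axes that contribute
    most to the Renyi quadratic entropy estimate (lambda_i * (1^T e_i)^2),
    using an uncentred Gaussian kernel matrix."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma ** 2))
    eigvals, eigvecs = np.linalg.eigh(K)              # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    contrib = eigvals * (eigvecs.sum(axis=0) ** 2)    # entropy contribution per axis
    idx = np.argsort(contrib)[::-1][:n_components]
    # Projection of the training samples onto the selected axes.
    return eigvecs[:, idx] * np.sqrt(np.clip(eigvals[idx], 0.0, None))

# Hypothetical fused audio-visual feature vectors (placeholders, not real data).
rng = np.random.default_rng(0)
audio_feats = rng.normal(size=(200, 30))
visual_feats = rng.normal(size=(200, 40))
fused = np.hstack([audio_feats, visual_feats])        # feature-level concatenation
low_dim = keca(fused, n_components=10, sigma=5.0)
print(low_dim.shape)                                  # (200, 10)
```

The reduced representation would then feed a classifier, with the score-level (maximum correntropy) fusion applied to the classifier outputs in the second stage of the platform.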

