Research on Kernel-Based Feature Fusion Algorithm in Multimodal Recognition

Author(s):  
Xu Xiaona ◽  
Pan Xiuqin ◽  
Zhao Yue ◽  
Pu Qiumei
2021 ◽  
pp. 1-18
Author(s):  
R.S. Rampriya ◽  
Sabarinathan ◽  
R. Suganya

In the near future, the combination of UAVs (Unmanned Aerial Vehicles) and computer vision will play a vital role in periodically monitoring the condition of railroads to ensure passenger safety. The most significant module in railroad visual processing is obstacle detection, where the concern is obstacles that have fallen near the track, inside or outside the gauge. This makes it important to detect and segment the railroad into three key regions: gauge inside, rails, and background. Traditional railroad segmentation methods depend on either manual feature selection or expensive dedicated devices such as Lidar, which is typically less reliable for railroad semantic segmentation. Moreover, cameras mounted on moving platforms such as drones produce high-resolution images, and segmenting precise pixel information from these aerial images is challenging because of the cluttered railroad surroundings. RSNet is a multi-level feature fusion algorithm for segmenting railroad aerial images captured by UAV. It combines an attention-based convolutional encoder for feature extraction, which is robust and computationally efficient, with a modified residual decoder for segmentation that considers only essential features, yielding lower overhead and higher performance even on real-time railroad drone imagery. The network is trained and tested on a railroad scenic view segmentation dataset (RSSD) that we built from real-time UAV images; it achieves a Dice coefficient of 0.973 and a Jaccard index of 0.94 on the test data, outperforming existing approaches such as the residual unit and residual squeeze net.
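
As a concrete reference for the reported metrics, the sketch below shows how a mean Dice coefficient and Jaccard index are typically computed for a predicted label map against ground truth; the array names and the three-class layout (background, rails, gauge inside) are illustrative assumptions, not code from RSNet.

```python
import numpy as np

def dice_and_jaccard(pred, target, num_classes=3, eps=1e-7):
    """Mean Dice coefficient and Jaccard index over classes.

    pred, target: integer label maps of the same shape
    (e.g. 0 = background, 1 = rails, 2 = gauge inside).
    """
    dices, jaccards = [], []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        dices.append((2 * inter + eps) / (p.sum() + t.sum() + eps))
        jaccards.append((inter + eps) / (union + eps))
    return float(np.mean(dices)), float(np.mean(jaccards))

# Toy usage: a 4x4 label map with a single mislabeled pixel.
pred = np.array([[0, 0, 1, 1],
                 [0, 2, 2, 1],
                 [0, 2, 2, 1],
                 [0, 0, 0, 1]])
target = pred.copy()
target[1, 1] = 0
print(dice_and_jaccard(pred, target))
```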


2021 ◽  
Vol 2078 (1) ◽  
pp. 012021
Author(s):  
Hongyang Zhao ◽  
Qiang Xie

Considering that the traditional graph-model approach to extracting keywords from the existing mass of educational resources uses only statistical features or general semantic features, and lacks the ability to mine and exploit multi-factor semantic features, this paper proposes an improved TextRank-based algorithm for keyword extraction from educational resources. Taking into account the characteristics of Chinese text and the shortcomings of the traditional TextRank algorithm, the improved algorithm fuses multiple features: the importance of words in the corpus, their location information in the text, and their word attributes. Experimental results show that this method achieves higher accuracy, recall, and F-measure than traditional algorithms when extracting keywords from educational resources, improving the quality of keyword extraction and supporting better utilization and management of educational resources.
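
A minimal sketch of the multi-feature idea, assuming a sliding co-occurrence window and a hypothetical per-word weight that already fuses corpus importance, position, and word-attribute scores; it uses networkx PageRank as the ranking step and is not the authors' implementation.

```python
import networkx as nx

def extract_keywords(tokens, word_weight, window=3, top_k=5):
    """TextRank variant: co-occurrence graph + PageRank biased by fused word weights.

    tokens      : list of words (already segmented, e.g. by jieba for Chinese text)
    word_weight : dict word -> weight combining corpus importance, position,
                  and word-attribute scores (assumed precomputed)
    """
    g = nx.Graph()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + window]:
            if w != v:
                prev = g.get_edge_data(w, v, {"weight": 0})["weight"]
                g.add_edge(w, v, weight=prev + 1)
    # Bias the random-walk restart distribution by the fused feature weights.
    personalization = {w: word_weight.get(w, 1.0) for w in g.nodes}
    scores = nx.pagerank(g, weight="weight", personalization=personalization)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

tokens = "feature fusion improves keyword extraction for educational resource text".split()
weights = {"feature": 2.0, "fusion": 2.0, "keyword": 1.5, "extraction": 1.5}
print(extract_keywords(tokens, weights, top_k=3))
```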


2014 ◽  
Vol 610 ◽  
pp. 393-400
Author(s):  
Jie Cao ◽  
Xuan Liang

A complex background, especially when the object is similar in color to the background or the target becomes occluded, can easily lead to tracking failure. Therefore, a fusion algorithm based on feature confidence and similarity is proposed that can adaptively adjust the fusion strategy when occlusion occurs. The confidence is also used in occlusion detection to overcome inaccurate occlusion determination when the target is blocked by a similar-looking object. Experimental results show that the proposed algorithm is not only more robust under occlusion but also performs well in other complex scenes.
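
A hedged sketch of the general idea of confidence-weighted feature fusion for tracking: two per-pixel likelihood maps (for instance from color and texture cues) are blended with weights derived from each feature's confidence, and the weighting is adjusted when an occlusion flag is raised. All names and the specific weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_likelihoods(color_map, texture_map, conf_color, conf_texture, occluded=False):
    """Adaptively blend two feature likelihood maps by their confidences.

    When occlusion is detected, the ambiguous color cue is down-weighted
    further so a similarly colored occluder misleads the tracker less.
    """
    if occluded:
        conf_color *= 0.5  # illustrative penalty on the ambiguous cue
    w_color = conf_color / (conf_color + conf_texture + 1e-7)
    w_texture = 1.0 - w_color
    return w_color * color_map + w_texture * texture_map

color_map = np.random.rand(64, 64)
texture_map = np.random.rand(64, 64)
fused = fuse_likelihoods(color_map, texture_map, conf_color=0.8, conf_texture=0.6, occluded=True)
print(fused.shape)
```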


Author(s):  
Jiuwen Cao ◽  
Dinghan Hu ◽  
Yaomin Wang ◽  
Jianzhong Wang ◽  
Baiying Lei

2017 ◽  
Vol 22 (S5) ◽  
pp. 10883-10895
Author(s):  
Gui-Xian Xu ◽  
Hai-Shen Yao ◽  
Changzhi Wang

2019 ◽  
Vol 16 (4) ◽  
pp. 263-274
Author(s):  
Chunhua Zhang ◽  
Sijia Guo ◽  
Jingbo Zhang ◽  
Xizi Jin ◽  
Yanwen Li ◽  
...  

Protein-protein interactions play an important role in biological and cellular processes. Biochemical experiments are the most reliable approach to identifying protein-protein interactions, but they are time-consuming and expensive, which is one of the main reasons why only a small fraction of complete protein-protein interaction networks is available so far. Hence, accurate computational methods are in great need to predict protein-protein interactions. In this work, we propose a new weighted feature fusion algorithm for protein-protein interaction prediction that extracts both protein sequence features and evolutionary features, so that both global and local information are used to identify protein-protein interactions. The method employs the maximum margin criterion for feature selection and a support vector machine for classification. Experimental results on 11,188 protein pairs showed that our method has better performance and robustness. On the independent Helicobacter pylori dataset, the method achieved 99.59% sensitivity and 93.66% prediction accuracy, while the maximum margin criterion alone achieved 88.03%. The results indicate that our method is more efficient in predicting protein-protein interactions than six other state-of-the-art peer methods.
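
A minimal sketch of the weighted feature fusion plus classification pipeline using scikit-learn. The random matrices stand in for sequence and evolutionary descriptors, the fusion weights are illustrative, and LinearDiscriminantAnalysis is used only as a readily available stand-in for the maximum margin criterion projection, which scikit-learn does not provide.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pairs = 400

# Placeholder descriptors per protein pair (real work would compute sequence
# features and evolutionary features from alignments, etc.).
seq_feat = rng.normal(size=(n_pairs, 60))
evo_feat = rng.normal(size=(n_pairs, 40))
y = rng.integers(0, 2, size=n_pairs)  # 1 = interacting, 0 = non-interacting

# Weighted fusion: concatenate the two feature views with illustrative weights.
w_seq, w_evo = 0.6, 0.4
X = np.hstack([w_seq * seq_feat, w_evo * evo_feat])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Supervised projection (stand-in for the maximum margin criterion) + SVM classifier.
proj = LinearDiscriminantAnalysis(n_components=1).fit(X_tr, y_tr)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(proj.transform(X_tr), y_tr)
print("accuracy:", clf.score(proj.transform(X_te), y_te))
```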


2020 ◽  
Vol 12 (5) ◽  
pp. 781 ◽  
Author(s):  
Yaochen Liu ◽  
Lili Dong ◽  
Yang Chen ◽  
Wenhai Xu

Infrared and visible image fusion technology provides many benefits for human vision and computer image processing tasks, including enriched useful information and enhanced surveillance capability. However, existing fusion algorithms struggle to effectively integrate visual features from complex source images. In this paper, we design a novel infrared and visible image fusion algorithm based on visual attention technology, in which a special visual attention system and a saliency-map-based feature fusion strategy are proposed. The visual attention system first uses the co-occurrence matrix to calculate image texture complexity, which is then used to select a particular modality for computing a saliency map. Moreover, we improve the iterative operator of the original visual attention model (VAM): a fair competition mechanism is designed to ensure that visual features in detail regions are extracted accurately. For the feature fusion strategy, we use the obtained saliency map to combine the visual attention features and appropriately enhance tiny features so that weak targets remain observable. Unlike general fusion algorithms, the proposed algorithm not only preserves the regions of interest but also retains rich fine details, improving the visual perception of both humans and computers. Moreover, experimental results under complicated ambient conditions show that the proposed algorithm outperforms state-of-the-art algorithms in both qualitative and quantitative evaluations, and this study can be extended to other types of image fusion.
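
A small NumPy sketch of the saliency-weighted fusion step: two saliency maps, one per modality, are normalized into per-pixel weights and used to blend the infrared and visible images. How the saliency maps and the detail enhancement are actually computed in the paper (co-occurrence texture analysis, modified VAM iteration) is not reproduced here; the toy saliency maps below are placeholders.

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, sal_ir, sal_vis, eps=1e-7):
    """Blend IR and visible images with per-pixel saliency-derived weights.

    ir, vis          : grayscale images in [0, 1], same shape
    sal_ir, sal_vis  : non-negative saliency maps for each modality
    """
    w_ir = sal_ir / (sal_ir + sal_vis + eps)
    w_vis = 1.0 - w_ir
    fused = w_ir * ir + w_vis * vis
    return np.clip(fused, 0.0, 1.0)

ir = np.random.rand(128, 128)
vis = np.random.rand(128, 128)
# Toy saliency maps; in practice these would come from the visual attention model.
sal_ir = np.abs(ir - ir.mean())
sal_vis = np.abs(vis - vis.mean())
print(saliency_weighted_fusion(ir, vis, sal_ir, sal_vis).shape)
```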

