MultiK-MHKS: A Novel Multiple Kernel Learning Algorithm

2008 ◽  
Vol 30 (2) ◽  
pp. 348-353 ◽  
Author(s):  
Zhe Wang ◽  
Songcan Chen ◽  
Tingkai Sun

2018 ◽  
Vol 112 ◽  
pp. 111-117 ◽  
Author(s):  
Qingchao Wang ◽  
Guangyuan Fu ◽  
Linlin Li ◽  
Hongqiao Wang ◽  
Yongqiang Li

2018 ◽  
Vol 23 (11) ◽  
pp. 3697-3706
Author(s):  
Qingchao Wang ◽  
Guangyuan Fu ◽  
Hongqiao Wang ◽  
Linlin Li ◽  
Shuai Huang

2019 ◽  
Vol 23 (5) ◽  
pp. 1990-2001 ◽  
Author(s):  
Vangelis P. Oikonomou ◽  
Spiros Nikolopoulos ◽  
Ioannis Kompatsiaris

2018 ◽  
Vol 21 (2) ◽  
pp. 52-63 ◽  
Author(s):  
Viet Hoai Vo ◽  
Hoang Minh Pham

Introduction: Recognizing human activity in daily environments has attracted much research in computer vision and recognition in recent years. It is a difficult and challenging topic, not only because of background clutter, occlusion, and intra-class variation in image sequences, but also because complex activity patterns arise from person-person and person-object interactions. It is also valuable for many practical applications, such as smart homes, gaming, health care, human-computer interaction, and robotics. We are now living at the beginning of the fourth industrial revolution, in which intelligent systems have become a central subject in both the research and industrial communities. Advances in 3D cameras, such as Microsoft's Kinect and Intel's RealSense, which capture RGB, depth, and skeleton data in real time, create new opportunities for recognizing human activity in daily environments. In this research, we propose a novel approach to daily activity recognition and hypothesize that system performance can be improved by combining multimodal features.
Methods: We extract spatial-temporal features for the human body, with a part-based representation built on skeleton data from RGB-D input. We then combine multiple features from the two sources to yield robust features for activity representation. Finally, we use a Multiple Kernel Learning algorithm to fuse the multiple features and identify the activity label for each video. To show generalizability, the proposed framework was tested on two challenging datasets using a cross-validation scheme.
Results: The experimental results show good outcomes on both the CAD120 and MSR-Daily Activity 3D datasets, with accuracies of 94.16% and 95.31%, respectively.
Conclusion: These results show that the proposed method is effective and feasible for activity recognition systems in daily environments.
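The core idea of fusing multimodal features with multiple kernel learning can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic feature matrices, the RBF kernels, the `gamma` values, and the fixed 0.5/0.5 kernel weights are all illustrative assumptions (a full MKL solver would learn the weights from data); it only shows how one base kernel per modality is combined into a single kernel for a precomputed-kernel SVM.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Synthetic stand-ins for two modalities (hypothetical dimensions):
# skeleton-based features and RGB-D appearance features for 60 videos.
n = 60
X_skel = rng.normal(size=(n, 20))
X_rgbd = rng.normal(size=(n, 50))
y = rng.integers(0, 2, size=n)  # binary activity labels for illustration

# One base kernel per modality (gamma values are arbitrary choices here).
K_skel = rbf_kernel(X_skel, gamma=0.05)
K_rgbd = rbf_kernel(X_rgbd, gamma=0.02)

# Fixed convex combination of the base kernels; an MKL algorithm would
# instead optimize these weights jointly with the classifier.
weights = [0.5, 0.5]
K = weights[0] * K_skel + weights[1] * K_rgbd

# Train an SVM directly on the fused (precomputed) kernel matrix.
clf = SVC(kernel="precomputed").fit(K, y)
pred = clf.predict(K)  # predictions on the training kernel, for illustration
```

In practice each video would contribute one row of features per modality, and the learned kernel weights indicate how much each modality contributes to the final decision.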


2020 ◽  
Vol 13 (1) ◽  
pp. 50
Author(s):  
Lei Pan ◽  
Chengxun He ◽  
Yang Xiang ◽  
Le Sun

In this paper, superpixel features and extended multi-attribute profiles (EMAPs) are embedded in a multiple kernel learning framework to simultaneously exploit local and multiscale information in both the spatial and spectral dimensions for hyperspectral image (HSI) classification. First, the original HSI is reduced to three principal components in the spectral domain using principal component analysis (PCA). Then, a fast and efficient segmentation algorithm, simple linear iterative clustering (SLIC), is used to segment the principal components into a given number of superpixels. By setting different numbers of superpixels, a set of multiscale homogeneous regional features is extracted. Based on the extracted superpixels and their first-order adjacent superpixels, EMAPs with multimodal features are extracted and embedded into the multiple kernel framework to generate different spatial and spectral kernels. Finally, a PCA-based kernel learning algorithm is used to learn an optimal kernel that contains multiscale and multimodal information. The experimental results on two well-known datasets validate the effectiveness and efficiency of the proposed method compared with several state-of-the-art HSI classifiers.

