A New Multi-task Learning Method for Personalized Activity Recognition

Author(s): Xu Sun, Hisashi Kashima, Ryota Tomioka, Naonori Ueda, Ping Li

Symmetry, 2018, Vol 10 (9), pp. 385
Author(s): Yoosoo Jeong, Seungmin Lee, Daejin Park, Kil Park

Recently, there have been many studies on the automatic extraction of facial information using machine learning, and age estimation from frontal face images is becoming important for various applications. The proposed work builds on a binary classifier that only determines whether two input images belong to a similar class, and trains a convolutional neural network (CNN) with a deep metric learning method based on a Siamese network. To help the Siamese network training converge, two images whose age difference falls below a certain threshold are treated as the same class, which increases the ratio of positive pairs in the database. The deep metric learning method trains the CNN to measure similarity based only on age data, but we found that the implicitly accumulated gender information can also be used to compare ages. Based on this observation, we adopt a multi-task learning approach that also considers gender data for more accurate age estimation. In the experiments, we evaluate our approach on the MORPH and MegaAge-Asian datasets and measure gender classification accuracy when only age data are used for training. The gender classification results show that the proposed architecture, trained with age data alone, performs age comparison by exploiting a self-generated gender feature. The accuracy gain from multi-task learning with the simultaneous consideration of age and gender data is also discussed. Our approach achieves the best accuracy among deep-metric-learning-based methods on the MORPH dataset, and it also outperforms state-of-the-art age estimation results on the MegaAge-Asian and MORPH datasets.
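The pairing rule and the multi-task setup described above can be illustrated with a short sketch. This is a minimal PyTorch-style example, not the authors' implementation: the backbone, the age-difference threshold `TAU`, the contrastive margin, and the gender-loss weight are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch): Siamese deep metric learning for age comparison
# with an auxiliary gender head as a multi-task variant. Hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

TAU = 5.0            # assumed age-difference threshold for a "positive" pair
MARGIN = 1.0         # assumed contrastive-loss margin
LAMBDA_GENDER = 0.5  # assumed weight of the auxiliary gender loss

class SiameseAgeGenderNet(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        # Small CNN backbone shared by both branches of the Siamese network.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.gender_head = nn.Linear(embed_dim, 2)  # auxiliary gender task

    def forward(self, img_a, img_b):
        za, zb = self.backbone(img_a), self.backbone(img_b)
        return za, zb, self.gender_head(za), self.gender_head(zb)

def multitask_loss(za, zb, ga_logits, gb_logits, age_a, age_b, gender_a, gender_b):
    # Pairs whose age difference is below TAU are treated as the same class.
    same = (torch.abs(age_a - age_b) <= TAU).float()
    dist = F.pairwise_distance(za, zb)
    metric_loss = (same * dist.pow(2)
                   + (1 - same) * torch.clamp(MARGIN - dist, min=0).pow(2)).mean()
    gender_loss = F.cross_entropy(ga_logits, gender_a) + F.cross_entropy(gb_logits, gender_b)
    return metric_loss + LAMBDA_GENDER * gender_loss

# Toy usage with random tensors standing in for face crops.
model = SiameseAgeGenderNet()
a, b = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
ages_a, ages_b = torch.randint(18, 70, (8,)).float(), torch.randint(18, 70, (8,)).float()
gen_a, gen_b = torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))
za, zb, ga, gb = model(a, b)
loss = multitask_loss(za, zb, ga, gb, ages_a, ages_b, gen_a, gen_b)
loss.backward()
```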


2020, Vol 2 (4), pp. 288-298
Author(s): Yinggang Li, Shigeng Zhang, Bing Zhu, Weiping Wang

2019, Vol 351, pp. 146-155
Author(s): Jianhua Su, Bin Chen, Hong Qiao, Zhi-yong Liu

2020, Vol 190, pp. 105137
Author(s): Yanshan Xiao, Zheng Chang, Bo Liu

Sensors, 2020, Vol 20 (20), pp. 5852
Author(s): Yuanzhi Wang, Tao Lu, Tao Zhang, Yuntao Wu

Pedestrian detection is an essential problem in computer vision that has achieved tremendous success under controllable conditions with visible-light imaging sensors in recent years. However, most existing methods do not consider low-light environments, which are very common in real-world applications. In this paper, we propose a novel pedestrian detection algorithm that uses multi-task learning to address this challenge in low-light environments. Specifically, the proposed multi-task learning method differs from the parameter-sharing mechanism most commonly used in deep learning: we design a multi-task learning method with feature-level fusion and sharing. The proposed approach contains three parts: an image relighting subnetwork, a pedestrian detection subnetwork, and a feature-level multi-task fusion learning module. The image relighting subnetwork adjusts the quality of low-light images for detection, the pedestrian detection subnetwork learns enhanced features for prediction, and the feature-level multi-task fusion learning module fuses and shares features among the component networks to boost relighting and detection performance simultaneously. Experimental results show that the proposed approach consistently and significantly improves pedestrian detection performance on low-light images obtained by visible-light imaging sensors.
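The three-part structure of this approach (relighting subnetwork, detection subnetwork, feature-level fusion module) can be sketched schematically. The PyTorch example below is an assumption for illustration only: the layer shapes, the concatenation-based fusion, and the dense per-cell detection head are not the paper's actual architecture.

```python
# Schematic sketch (assumed PyTorch) of feature-level multi-task fusion:
# a relighting branch and a detection branch exchange features through a fusion module.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class RelightDetectNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.relight_enc = conv_block(3, 32)   # image relighting subnetwork (encoder)
        self.detect_enc = conv_block(3, 32)    # pedestrian detection subnetwork (encoder)
        # Feature-level fusion: concatenate both branches and mix with a 1x1 conv.
        self.fusion = nn.Conv2d(64, 32, 1)
        self.relight_dec = nn.Conv2d(32, 3, 3, padding=1)   # produces the relit image
        self.detect_head = nn.Conv2d(32, 5, 1)              # per-cell (score, x, y, w, h)

    def forward(self, low_light_img):
        fr = self.relight_enc(low_light_img)
        fd = self.detect_enc(low_light_img)
        shared = torch.relu(self.fusion(torch.cat([fr, fd], dim=1)))
        relit = torch.sigmoid(self.relight_dec(shared + fr))   # fused features aid relighting
        det_map = self.detect_head(shared + fd)                # and detection simultaneously
        return relit, det_map

# Toy forward pass on a random "low-light" batch.
net = RelightDetectNet()
relit, det_map = net(torch.rand(2, 3, 128, 128))
print(relit.shape, det_map.shape)  # (2, 3, 128, 128) (2, 5, 128, 128)
```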


2021
Author(s): Jiacheng Mai, Zhiyuan Chen, Chunzhi Yi, Zhen Ding

Lower-limb exoskeleton robots improve human motor ability and can facilitate superior rehabilitative training. By training on large datasets, many of the mobile and sensing devices that can currently be worn on the body can employ machine learning approaches to forecast and classify people's movement characteristics, which could help exoskeleton robots better predict human activities. Two popular datasets are PAMAP2, which measures people's movement with inertial sensors, and WISDM, which collects people's activity information through mobile phones. Focusing on human activity recognition, this paper applies traditional machine learning methods and deep learning methods to train and test on these datasets. We find that a decision tree model achieves the highest prediction performance on the two datasets, 99% and 72% respectively, while also requiring the least time. In addition, a comparison of signals collected from different parts of the human body shows that signals from the hands give the best performance in recognizing human movement types.
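As a concrete illustration of this kind of classical pipeline, the sketch below trains a decision tree on windowed accelerometer features. It is a minimal example assuming scikit-learn, with synthetic signals standing in for PAMAP2/WISDM; the window length and the hand-crafted features are illustrative choices, not the paper's setup.

```python
# Minimal sketch of a decision-tree activity-recognition pipeline (assumed scikit-learn).
# Synthetic tri-axial accelerometer windows stand in for PAMAP2 / WISDM data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
WINDOW = 128  # assumed samples per window

def make_windows(n, activity):
    # Crude synthetic signals: each "activity" gets a different dominant frequency.
    t = np.arange(WINDOW)
    freq = 0.02 * (activity + 1)
    sig = np.sin(2 * np.pi * freq * t)[None, :, None] + 0.3 * rng.standard_normal((n, WINDOW, 3))
    return sig, np.full(n, activity)

def window_features(windows):
    # Simple hand-crafted features per axis: mean, std, min, max.
    return np.concatenate([windows.mean(1), windows.std(1),
                           windows.min(1), windows.max(1)], axis=1)

X_raw, y = zip(*(make_windows(200, a) for a in range(4)))
X = window_features(np.concatenate(X_raw))
y = np.concatenate(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=10, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```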


2022, Vol 40 (4), pp. 1-28
Author(s): Peng Zhang, Baoxi Liu, Tun Lu, Xianghua Ding, Hansu Gu, ...

User-generated content (UGC) in social media is a direct expression of users' interests, preferences, and opinions, and user behavior prediction based on UGC has been increasingly investigated in recent years. Compared to learning a person's behavioral patterns in each social media site separately, jointly predicting user behavior across multiple social media sites so that the sites complement each other (cross-site user behavior prediction) can be more accurate. However, cross-site user behavior prediction based on UGC is a challenging task due to the difficulty of cross-site data sampling, the complexity of UGC modeling, and the uncertainty of knowledge sharing among different sites. To address these problems, we propose a Cross-Site Multi-Task (CSMT) learning method to jointly predict user behavior in multiple social media sites. CSMT mainly derives from hierarchical attention networks and multi-task learning. With this method, the UGC in each social media site obtains fine-grained representations in terms of words, topics, posts, hashtags, and time slices, as well as the relevance among them, and prediction tasks in different social media sites can be implemented jointly and complement each other. Using two cross-site datasets sampled from Weibo, Douban, Facebook, and Twitter, we validate our method's superiority on several classification metrics compared with existing related methods.
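The core idea (a shared attention encoder with one prediction head per site, trained jointly so the sites complement each other) can be sketched as follows. This is a heavily simplified, assumed PyTorch example: the single attention level, the embedding sizes, and the two-head setup are illustrative and do not reproduce CSMT's actual word/topic/post/hashtag/time-slice hierarchy.

```python
# Simplified sketch (assumed PyTorch) of cross-site multi-task learning:
# a shared attention encoder over a user's posts, with one behavior-prediction head per site.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedPostEncoder(nn.Module):
    """Attention-pools a sequence of post embeddings into one user representation."""
    def __init__(self, dim=64):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.attn = nn.Linear(dim, 1)

    def forward(self, posts):                    # posts: (batch, n_posts, dim)
        h = torch.tanh(self.proj(posts))
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over posts
        return (w * h).sum(dim=1)                # (batch, dim)

class CrossSiteModel(nn.Module):
    def __init__(self, dim=64, n_classes=2):
        super().__init__()
        self.encoder = SharedPostEncoder(dim)           # shared across sites
        self.head_site_a = nn.Linear(dim, n_classes)    # e.g. a Weibo task head (assumed)
        self.head_site_b = nn.Linear(dim, n_classes)    # e.g. a Douban task head (assumed)

    def forward(self, posts_a, posts_b):
        return self.head_site_a(self.encoder(posts_a)), self.head_site_b(self.encoder(posts_b))

# Joint training step on toy embeddings standing in for UGC features.
model = CrossSiteModel()
posts_a, posts_b = torch.randn(8, 20, 64), torch.randn(8, 12, 64)
y_a, y_b = torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))
logits_a, logits_b = model(posts_a, posts_b)
loss = F.cross_entropy(logits_a, y_a) + F.cross_entropy(logits_b, y_b)
loss.backward()
```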

