An Accurate Device-Free Action Recognition System Using Two-Stream Network

2020 ◽  
Vol 69 (7) ◽  
pp. 7930-7939 ◽  
Author(s):  
Biyun Sheng ◽  
Yuanrun Fang ◽  
Fu Xiao ◽  
Lijuan Sun

2021 ◽ 
pp. 404-415
Author(s):  
Biyun Sheng ◽  
Linqing Gui ◽  
Fu Xiao

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 57267-57275 ◽  
Author(s):  
Enqing Chen ◽  
Xue Bai ◽  
Lei Gao ◽  
Haron Chweya Tinega ◽  
Yingqiang Ding

2019 ◽  
Vol 16 (04) ◽  
pp. 1941002 ◽  
Author(s):  
Jing Li ◽  
Yang Mi ◽  
Gongfa Li ◽  
Zhaojie Ju

Facial expression recognition has been widely used in human-computer interaction (HCI) systems. Over the years, researchers have proposed different feature descriptors, implemented different classification methods, and carried out a number of experiments on various datasets for automatic facial expression recognition. However, most have used 2D static images or 2D video sequences for the recognition task. The main limitations of 2D-based analysis are variations in pose and illumination, which reduce recognition accuracy. An alternative is to incorporate depth information acquired by a 3D sensor, because depth is invariant to both pose and illumination. In this paper, we present a two-stream convolutional neural network (CNN)-based facial expression recognition system and test it on our own RGB-D facial expression dataset, collected with a Microsoft Kinect for XBOX in unspontaneous scenarios, since the Kinect is an inexpensive and portable device that captures both RGB and depth information. Our fully annotated dataset includes seven expressions (i.e., neutral, sadness, disgust, fear, happiness, anger, and surprise) for 15 subjects (9 males and 6 females) aged 20 to 25. The two individual CNNs are identical in architecture but do not share parameters. To combine the detection results produced by the two CNNs, we propose a late fusion approach. The experimental results demonstrate that the proposed two-stream network using RGB-D images outperforms networks using only RGB images or only depth images.
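The late fusion described above can be sketched as score-level averaging of the two streams' class probabilities. This is a minimal illustration, not the paper's implementation: the placeholder score vectors and the equal stream weighting are assumptions, whereas in the actual system the scores would come from the trained RGB and depth CNNs.

```python
# Minimal sketch of score-level late fusion for a two-stream classifier.
# The per-stream softmax scores below are hypothetical placeholders; in
# the paper's system they would come from the RGB CNN and the depth CNN.

EXPRESSIONS = ["neutral", "sadness", "disgust", "fear",
               "happiness", "anger", "surprise"]

def late_fusion(rgb_scores, depth_scores, w_rgb=0.5):
    """Weighted average of the two streams' class probabilities."""
    assert len(rgb_scores) == len(depth_scores)
    w_depth = 1.0 - w_rgb
    return [w_rgb * r + w_depth * d for r, d in zip(rgb_scores, depth_scores)]

def predict(fused_scores):
    """Return the expression label with the highest fused score."""
    best = max(range(len(fused_scores)), key=fused_scores.__getitem__)
    return EXPRESSIONS[best]

# Example: both streams favor "happiness", so the fused prediction agrees.
rgb   = [0.05, 0.05, 0.05, 0.10, 0.40, 0.05, 0.30]
depth = [0.02, 0.03, 0.05, 0.05, 0.60, 0.05, 0.20]
fused = late_fusion(rgb, depth)
```

Fusing at the score level (rather than concatenating intermediate features) keeps the two streams fully independent, which matches the abstract's note that the CNNs share an architecture but not parameters.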


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4162
Author(s):  
Ma ◽  
Huang ◽  
Li ◽  
Huang ◽  
Ma ◽  
...  

Environmental perception technology based on WiFi has attracted growing attention, and some state-of-the-art techniques have emerged. The wide application of small-scale motion recognition has aroused people's concern. Handwriting letters is a kind of small-scale motion, and WiFi-based recognition of small-scale motion has two characteristics. First, small-scale actions have little impact on WiFi signal changes in the environment. Second, the writing trajectories of certain uppercase letters are the same as those of their corresponding lowercase letters, differing only in size. These characteristics make small-scale motion recognition challenging, and a system that recognizes small-scale motions across many classes with high accuracy urgently needs to be studied. Therefore, we propose MCSM-Wri, a device-free handwritten letter recognition system using WiFi, which leverages channel state information (CSI) values extracted from WiFi packets to recognize handwritten letters, including both uppercase and lowercase letters. First, we preprocess the data to provide richer information for recognition. Second, we propose a ten-layer convolutional neural network (CNN) to address the poor recognition caused by the small impact of small-scale actions on environmental changes; thanks to its multi-scale characteristics, it can also distinguish actions with the same trajectory but different sizes. Finally, we collected 6240 instances of 52 handwritten letter classes from 6 volunteers: 3120 instances from a lab and 3120 from a utility room. Using 10-fold cross-validation, the accuracy of MCSM-Wri is 95.31%, 96.68%, and 97.70% for the lab, the utility room, and the lab+utility room, respectively. Compared with Wi-Wri and SignFi, MCSM-Wri improves handwritten letter recognition accuracy by 8.96% to 18.13%.
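The multi-scale idea behind distinguishing same-trajectory, different-size letters can be illustrated with a toy example: pooling a 1-D CSI amplitude sequence at several window sizes yields features in which a compressed (small) and a stretched (large) version of the same trajectory overlap at some scale. This is only a hand-rolled sketch of the principle; the function names, window sizes, and synthetic sequences are assumptions, and MCSM-Wri itself uses a learned ten-layer CNN rather than fixed pooling.

```python
# Toy illustration of the multi-scale principle: pool a 1-D CSI amplitude
# sequence with several window sizes and concatenate the results, so the
# same trajectory written at different sizes produces comparable features.
# All names and sequences here are hypothetical, not the paper's code.

def max_pool(seq, window):
    """Non-overlapping max pooling over a 1-D sequence."""
    return [max(seq[i:i + window]) for i in range(0, len(seq), window)]

def multi_scale_features(seq, windows=(2, 4, 8)):
    """Concatenate max-pooled views of the sequence at several scales."""
    feats = []
    for w in windows:
        feats.extend(max_pool(seq, w))
    return feats

# A "small" letter traces the same shape as a "large" one, only
# compressed in time; at matching scales the pooled views coincide.
small = [0, 1, 2, 1, 0, 1, 2, 1]
large = [v for v in small for _ in range(2)]   # same shape, twice as long
f_small = multi_scale_features(small)
f_large = multi_scale_features(large)
```

Here pooling the long sequence with window 4 reproduces the short sequence pooled with window 2, which is the intuition for why a multi-scale network can map uppercase and lowercase versions of the same trajectory to distinguishable yet related representations.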


2015 ◽  
Vol 42 (1) ◽  
pp. 138-143
Author(s):  
ByoungChul Ko ◽  
Mincheol Hwang ◽  
Jae-Yeal Nam

2020 ◽  
Vol 27 ◽  
pp. 2188-2188
Author(s):  
Didik Purwanto ◽  
Rizard Renanda Adhi Pramono ◽  
Yie-Tarng Chen ◽  
Wen-Hsien Fang
