Real-time people counting for indoor scenes

2016 ◽  
Vol 124 ◽  
pp. 27-35 ◽  
Author(s):  
Jun Luo ◽  
Jinqiao Wang ◽  
Huazhong Xu ◽  
Hanqing Lu

2020 ◽  
Vol 2020 ◽  
pp. 1-20
Author(s):  
Zhanjun Hao ◽  
Yu Duan ◽  
Xiaochao Dang ◽  
Tong Zhang

WiFi-based indoor human behavior recognition has become a core technology of wireless network sensing. However, existing human behavior recognition methods face significant challenges in detection accuracy, intrusiveness, and operational complexity. In this paper, we first analyze and summarize existing human motion recognition schemes and, to address their shortcomings, propose CSI-HC, a noninvasive and highly robust complex human motion recognition scheme based on Channel State Information (CSI), using the traditional Chinese martial art XingYiQuan as the complex-motion test case. CSI-HC consists of two phases: offline and online. In the offline phase, human motion data are collected on a commercial Atheros NIC, and a denoising pipeline combining a Butterworth low-pass filter and wavelet decomposition removes outliers from the motion data. A Restricted Boltzmann Machine (RBM) is then trained to classify the denoised data and build the offline fingerprint database. In the online phase, SoftMax regression corrects the RBM classification of motion data collected in real time, and the processed data are matched against the offline fingerprints to recognize complex human motions. Finally, through repeated experiments in three classic indoor scenes, we analyze how parameter settings and user diversity affect recognition accuracy, evaluate the robustness of CSI-HC, and compare its performance with existing motion recognition methods. The results show that CSI-HC achieves an average recognition rate of 85.4% across the three scenes and offers higher stability and robustness than the compared algorithms in terms of motion complexity and indoor recognition accuracy.
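
As a rough illustration of the offline denoising step mentioned in this abstract, the Python sketch below applies a Butterworth low-pass filter followed by wavelet soft-thresholding to a single CSI amplitude stream. The sampling rate, cutoff frequency, wavelet family, threshold rule, and the helper name denoise_csi are assumptions for the example, not values taken from the paper.

```python
# Minimal sketch of the CSI-HC-style denoising step: Butterworth low-pass
# filtering followed by wavelet soft-thresholding. All parameter values here
# are illustrative assumptions; the abstract does not specify them.
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

def denoise_csi(amplitude, fs=100.0, cutoff=10.0, wavelet="db4", level=4):
    """Denoise a 1-D CSI amplitude stream for one subcarrier (hypothetical helper)."""
    # 1) Butterworth low-pass filter: human-motion energy sits at low frequencies.
    b, a = butter(N=4, Wn=cutoff, btype="low", fs=fs)
    smoothed = filtfilt(b, a, amplitude)

    # 2) Wavelet decomposition with soft thresholding to suppress residual outliers.
    coeffs = pywt.wavedec(smoothed, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(len(smoothed)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)

    return denoised[: len(amplitude)]  # waverec may pad by one sample

# Example: denoise one subcarrier of a simulated CSI trace.
if __name__ == "__main__":
    t = np.linspace(0, 10, 1000)
    raw = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)
    clean = denoise_csi(raw)
```

The denoised streams would then feed the RBM training stage described in the abstract; that stage is not sketched here because the abstract gives no architectural details.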


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Wei Li ◽  
Junhua Gu ◽  
Benwen Chen ◽  
Jungong Han

Scene parsing plays a crucial role in human-robot interaction tasks. As the "eye" of the robot, the RGB-D camera is one of the most important components for collecting multiview images to construct instance-oriented 3D semantic maps of the environment, especially in unknown indoor scenes. Although many studies have developed accurate object-level mapping systems with different types of cameras, these methods either perform instance segmentation only after the mapping is complete or suffer from critical real-time limitations due to the heavy computation required. In this paper, we propose a novel method to incrementally build instance-oriented 3D semantic maps directly from images acquired by an RGB-D camera. To ensure efficient reconstruction of 3D objects with semantic and instance IDs, the input RGB images are processed by a real-time deep-learned object detector. To obtain accurate point cloud clusters, we adopt a Gaussian mixture model as an optimizer after the 2D-to-3D projection. Next, we present a data association strategy that updates class probabilities across frames. Finally, a map integration strategy efficiently fuses the objects' 3D shapes, locations, and instance IDs. We evaluate our system on different indoor scenes, including offices, bedrooms, and living rooms from the SceneNN dataset, and the results show that our method not only builds the instance-oriented semantic map efficiently but also improves the accuracy of individual instances in the scene.
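
As a rough illustration of the projection and clustering step mentioned in this abstract, the Python sketch below back-projects the depth pixels inside a 2D detection box with pinhole intrinsics and keeps the dominant Gaussian-mixture component as the object's point cloud. The intrinsics, depth scale, two-component foreground/background split, and the helper name box_to_object_points are assumptions for the example, not details from the paper.

```python
# Minimal sketch of 2D-to-3D projection followed by Gaussian-mixture refinement.
# Pinhole intrinsics, depth scale, and the two-component split are illustrative
# assumptions, not details taken from the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

def box_to_object_points(depth, box, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project depth pixels inside a detection box and keep the dominant
    GMM component as the object's point cloud (hypothetical helper)."""
    u0, v0, u1, v1 = box                       # detection box in pixel coordinates
    vs, us = np.mgrid[v0:v1, u0:u1]
    z = depth[v0:v1, u0:u1].astype(np.float64) / depth_scale
    valid = z > 0                              # drop missing depth readings
    z, us, vs = z[valid], us[valid], vs[valid]

    # Pinhole back-projection to camera-frame 3D points.
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    points = np.stack([x, y, z], axis=1)

    # Fit a 2-component GMM and keep the larger cluster, treating the smaller
    # component as background leaking into the detection box.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(points)
    labels = gmm.predict(points)
    keep = labels == np.bincount(labels).argmax()
    return points[keep]

# Example with synthetic data: a flat surface 1.5 m away in a 640x480 depth frame.
if __name__ == "__main__":
    depth = np.full((480, 640), 1500, dtype=np.uint16)
    cloud = box_to_object_points(depth, (200, 150, 300, 250),
                                 fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(cloud.shape)
```

The resulting per-detection point clouds would then be passed to the data association and map integration stages described in the abstract, which are not sketched here.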


Author(s):  
Davide Menini ◽  
Suryansh Kumar ◽  
Martin R. Oswald ◽  
Erik Sandstrom ◽  
Cristian Sminchisescu ◽  
...  
