Alternatives for Locating People Using Cameras and Embedded AI Accelerators: A Practical Approach

2021 ◽  
Vol 7 (1) ◽  
pp. 53
Author(s):  
Ángel Carro-Lagoa ◽  
Valentín Barral ◽  
Miguel González-López ◽  
Carlos J. Escudero ◽  
Luis Castedo

Indoor positioning systems usually rely on RF-based devices that must be carried by the targets, which is non-viable in certain use cases. Recent advances in AI have increased the reliability of person detection in images, thus enabling the use of surveillance cameras for person localization and tracking. This paper evaluates the performance of indoor person localization using cameras and edge devices with AI accelerators. We describe the video processing performed in each edge device, including the selected AI models and the post-processing of their outputs to obtain the positions of the detected persons and allow their tracking. Person localization is based on pose estimation models, as they outperform object detection networks in occlusion situations. Experimental results are obtained with public datasets to show the feasibility of the solution.
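The pipeline the abstract describes ends by converting each detected person's image-space keypoints into a floor-plane position. A minimal sketch of that last step, assuming a COCO-style 17-keypoint pose (ankle indices 15 and 16) and a pre-calibrated 3x3 image-to-floor homography — the function names and the identity homography below are illustrative, not from the paper:

```python
import numpy as np

def feet_pixel(keypoints):
    """Midpoint of the two ankle keypoints (COCO indices 15 and 16, assumed layout)."""
    left, right = keypoints[15], keypoints[16]
    return (left + right) / 2.0

def pixel_to_floor(H, px):
    """Apply a 3x3 homography H mapping image pixels to floor-plane coordinates."""
    v = H @ np.array([px[0], px[1], 1.0])
    return v[:2] / v[2]  # perspective divide

# Toy example: identity homography, so floor coordinates equal pixel coordinates.
H = np.eye(3)
kps = np.zeros((17, 2))
kps[15] = [100.0, 200.0]  # left ankle (pixels)
kps[16] = [110.0, 200.0]  # right ankle (pixels)
pos = pixel_to_floor(H, feet_pixel(kps))
```

In a real deployment H would come from calibrating each camera against known floor points; the resulting per-frame positions would then feed a tracker.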

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 887 ◽  
Author(s):  
Xunwei Tong ◽  
Ruifeng Li ◽  
Lianzheng Ge ◽  
Lijun Zhao ◽  
Ke Wang

Local patch-based methods of object detection and pose estimation are promising. However, to the best of the authors’ knowledge, traditional red-green-blue and depth (RGB-D) patches contain scene interference (foreground occlusion and background clutter) and lack rotation invariance. To solve these problems, a new edge patch is proposed and evaluated in this study. The edge patch is a locally sampled RGB-D patch centered at an edge pixel of the depth image. According to the normal direction of the depth edge, the edge patch is sampled along a canonical orientation, making it rotation invariant. Through a depth-detection process, scene interference is eliminated from the edge patch, which improves robustness. The framework of the edge patch-based method is described, and the method was evaluated on three public datasets. Compared with existing methods, the proposed method achieved a higher average F1-score (0.956) on the Tejani dataset and a better average detection rate (62%) on the Occlusion dataset, even in situations of severe scene interference. These results show that the proposed method has higher detection accuracy and stronger robustness.
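The core idea — find depth-edge pixels, take the edge normal as a canonical orientation, and sample the patch along that orientation — can be sketched as follows. This is a simplified illustration, not the paper's implementation: edge detection here is a plain gradient-magnitude threshold, and the patch is sampled with nearest-neighbour lookups; all names and the threshold value are assumptions.

```python
import numpy as np

def depth_edges(depth, thresh=0.05):
    """Return an edge mask and a per-pixel normal angle from depth gradients."""
    gy, gx = np.gradient(depth)
    mag = np.hypot(gx, gy)
    return mag > thresh, np.arctan2(gy, gx)

def canonical_patch(img, cy, cx, theta, half=1):
    """Nearest-neighbour sample a (2*half+1)^2 patch whose axes are rotated by
    theta, so the patch follows the edge normal and becomes rotation invariant."""
    c, s = np.cos(theta), np.sin(theta)
    out = np.zeros((2 * half + 1, 2 * half + 1))
    for i, dy in enumerate(range(-half, half + 1)):
        for j, dx in enumerate(range(-half, half + 1)):
            y = int(round(cy + dy * c + dx * s))
            x = int(round(cx - dy * s + dx * c))
            out[i, j] = img[np.clip(y, 0, img.shape[0] - 1),
                            np.clip(x, 0, img.shape[1] - 1)]
    return out

# Toy depth map: a vertical step edge between two planes.
depth = np.zeros((8, 8))
depth[:, 4:] = 1.0
mask, angle = depth_edges(depth)
patch = canonical_patch(depth, 0, 3, angle[0, 3])
```

The paper's full method additionally removes foreground/background interference from the patch via depth checks, which this sketch omits.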


2021 ◽  
Author(s):  
Timon Hofer ◽  
Faranak Shamsafar ◽  
Nuri Benbarka ◽  
Andreas Zell

2021 ◽  
Author(s):  
Weiqian Guo ◽  
Rendong Ying ◽  
Peilin Liu ◽  
Weihang Wang

2021 ◽  
Author(s):  
Luis Gustavo Tomal Ribas ◽  
Marta Pereira Cocron ◽  
Joed Lopes Da Silva ◽  
Alessandro Zimmer ◽  
Thomas Brandmeier

Author(s):  
Jian Guan ◽  
Liming Yin ◽  
Jianguo Sun ◽  
Shuhan Qi ◽  
Xuan Wang ◽  
...  

2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Yizhong Yang ◽  
Qiang Zhang ◽  
Pengfei Wang ◽  
Xionglou Hu ◽  
Nengju Wu

Moving object detection in video streams is the first step of many computer vision applications. Background modeling and subtraction is the most common technique for detecting moving objects, yet doing so correctly remains a challenge. Some methods initialize the background model at each pixel from the first N frames. However, such methods cannot perform well in dynamic background scenes, since the background model contains only temporal features. Herein, a novel pixelwise and nonparametric moving object detection method is proposed, which contains both spatial and temporal features. The proposed method can accurately detect the dynamic background. Additionally, several new mechanisms are proposed to maintain and update the background model. Experimental results on image sequences from public datasets show that the proposed method is more robust and effective in dynamic background scenes than existing methods.
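A minimal sketch of the pixelwise, nonparametric (sample-based) scheme the abstract describes: each pixel stores N samples taken from the first frames, a pixel is classified foreground when too few samples match it, and the model is updated by randomly replacing samples at background pixels. This is a generic ViBe-style temporal sketch under assumed parameter values; the paper's method additionally incorporates spatial features and further maintenance mechanisms not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model(frames):
    """Stack the first N grayscale frames as per-pixel background samples."""
    return np.stack(frames, axis=-1).astype(float)

def classify(model, frame, radius=20.0, min_matches=2):
    """Foreground if fewer than min_matches stored samples lie within radius."""
    dist = np.abs(model - frame[..., None])
    matches = (dist < radius).sum(axis=-1)
    return matches < min_matches  # True = foreground

def update(model, frame, fg, rate=0.1):
    """Randomly replace one stored sample at background pixels (conservative update)."""
    h, w, n = model.shape
    pick = (rng.random((h, w)) < rate) & ~fg
    idx = rng.integers(0, n, size=(h, w))
    ys, xs = np.nonzero(pick)
    model[ys, xs, idx[ys, xs]] = frame[ys, xs]
    return model

# Toy sequence: static background of value 100, then an object of value 200 appears.
frames = [np.full((4, 4), 100.0) for _ in range(5)]
model = init_model(frames)
test = np.full((4, 4), 100.0)
test[1, 1] = 200.0  # moving object pixel
fg = classify(model, test)
model = update(model, test, fg)
```

Updating only at background pixels is the conservative policy that keeps true foreground from being absorbed into the model; adding spatial neighbourhood samples, as the paper proposes, is what lets such models tolerate dynamic backgrounds.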


2021 ◽  
Author(s):  
Tomoya Yasunaga ◽  
Tetsuya Oda ◽  
Nobuki Saito ◽  
Aoto Hirata ◽  
Kyohei Toyoshima ◽  
...  
