Robust people detection using depth information from an overhead Time-of-Flight camera

2017 ◽  
Vol 71 ◽  
pp. 240-256 ◽  
Author(s):  
Carlos A. Luna ◽  
Cristina Losada-Gutierrez ◽  
David Fuentes-Jimenez ◽  
Alvaro Fernandez-Rincon ◽  
Manuel Mazo ◽  
...  
2014 ◽  
Vol 75 (17) ◽  
pp. 10769-10786 ◽  
Author(s):  
Carsten Stahlschmidt ◽  
Alexandros Gavriilidis ◽  
Jörg Velten ◽  
Anton Kummert

Author(s):  
Daan Stellinga ◽  
David B. Phillips ◽  
Matthew Edgar ◽  
Sergey Turtaev ◽  
Tomáš Čižmár ◽  
...  

Author(s):  
Alvaro Fernandez-Rincon ◽  
David Fuentes-Jimenez ◽  
Cristina Losada-Gutierrez ◽  
Marta Marron-Romera ◽  
Carlos A. Luna ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1156
Author(s):  
Eu-Tteum Baek ◽  
Hyung-Jeong Yang ◽  
Soo-Hyung Kim ◽  
Gueesang Lee ◽  
Hieyong Jeong

A distance map captured using a time-of-flight (ToF) depth sensor suffers from fundamental problems such as ambiguous depth information on shiny or dark surfaces, optical noise, and mismatched boundaries. Severe depth errors arise on shiny and dark surfaces owing to excess reflection and excess absorption of light, respectively. Dealing with this problem has been a challenge because of the inherent hardware limitations of ToF, which measures distance from the number of reflected photons. This study proposes a distance error correction method that uses three ToF sensors set to different integration times to resolve the ambiguity in depth information. First, the three ToF depth sensors are installed horizontally, each with a different integration time, to capture complementary distance maps. Amplitude maps are then used to estimate error regions based on the amount of received light, and the estimated error regions are refined by exploiting accurate depth information from the neighboring depth sensors that use different integration times. Moreover, we propose a new optical noise reduction filter that accounts for the one-sided bias in the distribution of the depth values. Experimental results verified that the proposed method overcomes these drawbacks of ToF cameras and provides enhanced distance maps.
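A minimal sketch of the multi-integration-time fusion idea described in this abstract, assuming three co-registered depth/amplitude map pairs. The amplitude thresholds (AMP_LOW, AMP_HIGH) and the simple mask-and-average policy are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

AMP_LOW = 50.0     # below this, too little light (dark surfaces); assumed value
AMP_HIGH = 4000.0  # above this, saturation (shiny surfaces); assumed value

def valid_mask(amplitude: np.ndarray) -> np.ndarray:
    """Pixels whose amplitude suggests a reliable depth measurement."""
    return (amplitude > AMP_LOW) & (amplitude < AMP_HIGH)

def fuse_depth(depths: list[np.ndarray], amps: list[np.ndarray]) -> np.ndarray:
    """Fuse depth maps taken at different integration times.

    For each pixel, average the depth values from those sensors whose
    amplitude falls inside the trusted range; pixels valid in no sensor
    are marked invalid (NaN).
    """
    stack = np.stack(depths)                               # (3, H, W)
    masks = np.stack([valid_mask(a) for a in amps]).astype(float)
    weight_sum = masks.sum(axis=0)
    fused = np.where(weight_sum > 0,
                     (stack * masks).sum(axis=0) / np.maximum(weight_sum, 1.0),
                     np.nan)
    return fused
```

In this sketch, a pixel that saturates at the long integration time can still be recovered from the short-integration-time sensor, which is the core of the refinement step the abstract describes.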


Author(s):  
L. Hoegner ◽  
A. Hanel ◽  
M. Weinmann ◽  
B. Jutzi ◽  
S. Hinz ◽  
...  

Obtaining accurate 3D descriptions in the thermal infrared (TIR) is a challenging task due to the low geometric resolution of TIR cameras and the small number of strong features in TIR images. Combining the radiometric information of the thermal infrared with 3D data from another sensor can overcome most of these limitations in 3D geometric accuracy. For dynamic scenes with moving objects or a moving sensor system, a combination with RGB cameras or Time-of-Flight (TOF) cameras is suitable. Because a TOF camera is an active sensor in the near infrared (NIR) and the thermal infrared camera captures the radiation emitted by the objects in the observed scene, the combination of these two sensors for close-range applications is independent of external illumination or textures in the scene. This article focuses on the fusion of data acquired with both a time-of-flight (TOF) camera and a thermal infrared (TIR) camera. Since the radiometric behaviour of many objects differs between the near infrared used by the TOF camera and the thermal infrared spectrum, a direct co-registration with feature points in both intensity images leads to a high number of outliers. A fully automatic workflow is presented for the geometric calibration of both cameras and the relative orientation of the camera system, using one calibration pattern usable in both spectral bands. Based on the relative orientation, a fusion of the TOF depth image and the TIR image is used for scene segmentation and people detection. An adaptive histogram-based depth-level segmentation of the 3D point cloud is combined with a thermal-intensity-based segmentation. The feasibility of the proposed method is demonstrated in an experimental setup with different geometric and radiometric influences that shows the benefit of combining TOF intensity and depth images with thermal infrared images.
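A minimal sketch of the histogram-based depth-level segmentation combined with a thermal-intensity check, in the spirit of the workflow above. The bin width, the peak-finding heuristic, and the thermal threshold are illustrative assumptions rather than the published parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def depth_level_segments(depth: np.ndarray, bin_width_m: float = 0.1) -> np.ndarray:
    """Split a depth image into coarse depth levels via histogram peaks."""
    valid = np.isfinite(depth)
    bins = np.arange(depth[valid].min(), depth[valid].max() + bin_width_m,
                     bin_width_m)
    hist, edges = np.histogram(depth[valid], bins=bins)
    peaks, _ = find_peaks(hist, prominence=hist.max() * 0.1)
    labels = np.full(depth.shape, -1, dtype=int)
    if len(peaks) == 0:
        return labels
    # Label each valid pixel with the index of the nearest histogram peak.
    centers = (edges[:-1] + edges[1:])[peaks] / 2.0
    labels[valid] = np.argmin(
        np.abs(depth[valid][:, None] - centers[None, :]), axis=1)
    return labels

def person_candidates(labels: np.ndarray, thermal: np.ndarray,
                      person_temp_min: float = 305.0) -> np.ndarray:
    """Keep depth segments whose median thermal value suggests a warm body."""
    # person_temp_min is in assumed raw thermal units (~body temperature in K).
    mask = np.zeros(labels.shape, dtype=bool)
    for k in range(labels.max() + 1):
        seg = labels == k
        if seg.any() and np.median(thermal[seg]) > person_temp_min:
            mask |= seg
    return mask
```

The two-stage idea is that depth levels separate people from background structure, while the thermal check rejects depth segments that are not warm enough to be a person.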


2018 ◽  
Vol 8 (11) ◽  
pp. 2017 ◽  
Author(s):  
Gyu-cheol Lee ◽  
Sang-ha Lee ◽  
Jisang Yoo

People counting with surveillance cameras is a key technology for understanding the flow of people and generating heat maps. In recent years, people detection performance has greatly improved with the development of deep-learning-based object detection algorithms. However, in crowded places the detection rate is low because people are often occluded by other people. We propose a people-counting method that uses a stereo camera to resolve the missed detections caused by occlusion. We apply stereo matching to extract a depth image and convert the camera view to a top view using the depth information. People are detected using a height map and an occupancy map, and are then tracked and counted with a Kalman-filter-based tracker. We ran the proposed method on an NVIDIA Jetson TX2 to verify real-time operation on an embedded board. Experimental results show that the proposed method achieves higher accuracy than existing methods and that real-time processing is possible.
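A minimal sketch of converting a depth image into the top-view height and occupancy maps used for detection above. The camera intrinsics (fx, fy, cx, cy), camera height, grid resolution, and the simple pinhole geometry are assumed placeholders, not the paper's calibration.

```python
import numpy as np

def top_view_maps(depth, fx, fy, cx, cy, cam_height_m=3.0, cell_m=0.05,
                  grid=(200, 200)):
    """Back-project depth pixels to 3D and bin them onto a floor grid."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = np.isfinite(depth) & (depth > 0)
    z = depth[valid]
    x = (u[valid] - cx) * z / fx                  # lateral offset from optical axis (m)
    y = cam_height_m - (v[valid] - cy) * z / fy   # height above floor (m), assumed geometry
    gx = np.clip((x / cell_m).astype(int) + grid[1] // 2, 0, grid[1] - 1)
    gz = np.clip((z / cell_m).astype(int), 0, grid[0] - 1)
    height_map = np.zeros(grid)
    occupancy = np.zeros(grid, dtype=int)
    np.maximum.at(height_map, (gz, gx), y)   # tallest 3D point per floor cell
    np.add.at(occupancy, (gz, gx), 1)        # number of 3D points per cell
    return height_map, occupancy
```

Local maxima in the height map around typical human heights, supported by sufficient occupancy counts, then serve as person candidates that a Kalman-filter tracker can associate across frames.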


2015 ◽  
Vol 11 (7) ◽  
pp. 1329-1345 ◽  
Author(s):  
Tim Beyl ◽  
Philip Nicolai ◽  
Mirko D. Comparetti ◽  
Jörg Raczkowsky ◽  
Elena De Momi ◽  
...  

2021 ◽  
Vol 11 (22) ◽  
pp. 10913
Author(s):  
Kaiwen Guo ◽  
Tianqu Zhai ◽  
Elton Pashollari ◽  
Christopher J. Varlamos ◽  
Aymaan Ahmed ◽  
...  

This study describes a contactless vital sign monitoring (CVSM) system capable of measuring heart rate (HR) and respiration rate (RR) using a low-power, indirect time-of-flight (ToF) camera. The system takes advantage of both the active infrared illumination and the additional depth information from the ToF camera to compensate for motion-induced artifacts during HR measurement. The depth information captures how the user moves with respect to the camera and can therefore be used to determine whether an intensity change in the raw signal stems from the underlying heartbeat or from motion. Moreover, from the depth information, the system can acquire the respiration rate by directly measuring the motion of the chest wall during breathing. We also conducted a pilot human study of this system with 29 participants spanning demographics such as age, gender, and skin color. Our study shows that with depth-based motion compensation, the success rate (system measurement within 10% of reference) of HR measurements increases to 75%, compared with 35% when motion compensation is not used. The mean HR deviation from the reference also drops from 21 BPM to −6.25 BPM when we apply the depth-based motion compensation. For the RR measurement, our system shows a mean deviation of 1.7 BPM from the reference measurement. The pilot study shows that system performance is independent of skin color but weakly dependent on gender and age.
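A minimal sketch of depth-based motion compensation for the heart-rate signal, as outlined above: the average depth of the skin region tracks subject motion, so its linear contribution is regressed out of the raw intensity trace before spectral HR estimation. The linear regression approach and the cardiac band limits are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def motion_compensate(intensity: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Remove the motion-correlated component from the intensity trace.

    intensity, depth: 1D time series sampled at the camera frame rate
    (mean intensity and mean depth of the tracked skin region).
    """
    d = depth - depth.mean()
    i = intensity - intensity.mean()
    beta = np.dot(d, i) / np.dot(d, d)   # least-squares fit of the motion term
    return i - beta * d                  # residual ~ cardiac signal

def estimate_hr_bpm(signal: np.ndarray, fps: float) -> float:
    """Pick the dominant frequency in a plausible cardiac band (0.7-3 Hz)."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return 60.0 * freqs[band][np.argmax(spec[band])]
```

The respiration rate follows the same spectral idea applied directly to the chest-wall depth trace, in a lower band (roughly 0.1 to 0.5 Hz), since breathing shows up as periodic depth change rather than intensity change.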

