Measurement Noise Model for Depth Camera-Based People Tracking

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4488
Author(s):  
Otto Korkalo ◽  
Tapio Takala

Depth cameras are widely used in people tracking applications. They typically suffer from significant range measurement noise, which causes uncertainty in the detections of people. The data fusion, state estimation and data association tasks require that the measurement uncertainty is modelled, especially in multi-sensor systems. Measurement noise models for different kinds of depth sensors have been proposed; however, the existing approaches require manual calibration procedures that can be impractical to conduct in real-life scenarios. In this paper, we present a new measurement noise model for depth camera-based people tracking. In our tracking solution, we utilise the so-called plan-view approach, where the 3D measurements are transformed to the floor plane and the tracking problem is solved in 2D. We model the measurement noise directly in the plan-view domain, combining the errors that originate from the imaging process with those introduced by the geometric transformations of the 3D data. We also present a method for defining the noise models directly from observations. Together with our depth sensor network self-calibration routine, the approach allows fast and practical deployment of depth-based people tracking systems.
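The plan-view idea can be summarised in a few lines: back-project each depth pixel to 3D, transform it into the floor frame, keep only the ground-plane coordinates, and estimate the 2D noise statistics empirically from repeated detections. The sketch below is a minimal illustration of that pipeline, not the authors' implementation; the intrinsic matrix K and the transform T_cam_to_floor are placeholders.

```python
# Hypothetical sketch: back-project depth pixels to the floor plane ("plan view")
# and estimate a 2D measurement covariance directly from repeated observations.
import numpy as np

def to_plan_view(u, v, z, K, T_cam_to_floor):
    """Back-project a depth pixel (u, v, z) and return its floor-plane (x, y)."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    p_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
    p_floor = T_cam_to_floor @ p_cam          # 4x4 homogeneous transform
    return p_floor[:2]                        # drop height -> plan-view coordinates

def empirical_noise_model(samples):
    """Mean and covariance of repeated plan-view detections of a static target."""
    samples = np.asarray(samples)             # shape (N, 2)
    return samples.mean(axis=0), np.cov(samples.T)
```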

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2144
Author(s):  
Stefan Reitmann ◽  
Lorenzo Neumann ◽  
Bernhard Jung

Common Machine-Learning (ML) approaches for scene classification require a large amount of training data. However, for the classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and the manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
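As a rough illustration of the kind of data such a tool produces, the following Blender-independent sketch synthesises a labelled, noisy LiDAR-like scan of a ground plane; the ray pattern, noise level, and dropout model are assumptions for illustration, not BLAINDER's actual presets.

```python
# Minimal, Blender-independent sketch of how a semantically labelled LiDAR-like
# scan could be synthesised: cast rays, add range noise, and simulate dropouts
# (e.g., rain). All names and noise parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def scan_ground_plane(n_rays=360, sensor_height=1.5, sigma=0.02, drop_prob=0.05):
    angles = np.linspace(-np.pi / 4, np.pi / 4, n_rays)
    # Rays pointing slightly downwards hit the ground plane z = 0.
    directions = np.stack([np.cos(angles), np.sin(angles),
                           -0.2 * np.ones(n_rays)], axis=1)
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    t = sensor_height / -directions[:, 2]          # ray parameter at intersection
    ranges = t + rng.normal(0.0, sigma, n_rays)    # additive range noise
    keep = rng.random(n_rays) > drop_prob          # simulated rain dropouts
    points = directions[keep] * ranges[keep, None]
    labels = np.full(len(points), "ground")        # semantic annotation
    return points, labels
```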


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 464
Author(s):  
Filip B. Maciejewski ◽  
Flavio Baccari ◽  
Zoltán Zimborás ◽  
Michał Oszmaniec

Measurement noise is one of the main sources of errors in currently available quantum devices based on superconducting qubits. At the same time, the complexity of its characterization and mitigation often exhibits exponential scaling with the system size. In this work, we introduce a correlated measurement noise model that can be efficiently described and characterized, and which admits effective noise mitigation on the level of marginal probability distributions. Noise mitigation can be performed up to some error, for which we derive upper bounds. Characterization of the model is done efficiently using Diagonal Detector Overlapping Tomography, a generalization of the recently introduced Quantum Overlapping Tomography to the problem of reconstructing readout noise with restricted locality. The procedure allows one to characterize k-local measurement cross-talk on an N-qubit device using O(k 2^k log(N)) circuits containing random combinations of X and identity gates. We perform experiments on 15 (23) qubits using IBM's (Rigetti's) devices to test both the noise model and the error-mitigation scheme, and obtain an average reduction of errors by a factor >22 (>5.5) compared to no mitigation. Interestingly, we find that correlations in the measurement noise do not correspond to the physical layout of the device. Furthermore, we study numerically the effects of readout noise on the performance of the Quantum Approximate Optimization Algorithm (QAOA). We observe in simulations that for numerous objective Hamiltonians, including random MAX-2-SAT instances and the Sherrington-Kirkpatrick model, the noise mitigation improves the quality of the optimization. Finally, we provide arguments why, in the course of QAOA optimization, the estimates of the local energy (or cost) terms often behave like uncorrelated variables, which greatly reduces the sampling complexity of energy estimation compared to the pessimistic error analysis. We also show that similar effects are expected for Haar-random quantum states and states generated by shallow-depth random circuits.
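The basic mitigation step on a small marginal can be illustrated independently of the paper's DDOT characterisation: measure a confusion (stochastic) matrix for the relevant qubits, apply its inverse to the noisy marginal, and project the result back onto the probability simplex. The matrix and distribution below are made-up example values, not data from the cited devices.

```python
# Illustrative sketch (not the paper's DDOT procedure): mitigating readout noise
# on a two-qubit marginal by inverting a measured confusion matrix. The matrix
# below is a fabricated example of correlated two-qubit readout noise.
import numpy as np

Lambda = np.array([[0.94, 0.05, 0.04, 0.01],
                   [0.03, 0.90, 0.01, 0.05],
                   [0.02, 0.01, 0.92, 0.04],
                   [0.01, 0.04, 0.03, 0.90]])   # column: ideal outcome -> noisy outcome

p_noisy = np.array([0.50, 0.20, 0.20, 0.10])    # measured two-qubit marginal

p_mitigated = np.linalg.solve(Lambda, p_noisy)  # apply the inverse noise map
p_mitigated = np.clip(p_mitigated, 0.0, None)   # project back to the simplex
p_mitigated /= p_mitigated.sum()
print(p_mitigated)
```

The point of working on marginals, as the abstract describes, is that only small matrices such as this one ever need to be inverted, rather than the full 2^N-dimensional noise map.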


Author(s):  
Patricia Everaere ◽  
Sebastien Konieczny ◽  
Pierre Marquis

We study how belief merging operators can be considered as maximum likelihood estimators, i.e., we assume that there exists an (unknown) true state of the world and that each agent participating in the merging process receives a noisy signal of it, characterized by a noise model. The objective is then to aggregate the agents' belief bases to make the best possible guess about the true state of the world. In this paper, we exhibit some logical connections between the rationality postulates for belief merging (IC postulates) and simple conditions on the noise model under consideration. These results provide a new justification for the IC merging postulates. We also provide results for two specific natural noise models, the world swap noise and the atom swap noise, by identifying distance-based merging operators that are maximum likelihood estimators for these two noise models.
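As a toy illustration of the maximum-likelihood view (under the assumption of independent per-atom flips with probability below 1/2, one way to read the atom swap noise), the most likely world is the atom-wise majority vote over the agents' reported worlds, i.e., the world minimising the summed Hamming distance. The profile below is a made-up example, not taken from the paper.

```python
# Under independent per-atom flip noise with flip probability p < 1/2, the
# maximum-likelihood world is the atom-wise majority vote over reported worlds,
# which is exactly the minimiser of the summed Hamming distance.
import numpy as np

profile = np.array([[1, 0, 1, 1],    # each row: one agent's reported world
                    [1, 1, 1, 0],
                    [0, 0, 1, 1]])

mle_world = (profile.sum(axis=0) * 2 > len(profile)).astype(int)  # majority per atom
print(mle_world)   # -> [1 0 1 1]
```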


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 393 ◽  
Author(s):  
Jonha Lee ◽  
Dong-Wook Kim ◽  
Chee Won ◽  
Seung-Won Jung

Segmentation of human bodies in images is useful for a variety of applications, including background substitution, human activity recognition, security, and video surveillance. However, human body segmentation has been a challenging problem due to the complicated shape and motion of a non-rigid human body. Meanwhile, depth sensors with advanced pattern recognition algorithms provide human body skeletons in real time with reasonable accuracy. In this study, we propose an algorithm that projects the human body skeleton from a depth image to a color image, where the human body region is segmented in the color image by using the projected skeleton as a segmentation cue. Experimental results using the Kinect sensor demonstrate that the proposed method provides high-quality segmentation results and outperforms conventional methods.
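The geometric core of such a projection can be sketched with a standard pinhole model: back-project the joint from the depth image, apply the depth-to-colour extrinsics, and re-project with the colour intrinsics. This is a generic sketch rather than the authors' code; K_depth, K_color, and T_depth_to_color are placeholder calibration values.

```python
# Hedged sketch of the geometric step: back-project a skeleton joint from the
# depth image, transform it into the colour camera frame, and re-project it.
import numpy as np

def project_joint_to_color(u, v, z, K_depth, K_color, T_depth_to_color):
    fx, fy, cx, cy = K_depth[0, 0], K_depth[1, 1], K_depth[0, 2], K_depth[1, 2]
    p_depth = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
    p_color = T_depth_to_color @ p_depth                  # 4x4 rigid transform
    uvw = K_color @ p_color[:3]                           # pinhole projection
    return uvw[:2] / uvw[2]                               # pixel in the colour image
```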


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 3008 ◽  
Author(s):  
Zhe Liu ◽  
Zhaozong Meng ◽  
Nan Gao ◽  
Zonghua Zhang

Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality and other visual information-related fields. However, a single depth camera cannot obtain complete information about an object by itself due to the limitation of the camera’s field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints, but to do so they must be calibrated so that the complete 3D information can be obtained accurately. However, traditional chessboard-based planar targets are not well suited for calibrating the relative orientations between multiple depth cameras, because the coordinates of the different depth cameras need to be unified into a single coordinate system, and multi-camera systems arranged at specific angles have very small overlapping fields of view. In this paper, we propose a 3D target-based multiple depth camera calibration method. Each plane of the 3D target is used to calibrate an independent depth camera. All planes of the 3D target are unified into a single coordinate system, which means that the feature points on the calibration planes are also in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. We also propose a method of precise calibration using lidar. This method is not only applicable to the 3D target designed for the purposes of this paper, but can also be applied to all 3D calibration objects consisting of planar chessboards, and it can significantly reduce the calibration error compared with traditional camera calibration methods. In addition, in order to reduce the influence of the infrared transmitter of the depth camera and improve its calibration accuracy, the calibration process of the depth camera is optimized. A series of calibration experiments were carried out, and the experimental results demonstrated the reliability and effectiveness of the proposed method.
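One building block of unifying several cameras into a single coordinate system is a rigid 3D-3D alignment between points measured by a camera and the same points expressed in the common target frame. A standard Kabsch/SVD solution is sketched below; it is illustrative only and does not include the paper's lidar-based refinement or the infrared-related optimization.

```python
# Sketch: recover a camera's pose from 3D-3D correspondences between measured
# target points and the same points in the unified target coordinate system
# (standard Kabsch/SVD alignment).
import numpy as np

def rigid_transform(src, dst):
    """Return R, t such that dst ~= R @ src + t (both arrays are N x 3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```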


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4581 ◽  
Author(s):  
Komagata ◽  
Kakinuma ◽  
Ishikawa ◽  
Shinoda ◽  
Kobayashi

With the aging of society, the number of fall accidents has increased in hospitals and care facilities, and some of these accidents happen around beds. To help prevent accidents, mats and clip sensors have been used in these facilities, but they can be invasive, and their purpose may be misinterpreted. In recent years, research has been conducted using an infrared-image depth sensor as a bed-monitoring system for detecting a patient getting up, exiting the bed, and/or falling; however, some manual calibration was initially required to set up the sensor in each instance. We propose a bed-monitoring system that retains the infrared-image depth sensors but uses semi-automatic rather than manual calibration in each situation where it is applied. Our automated methods robustly calculate the bed region, the surrounding floor, and the sensor location and attitude, and can recognize the spatial position of the patient even when the sensor is attached but unconstrained. We also propose a means to reconfigure the spatial position, considering occlusion by parts of the bed and accounting for the center of gravity of the patient’s body. Experimental results of multi-view calibration and motion simulation showed that our methods are effective for recognizing the spatial position of the patient.
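A common way to recover the floor plane (and, from its normal, the sensor attitude) automatically is a RANSAC plane fit on the depth point cloud. The sketch below shows only that generic step; the thresholds and iteration count are assumptions, and the authors' calibration routine may differ.

```python
# Illustrative RANSAC plane fit: one way a floor plane (and hence the sensor
# attitude) could be recovered automatically from a depth point cloud.
import numpy as np

def ransac_floor_plane(points, n_iter=200, thresh=0.02, rng=np.random.default_rng(0)):
    best_inliers, best_plane = 0, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < thresh
        if inliers.sum() > best_inliers:
            best_inliers, best_plane = inliers.sum(), (normal, d)
    return best_plane                      # (unit normal, offset): n.x + d = 0
```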


Entropy ◽  
2020 ◽  
Vol 22 (6) ◽  
pp. 629 ◽  
Author(s):  
Shiguang Zhang ◽  
Ting Zhou ◽  
Lin Sun ◽  
Wei Wang ◽  
Baofang Chang

Due to the complexity of wind speed, it has been reported that mixed-noise models, constituted by multiple noise distributions, perform better than single-noise models. However, most existing regression models assume a single noise distribution. Therefore, we study the least squares SVR under Gaussian–Laplacian mixed homoscedastic noise (GLM-LSSVR) and heteroscedastic noise (GLMH-LSSVR) for complicated or unknown noise distributions. The ALM technique is used to solve the GLM-LSSVR model. GLM-LSSVR is then used to predict short-term wind speed from historical data. The prediction results indicate that the presented model is superior to the single-noise model and performs well.
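A generic form of such a mixed noise density, written as a convex combination of a Gaussian and a Laplacian component, is shown below; the mixing weight and scale parameters are illustrative, and the paper's exact parametrisation may differ.

```latex
% Generic Gaussian-Laplacian mixture noise density (parametrisation illustrative):
p(e) = \lambda\,\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{e^{2}}{2\sigma^{2}}\right)
     + (1-\lambda)\,\frac{1}{2b}\exp\!\left(-\frac{|e|}{b}\right),
\qquad 0 \le \lambda \le 1 .
```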


2020 ◽  
Vol 6 (3) ◽  
pp. 11
Author(s):  
Naoyuki Awano

Depth sensors are important in several fields for recognizing real space. However, there are cases where most depth values in a depth image captured by a sensor are constrained, because the depths of distal objects are not always captured. This often occurs when a low-cost depth sensor or structured-light depth sensor is used. It also occurs frequently in applications where depth sensors are used to replicate human vision, e.g., when the sensors are used in head-mounted displays (HMDs). One ideal inpainting (repair or restoration) approach for depth images with large missing areas, such as partial foreground depths, is to inpaint only the foreground; however, conventional inpainting studies have attempted to inpaint entire images. Thus, under the assumption of an HMD-mounted depth sensor, we propose a method to partially inpaint and reconstruct an RGB-D depth image so as to preserve foreground shapes. The proposed method comprises a smoothing process for noise reduction, filling of defects in the foreground area, and refinement of the filled depths. Experimental results demonstrate that the proposed method preserves object shapes in the foreground area and that the inpainted depths are accurate with respect to the real depth in terms of the peak signal-to-noise ratio metric.
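The overall flow (noise reduction, filling foreground defects, refinement) can be sketched generically as follows. This is not the authors' algorithm: the foreground mask is assumed to be available from a separate segmentation step, and nearest-neighbour filling stands in for their defect-filling stage.

```python
# Minimal sketch of foreground-only hole filling: fill missing foreground pixels
# from the nearest valid foreground depth, with a simple median-filter refinement.
import numpy as np
from scipy import ndimage

def inpaint_foreground(depth, fg_mask):
    valid = (depth > 0) & fg_mask
    missing = fg_mask & ~valid
    # Index map of the nearest valid foreground pixel for every location.
    _, idx = ndimage.distance_transform_edt(~valid, return_indices=True)
    filled = depth.astype(float).copy()
    filled[missing] = depth[idx[0][missing], idx[1][missing]]
    return ndimage.median_filter(filled, size=3)      # smoothing / refinement step
```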


2010 ◽  
Vol 2 (2) ◽  
pp. 21-33 ◽  
Author(s):  
Irene Amerini ◽  
Roberto Caldelli ◽  
Vito Cappellini ◽  
Francesco Picchioni ◽  
Alessandro Piva

Identification of the source that has generated a digital content is considered one of the main open issues in the multimedia forensics community. The extraction of photo-response non-uniformity (PRNU) noise has so far been indicated as a means to identify the sensor fingerprint. Such a fingerprint can be estimated from multiple images taken by the same camera by means of a de-noising filtering operation. In this paper, the authors propose a novel method for estimating the PRNU noise for source camera identification. In particular, an MMSE digital filter in the un-decimated wavelet domain, based on a signal-dependent noise model, is introduced and compared with others commonly adopted for this purpose. A theoretical framework and experimental results are provided and discussed.
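For context, a common baseline PRNU estimator combines the noise residuals of several images from the same camera with a maximum-likelihood weighting. The sketch below uses a Gaussian filter as a stand-in denoiser purely for illustration, whereas the paper studies an MMSE filter in the un-decimated wavelet domain.

```python
# Baseline PRNU fingerprint estimate (not the paper's MMSE wavelet filter):
# noise residuals W_i = I_i - denoise(I_i) are combined with the standard
# weighting K ~ sum(W_i * I_i) / sum(I_i^2).
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_prnu(images):
    """images: iterable of same-sized grayscale arrays from one camera."""
    num, den = 0.0, 0.0
    for img in images:
        img = img.astype(float)
        residual = img - gaussian_filter(img, sigma=1.0)   # stand-in denoiser
        num += residual * img
        den += img ** 2
    return num / np.maximum(den, 1e-12)                    # estimated fingerprint
```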


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
An Wen ◽  
Jinhao Meng ◽  
Jichang Peng ◽  
Lei Cai ◽  
Qian Xiao

In this paper, Refined Instrumental Variable (RIV) estimation is applied to identify the parameters of the Equivalent Circuit Model (ECM) of a Lithium-ion (Li-ion) battery online, which enables accurate parameter estimation in the presence of measurement noise. Since traditional Recursive Least Squares (RLS) estimation is extremely sensitive to noise, the ECM parameters may fail to converge to their true values under measurement noise. The RIV estimation is implemented in a bootstrap form, which alternates between estimation of the system model and of the noise model. The Box-Jenkins model of the Li-ion battery, transformed from the two-RC ECM, is selected as the transfer function model for the RIV estimation. The errors of the two-RC ECM are generated independently by the residual of a high-order Auto Regressive (AR) model estimation. With the benefit of a series of auxiliary models, the data filtering technique can prefilter the measurements and increase the robustness of the parameters against noise. RIV can thus obtain reasonable parameters regardless of the noise in the measurements. Simulation and experimental tests on a LiFePO4 battery validate the efficiency of RIV for online parameter identification compared with traditional RLS.
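To make the contrast with RLS concrete, the following simplified recursive instrumental-variable update replaces the regressor with an instrument (e.g., an auxiliary-model output) in the gain computation, which is what removes the noise-induced bias. The full RIV scheme in the paper additionally estimates the AR noise model and prefilters the data, which this sketch omits; all names and initial values are assumptions.

```python
# Simplified recursive instrumental-variable (IV) update for an ARX-type model,
# shown only to contrast with plain RLS.
import numpy as np

class RecursiveIV:
    def __init__(self, n_params, forgetting=1.0):
        self.theta = np.zeros(n_params)          # parameter estimate
        self.P = np.eye(n_params) * 1e3          # covariance-like matrix
        self.lam = forgetting

    def update(self, phi, zeta, y):
        """phi: regressor, zeta: instrument (e.g. auxiliary-model output), y: output."""
        err = y - phi @ self.theta
        gain = self.P @ zeta / (self.lam + phi @ self.P @ zeta)
        self.theta = self.theta + gain * err
        self.P = (self.P - np.outer(gain, phi) @ self.P) / self.lam
        return self.theta
```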

