ElasticFusion: Real-time dense SLAM and light source estimation

2016 ◽  
Vol 35 (14) ◽  
pp. 1697-1716 ◽  
Author(s):  
Thomas Whelan ◽  
Renato F Salas-Moreno ◽  
Ben Glocker ◽  
Andrew J Davison ◽  
Stefan Leutenegger

We present a novel approach to real-time dense visual simultaneous localisation and mapping. Our system is capable of capturing comprehensive, dense, globally consistent surfel-based maps of room-scale environments and beyond, explored using an RGB-D camera in an incremental online fashion, without pose graph optimization or any post-processing steps. This is accomplished by using dense frame-to-model camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimizations as often as possible to stay close to the mode of the map distribution, while utilizing global loop closure to recover from arbitrary drift and maintain global consistency. In the spirit of improving map quality as well as tracking accuracy and robustness, we furthermore explore a novel approach to real-time discrete light source detection. This technique is capable of detecting numerous light sources in indoor environments in real-time as a handheld camera explores the scene. Absolutely no prior information about the scene or the number of light sources is required. By making a small set of simple assumptions about the appearance properties of the scene, our method can incrementally estimate both the quantity and location of multiple light sources in the environment in an online fashion. Our results demonstrate that our technique functions well in many different environments and lighting configurations. We show that this enables (a) more realistic augmented reality rendering; (b) a richer understanding of the scene beyond pure geometry; and (c) more accurate and robust photometric tracking.
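As an illustration of the windowed, confidence-weighted surfel fusion idea described above, the following minimal Python sketch shows one way a new depth measurement could be merged into an existing surfel within an active time window. The `Surfel` structure, field names and window parameter are assumptions for the example, not the authors' implementation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Surfel:
    position: np.ndarray   # 3D point in the global frame
    normal: np.ndarray     # unit normal
    color: np.ndarray      # RGB
    weight: float          # accumulated confidence
    last_update: int       # frame index of the last fusion

def fuse_measurement(surfel, p, n, c, conf, frame_idx, time_window=200):
    """Confidence-weighted running-average fusion of a new measurement into an
    existing surfel, applied only while the surfel is inside the active window."""
    if frame_idx - surfel.last_update > time_window:
        return  # inactive surfel: left untouched until a loop closure /
                # deformation brings it back into the active model
    w = surfel.weight
    surfel.position = (w * surfel.position + conf * p) / (w + conf)
    surfel.normal = w * surfel.normal + conf * n
    surfel.normal /= np.linalg.norm(surfel.normal)
    surfel.color = (w * surfel.color + conf * c) / (w + conf)
    surfel.weight = w + conf
    surfel.last_update = frame_idx
```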

2020 ◽  
Vol 2020 (11) ◽  
pp. 234-1-234-6
Author(s):  
Nicolai Behmann ◽  
Holger Blume

LED flicker artefacts, caused by unsynchronized irradiation from a pulse-width-modulated LED light source being captured by a digital camera sensor with discrete exposure times, place new requirements on both human viewing and machine vision systems. While the latter only need to capture relevant information from the light source in a limited number of frames (e.g. a flickering traffic light), human vision is sensitive to illumination modulation in viewing applications, e.g. digital mirror replacement systems. In order to quantify flicker in viewing applications with KPIs related to human vision, we present a novel approach and the results of a psychophysics study on the effect of LED flicker artefacts. Diverse real-world driving sequences were captured with both mirror replacement cameras and a front viewing camera, and potential flicker light sources were masked manually. Synthetic flicker with adjustable parameters is then overlaid on these areas, and the flickering sequences are presented to test persons in a driving environment. Feedback from the testers on flicker perception for different viewing areas, sizes and frequencies is collected and evaluated.
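The sketch below illustrates how a synthetic PWM flicker pattern of this kind might be overlaid on a manually masked light-source region, frame by frame. The PWM frequency, duty cycle and exposure values are illustrative assumptions, not the study's exact stimulus parameters.

```python
import numpy as np

def pwm_on_fraction(t_start, exposure, led_freq, duty, n_samples=1000):
    """Fraction of the exposure interval during which the PWM LED is on."""
    t = t_start + np.linspace(0.0, exposure, n_samples)
    phase = (t * led_freq) % 1.0
    return np.mean(phase < duty)

def overlay_flicker(frame, mask, frame_idx, fps=30.0, exposure=5e-3,
                    led_freq=90.0, duty=0.3):
    """Scale the masked light-source pixels by the on-fraction of an
    unsynchronized PWM LED during this frame's exposure, producing
    frame-to-frame brightness modulation (flicker)."""
    t_start = frame_idx / fps
    gain = pwm_on_fraction(t_start, exposure, led_freq, duty) / duty
    out = frame.astype(np.float32)
    out[mask] = np.clip(out[mask] * gain, 0, 255)
    return out.astype(frame.dtype)
```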


2018 ◽  
Vol 18 (3) ◽  
pp. 86-93 ◽  
Author(s):  
Morteza Daneshmand ◽  
Egils Avots ◽  
Gholamreza Anbarjafari

This paper introduces a robust, real-time loop closure correction technique for achieving global consistency in 3D reconstruction, whose underlying notion is to back-propagate the cumulative transformation error that appears while merging pairs of consecutive frames in a sequence of shots taken by an RGB-D or depth camera. The proposed algorithm assumes that the starting frame and the last frame of the sequence roughly overlap. In order to verify the robustness and reliability of the proposed method, namely Proportional Error Back-Propagation (PEB), it has been applied to numerous case studies encompassing a wide range of experimental conditions, including different scanning trajectories with reversed motion directions within them, and the results are presented. The main contribution of the proposed algorithm is its considerably low computational cost, which makes it suitable for real-time 3D reconstruction applications. Moreover, neither manual input nor intervention is required from the user, which renders the whole process automatic.
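A rough sketch of the proportional back-propagation idea: the residual transform between the last pose and the first pose (which should roughly coincide under the overlap assumption) is distributed over the pose chain in proportion to each frame's index. The simplified SE(3) interpolation via scipy below is an illustrative reading of the abstract, not the authors' exact formulation.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def back_propagate_error(poses):
    """poses: list of 4x4 camera-to-world matrices where poses[-1] should
    roughly coincide with poses[0]. Returns corrected poses with the loop
    residual spread proportionally along the sequence."""
    n = len(poses) - 1
    residual = poses[0] @ np.linalg.inv(poses[-1])   # full correction for the last pose
    rots = Rotation.from_matrix(np.stack([np.eye(3), residual[:3, :3]]))
    slerp = Slerp([0.0, 1.0], rots)                  # interpolate the rotational part
    t_corr = residual[:3, 3]
    corrected = []
    for i, T in enumerate(poses):
        alpha = i / n                                # proportional share of the error
        C = np.eye(4)
        C[:3, :3] = slerp(alpha).as_matrix()
        C[:3, 3] = alpha * t_corr
        corrected.append(C @ T)
    return corrected
```

With this scheme the first pose is left untouched, the last pose is snapped back onto the first, and every intermediate pose receives a fraction of the correction proportional to its position in the sequence.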


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 440 ◽  
Author(s):  
Jing Xin ◽  
Kaiyuan Gao ◽  
Mao Shan ◽  
Bo Yan ◽  
Ding Liu

Ultra-wideband (UWB) sensors have been widely used in multi-robot systems for cooperative tracking and positioning purposes due to their advantages such as high ranging accuracy and good real-time performance. In order to reduce the influence on ranging accuracy of non-line-of-sight (NLOS) UWB communication caused by the presence of obstacles in indoor environments, this paper proposes a novel Bayesian filtering approach for UWB ranging error mitigation. Nonparametric UWB sensor models, namely a received signal strength (RSS) model and a time of arrival (TOA) model, are constructed to capture the probabilistic noise characteristics under the influence of different obstruction conditions and materials within a typical indoor environment. The proposed Bayesian filtering approach can be used either as a standalone error mitigation approach for peer-to-peer (P2P) ranging, or as part of a higher-level Bayesian state estimation framework. Experiments were conducted to validate and evaluate the proposed approach in two configurations, i.e., inter-robot ranging and mobile robot tracking in a wireless sensor network. The experimental results show that the proposed method can accurately identify line-of-sight (LOS) and NLOS scenarios with wood and metal obstacles in a probabilistic representation and effectively improve the ranging/tracking accuracy. In addition, the low computational overhead of the approach makes it attractive for real-time systems.
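A toy sketch of the kind of discrete Bayesian update one might use to maintain a belief over LOS/NLOS (and obstacle material) hypotheses from nonparametric RSS and TOA likelihoods. The class set, histogram models and transition probability are placeholders for illustration, not the paper's calibrated sensor models.

```python
import numpy as np

CLASSES = ["LOS", "NLOS_wood", "NLOS_metal"]   # hypothetical obstruction classes

class LosNlosBayesFilter:
    def __init__(self, rss_hists, toa_hists, stay_prob=0.9):
        # rss_hists / toa_hists: {class: (bin_edges, probabilities)} built
        # offline from calibration data (nonparametric likelihood models).
        self.rss_hists = rss_hists
        self.toa_hists = toa_hists
        self.stay_prob = stay_prob
        self.belief = np.full(len(CLASSES), 1.0 / len(CLASSES))

    @staticmethod
    def _likelihood(value, hist):
        edges, probs = hist
        idx = np.clip(np.searchsorted(edges, value) - 1, 0, len(probs) - 1)
        return max(probs[idx], 1e-6)               # avoid zero likelihoods

    def update(self, rss, toa_error):
        n = len(CLASSES)
        # prediction: stay in the same class with probability stay_prob
        prior = self.stay_prob * self.belief + \
            (1 - self.stay_prob) * (1 - self.belief) / (n - 1)
        like = np.array([
            self._likelihood(rss, self.rss_hists[c]) *
            self._likelihood(toa_error, self.toa_hists[c]) for c in CLASSES])
        post = prior * like
        self.belief = post / post.sum()
        return CLASSES[int(np.argmax(self.belief))], self.belief
```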


2006 ◽  
Vol 5 (3) ◽  
pp. 1-7 ◽  
Author(s):  
Peter Supan ◽  
Ines Stuppacher ◽  
Michael Haller

This work presents an approach for rendering appropriate shadows with Image-Based Lighting in Augmented Reality applications. To approximate the result of environment lighting and shadowing, the system uses a dome of shadow-casting light sources. The color of each shadow is determined by the area of the environment behind the casting light source. As a result, changes in the lighting conditions immediately affect the shadows cast by virtual objects onto real objects.
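An illustrative sketch of approximating an environment map with a dome of shadow-casting lights, each colored by the environment region behind it. The light count, hemisphere layout and latitude-longitude sampling are arbitrary choices for the example, not the paper's exact setup.

```python
import numpy as np

def build_light_dome(env_map, n_azimuth=8, n_elevation=4):
    """env_map: HxWx3 latitude-longitude environment image (upper hemisphere in
    the top half). Returns a list of (direction, color) dome lights, each color
    being the mean of the environment patch behind that light."""
    h, w, _ = env_map.shape
    lights = []
    for j in range(n_elevation):
        for i in range(n_azimuth):
            # environment patch covered by this dome light
            patch = env_map[j * h // (2 * n_elevation):(j + 1) * h // (2 * n_elevation),
                            i * w // n_azimuth:(i + 1) * w // n_azimuth]
            color = patch.reshape(-1, 3).mean(axis=0)
            theta = np.pi / 2 * (j + 0.5) / n_elevation   # angle from zenith
            phi = 2 * np.pi * (i + 0.5) / n_azimuth       # azimuth
            direction = np.array([np.sin(theta) * np.cos(phi),
                                  np.cos(theta),
                                  np.sin(theta) * np.sin(phi)])
            lights.append((direction, color))
    return lights
```

Each returned light would then cast its own shadow in the renderer, and the per-light colors determine the tint of the corresponding shadow region.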


Author(s):  
Chen Zhang ◽  
Yu Hu

Given a stream of depth images with a known cuboid reference object present in the scene, we propose a novel approach for accurate camera tracking and volumetric surface reconstruction in real-time. Our contribution in this paper is threefold: (a) utilizing a priori knowledge of the cuboid reference object, we maintain drift-free camera tracking without explicit global optimization; (b) we improve the fineness of the volumetric surface representation by proposing a prediction-corrected data fusion strategy rather than a simple moving average, which enables accurate reconstruction of high-frequency details such as sharp edges of objects and geometries of high curvature; (c) we introduce a benchmark dataset, CU3D, containing both synthetic and real-world scanning sequences with ground-truth camera trajectories and surface models for quantitative evaluation of 3D reconstruction algorithms. We test our algorithm on our dataset and demonstrate its accuracy compared with other state-of-the-art algorithms. We release both our dataset and code as open source for other researchers to reproduce and verify our results.
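To make the contrast in contribution (b) concrete, the schematic sketch below compares a simple weighted moving-average voxel update with a prediction-corrected (Kalman-style) update in which the stored value acts as the prediction. The variance handling is an illustrative assumption, not the paper's exact fusion strategy.

```python
def fuse_moving_average(tsdf, weight, measurement, w_meas=1.0):
    """Classic weighted running average used by many volumetric pipelines."""
    new_tsdf = (weight * tsdf + w_meas * measurement) / (weight + w_meas)
    return new_tsdf, weight + w_meas

def fuse_prediction_corrected(tsdf, variance, measurement, meas_variance=1e-2):
    """Kalman-style per-voxel update: the stored value is treated as the
    prediction and corrected by the measurement according to their relative
    uncertainties, which can preserve high-frequency detail (sharp edges,
    high-curvature geometry) better than uniform averaging."""
    gain = variance / (variance + meas_variance)
    new_tsdf = tsdf + gain * (measurement - tsdf)
    new_variance = (1.0 - gain) * variance
    return new_tsdf, new_variance
```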


2021 ◽  
Vol 6 (3) ◽  
pp. 6084-6091
Author(s):  
Michael Krawez ◽  
Tim Caselitz ◽  
Jugesh Sundram ◽  
Mark Van Loock ◽  
Wolfram Burgard

2019 ◽  
pp. 101-107
Author(s):  
Sergei A. Stakharny

This article reviews a new light source, organic LEDs, which have prospects of application in general and special lighting systems. The article describes the physical principles of operation of organic LEDs, their advantages, and their principal differences from conventional non-organic LEDs and other light sources. It is also devoted to contemporary achievements and development prospects in this field, in both general and museum lighting as well as in other areas where the properties of organic LEDs as high-quality light sources may be extremely useful.


1997 ◽  
Vol 36 (8-9) ◽  
pp. 19-24 ◽  
Author(s):  
Richard Norreys ◽  
Ian Cluckie

Conventional urban drainage system (UDS) models are mechanistic; though appropriate for design purposes, they are less well suited to real-time control because they are slow running, difficult to calibrate, difficult to re-calibrate in real time, and have trouble handling noisy data. At Salford University a novel hybrid of dynamic and empirical modelling has been developed to combine the speed of the empirical model with the ability of mechanistic/dynamic models to simulate complex and non-linear systems. This paper details the ‘knowledge acquisition module’ software and how it has been applied to construct a model of a large urban drainage system. The paper goes on to detail how the model has been linked with real-time radar data inputs from the MARS C-band radar.


Author(s):  
Abdallah Naser ◽  
Ahmad Lotfi ◽  
Joni Zhong

Human distance estimation is essential in many vital applications, specifically in human localisation-based systems, such as independent-living applications for older adults, and in making places safer by preventing the transmission of contagious diseases through social-distancing alert systems. Previous approaches to estimating the distance between a reference sensing device and a human subject relied on visual or high-resolution thermal cameras. However, regular visual cameras raise serious concerns about people’s privacy in indoor environments, and high-resolution thermal cameras are costly. This paper proposes a novel approach to estimating this distance for indoor human-centred applications using a low-resolution thermal sensor array. The proposed system provides a discrete distance estimator and a sensor-placement-adaptive continuous distance estimator, using classification techniques and an artificial neural network, respectively. It also proposes real-time distance-based field-of-view classification through a novel image-based feature. In addition, the paper transfers the proposed continuous distance estimator to the measurement of human height. The proposed approach is evaluated in different indoor environments and sensor placements with different participants. The results show a median overall error of ±0.2 m in continuous estimation and 96.8% accuracy in discrete distance estimation.
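As a rough illustration of the two estimator types mentioned above, the sketch below trains a classifier for discrete distance classes and a small neural network regressor for continuous distance from flattened low-resolution thermal frames. The 8x8 resolution, placeholder data and model choices are assumptions for the example, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPRegressor

# Hypothetical setup: 8x8 thermal frames flattened to 64-d feature vectors,
# with distances labelled during a calibration walk (random placeholders here).
frames = np.random.rand(500, 8 * 8)
distances = np.random.uniform(0.5, 4.0, 500)          # ground-truth distance (m)
classes = np.digitize(distances, [1.0, 2.0, 3.0])     # discrete distance bins

discrete_model = RandomForestClassifier(n_estimators=100)
discrete_model.fit(frames, classes)                   # classification-based estimator

continuous_model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000)
continuous_model.fit(frames, distances)               # ANN-based continuous estimator

print(discrete_model.predict(frames[:1]), continuous_model.predict(frames[:1]))
```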


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Svenja Ipsen ◽  
Sven Böttger ◽  
Holger Schwegmann ◽  
Floris Ernst

Ultrasound (US) imaging, in contrast to other image guidance techniques, offers the distinct advantage of providing volumetric image data in real-time (4D) without using ionizing radiation. The goal of this study was to perform the first quantitative comparison of three different 4D US systems with fast matrix array probes and real-time data streaming regarding their target tracking accuracy and system latency. Sinusoidal motion of varying amplitudes and frequencies was used to simulate breathing motion with a robotic arm and a static US phantom. US volumes and robot positions were acquired online and stored for retrospective analysis. A template matching approach was used for target localization in the US data. Target motion measured in US was compared to the reference trajectory performed by the robot to determine localization accuracy and system latency. Using the robotic setup, all investigated 4D US systems could detect a moving target with sub-millimeter accuracy. However, high system latency in particular increased tracking errors substantially and should be compensated with prediction algorithms for respiratory motion compensation.
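A condensed sketch of the two evaluation steps described above: normalised template matching to localise the target in each ultrasound volume, and latency estimation by cross-correlating the measured target trajectory against the robot reference. The exhaustive search and sampling-rate handling are simplifications for illustration, not the study's implementation.

```python
import numpy as np
from scipy.signal import correlate

def localise_target(volume, template):
    """Return the voxel offset of the best normalised template match.
    Exhaustive search for clarity; a real system would restrict the search."""
    best, best_pos = -np.inf, None
    tz, ty, tx = template.shape
    t_norm = (template - template.mean()) / (template.std() + 1e-9)
    for z in range(volume.shape[0] - tz + 1):
        for y in range(volume.shape[1] - ty + 1):
            for x in range(volume.shape[2] - tx + 1):
                patch = volume[z:z + tz, y:y + ty, x:x + tx]
                p = (patch - patch.mean()) / (patch.std() + 1e-9)
                score = np.mean(p * t_norm)
                if score > best:
                    best, best_pos = score, (z, y, x)
    return best_pos

def estimate_latency(us_traj, robot_traj, fs):
    """Latency (s) as the lag maximising the cross-correlation between the
    target motion measured in US and the robot reference trajectory."""
    a = us_traj - np.mean(us_traj)
    b = robot_traj - np.mean(robot_traj)
    xcorr = correlate(a, b, mode="full")
    lag = np.argmax(xcorr) - (len(b) - 1)   # positive lag: US measurement is delayed
    return lag / fs
```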

