Real-Time Outdoor Illumination Estimation for Camera Tracking in Indoor Environments

2021 ◽  
Vol 6 (3) ◽  
pp. 6084-6091
Author(s):  
Michael Krawez ◽  
Tim Caselitz ◽  
Jugesh Sundram ◽  
Mark Van Loock ◽  
Wolfram Burgard
2016 ◽  
Vol 35 (14) ◽  
pp. 1697-1716 ◽  
Author(s):  
Thomas Whelan ◽  
Renato F Salas-Moreno ◽  
Ben Glocker ◽  
Andrew J Davison ◽  
Stefan Leutenegger

We present a novel approach to real-time dense visual simultaneous localisation and mapping. Our system is capable of capturing comprehensive dense globally consistent surfel-based maps of room-scale environments and beyond explored using an RGB-D camera in an incremental online fashion, without pose graph optimization or any post-processing steps. This is accomplished by using dense frame-to-model camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimizations as often as possible to stay close to the mode of the map distribution, while utilizing global loop closure to recover from arbitrary drift and maintain global consistency. In the spirit of improving map quality as well as tracking accuracy and robustness, we furthermore explore a novel approach to real-time discrete light source detection. This technique is capable of detecting numerous light sources in indoor environments in real time as a user with a handheld camera explores the scene. Absolutely no prior information about the scene or number of light sources is required. By making a small set of simple assumptions about the appearance properties of the scene, our method can incrementally estimate both the quantity and location of multiple light sources in the environment in an online fashion. Our results demonstrate that our technique functions well in many different environments and lighting configurations. We show that this enables (a) more realistic augmented reality rendering; (b) a richer understanding of the scene beyond pure geometry; and (c) more accurate and robust photometric tracking.
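The dense frame-to-model tracking step described above can be illustrated with a minimal point-to-plane ICP update. The sketch below is a simplification under stated assumptions, not the authors' implementation: it takes pre-matched frame/model correspondences with model normals and solves one Gauss-Newton step for a small pose increment.

```python
import numpy as np

def point_to_plane_icp_step(src_pts, dst_pts, dst_normals):
    """One Gauss-Newton step of point-to-plane ICP (illustrative sketch).

    src_pts:     (N, 3) points from the current RGB-D frame, already
                 transformed by the current pose estimate.
    dst_pts:     (N, 3) corresponding points on the surfel model.
    dst_normals: (N, 3) unit normals of the model points.
    Returns a 4x4 incremental transform to compose onto the pose.
    """
    # Residual: signed distance of each frame point to the model tangent plane.
    r = np.einsum('ij,ij->i', src_pts - dst_pts, dst_normals)
    # Jacobian rows: [p x n, n] for the 6-DoF twist (rotation, translation).
    J = np.hstack([np.cross(src_pts, dst_normals), dst_normals])
    # Normal equations J^T J xi = -J^T r (damped for numerical safety).
    xi = np.linalg.solve(J.T @ J + 1e-9 * np.eye(6), -J.T @ r)
    rx, ry, rz, tx, ty, tz = xi
    # Small-angle rotation I + [w]_x plus translation, as a 4x4 matrix.
    T = np.eye(4)
    T[:3, :3] = np.array([[1.0, -rz,  ry],
                          [ rz, 1.0, -rx],
                          [-ry,  rx, 1.0]])
    T[:3, 3] = [tx, ty, tz]
    return T
```

In the full system this step would run every frame inside a projective data-association loop; the windowed surfel fusion and deformation-based loop closures are separate stages not sketched here.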


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 642
Author(s):  
Luis Miguel González de Santos ◽  
Ernesto Frías Nores ◽  
Joaquín Martínez Sánchez ◽  
Higinio González Jorge

Nowadays, unmanned aerial vehicles (UAVs) are extensively used for multiple purposes, such as infrastructure inspection or surveillance. This paper presents a real-time path planning algorithm for indoor environments designed to perform contact inspection tasks using UAVs. The only input used by this algorithm is the point cloud of the building in which the UAV is going to navigate. The algorithm is divided into two main parts. The first is a pre-processing step that processes the point cloud, segmenting it into rooms and discretizing each room. The second is the path planning algorithm, which has to be executed in real time. In this way, all of the computational load is concentrated in the offline pre-processing step, making the online path calculation faster. The method has been tested in different buildings, measuring the execution time for different path calculations. As can be seen in the results section, the developed algorithm is able to calculate a new path in 8–9 milliseconds. The developed algorithm fulfils the execution time restrictions and has proven to be reliable for route calculation.
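The abstract does not specify which planner runs on the discretized rooms, so the sketch below only shows how a query over a pre-computed occupancy grid can stay in the millisecond range: a plain A* search over free cells. The grid representation, 4-connectivity, and start/goal cells are assumptions for illustration.

```python
import heapq

def astar(grid, start, goal):
    """A* over a pre-computed occupancy grid (True = free cell).

    grid:  2D array of booleans produced by the offline room-segmentation
           and discretization step.
    start, goal: (row, col) tuples.
    Returns the list of cells from start to goal, or None if unreachable.
    """
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan heuristic
    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start, goal), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell == goal:                       # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]:
                ng = g_cost[cell] + 1
                if ng < g_cost.get(nb, float('inf')):
                    g_cost[nb] = ng
                    came_from[nb] = cell
                    heapq.heappush(open_set, (ng + h(nb, goal), nb))
    return None

# Example query on a toy 3x4 grid of free cells.
free = [[True] * 4 for _ in range(3)]
print(astar(free, (0, 0), (2, 3)))
```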


2021 ◽  
Vol 3 (5) ◽  
Author(s):  
João Gaspar Ramôa ◽  
Vasco Lopes ◽  
Luís A. Alexandre ◽  
S. Mogo

In this paper, we propose three methods for door state classification with the goal of improving robot navigation in indoor spaces. These methods were also developed to be usable in other areas and applications, since they are not limited to door detection as other related works are. Our methods run offline, on low-powered computers such as the Jetson Nano, in real time, with the ability to differentiate between open, closed and semi-open doors. We use the 3D object classification network PointNet; real-time semantic segmentation algorithms such as FastFCN, FC-HarDNet, SegNet and BiSeNet; the object detection algorithm DetectNet; and the 2D object classification networks AlexNet and GoogleNet. We built a 3D and RGB door dataset with images from several indoor environments using a RealSense D435 3D camera. This dataset is freely available online. All methods are analysed in terms of their accuracy and their speed on a low-powered computer. We conclude that it is possible to have a door classification algorithm running in real time on a low-power device.
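As a rough illustration of the 2D classification route mentioned above (AlexNet with three door-state classes), the sketch below runs inference on a single RGB image. The checkpoint file, preprocessing, and class ordering are assumptions, not the authors' released pipeline.

```python
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["open", "semi-open", "closed"]   # assumed label order

# AlexNet with a 3-class head; "door_alexnet.pth" is a hypothetical
# checkpoint obtained by fine-tuning on a door dataset.
model = models.alexnet(num_classes=3)
model.load_state_dict(torch.load("door_alexnet.pth", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_door(image_path):
    """Return the predicted door state for one RGB image."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)          # shape (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(x)
    return CLASSES[int(logits.argmax(dim=1))]

print(classify_door("door_example.jpg"))      # e.g. "semi-open"
```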


2017 ◽  
Vol 56 (3) ◽  
pp. 033104 ◽  
Author(s):  
Xingyin Fu ◽  
Feng Zhu ◽  
Feng Qi ◽  
Mingming Wang

Author(s):  
Fredy Martinez ◽  
Edwar Jacinto ◽  
Fernando Martinez

This paper presents a low-cost strategy for real-time estimation of the position of obstacles in an unknown environment for autonomous robots. The strategy was intended for use in autonomous service robots, which navigate in unknown and dynamic indoor environments. In addition to human interaction, these environments are characterized by being designed for humans, which is why our development seeks morphological and functional similarity to the human model. We use a pair of cameras on our robot to achieve stereoscopic vision of the environment, and we analyze this information to determine the distance to obstacles using an algorithm that mimics bacterial behavior. The algorithm was evaluated on our robotic platform, demonstrating high performance in locating obstacles and real-time operation.
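The bacterial-behavior algorithm itself is not detailed in the abstract. As a baseline for the stereo part of the pipeline, the sketch below uses standard OpenCV block matching to turn a rectified camera pair into metric obstacle distances; the focal length and baseline values are placeholders from calibration, and this is a stand-in technique, not the authors' method.

```python
import cv2
import numpy as np

FOCAL_PX = 700.0     # focal length in pixels (placeholder from calibration)
BASELINE_M = 0.06    # distance between the two cameras in metres (placeholder)

def obstacle_distances(left_gray, right_gray):
    """Per-pixel distance (m) from a rectified grayscale stereo pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return depth

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
print("nearest obstacle: %.2f m" % obstacle_distances(left, right).min())
```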


Agriculture ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 954
Author(s):  
Abhijeet Ravankar ◽  
Ankit A. Ravankar ◽  
Arpit Rawankar ◽  
Yohei Hoshino

In recent years, autonomous robots have been used extensively to automate several vineyard tasks. Autonomous navigation is an indispensable component of such field robots. Autonomous and safe navigation has been well studied in indoor environments, and many algorithms have been proposed. However, unlike structured indoor environments, vineyards pose special challenges for robot navigation. In particular, safe robot navigation is crucial to avoid damaging the grapes. In this regard, we propose an algorithm that enables autonomous and safe robot navigation in vineyards. The proposed algorithm relies on data from a Lidar sensor and does not require GPS. In addition, the proposed algorithm can avoid dynamic obstacles in the vineyard while smoothing the robot’s trajectories. The curvature of the trajectories can be controlled, keeping a safe distance from both the crop and the dynamic obstacles. We have tested the algorithm both in simulation and with robots in an actual vineyard. The results show that the robot can safely navigate the lanes of the vineyard and smoothly avoid dynamic obstacles such as moving people without abruptly stopping or executing sharp turns. The algorithm performs in real time and can easily be integrated into robots deployed in vineyards.
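A common way to keep a robot centred between vine rows using a planar Lidar scan is to balance the lateral distances to the left and right rows. The sketch below shows only that heuristic, with gains and thresholds chosen arbitrarily; it is not the paper's algorithm, which additionally handles dynamic obstacles and trajectory curvature.

```python
import numpy as np

def lane_centering_command(ranges, angles, max_row_dist=3.0,
                           k_lateral=1.0, k_heading=0.5):
    """Angular-velocity command keeping the robot centred between vine rows.

    ranges, angles: 1D arrays from a planar Lidar scan (metres, radians;
                    angle 0 points straight ahead, positive angles are left).
    """
    x = ranges * np.cos(angles)          # forward coordinate
    y = ranges * np.sin(angles)          # lateral coordinate, left positive
    near = ranges < max_row_dist
    left = near & (y > 0.2)              # returns from the left row
    right = near & (y < -0.2)            # returns from the right row
    if not left.any() or not right.any():
        return 0.0                       # rows not visible on both sides
    # Positive when the robot drifts towards the right row.
    lateral_error = (np.mean(y[left]) - np.mean(-y[right])) / 2.0
    # Heading error from a straight-line fit to the left row.
    heading_error = np.arctan(np.polyfit(x[left], y[left], 1)[0])
    return k_lateral * lateral_error + k_heading * heading_error
```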


2015 ◽  
Vol 8 (4) ◽  
pp. 405-414 ◽  
Author(s):  
M. Amirul Islam Khan ◽  
Nicolas Delbosc ◽  
Catherine J. Noakes ◽  
Jonathan Summers
