An Adaptive Augmented Vision-Based Ellipsoidal SLAM for Indoor Environments

Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2795
Author(s):  
Lahemer ◽  
Rad

In this paper, the problem of Simultaneous Localization And Mapping (SLAM) is addressed via a novel augmented landmark vision-based ellipsoidal SLAM. The algorithm is implemented on a NAO humanoid robot and tested in an indoor environment. The main feature of the system is the implementation of SLAM with a monocular vision system. Distinguished landmarks, referred to as NAOmarks, are employed to localize the robot via its monocular vision system. We introduce the notion of robotic augmented reality (RAR) and present a monocular Extended Kalman Filter (EKF)/ellipsoidal SLAM in order to improve performance, alleviate the computational effort, provide landmark identification, and simplify the data association problem. The proposed SLAM algorithm is implemented in real time to further calibrate the ellipsoidal SLAM parameters and noise bounding and to improve its overall accuracy. The augmented EKF/ellipsoidal SLAM algorithms are compared with the regular EKF/ellipsoidal SLAM methods, and the merits of each algorithm are also discussed in the paper. The real-time experimental and simulation studies suggest that the adaptive augmented ellipsoidal SLAM is more accurate than the conventional EKF/ellipsoidal SLAMs.
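The EKF backbone the abstract refers to alternates odometry prediction with landmark-driven correction. A minimal one-dimensional sketch of that predict/update cycle is below; it is illustrative only (the paper's ellipsoidal/EKF formulation is far richer), and all numeric values are hypothetical:

```python
# Minimal 1-D EKF-style predict/update cycle (illustrative sketch, not the
# paper's algorithm). State x with variance P; all values are hypothetical.

def ekf_predict(x, P, u, Q):
    """Propagate the state by an odometry increment u with process noise Q."""
    return x + u, P + Q

def ekf_update(x, P, z, R):
    """Fuse a direct observation z (variance R) into state x (variance P)."""
    K = P / (P + R)          # Kalman gain
    x_new = x + K * (z - x)  # innovation-weighted correction
    P_new = (1 - K) * P      # uncertainty shrinks after the update
    return x_new, P_new

x, P = 0.0, 1.0
x, P = ekf_predict(x, P, u=1.0, Q=0.1)   # robot moves ~1 m
x, P = ekf_update(x, P, z=1.2, R=0.5)    # NAOmark-style range observation
```

Note how the update always reduces the variance P, which is what lets repeated landmark sightings bound the drift accumulated during prediction.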

Author(s):  
N. Botteghi ◽  
B. Sirmacek ◽  
R. Schulte ◽  
M. Poel ◽  
C. Brune

Abstract. In this research, we investigate the use of Reinforcement Learning (RL) as an effective and robust solution for exploring unknown indoor environments and reconstructing their maps. We rely on a Simultaneous Localization and Mapping (SLAM) algorithm for real-time robot localization and mapping. Three different reward functions are compared and tested in environments of growing complexity. The performance of the three RL-based path planners is assessed not only on the training environments but also on an a priori unseen environment, to test the generalization properties of the policies. The results indicate that RL-based planners trained to maximize the coverage of the map are able to consistently explore and construct the maps of different indoor environments.
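The abstract does not give the exact reward definitions, but a coverage-maximizing reward of the kind it describes is typically "newly mapped area minus a small per-step cost". A hedged sketch, with the function name, cell counts, and step cost all ours:

```python
# Sketch of a coverage-maximizing reward in the style the abstract compares.
# The exact reward definitions are not given in the source; this is ours.

def coverage_reward(known_before, known_after, step_cost=0.01):
    """Reward the number of newly mapped cells, minus a per-step penalty
    that discourages wandering without discovering anything new."""
    return (known_after - known_before) - step_cost
```

With this shape, an agent earns nothing for revisiting mapped space, so maximizing return drives it toward frontiers of the map.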


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Panlong Gu ◽  
Fengyu Zhou ◽  
Dianguo Yu ◽  
Fang Wan ◽  
Wei Wang ◽  
...  

RGBD camera-based VSLAM (Visual Simultaneous Localization and Mapping) algorithms are usually applied to assist robots with real-time mapping. However, due to the limited measuring principle, accuracy, and range of the equipped camera, such algorithms have typical disadvantages in large and dynamic scenes with complex lighting, such as poor mapping accuracy, easy loss of the robot's position, and high computational cost. To address these issues, this paper proposes a new method of 3D interior construction that combines laser radar and an RGBD camera and is developed on top of the Cartographer laser SLAM algorithm. The proposed method takes two steps. The first step is 3D reconstruction using the Cartographer algorithm and the RGBD camera: Cartographer is first applied to calculate the pose of the RGBD camera and to generate a submap; then a real-time 3D point cloud generated by the RGBD camera is inserted into the submap, completing the real-time interior construction. The second step improves the quality of Cartographer's loop closure with a visual loop closure, for the sake of correcting the generated map. Compared with traditional methods in large-scale indoor scenes, the proposed algorithm shows higher precision, faster speed, and stronger robustness, especially under complex lighting and in the presence of dynamic objects.


2015 ◽  
Vol 752-753 ◽  
pp. 1000-1005
Author(s):  
Li Kai Zhu ◽  
Dean Zhao ◽  
Wei Ji ◽  
Yu Chen

To address the slow extension of the electric push-rod joint used in previous harvesting robots, this paper adopts a pneumatic actuator in place of the original electric push rod to improve the responsiveness of the servo system. A VFW (Video for Windows) image acquisition system accesses the video buffer directly, without generating intermediate files, which ensures high real-time performance. A monocular vision system drives cylinder tracking-control experiments on the fruit center position. Experiments verified the speed and accuracy of the pneumatic servo positioning system.


Information ◽  
2020 ◽  
Vol 11 (10) ◽  
pp. 464
Author(s):  
Susanna Kaiser

In emergency scenarios, such as a terrorist attack or a building on fire, it is desirable to track first responders in order to coordinate the operation. Pedestrian tracking methods based solely on inertial measurement units are candidates for such operations in indoor environments, since they do not depend on pre-installed infrastructure. Collaborative simultaneous localization and mapping (collaborative SLAM), in which the learned maps of several users are combined to aid indoor positioning, is a very powerful indoor navigation method. In this paper, maps are estimated from several similar trajectories (multiple users) or from one user wearing multiple sensors. They are combined successively in order to obtain a precise map and positioning. To reduce complexity, the trajectories are divided into small portions (a sliding-window technique) that are partly successively applied to the collaborative SLAM algorithm. We investigate successive combinations of the map portions of several pedestrians and analyze the resulting position accuracy. The results depend on several parameters, e.g., the number of users or sensors, the sensor drifts, the amount of revisited area, the number of iterations, and the window size. We provide a discussion of the choice of these parameters. The results show that the mean position error can be reduced to ≈0.5 m when applying partly successive collaborative SLAM.
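The sliding-window step the abstract describes, dividing a trajectory into small overlapping portions, can be sketched as follows; the window and stride values are illustrative, not the paper's parameters:

```python
# Sketch of the sliding-window division of a trajectory into portions that
# are fed successively to collaborative SLAM. Window/stride values are ours.

def sliding_windows(trajectory, size, stride):
    """Return overlapping portions of `trajectory`, each `size` samples
    long, starting every `stride` samples."""
    last_start = max(len(trajectory) - size, 0)
    return [trajectory[i:i + size] for i in range(0, last_start + 1, stride)]
```

Overlap between consecutive windows (stride < size) is what lets successive portions be registered against each other when the maps are combined.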


2002 ◽  
Vol 21 (10-11) ◽  
pp. 829-848 ◽  
Author(s):  
Héctor H. González-Baños ◽  
Jean-Claude Latombe

In this paper, we investigate safe and efficient map-building strategies for a mobile robot with imperfect control and sensing. In the implementation, a robot equipped with a range sensor builds a polygonal map (layout) of a previously unknown indoor environment. The robot explores the environment and builds the map concurrently by patching together the local models acquired by the sensor into a global map. A well-studied and related problem is the simultaneous localization and mapping (SLAM) problem, where the goal is to integrate the information collected during navigation into the most accurate map possible. However, SLAM does not address the sensor-placement portion of the map-building task. That is, given the map built so far, where should the robot go next? This is the main question addressed in this paper. Concretely, an algorithm is proposed to guide the robot through a series of “good” positions, where “good” refers to the expected amount and quality of the information that will be revealed at each new location. This is similar to the next-best-view (NBV) problem studied in computer vision and graphics. However, in mobile robotics the problem is complicated by several issues, two of which are particularly crucial. One is to achieve safe navigation despite an incomplete knowledge of the environment and sensor limitations (e.g., in range and incidence). The other issue is the need to ensure sufficient overlap between each new local model and the current map, in order to allow registration of successive views under positioning uncertainties inherent to mobile robots. To address both issues in a coherent framework, in this paper we introduce the concept of a safe region, defined as the largest region that is guaranteed to be free of obstacles given the sensor readings made so far. The construction of a safe region takes sensor limitations into account.
In this paper we also describe an NBV algorithm that uses the safe-region concept to select the next robot position at each step. The new position is chosen within the safe region in order to maximize the expected gain of information under the constraint that the local model at this new position must have a minimal overlap with the current global map. In the future, NBV and SLAM algorithms should reinforce each other. While a SLAM algorithm builds a map by making the best use of the available sensory data, an NBV algorithm, such as that proposed here, guides the navigation of the robot through positions selected to provide the best sensory inputs.
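The selection rule described above, maximize expected information gain subject to a minimum overlap with the current map, can be sketched as a constrained argmax. The candidate representation and threshold below are our simplification, not the paper's algorithm:

```python
# Sketch of NBV selection: pick the candidate position with the highest
# expected information gain among those with sufficient map overlap.
# Candidate encoding and the overlap threshold are illustrative.

def next_best_view(candidates, min_overlap=0.3):
    """candidates: list of (position, expected_gain, overlap_fraction).
    Returns the position maximizing gain subject to the overlap constraint,
    or None if no candidate satisfies the constraint."""
    feasible = [c for c in candidates if c[2] >= min_overlap]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c[1])[0]
```

The overlap constraint acts as a hard filter before the gain is maximized, which is why a high-gain view can lose to a lower-gain one that registers reliably.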


Author(s):  
Hui Xiong ◽  
Youping Chen ◽  
Xiaoping Li ◽  
Bing Chen

Purpose — Because submaps, each containing a subset of the global map, carry more environmental information, submap-based graph simultaneous localization and mapping (SLAM) has been studied by many researchers. In most of those studies, helpful environmental information was not taken into consideration when designing the termination criterion of the submap construction process, and after optimizing the graph, the cumulative error within the submaps was ignored. To address those problems, this paper aims to propose a two-level optimized graph-based SLAM algorithm.
Design/methodology/approach — Submaps are updated by extended Kalman filter SLAM; no geometric-shaped landmark models are needed, as raw laser scans are treated as landmarks. A more reasonable criterion, called the uncertainty index, is combined with the size of the submap to terminate the submap construction process. After a submap is completed and a loop closure is found, a two-level optimization process is performed to minimize the loop-closure error and the accumulated error within the submaps.
Findings — Simulation and experimental results indicate that the estimation error of the proposed algorithm is small, and the generated maps are consistent both globally and locally.
Practical implications — The proposed method is robust to sparse pedestrians and can be adapted to most indoor environments.
Originality/value — This paper proposes a two-level optimized graph-based SLAM algorithm.
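The termination criterion combining the uncertainty index with submap size can be sketched as a simple two-bound test. The thresholds and the index itself are not defined in the abstract, so everything below is an illustrative assumption:

```python
# Sketch of a submap-termination test combining an uncertainty index with
# submap size. The index definition and both thresholds are assumptions;
# the abstract does not specify them.

def should_terminate(uncertainty_index, submap_size,
                     u_max=0.8, n_max=50):
    """Close the current submap once its accumulated uncertainty or its
    size exceeds a bound, whichever comes first."""
    return uncertainty_index > u_max or submap_size >= n_max
```

Bounding uncertainty as well as size keeps each submap locally consistent, which is what the later two-level optimization relies on.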


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3604 ◽  
Author(s):  
Peixin Liu ◽  
Xianfeng Yuan ◽  
Chengjin Zhang ◽  
Yong Song ◽  
Chuanzheng Liu ◽  
...  

To solve the illumination-sensitivity problems of mobile ground equipment, an enhanced visual SLAM algorithm based on the sparse direct method is proposed in this paper. First, the vignette and response functions of the input sequences were optimized based on the photometric formation model of the camera. Second, the Shi–Tomasi corners of the input sequence were tracked, and optimization equations were established using the pixel tracking of sparse direct visual odometry (VO). Third, the Levenberg–Marquardt (L–M) method was applied to solve the joint optimization equation, and the photometric calibration parameters in the VO were updated to realize real-time dynamic compensation of the exposure of the input sequences, which reduced the effect of light variations on the accuracy and robustness of SLAM (simultaneous localization and mapping). Finally, a Shi–Tomasi corner filtering strategy was designed to reduce the computational complexity of the proposed algorithm, and loop-closure detection was realized based on oriented FAST and rotated BRIEF (ORB) features. The proposed algorithm was tested on the TUM, KITTI, and EuRoC datasets and in a real environment, and the experimental results show that the positioning and mapping performance of the proposed algorithm is promising.
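The Shi–Tomasi response used for corner filtering is the smaller eigenvalue of the local 2x2 image structure tensor; weak corners are discarded by thresholding it. A sketch of that filtering idea (not the paper's exact pipeline; the corner encoding and threshold are ours):

```python
import math

# The Shi-Tomasi corner response is the smaller eigenvalue of the local
# 2x2 structure tensor [[Ixx, Ixy], [Ixy, Iyy]]. This sketches the
# threshold-based filtering idea only; data layout and threshold are ours.

def shi_tomasi_response(ixx, ixy, iyy):
    """Smaller eigenvalue of the symmetric 2x2 structure tensor."""
    tr = ixx + iyy
    det = ixx * iyy - ixy * ixy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 - disc

def filter_corners(corners, threshold):
    """corners: list of (position, (ixx, ixy, iyy)). Keep strong corners."""
    return [c for c in corners if shi_tomasi_response(*c[1]) > threshold]
```

Thresholding the minimum eigenvalue (rather than the Harris score) keeps only points that are well-constrained in both image directions, which is what makes them reliable to track.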


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Shuhuan Wen ◽  
Kamal Mohammed Othman ◽  
Ahmad B. Rad ◽  
Yixuan Zhang ◽  
Yongsheng Zhao

We present a SLAM method with a closed-loop controller for navigation of the NAO humanoid robot from Aldebaran. The method is based on the integration of a laser and a vision system. The camera is used to recognize the landmarks, whereas the laser provides the information for simultaneous localization and mapping (SLAM). A K-means clustering method is implemented to extract data from different objects. In addition, the robot avoids obstacles via an avoidance function. The closed-loop controller reduces the error between the real position and the estimated position. Finally, simulations and experiments show that the proposed method is efficient and reliable for navigation in indoor environments.
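The K-means step, separating sensor data into per-object clusters, alternates nearest-center assignment with center recomputation. A minimal 1-D sketch (the paper's feature space, k, and initialization are not specified, so the values here are illustrative):

```python
# Minimal 1-D k-means sketch of the object-data clustering step. Feature
# space, k, and the initial centers are assumptions, not the paper's setup.

def kmeans_1d(data, centers, iters=20):
    """Alternate assignment and center updates for `iters` rounds."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in data:
            # assign each sample to its nearest center
            i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            clusters[i].append(x)
        # recompute each center as the mean of its cluster (keep empty ones)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```

In practice the same loop runs on multi-dimensional feature vectors with Euclidean distance; the 1-D version keeps the structure visible.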


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Saúl Martínez-Díaz

Estimation of the distance to objects in real-world scenes is an important topic in several applications, such as navigation of autonomous robots, simultaneous localization and mapping (SLAM), and augmented reality (AR). Even though technology exists for this purpose, in some cases it has disadvantages. For example, GPS systems are susceptible to interference, especially in places surrounded by buildings, under bridges, or indoors; alternatively, RGBD sensors can be used, but they are expensive and their operational range is limited. Monocular vision is a low-cost, suitable alternative that can be used indoors and outdoors. However, monocular odometry is challenging because the object location can be known only up to a scale factor. Moreover, when objects are moving, the location must be estimated from consecutive images, which accumulates error. This paper introduces a new method to compute the distance from a single image of the desired object, with known dimensions, captured with a calibrated monocular vision system. This method is less restrictive than other proposals in the state-of-the-art literature. For the detection of interest points, a Region-based Convolutional Neural Network combined with a corner detector was used. The proposed method was tested on a standard dataset and on images acquired by a low-cost, low-resolution webcam under noncontrolled conditions. The system was tested and compared with a calibrated stereo vision system. Results showed similar performance of both systems, but the monocular system accomplished the task in less time.
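The geometric core of single-image ranging with known object dimensions is the pinhole relation Z = f·W / w: real width W, image width w in pixels, focal length f in pixels. A sketch of just that relation (the paper's full R-CNN + corner-detection pipeline is not reproduced here, and the numbers in the example are hypothetical):

```python
# Pinhole-model distance from a single calibrated image of an object with
# known real width. Sketch of the underlying geometry only; the numbers
# below are hypothetical, not results from the paper.

def distance_from_known_width(f_pixels, real_width_m, pixel_width):
    """Z = f * W / w: distance in meters given focal length f (pixels),
    real object width W (meters), and measured image width w (pixels)."""
    return f_pixels * real_width_m / pixel_width

# e.g. a 0.5 m wide object spanning 100 px with f = 800 px lies at 4.0 m
```

This is why the method needs a calibrated camera (to know f) and known object dimensions (to fix W): together they resolve the scale ambiguity inherent to monocular vision.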

