Construction and calibration of a low-cost 3D laser scanner with 360° field of view for mobile robots

Author(s):  
Jorge L. Martinez ◽  
Jesus Morales ◽  
Antonio J. Reina ◽  
Anthony Mandow ◽  
Alejandro Pequeno-Boter ◽  
...  
2013 ◽  
Vol 572 ◽  
pp. 644-647


Author(s):
Gökhan Aslan ◽  
Erhan Ilhan Konukseven ◽  
Buğra Koku

For efficient autonomous navigation and exploration, robots should sense the environment as accurately as possible in real time and act correctly on the basis of the acquired 3D data. Laser scanners have been used for mobile robot navigation for the last 30 years; however, they have often lacked sufficient speed, accuracy, and field of view. In this paper we present the design and implementation of a scanning platform that can be used for both outdoor and indoor mobile robot navigation and mapping. A 3D scanning platform based on a 2D laser rangefinder was designed in a compact way for fast and accurate mapping with a maximum field of view. The rangefinder is rotated around the vertical axis to extract 3D indoor information; however, the scanner is designed so that it can be mounted in any orientation on a mobile robot. The designed mechanism provides a 360° horizontal by 240° vertical field of view. The maximum resolution is 0.36° in elevation and variable in azimuth (0.1° if the scanning platform is set to complete a 360° rotation in 3.6 seconds). The proposed low-cost compact design is tested by scanning a physical environment with known dimensions to show that it can be used as a precise and reliable high-quality 3D sensor for autonomous mobile robots.
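The scanning geometry described above (a 2D rangefinder spun about the vertical axis) reduces to a spherical-to-Cartesian conversion per reading. The following minimal Python sketch is not taken from the paper; it only illustrates how individual scan lines could be assembled into a 3D cloud, with elevation coverage, step sizes, and function names chosen as assumptions based on the figures quoted in the abstract.

```python
import math

def scan_to_points(azimuth_deg, ranges, elev_start_deg=-120.0, elev_step_deg=0.36):
    """Convert one 2D scan line into 3D points.

    azimuth_deg   : platform rotation angle about the vertical axis [deg]
    ranges        : range readings [m] for one 2D scan
    elev_start_deg, elev_step_deg : assumed elevation coverage of the 2D
        rangefinder (illustrative values derived from the 240 deg vertical
        field of view and 0.36 deg elevation resolution quoted above)
    """
    az = math.radians(azimuth_deg)
    points = []
    for i, r in enumerate(ranges):
        el = math.radians(elev_start_deg + i * elev_step_deg)
        # Spherical -> Cartesian, with the platform rotation axis as z.
        x = r * math.cos(el) * math.cos(az)
        y = r * math.cos(el) * math.sin(az)
        z = r * math.sin(el)
        points.append((x, y, z))
    return points

# Example: at 360 deg per 3.6 s, consecutive scan lines are ~0.1 deg apart
# in azimuth; here only the first ten lines of a sweep are assembled.
cloud = []
for step in range(10):
    ranges = [5.0] * 667            # placeholder readings from the 2D scanner
    cloud.extend(scan_to_points(step * 0.1, ranges))
```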


Author(s):  
C. Chen ◽  
B. S. Yang ◽  
S. Song

Driven by the miniaturization and light weight of positioning and remote sensing sensors, as well as the urgent need to fuse indoor and outdoor maps for next-generation navigation, 3D indoor mapping from mobile scanning is a hot research and application topic. The point clouds with auxiliary data such as colour and infrared images derived from a 3D indoor mobile mapping suite can be used in a variety of novel applications, including indoor scene visualization, automated floorplan generation, gaming, reverse engineering, navigation, and simulation. State-of-the-art 3D indoor mapping systems equipped with multiple laser scanners produce accurate point clouds of building interiors containing billions of points. However, these laser-scanner-based systems are mostly expensive and not portable. Low-cost consumer RGB-D cameras provide an alternative way to address the core challenge of indoor mapping, namely capturing the detailed underlying geometry of building interiors. Nevertheless, RGB-D cameras have a very limited field of view, resulting in low efficiency at the data collection stage and incomplete datasets that miss major building structures (e.g. ceilings, walls). Attempting to collect a complete scene without data gaps using a single RGB-D camera is not technically sound because of the large amount of human labour involved and the number of position parameters that need to be solved. To find an efficient and low-cost way to address 3D indoor mapping, in this paper we present an indoor mapping suite prototype built upon a novel calibration method that calibrates the internal and external parameters of multiple RGB-D cameras. Three Kinect sensors are mounted on a rig with different view directions to form a large field of view. The calibration procedure is threefold: (1) the internal parameters of the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; (2) the external parameters between the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; (3) the external parameters between the Kinects are first calculated using a pre-set calibration field and further refined by an iterative closest point algorithm. Experiments are carried out to validate the proposed method on RGB-D datasets collected by the indoor mapping suite prototype. The effectiveness and accuracy of the proposed method are evaluated by comparing the point clouds derived from the prototype with ground truth data collected by a commercial terrestrial laser scanner at ultra-high density. The overall analysis of the results shows that the proposed method achieves seamless integration of multiple point clouds from different RGB-D cameras collected at 30 frames per second.
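The three calibration steps map naturally onto widely used tools. The sketch below is not the authors' code; under assumed chessboard dimensions and assumed corresponding colour/infrared views, it illustrates how steps 1 and 2 could be carried out with OpenCV and how step 3 could be refined with Open3D's ICP.

```python
import cv2
import numpy as np
import open3d as o3d

PATTERN = (9, 6)       # inner chessboard corners (assumed)
SQUARE = 0.025         # chessboard square size in metres (assumed)

# One grid of 3D object points shared by every chessboard view.
OBJP = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
OBJP[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE


def calibrate_kinect(colour_images, ir_images, image_size):
    """Steps 1 and 2: per-camera intrinsics, then colour-to-IR extrinsics.

    colour_images / ir_images are assumed to be corresponding 8-bit views of
    the same chessboard poses, one pair per frame.
    """
    obj_pts, img_c, img_i = [], [], []
    for col, ir in zip(colour_images, ir_images):
        ok_c, corners_c = cv2.findChessboardCorners(
            cv2.cvtColor(col, cv2.COLOR_BGR2GRAY), PATTERN)
        ok_i, corners_i = cv2.findChessboardCorners(ir, PATTERN)
        if ok_c and ok_i:
            obj_pts.append(OBJP)
            img_c.append(corners_c)
            img_i.append(corners_i)

    # Step 1: intrinsics of the colour and infrared camera from the board views.
    _, K_c, d_c, _, _ = cv2.calibrateCamera(obj_pts, img_c, image_size, None, None)
    _, K_i, d_i, _, _ = cv2.calibrateCamera(obj_pts, img_i, image_size, None, None)

    # Step 2: rotation/translation between colour and IR, intrinsics held fixed.
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, img_c, img_i, K_c, d_c, K_i, d_i, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_c, d_c, K_i, d_i, R, T


def refine_kinect_pose(source_cloud, target_cloud, initial_pose):
    """Step 3: refine a rough Kinect-to-Kinect pose with point-to-point ICP."""
    result = o3d.pipelines.registration.registration_icp(
        source_cloud, target_cloud, 0.05, initial_pose,   # 5 cm correspondence limit
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```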


Author(s):  
Karthik C* Valliappan ◽  
Vikram R

An autonomous navigation system is key for a robot to be self-reliant in any given environment. Precise navigation and localization of robots will minimize the need for guided work areas specifically designed for the utilization of robots. Existing solutions for autonomous navigation are very expensive, restricting their implementation across a wide variety of robot applications. This project aims to develop a low-cost methodology for complete autonomous navigation and localization of the robot. For localization, the robot is equipped with an image sensor that captures reference points in its field of view. When the robot moves, the change in robot position is estimated by calculating the shift in the location of the initially captured reference point. Using the onboard proximity sensors, the robot generates a map of all the accessible areas in its domain, which is then used for generating a path to the desired location. The robot uses the generated path to navigate while simultaneously avoiding any obstacles in its path to arrive at the desired location.
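As a rough illustration of the map-then-plan step described above (not the authors' implementation), the sketch below runs a breadth-first search over a small occupancy grid of the kind that could be built from proximity readings; the grid layout and cell semantics are assumptions.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over an occupancy grid.

    grid  : 2D list, 0 = accessible cell, 1 = obstacle
    start : (row, col) of the robot
    goal  : (row, col) of the desired location
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:        # walk back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# Example: 4x4 map with one wall segment.
occupancy = [[0, 0, 0, 0],
             [0, 1, 1, 0],
             [0, 0, 1, 0],
             [0, 0, 0, 0]]
print(plan_path(occupancy, (0, 0), (3, 3)))
```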


Robotica ◽  
2021 ◽  
pp. 1-18
Author(s):  
Majid Yekkehfallah ◽  
Ming Yang ◽  
Zhiao Cai ◽  
Liang Li ◽  
Chuanxiang Wang

Localization based on visual natural landmarks is one of the state-of-the-art localization methods for automated vehicles; it is, however, limited in fast-motion and low-texture environments, which can lead to failure. This paper proposes an approach to overcome these limitations with an extended Kalman filter (EKF)-based state estimation algorithm that fuses information from a low-cost MEMS Inertial Measurement Unit and a Time-of-Flight camera. We demonstrate our results in an indoor environment. We show that the proposed approach does not require any global reflective landmark for localization and is fast, accurate, and easy to use with mobile robots.
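The fusion scheme can be pictured with a textbook filter loop. The sketch below is not the paper's filter; it is a minimal linear Kalman loop (the special case of an EKF with linear models) in which IMU accelerations drive the prediction and a position fix derived from the Time-of-Flight camera drives the correction. The planar state layout and noise values are assumptions.

```python
import numpy as np

class SimpleFusionFilter:
    """Planar state [x, y, vx, vy]; IMU acceleration drives prediction,
    a ToF-camera-derived position fix drives the correction."""

    def __init__(self):
        self.x = np.zeros(4)                        # state estimate
        self.P = np.eye(4)                          # state covariance
        self.Q = np.diag([0.01, 0.01, 0.1, 0.1])    # process noise (assumed)
        self.R = np.diag([0.05, 0.05])              # measurement noise (assumed)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)

    def predict(self, accel, dt):
        """Propagate the state with an IMU acceleration [ax, ay] over dt seconds."""
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        B = np.array([[0.5 * dt**2, 0],
                      [0, 0.5 * dt**2],
                      [dt, 0],
                      [0, dt]])
        self.x = F @ self.x + B @ np.asarray(accel, dtype=float)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, position):
        """Correct the state with an [x, y] position fix from the ToF camera."""
        z = np.asarray(position, dtype=float)
        y = z - self.H @ self.x                       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

f = SimpleFusionFilter()
f.predict(accel=(0.2, 0.0), dt=0.01)     # e.g. a 100 Hz IMU sample
f.update(position=(0.001, 0.0))          # e.g. a 30 Hz camera-derived fix
```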


2020 ◽  
Vol 6 (3) ◽  
pp. 522-525
Author(s):  
Dorina Hasselbeck ◽  
Max B. Schäfer ◽  
Kent W. Stewart ◽  
Peter P. Pott

Microscopy enables fast and effective diagnostics. However, in resource-limited regions microscopy is not accessible to everyone. Smartphone-based low-cost microscopes could be a powerful tool for diagnostic and educational purposes. In this paper, the imaging quality of a smartphone-based microscope with four different optical parameters is presented and a systematic overview of the resulting diagnostic applications is given. With the chosen configuration, aiming for a reasonable trade-off, an average resolution of 1.23 μm and a field of view of 1.12 mm² were achieved. This enables a wide range of diagnostic applications, such as the diagnosis of malaria and other parasitic diseases.


2019 ◽  
Vol 16 (4) ◽  
pp. 172988141986038
Author(s):  
Huang Yiqing ◽  
Wang Hui ◽  
Wei Lisheng ◽  
Gao Wengen ◽  
Ge Yuan

This article presents a cooperative mapping technique using a novel edge gradient algorithm for multiple mobile robots. The proposed edge gradient algorithm can be divided into four behaviors: adjusting the movement direction, evaluating the safety of motion behavior, following behavior, and obstacle information exchange, which together effectively prevent multiple mobile robots from becoming trapped in concave obstacle areas. Meanwhile, a visual field factor is constructed based on biological principles so that the mobile robots have a larger field of view when moving away from obstacles. When approaching an obstacle, the visual field factor is narrowed due to the obstruction caused by that obstacle, and the obtained map-building data are more accurate. Finally, three sets of simulation and experimental results demonstrate the performance superiority of the presented algorithm.
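The abstract does not give the exact form of the visual field factor, but its qualitative behaviour (a wide view far from obstacles, a narrowed view near them) can be pictured with a simple scaling rule. The function below is purely illustrative and not the authors' formulation; all parameter values are assumptions.

```python
def visual_field_angle(dist_to_obstacle, max_angle_deg=180.0,
                       min_angle_deg=30.0, influence_range=2.0):
    """Illustrative visual field factor: the sensing cone widens linearly
    with distance to the nearest obstacle and saturates at max_angle_deg.

    dist_to_obstacle : distance [m] to the closest detected obstacle
    influence_range  : distance [m] beyond which the field of view is maximal
    """
    ratio = min(max(dist_to_obstacle / influence_range, 0.0), 1.0)
    return min_angle_deg + ratio * (max_angle_deg - min_angle_deg)

# Far from obstacles the robot sweeps a wide cone; close to one it narrows.
print(visual_field_angle(3.0))   # 180.0
print(visual_field_angle(0.5))   # 67.5
```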


Sensors ◽  
2012 ◽  
Vol 12 (7) ◽  
pp. 9046-9054 ◽  
Author(s):  
María-Eugenia Polo ◽  
Ángel M. Felicísimo
Keyword(s):  
Low Cost ◽  
