2A2-S05 A Method of 2D Environmental Map Generation and Localization using Visual Odometry and 3D-LIDAR(Localization and Mapping)

2014 ◽  
Vol 2014 (0) ◽  
pp. _2A2-S05_1-_2A2-S05_3 ◽  
Author(s):  
Akihito OHSATO ◽  
Satoshi KAGAMI ◽  
Hiroshi MIZOGUCHI


10.29007/x149 ◽
2020 ◽  
Author(s):  
Noboru Takegami ◽  
Eiji Hayashi

We are developing an autonomous field robot to save labor in forestry operations. About half of Japan's planted forest area has already matured enough to be used as timber. However, the trees are not harvested and these forest resources are not used effectively, because labor and funding are insufficient. The employment rate of young people in forestry continues to decline, and the area of unmanaged forest is expected to grow. Therefore, our laboratory proposes an autonomous field robot based on an all-terrain vehicle (ATV) that focuses on automating this work. The robot we are developing automates weeding and the observation of all trees in a forest. In this research, we introduced the Robot Operating System (ROS), a framework developed in recent years, to this robot. In addition, we observed trees by generating an environmental map of the forest using Simultaneous Localization and Mapping (SLAM).
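As a rough illustration of the ROS integration described above, the following is a minimal sketch (not from the paper) of a ROS 1 node that monitors the ATV's LiDAR scans, which a SLAM package such as gmapping or hector_slam would consume to build the environmental map. The topic name /scan and the node name are assumptions for illustration only.

```python
# Minimal sketch, assuming a ROS 1 setup; topic and node names are placeholders.
import rospy
from sensor_msgs.msg import LaserScan

def scan_callback(msg):
    # A real system would let the SLAM node consume /scan directly;
    # here we only report basic scan statistics for monitoring.
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    rospy.loginfo("scan: %d valid returns, nearest %.2f m",
                  len(valid), min(valid) if valid else float('nan'))

if __name__ == "__main__":
    rospy.init_node("forest_scan_monitor")
    rospy.Subscriber("/scan", LaserScan, scan_callback)
    rospy.spin()
```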


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2084 ◽
Author(s):  
Junwon Lee ◽  
Kieun Lee ◽  
Aelee Yoo ◽  
Changjoo Moon

Self-driving cars, autonomous vehicles (AVs), and connected cars combine the Internet of Things (IoT) and automobile technologies, thus contributing to the development of society. However, processing the big data generated by AVs is challenging due to overloading issues. Additionally, near real-time and real-time IoT services play a significant role in vehicle safety. Therefore, the architecture of an IoT system that collects and processes data and provides services for vehicle driving is an important consideration. In this study, we propose a fog computing server model that generates a high-definition (HD) map using light detection and ranging (LiDAR) data generated from an AV. The driving vehicle's edge node transmits the LiDAR point cloud information to the fog server through a wireless network. The fog server generates an HD map by applying the Normal Distribution Transform-Simultaneous Localization and Mapping (NDT-SLAM) algorithm to the point clouds transmitted from the multiple edge nodes. Subsequently, the coordinate information of the HD map generated in the sensor frame is converted to coordinate information in the global frame and transmitted to the cloud server. Then, the cloud server creates an HD map by integrating the collected point clouds using this coordinate information.
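The sensor-frame to global-frame conversion mentioned above amounts to applying the pose estimated by NDT-SLAM to each point. The sketch below illustrates that step with a homogeneous transform; the pose matrix and point cloud are placeholders, not data or code from the paper.

```python
# Illustrative sketch of converting points from the sensor frame to the global frame.
import numpy as np

def sensor_to_global(points_sensor: np.ndarray, T_global_sensor: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud from the sensor frame to the global frame."""
    n = points_sensor.shape[0]
    homogeneous = np.hstack([points_sensor, np.ones((n, 1))])   # (N, 4)
    return (T_global_sensor @ homogeneous.T).T[:, :3]

# Example: an assumed pose (rotation about z, translation) applied to placeholder points.
theta = np.radians(30.0)
T = np.eye(4)
T[:2, :2] = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
T[:3, 3] = [10.0, 5.0, 0.0]
cloud = np.random.rand(100, 3)            # placeholder LiDAR points in the sensor frame
cloud_global = sensor_to_global(cloud, T)
```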


Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1742 ◽  
Author(s):  
Chuang Qian ◽  
Hongjuan Zhang ◽  
Jian Tang ◽  
Bijun Li ◽  
Hui Liu

An indoor map is a key piece of infrastructure for location-based services. Simultaneous Localization and Mapping (SLAM)-based mobile mapping is an efficient method for constructing an indoor map. This paper proposes a SLAM algorithm based on a laser scanner and an Inertial Measurement Unit (IMU) for 2D indoor mapping. A grid-based occupancy likelihood map is chosen as the map representation and is built from all previous scans. Scan-to-map matching is used to find the optimal rigid-body transformation and avoid the accumulation of matching errors. Map generation and update are probabilistically motivated. Based on the assumption that orthogonality is the main feature of indoor environments, we propose a lightweight segment extraction method derived from the orthogonal blurred segments (OBS) method. Instead of calculating the parameters of the segments, we give the scan points contained in blurred segments a greater weight during construction of the grid-based occupancy likelihood map, which we call the orthogonal feature weighted occupancy likelihood map (OWOLM). The OWOLM enhances the occupancy likelihood map by fusing in the orthogonal features. It can filter out noisy scan points produced by objects such as glass cabinets and bookcases. Experiments were carried out in a library, a representative indoor environment consisting of orthogonal features. The experimental results show that, compared with a general occupancy likelihood map, the OWOLM effectively reduces accumulated errors and constructs a clearer indoor map.
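The weighting idea behind the OWOLM can be sketched as follows: scan points identified as lying on orthogonal blurred segments contribute more to the occupancy likelihood grid than other points. The grid size, resolution, and weight values below are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch of an orthogonal-feature-weighted occupancy likelihood update.
import numpy as np

RESOLUTION = 0.05      # meters per cell (assumed)
GRID_SIZE = 400        # 20 m x 20 m map (assumed)
W_ORTHOGONAL = 3.0     # weight for points on orthogonal segments (assumed)
W_DEFAULT = 1.0

likelihood_map = np.zeros((GRID_SIZE, GRID_SIZE))

def update_map(points_world, on_orthogonal_segment):
    """points_world: (N, 2) scan points in the map frame.
    on_orthogonal_segment: (N,) boolean mask from the OBS-based segment extraction."""
    for (x, y), is_orth in zip(points_world, on_orthogonal_segment):
        i = int(x / RESOLUTION) + GRID_SIZE // 2
        j = int(y / RESOLUTION) + GRID_SIZE // 2
        if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
            likelihood_map[i, j] += W_ORTHOGONAL if is_orth else W_DEFAULT
```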


2012 ◽  
Vol 2012 ◽  
pp. 1-26 ◽  
Author(s):  
Rodrigo Munguía ◽  
Antoni Grau

This paper describes in detail a method to implement a simultaneous localization and mapping (SLAM) system based on monocular vision for applications of visual odometry, appearance-based sensing, and emulation of range-bearing measurements. SLAM techniques are required to operate mobile robots in a priori unknown environments, using only on-board sensors to simultaneously build a map of the surroundings; this map is needed for the robot to track its own position. In this context, the 6-DOF (degrees of freedom) monocular camera case (monocular SLAM) is arguably the hardest variant of SLAM. In monocular SLAM, a single camera moving freely through its environment is the sole sensory input to the system. The method proposed in this paper is based on a technique called delayed inverse-depth feature initialization, which is intended to initialize new visual features in the system. In this work, a detailed formulation, extended discussion, and experiments with real data are presented to validate the proposal and demonstrate its performance.
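For context, the inverse-depth representation commonly used for such feature initialization encodes a landmark by the camera position at first observation, the azimuth and elevation of the observation ray, and the inverse depth along that ray. The sketch below shows only this parameterization, not the paper's full delayed-initialization procedure.

```python
# Sketch of the standard inverse-depth parameterization for monocular SLAM features.
import numpy as np

def inverse_depth_to_point(x0, y0, z0, theta, phi, rho):
    """Convert an inverse-depth feature to a Euclidean 3D point.
    (x0, y0, z0): camera position when the feature was first observed.
    theta, phi : azimuth and elevation of the observation ray.
    rho        : inverse depth along that ray (1/meters)."""
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])      # unit ray direction
    return np.array([x0, y0, z0]) + (1.0 / rho) * m

# Example: a feature seen from the origin, straight ahead, about 5 m away.
print(inverse_depth_to_point(0.0, 0.0, 0.0, 0.0, 0.0, 0.2))
```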


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Ruwan Egodagamage ◽  
Mihran Tuceryan

Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM, an agent generates a map of an unknown environment while estimating its own location within it. The ubiquity of cameras leads to monocular visual SLAM, where a camera is the only sensing device in the SLAM process. In modern applications, multiple mobile agents may be involved in generating such maps, requiring a distributed computational framework. Each agent can generate its own local map, which can then be combined with the others into a map covering a larger area. By doing so, the agents can cover a given environment faster than a single agent. Furthermore, they can interact with each other in the same environment, making this framework more practical, especially for collaborative applications such as augmented reality. One of the main challenges of distributed SLAM is identifying overlapping maps, especially when the relative starting positions of the agents are unknown. In this paper, we propose a system with multiple monocular agents, with unknown relative starting positions, that generates a semidense global map of the environment.
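Once overlap between two agents' local maps has been identified and a relative similarity transform estimated, merging reduces to expressing one agent's map points in the other's frame. The sketch below illustrates that merging step under assumed transform values; it is not the paper's overlap-detection method.

```python
# Minimal sketch of merging two monocular local maps with an estimated Sim(3) transform.
import numpy as np

def merge_maps(points_a, points_b, s, R, t):
    """Express agent B's (N, 3) map points in agent A's frame and concatenate.
    Monocular maps are defined only up to scale, hence the similarity (scale s) transform."""
    points_b_in_a = s * (points_b @ R.T) + t
    return np.vstack([points_a, points_b_in_a])

# Example with placeholder maps and an assumed relative transform.
map_a = np.random.rand(200, 3)
map_b = np.random.rand(150, 3)
R = np.eye(3)                      # assumed relative rotation
t = np.array([2.0, 0.0, 0.5])      # assumed relative translation
merged = merge_maps(map_a, map_b, s=1.3, R=R, t=t)
```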


2020 ◽  
Vol 12 (10) ◽  
pp. 1564 ◽  
Author(s):  
Kai-Wei Chiang ◽  
Guang-Je Tsai ◽  
Yu-Hua Li ◽  
You Li ◽  
Naser El-Sheimy

Automated driving has made considerable progress recently. Multisensor fusion is a game changer in making self-driving cars possible, and in the near future it will be necessary to meet the high-accuracy requirements of automated driving systems. This paper proposes a multisensor fusion design, including an inertial navigation system (INS), a global navigation satellite system (GNSS), and light detection and ranging (LiDAR), to implement 3D simultaneous localization and mapping (INS/GNSS/3D LiDAR-SLAM). The proposed fusion structure enhances the conventional INS/GNSS/odometer approach by compensating for individual drawbacks such as INS drift and error-contaminated GNSS. First, a highly integrated INS-aided LiDAR-SLAM is presented to improve performance and increase robustness in varied environments by using reliable initial values from the INS. Second, the proposed fault detection and exclusion (FDE) helps SLAM eliminate failed solutions such as local minima or algorithm divergence. Third, a SLAM position-velocity-acceleration (PVA) model is used to handle highly dynamic motion. Finally, an integrity assessment helps the central fusion filter keep faulty measurements out of the update process, based on information from the INS-aided SLAM, which increases reliability and accuracy. Consequently, the proposed multisensor design can deal with situations such as long-term GNSS outages, deep urban areas, and highways. The results show that the proposed method achieves an accuracy of under 1 m in challenging scenarios, demonstrating its potential to contribute to autonomous driving systems.
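A position-velocity-acceleration model of the kind mentioned above is typically a constant-acceleration Kalman prediction. The sketch below shows such a prediction step for one axis; the process-noise value is an assumption, not a tuned parameter from the paper.

```python
# Illustrative sketch of a PVA (constant-acceleration) Kalman prediction step.
import numpy as np

def pva_predict(x, P, dt, q=0.1):
    """x: state [position, velocity, acceleration]; P: 3x3 covariance; q: assumed noise."""
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    # Simple process noise driven by acceleration uncertainty (assumed).
    G = np.array([[0.5 * dt**2], [dt], [1.0]])
    Q = q * (G @ G.T)
    return F @ x, F @ P @ F.T + Q

x = np.array([0.0, 10.0, 0.5])     # 10 m/s with mild acceleration (placeholder values)
P = np.eye(3)
x_pred, P_pred = pva_predict(x, P, dt=0.1)
```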

