occupancy grid
Recently Published Documents

TOTAL DOCUMENTS: 295 (five years: 73)
H-INDEX: 23 (five years: 4)

2022, Vol 163, pp. 108151
Author(s): Morteza Tabatabaeipour, Oksana Trushkevych, Gordon Dobie, Rachel S. Edwards, Ross McMillan, ...

Sensors, 2021, Vol 22 (1), pp. 305
Author(s): Andres J. Barreto-Cubero, Alfonso Gómez-Espinosa, Jesús Arturo Escobedo Cabello, Enrique Cuan-Urquizo, Sergio R. Cruz-Ramírez

Mobile robots must be capable of obtaining an accurate map of their surroundings in order to move within them. To detect materials that are undetectable to one sensor but not to others, it is necessary to construct at least a two-sensor fusion scheme. With this, it is possible to generate a 2D occupancy map in which glass obstacles are identified. An artificial neural network is used to fuse data from a tri-sensor setup (RealSense stereo camera, 2D 360° LiDAR, and ultrasonic sensors) capable of detecting glass and other materials typically found in indoor environments that may or may not be visible to traditional 2D LiDAR sensors, hence the expression "improved LiDAR". A preprocessing scheme is implemented to filter outliers, project the 3D point cloud onto a 2D plane, and adjust the distance data. Using a neural network as the data-fusion algorithm, all the information is integrated into a single, more accurate distance-to-obstacle reading, which is then used to generate a 2D occupancy grid map (OGM) that incorporates all sensor information. The Robotis TurtleBot3 Waffle Pi robot is used as the experimental platform to compare the different fusion strategies. Test results show that with such a fusion algorithm it is possible to detect glass and other obstacles with an estimated root-mean-square error (RMSE) of 3 cm across multiple fusion strategies.
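The core idea of the abstract above, combining readings from sensors with complementary blind spots, can be sketched without the trained network. The paper's fusion step is a neural network; the weighted average below, and all names and weights in it, are stand-in assumptions for illustration only:

```python
import numpy as np

def fuse_ranges(lidar_d, stereo_d, ultra_d, weights=(0.4, 0.3, 0.3)):
    """Fuse three distance readings (metres) into one estimate.

    A glass pane may be invisible to the LiDAR (reading = inf); in that
    case the remaining sensors are re-weighted so the fused reading
    still reflects the obstacle the other sensors can see.
    """
    readings = np.array([lidar_d, stereo_d, ultra_d], dtype=float)
    w = np.array(weights, dtype=float)
    valid = np.isfinite(readings)          # mask out sensors that saw nothing
    if not valid.any():
        return np.inf                      # no sensor detected an obstacle
    w = w * valid                          # zero the weight of invalid readings
    return float(np.sum(w * np.nan_to_num(readings, posinf=0.0)) / np.sum(w))
```

A learned fusion replaces the fixed weights with a mapping trained on ground-truth distances, which is how the 3 cm RMSE figure becomes achievable across materials.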


ACTA IMEKO, 2021, Vol 10 (4), pp. 155
Author(s): Sebastiano Chiodini, Marco Pertile, Stefano Debei

Obstacle mapping is a fundamental building block of the autonomous navigation pipeline of many robotic platforms, such as planetary rovers. Occupancy grid mapping is a widely used tool for obstacle perception: the environment is represented as evenly spaced cells whose posterior probability of being occupied is updated from range-sensor measurements. In classic approaches, a cell is marked occupied at the point where the ray emitted by the range sensor encounters an obstacle, such as a wall. The main limitation of such methods is that they cannot identify planar obstacles, such as slippery, sandy, or rocky soils. In this work, we use the measurements of a stereo camera combined with a pixel-labeling technique based on convolutional neural networks to identify rocky obstacles in a planetary environment. Once identified, the obstacles are converted into a scan-like model. The relative pose between successive frames is estimated with the ORB-SLAM algorithm. The final step updates the occupancy grid map using Bayes' update rule. To evaluate the metrological performance of the proposed method, images from a Martian-analogue dataset, the ESA Katwijk Beach Planetary Rover Dataset, have been used. The evaluation was performed by comparing the generated occupancy map with a manually segmented orthomosaic map, obtained from a drone survey of the area, used as reference.
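The Bayes update mentioned above is usually implemented in log-odds form, where the multiplication of Bayes' rule becomes an addition per cell. A minimal sketch, with inverse-sensor-model constants chosen purely for illustration (real values are tuned per sensor):

```python
import numpy as np

# Log-odds inverse sensor model values (hypothetical, tuned per sensor).
L_OCC, L_FREE, L_PRIOR = 0.85, -0.4, 0.0

def update_cell(l_prev, hit):
    """One Bayes update in log-odds form: add the sensor-model term,
    subtract the prior (zero here, i.e. p = 0.5 for an unknown cell)."""
    return l_prev + (L_OCC if hit else L_FREE) - L_PRIOR

def probability(l):
    """Recover the occupancy probability from its log-odds value."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))

l = 0.0                      # unknown cell, p = 0.5
for _ in range(3):           # three successive scans report the cell occupied
    l = update_cell(l, hit=True)
```

Repeated consistent measurements drive the cell's probability toward 1 (occupied) or 0 (free), which is exactly the posterior-update behaviour the abstract describes.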


2021
Author(s): Zhe Wang, Jingwei Ge, Xin Pei, Yi Zhang

2021
Author(s): Alice Plebe, Julian F. P. Kooij, Gastone Pietro Rosati Papini, Mauro Da Lio

2021
Author(s): Johann Laconte, Elie Randriamiarintsoa, Abderrahim Kasmi, Francois Pomerleau, Roland Chapuis, ...

Robotics, 2021, Vol 10 (4), pp. 110
Author(s): Daniel Dworakowski, Christopher Thompson, Michael Pham-Hung, Goldie Nejat

Grocery shoppers must negotiate cluttered, crowded, and complex store layouts containing a vast variety of products to make their intended purchases. This complexity may prevent even experienced shoppers from finding their grocery items, costing them time and the store revenue. To address these issues, we present a generic grocery robot architecture for the autonomous search and localization of products in crowded, dynamic, unknown grocery store environments using a unique context Simultaneous Localization and Mapping (contextSLAM) method. The contextSLAM method creates contextually rich maps through the online fusion of optical character recognition and occupancy grid information to locate products and aid robot localization in an environment. The novelty of our robot architecture is its ability to intelligently use the geometric and contextual information within the context map to direct robot exploration in order to localize products in unknown environments in the presence of dynamic people. Extensive experiments were conducted with a mobile robot to validate the overall architecture and contextSLAM, including in a real grocery store. The results showed that our architecture was capable of searching for and localizing all products in various grocery lists in different unknown environments.
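The fusion of OCR text with an occupancy grid amounts to attaching recognized labels to the grid cells where they were observed, so product queries resolve to map locations. A minimal sketch of that data structure; the class, its methods, and the resolution value are all hypothetical, not the paper's actual contextSLAM API:

```python
from collections import defaultdict

class ContextMap:
    """Occupancy grid augmented with per-cell text labels, sketching the
    contextSLAM idea of a 'contextually rich' map."""

    def __init__(self, resolution=0.25):
        self.resolution = resolution          # metres per grid cell
        self.occupancy = {}                   # (i, j) -> occupancy probability
        self.labels = defaultdict(set)        # (i, j) -> OCR strings seen there

    def world_to_cell(self, x, y):
        """Map world coordinates (metres) to integer grid indices."""
        return (int(x / self.resolution), int(y / self.resolution))

    def add_ocr(self, x, y, text):
        """Record an OCR detection at a world position."""
        self.labels[self.world_to_cell(x, y)].add(text)

    def find_product(self, text):
        """Return every cell whose labels contain the queried product name."""
        return [cell for cell, words in self.labels.items() if text in words]
```

Exploration can then be directed toward cells whose labels match the shopping list, which is the "geometric and contextual" guidance the abstract refers to.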


Energies, 2021, Vol 14 (17), pp. 5232
Author(s): Olivér Rákos, Tamás Bécsi, Szilárd Aradi, Péter Gáspár

Several problems are encountered in the design of autonomous vehicles. Their software is organized into three main layers: perception, planning, and actuation. The planning layer deals with short- and long-term situation prediction, which is crucial for intelligent vehicles. Whatever method is used to make forecasts, the vehicle's dynamic environment must be processed for accurate long-term forecasting. In the present article, a method is proposed to preprocess the dynamic environment in a freeway traffic situation. The method takes the structured data of the surrounding vehicles and transforms it into an occupancy grid, which a Convolutional Variational Autoencoder (CVAE) processes. The grids (2048 pixels) are compressed to a 64-dimensional latent vector by the encoder and reconstructed by the decoder. The output pixel intensities are interpreted as the probabilities that the corresponding cells are occupied by a vehicle. The benefit of this method is that it preprocesses the structured data of the dynamic environment and represents it as a lower-dimensional vector that can be used in any further task built on it. This representation is not handmade or heuristic but extracted from the patterns in the database in an unsupervised way.
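The first step of the pipeline, turning structured vehicle states into a 2048-pixel grid the encoder can consume, can be sketched directly. The 32 × 64 layout, cell size, and function names below are assumptions for illustration; the abstract specifies only the 2048-pixel total and the 64-dimensional latent size:

```python
import numpy as np

GRID_H, GRID_W = 32, 64        # 2048 cells; one hypothetical layout
CELL = 2.0                     # metres per cell (assumed)

def states_to_grid(vehicles, ego_x=0.0, ego_y=0.0):
    """Rasterise surrounding-vehicle positions (x, y in metres) into a
    binary occupancy grid centred on the ego vehicle, then flatten it.
    The flattened vector is what an encoder such as the paper's CVAE
    would compress to a 64-dimensional latent representation."""
    grid = np.zeros((GRID_H, GRID_W), dtype=np.float32)
    for x, y in vehicles:
        col = int((x - ego_x) / CELL + GRID_W / 2)
        row = int((y - ego_y) / CELL + GRID_H / 2)
        if 0 <= row < GRID_H and 0 <= col < GRID_W:
            grid[row, col] = 1.0   # mark the cell under this vehicle
    return grid.reshape(-1)        # length-2048 input vector
```

Decoding reverses the process: the decoder emits 2048 intensities in [0, 1], read as per-cell occupancy probabilities as the abstract describes.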

