ROS Integrated Object Detection for SLAM in Unknown, Low-Visibility Environments

2021 ◽  
Author(s):  
Benjamin Christie ◽  
Osama Ennasr ◽  
Garry Glaspell

Integrating thermal (or infrared) imagery on a robotics platform allows Unmanned Ground Vehicles (UGVs) to function in low-visibility environments, such as pure darkness or low-density smoke. To maximize the effectiveness of this approach, we discuss the modifications required to integrate our low-visibility object detection model with the Robot Operating System (ROS). Furthermore, we introduce a method for reporting detected objects while performing Simultaneous Localization and Mapping (SLAM) by generating bounding boxes and their respective transforms in visually challenging environments.
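
The abstract does not include the transform-generation code; the geometric core, back-projecting a detected box center at a known depth into a 3-D point in the camera frame, can be sketched as follows (the pinhole intrinsics and box values below are illustrative, not from the paper):

```python
def box_center_to_camera_frame(box, depth_m, fx, fy, cx, cy):
    """Back-project the center of a 2-D detection into the camera frame.

    box     -- (u_min, v_min, u_max, v_max) in pixels
    depth_m -- depth of the detection in meters (e.g., from a depth image)
    fx, fy, cx, cy -- pinhole camera intrinsics
    Returns (x, y, z) in the optical frame (x right, y down, z forward).
    """
    u = (box[0] + box[2]) / 2.0
    v = (box[1] + box[3]) / 2.0
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A box centered on the principal point lands on the optical axis.
point = box_center_to_camera_frame((300, 220, 340, 260), 2.0,
                                   525.0, 525.0, 320.0, 240.0)
```

A point computed this way would then be stamped and broadcast as a transform relative to the map frame built by SLAM.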

2020 ◽  
Vol 34 (07) ◽  
pp. 12557-12564 ◽  
Author(s):  
Zhenbo Xu ◽  
Wei Zhang ◽  
Xiaoqing Ye ◽  
Xiao Tan ◽  
Wei Yang ◽  
...  

3D object detection is an essential task in autonomous driving and robotics. Though great progress has been made, challenges remain in estimating the 3D pose of distant and occluded objects. In this paper, we present a novel framework named ZoomNet for stereo imagery-based 3D detection. The ZoomNet pipeline begins with an ordinary 2D object detection model used to obtain pairs of left-right bounding boxes. To further exploit the abundant texture cues in RGB images for more accurate disparity estimation, we introduce a conceptually straightforward module, adaptive zooming, which simultaneously resizes 2D instance bounding boxes to a unified resolution and adjusts the camera intrinsic parameters accordingly. In this way, we are able to estimate higher-quality disparity maps from the resized box images and then construct dense point clouds for both nearby and distant objects. Moreover, we propose learning part locations as complementary features to improve robustness against occlusion, and put forward a 3D fitting score to better estimate 3D detection quality. Extensive experiments on the popular KITTI 3D detection dataset indicate that ZoomNet surpasses all previous state-of-the-art methods by large margins (improved by 9.4% on APbv (IoU=0.7) over pseudo-LiDAR). An ablation study also demonstrates that our adaptive zooming strategy brings an improvement of over 10% on AP3d (IoU=0.7). In addition, since the official KITTI benchmark lacks fine-grained annotations such as pixel-wise part locations, we also present our KFG dataset, which augments KITTI with detailed instance-wise annotations including pixel-wise part locations, pixel-wise disparity, etc. Both the KFG dataset and our code will be publicly available at https://github.com/detectRecog/ZoomNet.
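
The intrinsics bookkeeping behind adaptive zooming, cropping to an instance box and resizing it to a unified resolution while adjusting the camera parameters to match, can be sketched as follows (a simplified illustration with hypothetical numbers; the authors' actual implementation is in the linked repository):

```python
def adaptive_zoom_intrinsics(box, K, target_w, target_h):
    """Adjust pinhole intrinsics for a crop-and-resize of a 2-D instance box.

    box      -- (u_min, v_min, u_max, v_max) of the instance in pixels
    K        -- (fx, fy, cx, cy) intrinsics of the original image
    target_w, target_h -- unified resolution all boxes are resized to
    Returns the intrinsics valid for the resized box image.
    """
    fx, fy, cx, cy = K
    u0, v0, u1, v1 = box
    sx = target_w / (u1 - u0)   # horizontal resize factor
    sy = target_h / (v1 - v0)   # vertical resize factor
    # Cropping translates the principal point; resizing scales everything.
    return (fx * sx, fy * sy, (cx - u0) * sx, (cy - v0) * sy)

K_zoomed = adaptive_zoom_intrinsics((100, 100, 200, 200),
                                    (500.0, 500.0, 320.0, 240.0), 400, 400)
```

Keeping the intrinsics consistent with the resize is what lets disparity estimated on the zoomed crops be converted back into metric point clouds.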


SIMULATION ◽  
2017 ◽  
Vol 93 (9) ◽  
pp. 771-780 ◽  
Author(s):  
Erkan Uslu ◽  
Furkan Çakmak ◽  
Nihal Altuntaş ◽  
Salih Marangoz ◽  
Mehmet Fatih Amasyalı ◽  
...  

Robots are an important part of urban search and rescue tasks. Worldwide attention has been given to developing capable physical platforms that would benefit rescue teams, and it is evident that the use of multiple robots increases the effectiveness of these systems. The Robot Operating System (ROS) is becoming a standard platform in the robotics research community for both physical robots and simulation environments. Gazebo, with connectivity to ROS, is a three-dimensional simulation environment that is also becoming a standard. Several simultaneous localization and mapping (SLAM) algorithms are implemented in ROS; however, there is no multi-robot mapping implementation. In this work, two multi-robot mapping implementations are presented, namely multi-robot gMapping and multi-robot Hector Mapping. The multi-robot implementations are tested in the Gazebo simulation environment. In order to achieve a more realistic simulation, every incremental robot movement is modeled with rotational and translational noise.
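
The incremental noise model described in the last sentence can be sketched as a Gaussian perturbation applied to each odometry increment (the sigma values below are illustrative, not taken from the paper):

```python
import random

def noisy_increment(d_trans, d_rot, sigma_trans=0.01, sigma_rot=0.005):
    """Perturb one incremental robot motion with zero-mean Gaussian noise.

    d_trans -- commanded translation for this step (meters)
    d_rot   -- commanded rotation for this step (radians)
    Returns the (translation, rotation) the simulated robot actually executes.
    """
    return (d_trans + random.gauss(0.0, sigma_trans),
            d_rot + random.gauss(0.0, sigma_rot))

executed = noisy_increment(1.0, 0.1)
```

Feeding the mapper these perturbed increments instead of the ideal commands forces the SLAM algorithms to correct realistic odometry drift.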


Author(s):  
Addythia Saphala ◽  
Prianggada Indra Tanaya

The Robot Operating System (ROS) is an important platform for developing robot applications. One application area is the development of a Human Follower Transporter Robot (HFTR), which can be considered a custom mobile robot that uses differential-drive steering and is equipped with a Kinect sensor. This study discusses the development of the robot's navigation system by implementing Simultaneous Localization and Mapping (SLAM).
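
Differential-drive steering, as used by the HFTR, can be sketched with the standard unicycle-style kinematic update (a generic textbook model, not the authors' code):

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheel_base, dt):
    """One Euler-integration step of differential-drive kinematics.

    v_left, v_right -- wheel linear velocities (m/s)
    wheel_base      -- distance between the two wheels (m)
    Returns the updated pose (x, y, theta).
    """
    v = (v_left + v_right) / 2.0              # forward velocity
    omega = (v_right - v_left) / wheel_base   # angular velocity
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Equal wheel speeds drive straight; opposite speeds rotate in place.
pose = diff_drive_step(0.0, 0.0, 0.0, 1.0, 1.0, 0.5, 1.0)
```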


Author(s):  
Akash Kumar ◽  
Amita Goel ◽  
Vasudha Bahl ◽  
Nidhi Sengar

Object detection is a field of study in computer vision. An object detection model recognizes real-world objects present either in a captured image or in real-time video, where an object can belong to any of a set of classes, such as humans, animals, or vehicles. This project implements the object detection algorithm You Only Look Once (YOLOv3). The YOLO architecture is extremely fast compared to previous methods. The YOLOv3 model runs a single neural network over the given image and divides the image into predetermined bounding boxes, which are weighted by the predicted probabilities. After non-maximum suppression, it returns the recognized objects together with their bounding boxes. YOLO trains on and directly performs detection over full images.
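
The non-maximum suppression step mentioned above can be sketched as follows (a plain-Python illustration of the standard greedy algorithm, not the YOLOv3 implementation itself):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x0, y0, x1, y1)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Greedily keep the highest-scoring boxes, dropping heavy overlaps.

    Returns the indices of the kept boxes, highest score first.
    """
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

kept = non_max_suppression([(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)],
                           [0.9, 0.8, 0.7])
```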


Author(s):  
Bruno M. F. da Silva ◽  
Rodrigo S. Xavier ◽  
Luiz M. G. Gonçalves

Since it was proposed, the Robot Operating System (ROS) has fostered solutions to various problems in robotics in the form of ROS packages. One of these problems is Simultaneous Localization and Mapping (SLAM), solved by computing the robot pose and a map of its environment of operation at the same time. The increasing availability of robot kits ready to be programmed, as well as of RGB-D sensors, often poses the question of which SLAM package should be used given the application requirements. When the SLAM subsystem must deliver estimates for robot navigation, as in applications involving autonomous navigation, this question is even more relevant. This work introduces an experimental analysis of GMapping and RTAB-Map, two ROS-compatible SLAM packages, regarding their SLAM accuracy, the quality of the produced maps, and the use of those maps in navigation tasks. Our analysis targets ground robots equipped with RGB-D sensors in indoor environments and is supported by experiments conducted on datasets from simulation, benchmarks, and our own robot.
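
One common measure of SLAM accuracy in such comparisons is the absolute trajectory error (ATE). A minimal translation-only sketch over pre-aligned 2-D trajectories is shown below (real evaluations typically first align the trajectories with a rigid-body fit; this is not the authors' evaluation code):

```python
import math

def absolute_trajectory_error(estimated, ground_truth):
    """Root-mean-square positional error between paired 2-D poses.

    estimated, ground_truth -- equal-length lists of (x, y) positions,
    assumed already time-associated and expressed in the same frame.
    """
    assert len(estimated) == len(ground_truth) and estimated
    sq = [(ex - gx) ** 2 + (ey - gy) ** 2
          for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))

ate = absolute_trajectory_error([(0.0, 0.1), (1.0, 0.1)],
                                [(0.0, 0.0), (1.0, 0.0)])
```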


Agronomy ◽  
2019 ◽  
Vol 9 (7) ◽  
pp. 403 ◽  
Author(s):  
Naoum Tsolakis ◽  
Dimitrios Bechtsis ◽  
Dionysis Bochtis

This research aims to develop a farm management emulation tool that enables agrifood producers to effectively introduce advanced digital technologies, such as intelligent and autonomous unmanned ground vehicles (UGVs), into real-world field operations. To that end, we first provide a critical taxonomy of studies investigating agricultural robotic systems with regard to: (i) the analysis approach, i.e., simulation, emulation, or real-world implementation; (ii) the farming operations; and (iii) the farming type. Our analysis demonstrates that simulation and emulation modelling have been extensively applied to study advanced agricultural machinery, that the majority of extant research efforts focus on harvesting/picking/mowing and fertilizing/spraying activities, and that most studies consider a generic agricultural layout. Thereafter, we developed AgROS, an emulation tool based on the Robot Operating System that can be used to assess the efficiency of real-world robot systems in customized fields. AgROS allows farmers to select their actual field from a map layout, import the landscape of the field, add characteristics of the actual agricultural layout (e.g., trees, static objects), select an agricultural robot from a predefined list of commercial systems, import the selected UGV into the emulation environment, and test the robot's performance in a quasi-real-world environment. AgROS supports farmers in the ex-ante analysis and performance evaluation of robotized precision farming operations while laying the foundations for realizing "digital twins" in agriculture.


2021 ◽  
Author(s):  
Benjamin Christie ◽  
Osama Ennasr ◽  
Garry Glaspell

Unknown Environment Exploration (UEE) with an Unmanned Ground Vehicle (UGV) is extremely challenging. This report investigates a frontier exploration approach, in simulation, that leverages Simultaneous Localization and Mapping (SLAM) to efficiently explore unknown areas by finding navigable routes. The solution utilizes a diverse sensor payload that includes wheel encoders, three-dimensional (3-D) LIDAR, and red-green-blue-depth (RGB-D) cameras. The main goal of this effort is to leverage frontier-based exploration with a UGV to produce a 3-D map (up to 10 cm resolution). The solution leverages the Robot Operating System (ROS).
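
The core of frontier-based exploration, identifying free cells that border unexplored space on an occupancy grid, can be sketched as follows (a generic illustration using ROS-style cell values, not the report's implementation):

```python
# Occupancy-grid cell values following the ROS convention:
# 0 = free, 100 = occupied, -1 = unknown.
FREE, OCCUPIED, UNKNOWN = 0, 100, -1

def find_frontiers(grid):
    """Return (row, col) of every free cell adjacent to an unknown cell.

    Uses 4-connectivity; these cells are the candidate exploration goals.
    """
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

cells = find_frontiers([[0, 0, -1],
                        [0, 100, -1],
                        [0, 0, 0]])
```

An exploration planner would cluster these cells and send the robot toward the nearest or largest frontier until none remain.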


2018 ◽  
Vol 7 (3.33) ◽  
pp. 28 ◽  
Author(s):  
Asilbek Ganiev ◽  
Kang Hee Lee

In this paper, we use the Robot Operating System (ROS), which is designed to work with mobile robots. ROS provides simultaneous localization and mapping of the environment, and here it is used to autonomously navigate a mobile robot simulator between specified points. When the mobile robot navigates automatically between the starting point and the target point, it bypasses obstacles and, if necessary, plans a new route to reach the goal point.
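
Replanning a route around newly observed obstacles can be sketched with a shortest-path search over an occupancy grid, rerun whenever the map changes (a generic breadth-first-search illustration, not the paper's navigation stack):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path over a grid of 0 (free) / 1 (blocked) cells.

    Returns the path as a list of (row, col) cells, or None if the goal is
    unreachable. Replanning amounts to rerunning this after a map update.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None

# A wall in the middle column forces the path to detour through the bottom row.
route = bfs_path([[0, 1, 0],
                  [0, 1, 0],
                  [0, 0, 0]], (0, 0), (0, 2))
```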


2018 ◽  
Author(s):  
Yi Chen ◽  
Sagar Manglani ◽  
Roberto Merco ◽  
Drew Bolduc

In this paper, we discuss several major robot/vehicle platforms and demonstrate the implementation of autonomous techniques on one such platform, the F1/10. The Robot Operating System (ROS) was chosen for its existing collection of software tools, libraries, and simulation environments. We build on the available information for the F1/10 vehicle and illustrate key tools that help achieve properly functioning hardware. We provide methods for building algorithms and give examples of deploying these algorithms to complete autonomous driving tasks and build 2D maps using SLAM. Finally, we discuss our findings and how they can be improved.

