The UMA-SAR Dataset: Multimodal data collection from a ground vehicle during outdoor disaster response training exercises

2021 ◽  
pp. 027836492110049
Author(s):  
Jesús Morales ◽  
Ricardo Vázquez-Martín ◽  
Anthony Mandow ◽  
David Morilla-Cabello ◽  
Alfonso García-Cerezo

This article presents a collection of multimodal raw data captured from a manned all-terrain vehicle during two realistic outdoor search and rescue (SAR) exercises for actual emergency responders, conducted in Málaga (Spain) in 2018 and 2019: the UMA-SAR dataset. The sensor suite, applicable to unmanned ground vehicles (UGVs), consisted of overlapping visible-light (RGB) and thermal infrared (TIR) forward-looking monocular cameras, a Velodyne HDL-32 three-dimensional (3D) lidar, an inertial measurement unit (IMU), and two global positioning system (GPS) receivers for ground truth. Our mission was to collect a wide range of data from the SAR domain, including persons, vehicles, debris, and SAR activity on unstructured terrain. In particular, four data sequences were collected following closed-loop routes during the exercises, with a total path length of 5.2 km and a total time of 77 min. In addition, we provide three more sequences of the empty site for comparison purposes (an extra 4.9 km and 46 min). The data are offered both in human-readable format and as rosbag files, and two software tools are provided for extracting and adapting the dataset to users’ preferences. A review of previously published disaster robotics repositories indicates that this dataset helps fill a gap regarding visual and thermal datasets, and that it can serve as a research tool for cross-cutting areas such as multispectral image fusion, machine learning for scene understanding, person and object detection, and localization and mapping in unstructured environments. The full dataset is publicly available at: www.uma.es/robotics-and-mechatronics/sar-datasets
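For readers who choose the rosbag distribution, the sketch below shows how one might iterate over a sequence with the standard ROS 1 Python API; the bag file name and topic names are illustrative assumptions rather than the dataset's actual identifiers.

```python
# Minimal sketch: iterate over a UMA-SAR rosbag with the ROS 1 Python API.
# File name and topic names are hypothetical placeholders; consult the
# dataset documentation for the real ones.
import rosbag

with rosbag.Bag('umasar_sequence1.bag') as bag:  # hypothetical file name
    # Messages are yielded in time order across the requested topics.
    for topic, msg, t in bag.read_messages(
            topics=['/camera/rgb/image_raw',   # assumed RGB camera topic
                    '/camera/tir/image_raw',   # assumed thermal camera topic
                    '/velodyne_points']):      # assumed lidar topic
        print(t.to_sec(), topic, type(msg).__name__)
```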

10.2196/21105 ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. e21105
Author(s):  
Arpita Mallikarjuna Kappattanavar ◽  
Nico Steckhan ◽  
Jan Philipp Sachs ◽  
Harry Freitas da Cruz ◽  
Erwin Böttinger ◽  
...  

Background: A majority of employees in the industrial world spend most of their working time in a seated position. Monitoring sitting postures can provide insights into the underlying causes of occupational discomforts such as low back pain. Objective: This study focuses on the technologies and algorithms used to classify sitting postures on a chair with respect to spine and limb movements. Methods: A total of three electronic literature databases were surveyed to identify studies classifying sitting postures in adults. Quality appraisal was performed to extract critical details and assess biases in the shortlisted papers. Results: A total of 14 papers were shortlisted from the 952 papers obtained after a systematic search. The majority of the studies used pressure sensors to measure sitting postures, whereas neural networks were the most frequently used approach for classification tasks in this context. Only 2 studies were performed in a free-living environment. Most studies presented ethical and methodological shortcomings. Moreover, the findings indicate that strategic placement of sensors can lead to better performance and lower costs. Conclusions: The included studies differed in various aspects of design and analysis. The majority were rated as medium quality according to our assessment. Our study suggests that future work on posture classification can benefit from using inertial measurement unit (IMU) sensors, which make it possible to differentiate between spine movements and similar postures; from considering transitional movements between postures; and from using three-dimensional cameras to annotate ground-truth data. Finally, comparing such studies is challenging, as there are no standard definitions of sitting postures that could be used for classification. In addition, this study identifies five basic sitting postures, along with different combinations of limb and spine movements, to help guide future research efforts.
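As an illustration of the direction the review recommends, the sketch below trains a simple classifier on hypothetical windowed IMU features (mean pitch, mean roll, pitch variance); a random forest stands in here for the neural networks the review found most common, and all features, labels, and data are synthetic placeholders.

```python
# Illustrative sketch (not from the review): classifying the five basic
# sitting postures from trunk-worn IMU features. Features and labels are
# synthetic; a random forest is used in place of a neural network.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-window features: mean pitch, mean roll, pitch variance.
X = rng.normal(size=(500, 3))
y = rng.integers(0, 5, size=500)  # five basic postures, as the paper identifies

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```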


2017 ◽  
Vol 36 (3) ◽  
pp. 269-273 ◽  
Author(s):  
András L Majdik ◽  
Charles Till ◽  
Davide Scaramuzza

This paper presents a dataset recorded on board a camera-equipped micro aerial vehicle flying within the urban streets of Zurich, Switzerland, at low altitudes (i.e. 5–15 m above the ground). The 2 km dataset consists of time-synchronized aerial high-resolution images, global positioning system (GPS) and inertial measurement unit (IMU) sensor data, ground-level street-view images, and ground truth data. The dataset is ideal for evaluating and benchmarking appearance-based localization, monocular visual odometry, simultaneous localization and mapping, and online three-dimensional reconstruction algorithms for micro aerial vehicles in urban environments.
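As a flavour of the monocular visual odometry this dataset is meant to benchmark, the sketch below estimates the relative pose between two consecutive frames with OpenCV; the frame file names and the camera intrinsics K are placeholder assumptions, and only the translation direction (not its scale) is recoverable from a monocular pair.

```python
# Minimal monocular VO step: relative pose from matched ORB features
# between two consecutive frames. Frame files and intrinsics are placeholders.
import cv2
import numpy as np

K = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1]])  # assumed intrinsics
img1 = cv2.imread('frame0.png', cv2.IMREAD_GRAYSCALE)  # hypothetical frames
img2 = cv2.imread('frame1.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
# R is the relative rotation; t is the translation direction (unit scale only).
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
```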


Author(s):  
J. Gailis ◽  
A. Nüchter

Scan-matching-based simultaneous localization and mapping (SLAM) with six-dimensional (6D) poses is capable of creating a three-dimensional point cloud map of the environment as well as estimating the 6D path that the vehicle has travelled. Its essence is the registration of sequentially acquired 3D laser scans in a common coordinate frame while moving along a path, which yields 6D pose estimates at the respective positions together with a three-dimensional map of the environment. An approach that could drastically improve the reliability of the acquired data is to integrate available ground-truth information. This paper describes the implementation of such functionality as a contribution to 6D SLAM (SLAM with six degrees of freedom) in 3DTK – The 3D Toolkit software (Nüchter and Lingemann, 2011), and tests the implementation on real-world datasets.
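The pairwise registration step underlying such scan-matching SLAM can be sketched as follows; Open3D's ICP stands in here for 3DTK's own registration routines, and the scan file names, correspondence distance, and identity initial guess are assumptions.

```python
# Sketch of the pairwise step of scan-matching SLAM: register two consecutive
# 3D scans with point-to-point ICP, then chain the resulting transform into
# the running 6D pose estimate. Open3D stands in for 3DTK here.
import numpy as np
import open3d as o3d

scan_prev = o3d.io.read_point_cloud('scan000.pcd')  # hypothetical scan files
scan_curr = o3d.io.read_point_cloud('scan001.pcd')

result = o3d.pipelines.registration.registration_icp(
    scan_curr, scan_prev,
    max_correspondence_distance=0.5,  # metres, assumed
    init=np.eye(4),                   # identity initial guess
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

pose_prev = np.eye(4)                        # previous 6D pose (placeholder)
pose_curr = pose_prev @ result.transformation  # chained pose at the new scan
```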


2020 ◽  
Vol 39 (9) ◽  
pp. 1052-1060
Author(s):  
David Zuñiga-Noël ◽  
Alberto Jaenal ◽  
Ruben Gomez-Ojeda ◽  
Javier Gonzalez-Jimenez

This article presents a visual–inertial dataset gathered in indoor and outdoor scenarios with a handheld custom sensor rig, for over 80 min in total. The dataset contains hardware-synchronized data from a commercial stereo camera (Bumblebee®2), a custom stereo rig, and an inertial measurement unit. The most distinctive feature of this dataset is the strong presence of low-textured environments and scenes with dynamic illumination, which are recurrent corner cases of visual odometry and simultaneous localization and mapping (SLAM) methods. The dataset comprises 32 sequences and is provided with ground-truth poses at the beginning and the end of each of the sequences, thus allowing the accumulated drift to be measured in each case. We provide a trial evaluation of five existing state-of-the-art visual and visual–inertial methods on a subset of the dataset. We also make available open-source tools for evaluation purposes, as well as the intrinsic and extrinsic calibration parameters of all sensors in the rig. The dataset is available for download at http://mapir.uma.es/work/uma-visual-inertial-dataset
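The drift measurement enabled by the start/end ground-truth poses reduces to a relative SE(3) error between the estimated and ground-truth final poses, as in this minimal sketch with placeholder matrices:

```python
# Accumulated-drift measurement from end-pose ground truth: compare the
# estimated final pose to the ground-truth final pose (both 4x4 homogeneous
# matrices in the same frame). Values below are placeholders.
import numpy as np

T_gt_end = np.eye(4)   # ground-truth end pose (placeholder)
T_est_end = np.eye(4)  # estimated end pose after running VO/SLAM (placeholder)
T_est_end[:3, 3] = [0.30, -0.10, 0.05]  # e.g. ~32 cm of accumulated error

T_err = np.linalg.inv(T_gt_end) @ T_est_end   # relative SE(3) error
trans_drift = np.linalg.norm(T_err[:3, 3])    # translational drift in metres
rot_drift = np.degrees(np.arccos(np.clip((np.trace(T_err[:3, :3]) - 1) / 2, -1, 1)))
print(f"drift: {trans_drift:.3f} m, {rot_drift:.2f} deg")
```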


2011 ◽  
Vol 30 (13) ◽  
pp. 1543-1552 ◽  
Author(s):  
Gaurav Pandey ◽  
James R McBride ◽  
Ryan M Eustice

In this paper we describe a data set collected by an autonomous ground vehicle testbed based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with professional-grade (Applanix POS-LV) and consumer-grade (Xsens MTi-G) inertial measurement units, a Velodyne three-dimensional lidar scanner, two push-broom forward-looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system. Here we present the time-registered data from these sensors, collected while driving the vehicle around the Ford Research Campus and downtown Dearborn, MI, during November–December 2009. The vehicle trajectory in these data sets contains several large- and small-scale loop closures, which should be useful for testing various state-of-the-art computer vision and simultaneous localization and mapping algorithms.


Author(s):  
S. Karam ◽  
V. Lehtola ◽  
G. Vosselman

Abstract. In recent years, the importance of indoor mapping has increased across a wide range of applications, such as facility management and the mapping of hazardous sites. The essential technique behind indoor mapping is simultaneous localization and mapping (SLAM), because SLAM offers suitable positioning estimates in environments where satellite positioning is not available. State-of-the-art indoor mobile mapping systems employ visual SLAM or LiDAR-based SLAM. However, visual SLAM is sensitive to textureless environments and, similarly, LiDAR-based SLAM is sensitive to pose configurations where the geometry of the laser observations is not strong enough to reliably estimate the six-degree-of-freedom (6DOF) pose of the system. In this paper, we present different strategies that exploit the benefits of the inertial measurement unit (IMU) in pose estimation to support LiDAR-based SLAM in overcoming these problems. The proposed strategies have been implemented and tested on different datasets, and our experimental results demonstrate that the proposed methods do indeed overcome these problems. We conclude that IMU observations increase the robustness of SLAM, as expected, but also that the best reconstruction accuracy is obtained not with blind use of all observations but by filtering the measurements with a proposed reliability measure. To this end, our results show promising improvements in reconstruction accuracy.
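A schematic of the idea, not the paper's actual algorithm: predict the pose change by IMU integration, and accept the scan-matching update only when a reliability measure passes a threshold. The reliability score and threshold below are simplified stand-ins for the paper's proposed measure.

```python
# Schematic of reliability-gated IMU support for LiDAR SLAM: use the IMU
# prediction whenever the scan-match geometry is judged unreliable.
# The reliability scalar and threshold are simplified stand-ins.
import numpy as np

def fuse_pose(T_prev, T_imu_delta, T_scan_delta, reliability, threshold=0.7):
    """Return the next 6DOF pose (4x4 homogeneous matrix).

    T_imu_delta:  relative motion predicted by IMU integration.
    T_scan_delta: relative motion estimated by LiDAR scan matching.
    reliability:  scalar in [0, 1] scoring the scan-match geometry
                  (a stand-in for the paper's reliability measure).
    """
    delta = T_scan_delta if reliability >= threshold else T_imu_delta
    return T_prev @ delta
```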


Robotica ◽  
2021 ◽  
pp. 1-17
Author(s):  
Guilherme M. Maciel ◽  
Milena F. Pinto ◽  
Ivo C. da S. Júnior ◽  
Fabricio O. Coelho ◽  
Andre L. M. Marcato ◽  
...  

Abstract Mobile robotic systems are used in a wide range of applications. In the assistive field especially, they can enhance the mobility of elderly and disabled people. Modern robotic technologies have been integrated into wheelchairs to give them intelligence: by equipping wheelchairs with intelligent algorithms, controllers, and sensors, it is possible to share control between the user and the autonomous system. The present research proposes a methodology for intelligent wheelchairs based on head movements and vector fields, in which the user indicates where to go and the system performs obstacle avoidance and planning. The focus is on developing an assistive technology for people with quadriplegia who retain partial movement, such as of the shoulder and neck musculature. The developed system uses shared velocity control. It employs a depth camera to recognize obstacles in the environment and an inertial measurement unit (IMU) sensor to recognize the desired movement pattern by measuring the user’s head inclination. The proposed methodology computes a repulsive vector field that increases maneuverability and safety, so global localization and mapping are unnecessary. The results were evaluated through simulated models and practical tests using a Pioneer-P3DX differential robot to show the system’s applicability.
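The shared-control rule can be sketched as a sum of the user's intended velocity and repulsive contributions from nearby depth-camera obstacle points; the gains, influence radius, and the simplified repulsion law below are illustrative assumptions rather than the authors' exact formulation.

```python
# Sketch of shared control with a repulsive vector field: the head-inclination
# command gives a desired velocity, obstacle points add repulsion.
# Gains and influence radius are illustrative assumptions.
import numpy as np

def shared_control(v_user, obstacles, k_rep=0.5, influence=1.5):
    """v_user: 2D velocity from head inclination; obstacles: Nx2 points (robot frame)."""
    v = np.asarray(v_user, dtype=float)
    for p in np.atleast_2d(obstacles):
        d = np.linalg.norm(p)
        if 0 < d < influence:
            # Repulsion grows as the obstacle gets closer, pointing away from it.
            v += k_rep * (1.0 / d - 1.0 / influence) * (-p / d)
    return v

cmd = shared_control([0.5, 0.0], [[1.0, 0.2]])  # forward intent, obstacle ahead-right
```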


2021 ◽  
Vol 925 (1) ◽  
pp. 012054
Author(s):  
F Muhammad ◽  
Poerbandono ◽  
H Sternberg

Abstract Underwater vision-based mapping (VbM) constructs a three-dimensional (3D) map and the robot position simultaneously from a quasi-continuous structure-from-motion (SfM) method. This is the so-called simultaneous localization and mapping (SLAM), which can be beneficial for mapping shallow seabed features because it is free from the parasitic returns found in sonar surveys. This paper presents a discussion of results from a small-scale test of a 3D underwater positioning task. We analyse the setup and performance of a standard web camera used for such a task while fully submerged underwater. SLAM estimates the robot (i.e. camera) position from the constructed 3D map by reprojecting the detected features (points) into the camera scene. A marker-based camera calibration is used to eliminate refraction effects due to light propagation in the water column. To analyse the positioning accuracy, a fiducial marker-based system (with millimetre-level reprojection error) is used as the trajectory’s true value (ground truth). A controlled experiment with a standard web camera running at 30 fps (frames per second) shows that such a system is capable of robustly performing underwater navigation tasks. Sub-metre accuracy is achieved using at least one pose per second (1 Hz).
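The reprojection step that the accuracy analysis relies on can be illustrated with OpenCV: project known marker points through the calibrated (in-water) camera model and measure the pixel residual against the detected features. All intrinsics, poses, and point values below are placeholders.

```python
# Reprojection-error sketch: project known 3D marker points through the
# calibrated camera and compare to detected pixel positions.
# All numeric values are placeholders.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1]])  # assumed intrinsics
dist = np.zeros(5)                  # distortion absorbed by in-water calibration
rvec = np.zeros(3)                  # assumed camera rotation (Rodrigues vector)
tvec = np.array([0.0, 0.0, 1.0])    # assumed camera translation (metres)

pts3d = np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])     # marker corners (metres)
detected = np.array([[400.5, 240.2], [320.1, 319.6]])    # detected pixel positions

projected, _ = cv2.projectPoints(pts3d, rvec, tvec, K, dist)
err = np.linalg.norm(projected.reshape(-1, 2) - detected, axis=1)
print("reprojection error (px):", err)
```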


2016 ◽  
Vol 36 (1) ◽  
pp. 3-15 ◽  
Author(s):  
Will Maddern ◽  
Geoffrey Pascoe ◽  
Chris Linegar ◽  
Paul Newman

We present a challenging new dataset for autonomous driving: the Oxford RobotCar Dataset. Over the period of May 2014 to December 2015 we traversed a route through central Oxford twice a week on average using the Oxford RobotCar platform, an autonomous Nissan LEAF. This resulted in over 1000 km of recorded driving with almost 20 million images collected from 6 cameras mounted to the vehicle, along with LIDAR, GPS and INS ground truth. Data was collected in all weather conditions, including heavy rain, night, direct sunlight and snow. Road and building works over the period of a year significantly changed sections of the route from the beginning to the end of data collection. By frequently traversing the same route over the period of a year we enable research investigating long-term localization and mapping for autonomous vehicles in real-world, dynamic urban environments. The full dataset is available for download at: http://robotcar-dataset.robots.ox.ac.uk


2021 ◽  
pp. 027836492110447
Author(s):  
Kristopher Krasnosky ◽  
Christopher Roman ◽  
David Casagrande

In recent years, sonar systems for surface and underwater vehicles have increased in resolution and become significantly less expensive. As such, these systems are viable at a wide range of price points and are appropriate for a broad set of applications on surface and underwater vehicles. However, to take full advantage of these high-resolution sensors for seafloor mapping tasks, an adequate navigation solution is also required. In GPS-denied environments this usually necessitates a simultaneous localization and mapping (SLAM) technique to maintain good accuracy with minimal error accumulation. Acoustic positioning systems such as ultra-short baseline (USBL) and long baseline (LBL) are sometimes deployed to provide additional bounds on the navigation solution, but the positional uncertainty of these systems is often much greater than the resolution of modern multibeam or interferometric side-scan sonars. Consequently, subsurface vehicles often lack the means to adequately ground-truth navigation solutions and the resulting bathymetric maps. In this article, we present a dataset with four separate surveys designed to test bathymetric SLAM algorithms using two modern sonars, typical underwater vehicle navigation sensors, and high-precision (2 cm horizontal, 10 cm vertical) real-time kinematic (RTK) GPS ground truth. In addition, these data can be used to refine and improve other aspects of multibeam sonar mapping, such as ray-tracing, gridding techniques, and time-varying attitude corrections.
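As a minimal illustration of the gridding step mentioned above, the sketch below bins scattered georeferenced soundings (x, y, depth) into a regular raster of cell-wise mean depths; coordinates, cell size, and the synthetic seabed are assumptions.

```python
# Minimal gridding sketch: bin georeferenced soundings onto a regular raster
# by cell-wise mean depth. All inputs are synthetic placeholders.
import numpy as np

def grid_soundings(x, y, z, cell=0.5):
    """Return a mean-depth grid from scattered soundings; NaN where empty."""
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    grid = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    count = np.zeros_like(grid)
    for i, j, depth in zip(iy, ix, z):
        grid[i, j] = depth if np.isnan(grid[i, j]) else grid[i, j] + depth
        count[i, j] += 1
    return grid / np.where(count == 0, np.nan, count)  # mean per occupied cell

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
z = 20 + 0.1 * x + rng.normal(0, 0.05, 200)   # gently sloping synthetic seabed
bathy = grid_soundings(x, y, z)
```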

