A Proposal of Emergency Rescue Location (ERL) using Optimization of Inertial Measurement Unit (IMU) based Pedestrian Simultaneously Localization and Mapping (SLAM)

2015 ◽  
Vol 9 (12) ◽  
pp. 9-22 ◽  
Author(s):  
Wan Mohd Yaakob Wan Bejuri ◽  
Mohd Murtadha Mohamad ◽  
Raja Zahilah Raja Mohd Radzi
2019 ◽  
Vol 38 (14) ◽  
pp. 1549-1559 ◽  
Author(s):  
Maxime Ferrera ◽  
Vincent Creuze ◽  
Julien Moras ◽  
Pauline Trouvé-Peloux

We present a new dataset dedicated to the development of simultaneous localization and mapping (SLAM) methods for underwater vehicles navigating close to the seabed. The data sequences composing this dataset were recorded in three different environments: a harbor at a depth of a few meters, a first archeological site at a depth of 270 meters, and a second site at a depth of 380 meters. Data acquisition was performed using remotely operated vehicles equipped with a monocular monochromatic camera, a low-cost inertial measurement unit, a pressure sensor, and a computing unit, all embedded in a single enclosure. The sensors' measurements were recorded synchronously on the computing unit, and 17 sequences have been created from the acquired data. These sequences are made available as ROS bags and as raw data. For each sequence, a trajectory has also been computed offline using a structure-from-motion library to allow comparison with real-time localization methods. With the release of this dataset, we aim to provide data that are difficult to acquire and to encourage the development of vision-based localization methods dedicated to the underwater environment. The dataset can be downloaded from: http://www.lirmm.fr/aqualoc/
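The offline reference trajectories make it possible to score a real-time method against the dataset. A minimal sketch of one common metric, the absolute trajectory error (ATE) as a position RMSE, assuming the estimated and reference trajectories have already been time-associated and aligned (the coordinates below are illustrative, not from the dataset):

```python
import math

def ate_rmse(est, ref):
    """Absolute trajectory error: RMSE over paired 3-D positions.
    Assumes est and ref are time-associated lists of (x, y, z) tuples."""
    assert len(est) == len(ref) and est, "trajectories must be paired"
    sq = [sum((e - r) ** 2 for e, r in zip(p, q))
          for p, q in zip(est, ref)]
    return math.sqrt(sum(sq) / len(sq))

# Toy example: an estimate with a constant 0.1 m offset along x
ref = [(t * 0.5, 0.0, -270.0) for t in range(5)]
est = [(x + 0.1, y, z) for (x, y, z) in ref]
print(round(ate_rmse(est, ref), 3))  # 0.1
```

A full evaluation would also estimate a rigid alignment (e.g., with Horn's method) before computing the error, since a monocular SLAM estimate lives in its own frame.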


2021 ◽  
Vol 18 (2) ◽  
pp. 172988142199992
Author(s):  
Ping Jiang ◽  
Liang Chen ◽  
Hang Guo ◽  
Min Yu ◽  
Jian Xiong

Simultaneous localization and mapping (SLAM) is a core technology for intelligent autonomous mobile robots. To address the low positioning accuracy of Lidar (light detection and ranging) SLAM under nonlinear, non-Gaussian noise, this article presents a mobile robot SLAM method that combines Lidar and an inertial measurement unit in a multi-sensor integrated system and uses rank Kalman filtering to estimate the robot's trajectory from inertial measurement unit and Lidar observations. Rank Kalman filtering is structurally similar to Gaussian deterministic point-sampling filters, but it does not require the Gaussian-distribution assumption: it computes the sampling points and their weights entirely from the correlation principle of rank statistics, which makes it suitable for nonlinear, non-Gaussian systems. In repeated tests on small-scale arc trajectories, compared with the Lidar-only SLAM algorithm, the new algorithm reduces the indoor mobile robot's mean error in the X direction from 0.0928 m to 0.0451 m (a 46.39% improvement) and in the Y direction from 0.0772 m to 0.0405 m (a 48.40% improvement). Compared with the extended Kalman filter fusion algorithm, the new algorithm reduces the mean error in the X direction from 0.0597 m to 0.0451 m (a 24.46% improvement) and in the Y direction from 0.0537 m to 0.0405 m (a 24.58% improvement).
Finally, on a large-scale rectangular trajectory, rank Kalman filtering improves accuracy over the extended Kalman filter by 23.84% and 25.26% in the X and Y directions, respectively, verifying the accuracy improvement of the proposed algorithm.
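For intuition about the Lidar/IMU fusion being improved on, here is a minimal sketch of the standard (Gaussian) Kalman filter baseline in one dimension: the IMU displacement drives the prediction and a Lidar position fix drives the update. This is the conventional filter the article contrasts against, not the rank Kalman filter itself, which replaces the Gaussian assumption with sampling points and weights derived from rank statistics; all numbers are illustrative.

```python
def kf_step(x, P, u, z, Q=0.01, R=0.04):
    """One predict/update cycle of a 1-D Kalman filter.
    x, P : prior position estimate and its variance
    u    : IMU-derived displacement since the last step
    z    : Lidar position observation
    Q, R : process and measurement noise variances (assumed values)
    """
    # Predict: dead-reckon with the IMU displacement
    x_pred, P_pred = x + u, P + Q
    # Update: blend in the Lidar observation via the Kalman gain
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Robot moving ~0.5 m per step; Lidar fixes are slightly noisy
x, P = 0.0, 1.0
for u, z in [(0.5, 0.52), (0.5, 1.01), (0.5, 1.55)]:
    x, P = kf_step(x, P, u, z)
print(round(x, 2), round(P, 4))
```

The fused estimate tracks the motion while its variance shrinks; the article's point is that when the noise is not Gaussian, this blend is suboptimal and the rank-statistics construction does better.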


2020 ◽  
Vol 39 (9) ◽  
pp. 1052-1060
Author(s):  
David Zuñiga-Noël ◽  
Alberto Jaenal ◽  
Ruben Gomez-Ojeda ◽  
Javier Gonzalez-Jimenez

This article presents a visual–inertial dataset gathered in indoor and outdoor scenarios with a handheld custom sensor rig, for over 80 min in total. The dataset contains hardware-synchronized data from a commercial stereo camera (Bumblebee®2), a custom stereo rig, and an inertial measurement unit. The most distinctive feature of this dataset is the strong presence of low-textured environments and scenes with dynamic illumination, which are recurrent corner cases of visual odometry and simultaneous localization and mapping (SLAM) methods. The dataset comprises 32 sequences and is provided with ground-truth poses at the beginning and the end of each of the sequences, thus allowing the accumulated drift to be measured in each case. We provide a trial evaluation of five existing state-of-the-art visual and visual–inertial methods on a subset of the dataset. We also make available open-source tools for evaluation purposes, as well as the intrinsic and extrinsic calibration parameters of all sensors in the rig. The dataset is available for download at http://mapir.uma.es/work/uma-visual-inertial-dataset
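The start/end ground-truth poses described above allow the accumulated drift of each sequence to be measured without full-trajectory ground truth. A minimal sketch of that computation, translation-only and with hypothetical coordinates (a full evaluation would also align orientation using the ground-truth rotations):

```python
import math

def accumulated_drift(est_start, est_end, gt_start, gt_end):
    """Final-position drift after anchoring the estimate at the
    ground-truth start pose (translation-only alignment for simplicity)."""
    rel_est = [e - s for e, s in zip(est_end, est_start)]
    rel_gt = [e - s for e, s in zip(gt_end, gt_start)]
    return math.dist(rel_est, rel_gt)

# Toy sequence: 10 m of travel, estimate ends 0.2 m long and 0.1 m to the side
drift = accumulated_drift((0, 0, 0), (10.2, 0.1, 0.0),
                          (0, 0, 0), (10.0, 0.0, 0.0))
print(round(drift, 3))  # 0.224
```

Dividing this drift by the trajectory length gives the relative drift (percent), which is the figure typically reported when comparing odometry methods.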


Author(s):  
Fahad Kamran ◽  
Kathryn Harrold ◽  
Jonathan Zwier ◽  
Wendy Carender ◽  
Tian Bao ◽  
...  

Abstract

Background: Recently, machine learning techniques have been applied to data collected from inertial measurement units to automatically assess balance, but rely on hand-engineered features. We explore the utility of machine learning to automatically extract important features from inertial measurement unit data for balance assessment.

Findings: Ten participants with balance concerns performed multiple balance exercises in a laboratory setting while wearing an inertial measurement unit on their lower back. Physical therapists watched video recordings of participants performing the exercises and rated balance on a 5-point scale. We trained machine learning models using different representations of the unprocessed inertial measurement unit data to estimate physical therapist ratings. On a held-out test set, we compared these learned models to one another, to participants' self-assessments of balance, and to models trained using hand-engineered features. Utilizing the unprocessed kinematic data from the inertial measurement unit provided significant improvements over both self-assessments and models using hand-engineered features (AUROC of 0.806 vs. 0.768, 0.665).

Conclusions: Unprocessed data from an inertial measurement unit used as input to a machine learning model produced accurate estimates of balance performance. The ability to learn from unprocessed data presents a potentially generalizable approach for assessing balance without the need for labor-intensive feature engineering, while maintaining comparable model performance.
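The AUROC comparison above has a simple rank-statistic interpretation: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A self-contained sketch of that computation on toy scores (the scores and labels below are illustrative, not the study's data):

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores from two hypothetical models on the same labels
labels = [1, 1, 1, 0, 0, 0]
a_learned = auroc([0.9, 0.8, 0.4, 0.7, 0.3, 0.2], labels)
a_handeng = auroc([0.6, 0.5, 0.3, 0.7, 0.4, 0.2], labels)
print(a_learned, a_handeng)
```

In practice one would use a library routine (e.g., scikit-learn's `roc_auc_score`), but the pairwise definition above makes the metric's meaning explicit.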


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4767
Author(s):  
Karla Miriam Reyes Leiva ◽  
Milagros Jaén-Vargas ◽  
Benito Codina ◽  
José Javier Serrano Olmedo

A diverse array of assistive technologies has been developed to help Visually Impaired People (VIP) face many basic daily autonomy challenges. Inertial measurement unit sensors, for their part, have been used for navigation, guidance, and localization, but especially for full-body motion tracking: their low cost and miniaturization have enabled the estimation of kinematic parameters and biomechanical analysis across different fields of application. The aim of this work was to present a comprehensive review of assistive technologies for VIP that include inertial sensors as input, with results covering the technical characteristics of the inertial sensors, the methodologies applied, and their specific role in each developed system. The results show that there are only a few inertial sensor-based systems; however, these sensors provide essential information when combined with optical sensors and radio signals for navigation and in specialized application fields. The discussion includes new avenues of research, missing elements, and usability analysis, since a limitation evidenced in the selected articles is the lack of user-centered designs. Finally, regarding application fields, a gap exists in the literature on aids for rehabilitation and biomechanical analysis of VIP: most findings focus on navigation and obstacle detection, and this should be considered in future applications.


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2246
Author(s):  
Scott Pardoel ◽  
Gaurav Shalin ◽  
Julie Nantel ◽  
Edward D. Lemaire ◽  
Jonathan Kofman

Freezing of gait (FOG) is a sudden and highly disruptive gait dysfunction that appears in mid to late-stage Parkinson’s disease (PD) and can lead to falling and injury. A system that predicts freezing before it occurs or detects freezing immediately after onset would generate an opportunity for FOG prevention or mitigation and thus enhance safe mobility and quality of life. This research used accelerometer, gyroscope, and plantar pressure sensors to extract 861 features from walking data collected from 11 people with FOG. Minimum-redundancy maximum-relevance and Relief-F feature selection were performed prior to training boosted ensembles of decision trees. The binary classification models identified Total-FOG or No FOG states, wherein the Total-FOG class included data windows from 2 s before the FOG onset until the end of the FOG episode. Three feature sets were compared: plantar pressure, inertial measurement unit (IMU), and both plantar pressure and IMU features. The plantar-pressure-only model had the greatest sensitivity and the IMU-only model had the greatest specificity. The best overall model used the combination of plantar pressure and IMU features, achieving 76.4% sensitivity and 86.2% specificity. Next, the Total-FOG class components were evaluated individually (i.e., Pre-FOG windows, Freeze windows, transition windows between Pre-FOG and Freeze). The best model detected windows that contained both Pre-FOG and FOG data with 85.2% sensitivity, which is equivalent to detecting FOG less than 1 s after the freeze began. Windows of FOG data were detected with 93.4% sensitivity. The IMU and plantar pressure feature-based model slightly outperformed models that used data from a single sensor type. The model achieved early detection by identifying the transition from Pre-FOG to FOG while maintaining excellent FOG detection performance (93.4% sensitivity). 
Therefore, if used as part of an intelligent, real-time FOG identification and cueing system, even if the Pre-FOG state were missed, the model would perform well as a freeze detection and cueing system that could improve the mobility and independence of people with PD during their daily activities.
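The Total-FOG labeling scheme described above (windows from 2 s before onset through the end of the episode) can be sketched as a simple overlap test. The window length, stride, and episode times below are illustrative assumptions, not the study's parameters:

```python
def label_window(win_start, win_end, fog_onset, fog_end, pre_s=2.0):
    """Label a data window 'Total-FOG' if it overlaps the interval
    [fog_onset - pre_s, fog_end], i.e., the pre-FOG lead-in, the
    transition, or the freeze itself; otherwise 'No-FOG'."""
    if win_end > fog_onset - pre_s and win_start < fog_end:
        return "Total-FOG"
    return "No-FOG"

# One FOG episode from t=10 s to t=14 s; 1 s windows sliding by 1 s
labels = [label_window(t, t + 1.0, 10.0, 14.0) for t in range(6, 16)]
print(labels)
```

Windows overlapping only the onset boundary are the "transition" windows evaluated separately in the article; detecting those is what yields sub-second detection latency after the freeze begins.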

