Mobile Robot Indoor Positioning Based on a Combination of Visual and Inertial Sensors

Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1773 ◽  
Author(s):  
Mingjing Gao ◽  
Min Yu ◽  
Hang Guo ◽  
Yuan Xu

Multi-sensor integrated navigation technology has been applied to the indoor navigation and positioning of robots. To address the low navigation accuracy and error accumulation of mobile robots that rely on a single sensor, this paper presents an indoor mobile robot positioning method based on a combination of visual and inertial sensors. First, the visual sensor (Kinect) is used to obtain color and depth images, and feature matching is performed with an improved scale-invariant feature transform (SIFT) algorithm. Then, the absolute orientation algorithm is used to calculate the rotation matrix and translation vector of the robot between two consecutive image frames. An inertial measurement unit (IMU) offers high-frequency updates and rapid, accurate positioning, and can compensate for the Kinect's low update rate and limited precision. Three-dimensional data such as acceleration, angular velocity, magnetic field strength, and temperature can be obtained in real time with an IMU. The data from the visual sensor are loosely coupled with those from the IMU: the differences between the positions and attitudes output by the two sensors are optimally combined by an adaptive fade-out extended Kalman filter to estimate the errors. Finally, several experiments show that this method significantly improves the accuracy of indoor positioning of mobile robots based on visual and inertial sensors.
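The loosely coupled fusion described above can be sketched in simplified scalar form: the difference between the Kinect-derived and IMU-derived positions serves as the measurement of the error state, and a fading factor inflates the predicted covariance when the innovation grows. The function below is an illustrative one-dimensional sketch; the function name, the fading-factor form, and all noise values are assumptions, not the paper's actual filter.

```python
import numpy as np

def fading_kf_step(x, P, z, Q, R, lam_max=10.0):
    """One step of a scalar fading-memory Kalman filter.

    x, P : prior state estimate and covariance
    z    : measurement (e.g. Kinect-minus-IMU position difference)
    Q, R : process and measurement noise variances
    The fading factor inflates P when the innovation is larger than
    its predicted statistics, discounting stale information.
    """
    # Predict (identity dynamics for the error state)
    P_pred = P + Q
    # Innovation and its predicted variance
    nu = z - x
    S = P_pred + R
    # Simple adaptive fading factor (hypothetical form):
    lam = min(max(nu * nu / S, 1.0), lam_max)
    P_pred *= lam
    S = P_pred + R
    # Standard Kalman update
    K = P_pred / S
    x_new = x + K * nu
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

Run over a stream of vision/IMU position differences, the filter converges toward the true sensor error while keeping a bounded memory of old data.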

Author(s):  
Mehdi Dehghani ◽  
Hamed Kharrati ◽  
Hadi Seyedarabi ◽  
Mahdi Baradarannia

Accumulated error and noise sensitivity are two common problems of ordinary inertial sensors. An accurate gyroscope is expensive and thus not normally applicable to low-cost mobile robot missions. Since accelerometers are considerably cheaper than gyroscopes of comparable grade, using redundant accelerometers is an alternative; this mechanism is called gyroscope-free navigation. This article deals with autonomous mobile robot (AMR) navigation based on the gyroscope-free method. The navigation errors of the gyroscope-free method in long-duration missions are demonstrated. To compensate for the position error, aiding information from low-cost stereo cameras and a topological map of the workspace is employed in the navigation system. After precise sensor calibration, an amendment algorithm is presented to fuse the measurements of the gyroscope-free inertial measurement unit (GFIMU) with the stereo camera observations. The advantages of, and comparisons between, vision-aided navigation and gyroscope-free navigation of mobile robots are also discussed. The experimental results show increased accuracy with vision-aided navigation of the mobile robot.
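The gyroscope-free idea rests on the fact that accelerometers at different points of a rotating rigid body read different centripetal accelerations. A minimal sketch, assuming two radially aligned accelerometers and pure rotation about a fixed axis (a real GFIMU uses six or more accelerometers and solves for the full angular velocity and acceleration vectors):

```python
import math

def angular_rate_from_accels(a_inner, a_outer, r):
    """Estimate the angular-rate magnitude of a rigid body from two
    radially aligned accelerometers separated by a distance r.

    For pure rotation about a fixed axis, the centripetal
    acceleration at radius d is w^2 * d, so the difference of the
    two readings divided by their separation gives w^2.
    """
    w_sq = (a_outer - a_inner) / r
    if w_sq < 0:
        raise ValueError("inconsistent readings for pure rotation")
    return math.sqrt(w_sq)
```

For example, readings of 0.4 and 0.8 m/s2 from sensors 0.1 m apart imply an angular rate of 2 rad/s.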


2013 ◽  
Vol 117 (1188) ◽  
pp. 111-132 ◽  
Author(s):  
T. L. Grigorie ◽  
R. M. Botez

Abstract This paper presents a new adaptive algorithm for the statistical filtering of miniaturised inertial sensor noise. The algorithm uses the minimum variance method to compute a best estimate of the accelerations or angular speeds on each of the three axes of an Inertial Measurement Unit (IMU), using information from accelerometer and gyro arrays placed along the IMU axes. The proposed algorithm also reduces both components of the sensors' noise (long term and short term) by using redundant linear configurations for the sensor placements. A numerical simulation illustrates how the algorithm works, using an accelerometer sensor model and a four-sensor array (unbiased, with different noise densities). Three cases of ideal input acceleration are considered: 1) a null signal; 2) a step signal with a non-null step time; and 3) a low-frequency sinusoidal signal. To validate the proposed algorithm experimentally, bench tests are performed with two sensor configurations: 1) an array of four miniaturised accelerometers (n = 4); and 2) an array of nine miniaturised accelerometers (n = 9). Each configuration is tested for three cases of input acceleration: 0 m/s2, 9.80655 m/s2, and −9.80655 m/s2.
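The minimum variance method at the core of the algorithm weights each redundant sensor inversely to its noise variance. A minimal sketch of this inverse-variance combination (not the paper's full adaptive array filter, which also handles the long-term noise component):

```python
import numpy as np

def min_variance_fuse(readings, variances):
    """Minimum-variance (inverse-variance weighted) combination of
    redundant sensor readings of the same quantity.

    Weights proportional to 1/sigma_i^2 minimize the variance of
    the fused estimate; the fused variance is 1 / sum(1/sigma_i^2),
    always at most the variance of the best single sensor.
    """
    readings = np.asarray(readings, dtype=float)
    inv_var = 1.0 / np.asarray(variances, dtype=float)
    weights = inv_var / inv_var.sum()
    fused = float(weights @ readings)
    fused_var = float(1.0 / inv_var.sum())
    return fused, fused_var
```

With equal variances this reduces to the plain average; with unequal variances the noisier sensors are automatically down-weighted.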


1999 ◽  
Vol 11 (1) ◽  
pp. 1-1
Author(s):  
Kiyoshi Komoriya

Mobility, or locomotion, is as important a function for robots as manipulation. A robot can enlarge its workspace by locomotion, and it can better recognize its environment by moving around and observing its surroundings from various directions with its sensors. Much research has been done on mobile robots, and the field appears to be mature. Research activity on robot mobility is still very active; for example, 22% of the sessions at ICRA'98 - the International Conference on Robotics and Automation - and 24% of the sessions at IROS'98 - the International Conference on Intelligent Robots and Systems - dealt with issues directly related to mobile robots. One of the main reasons may be that intelligent mobile robots are thought to be the closest to practical autonomous robot applications. This special issue covers a variety of mobile robot research, from mobile mechanisms, localization, and navigation to remote control through networks. The first paper, entitled "Control of an Omnidirectional Vehicle with Multiple Modular Steerable Drive Wheels," by M. Hashimoto et al., deals with locomotion mechanisms. The authors propose an omnidirectional mobile mechanism consisting of modular steerable drive wheels. The omnidirectional function of mobile mechanisms will be an important part of human-friendly robots in the near future, enabling flexible movements in indoor environments. The next three papers focus on audiovisual sensing to localize and navigate a robot. The second paper, entitled "High-Speed Measurement of Normal Wall Direction by Ultrasonic Sensor," by A. Ohya et al., proposes a method to measure the normal direction of walls with an ultrasonic array sensor. The third paper, entitled "Self-Position Detection System Using a Visual-Sensor for Mobile Robots," is written by T. Tanaka et al. In their method, the position of the robot is determined by measuring marks such as name plates and fire alarm lamps with a visual sensor.
In the fourth paper, entitled "Development of Ultra-Wide-Angle Laser Range Sensor and Navigation of a Mobile Robot in a Corridor Environment," written by Y. Ando et al., a very wide view-angle sensor is realized using five laser fan-beam projectors and three CCD cameras. The next three papers discuss navigation problems. The fifth paper, entitled "Autonomous Navigation of an Intelligent Vehicle Using 1-Dimensional Optical Flow," by M. Yamada and K. Nakazawa, discusses navigation based on visual feedback. In this work, navigation is realized using general and qualitative knowledge of the environment. The sixth paper, entitled "Development of Sensor-Based Navigation for Mobile Robots Using Target Direction Sensor," by M. Yamamoto et al., proposes a new sensor-based navigation algorithm for environments with unknown obstacles. The seventh paper, entitled "Navigation Based on Vision and DGPS Information for Mobile Robots," by S. Kotani et al., describes a navigation system for an autonomous mobile robot in an outdoor environment. The unique point of their paper is the use of landmarks and a differential global positioning system to determine the robot's position and orientation. The last paper deals with the relationship between mobile robots and computer networks. The paper, entitled "Direct Mobile Robot Teleoperation via Internet," by K. Kawabata et al., proposes direct teleoperation of a mobile robot via the Internet. Such network-based robotics will be an important field of robotics application. We sincerely thank all of the contributors to this special issue for their cooperation from the planning stage through the review process. Many thanks also go to the reviewers for their excellent work. We will be most happy if this issue aids readers in understanding recent trends in mobile robot research and furthers interest in this research field.


2017 ◽  
Vol 870 ◽  
pp. 79-84
Author(s):  
Zhen Xian Fu ◽  
Guang Ying Zhang ◽  
Yu Rong Lin ◽  
Yang Liu

Rapid progress in Micro-Electromechanical System (MEMS) technology is making inertial sensors increasingly miniaturized, enabling them to be widely applied in everyday life. In recent years, research and development of wireless input devices based on MEMS inertial measurement units (IMUs) has received more and more attention. This paper surveys recent research on inertial pens based on MEMS IMUs. First, the advantages of IMU-based input are discussed in comparison with other types of input systems. Then, following the operation of an inertial pen, which can be roughly divided into four stages (motion sensing, error containment, feature extraction, and recognition), the approaches employed to address the challenges facing each stage are introduced. Finally, in discussing the future prospects of IMU-based input systems, it is suggested that methods for autonomous and portable calibration of inertial sensor errors be explored further. The low-cost nature of an inertial pen makes it desirable that its calibration be carried out independently, rapidly, and portably. Meanwhile, some unique features of an inertial pen's operating environment make it possible to simplify its error propagation model and expedite its calibration, making the technique more practically viable.


2017 ◽  
Vol 2017 ◽  
pp. 1-14 ◽  
Author(s):  
Rodrigo Munguía ◽  
Carlos López-Franco ◽  
Emmanuel Nuño ◽  
Adriana López-Franco

This work presents a method for implementing a visual simultaneous localization and mapping (SLAM) system using omnidirectional vision data, with application to autonomous mobile robots. In SLAM, a mobile robot operates in an unknown environment, using only on-board sensors to simultaneously build a map of its surroundings and track its own position within it. SLAM is perhaps one of the most fundamental problems to solve in robotics in order to build truly autonomous mobile robots. The visual sensor used in this work is an omnidirectional vision sensor, which provides a wide field of view that is advantageous for autonomous navigation tasks. Since the sensor is monocular, a method to recover the depth of the features is required; to estimate the unknown depth, we propose a novel stochastic triangulation technique. The proposed system can be applied in indoor or cluttered environments to perform visual navigation when a GPS signal is not available. Experiments with synthetic and real data are presented to validate the proposal.
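Plain two-view triangulation, which the stochastic technique generalizes, recovers depth by intersecting the observation rays of one feature seen from two camera poses. A deterministic least-squares sketch (function name and geometry are illustrative assumptions; the paper's method instead maintains a distribution over depth hypotheses):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Least-squares intersection of two observation rays
    x = p_i + t_i * d_i  (camera centers p_i, bearing vectors d_i).

    Returns the midpoint of closest approach of the two rays,
    which equals the intersection when the rays actually meet.
    """
    d1 = np.asarray(d1, float); d1 = d1 / np.linalg.norm(d1)
    d2 = np.asarray(d2, float); d2 = d2 / np.linalg.norm(d2)
    p1 = np.asarray(p1, float); p2 = np.asarray(p2, float)
    # Solve [d1 -d2] [t1 t2]^T = p2 - p1 in the least-squares sense
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    x1 = p1 + t[0] * d1
    x2 = p2 + t[1] * d2
    return (x1 + x2) / 2.0
```

The same code works in 2D or 3D; the depth of the feature in the first view is simply the solved parameter t1.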


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1950
Author(s):  
David Gualda ◽  
María Carmen Pérez-Rubio ◽  
Jesús Ureña ◽  
Sergio Pérez-Bachiller ◽  
José Manuel Villadangos ◽  
...  

Indoor positioning remains a challenge and, despite much research and development carried out in the last decade, there is still no standard comparable to the Global Navigation Satellite Systems (GNSS) used outdoors. This paper presents an indoor positioning system called LOCATE-US with adjustable granularity for use with commercial mobile devices, such as smartphones or tablets. LOCATE-US is privacy-oriented and allows every device to compute its own position by fusing ultrasonic measurements, inertial sensor measurements, and map information. Ultrasonic Local Positioning Systems (U-LPS) based on encoded signals are placed in critical zones that require an accuracy below a few decimeters to correct the accumulated drift errors of the inertial measurements. These systems are well suited to work at room level, as walls confine the acoustic waves inside. To avoid audible artifacts, the U-LPS emission is set at 41.67 kHz, and an ultrasonic acquisition module with reduced dimensions is attached to the mobile device through the USB port to capture signals. Processing on the mobile device involves an improved Time Differences of Arrival (TDOA) estimation that is fused with the measurements from an external inertial sensor to obtain real-time location and trajectory display at a 10 Hz rate. Graph matching has also been included, exploiting available prior knowledge about the navigation scenario. This kind of device is an adequate platform for Location-Based Services (LBS), enabling applications such as augmented reality, guiding, or people monitoring and assistance. The system architecture can easily incorporate new sensors in the future, such as UWB, RFID, or others.
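TDOA positioning solves for the position whose range differences to the beacons match the measured ones; a common approach is iterative least squares on the hyperbolic equations. A Gauss-Newton sketch under that generic formulation (not necessarily the improved estimator used in LOCATE-US):

```python
import numpy as np

def tdoa_solve(anchors, tdoa, x0, iters=20):
    """Gauss-Newton solver for 2D TDOA positioning.

    anchors : (n, 2) beacon positions; anchor 0 is the reference.
    tdoa    : (n-1,) range differences d_i = |x-a_i| - |x-a_0|
              (TDOA times already multiplied by the speed of sound).
    x0      : initial position guess.
    """
    x = np.asarray(x0, float).copy()
    anchors = np.asarray(anchors, float)
    for _ in range(iters):
        r = np.linalg.norm(anchors - x, axis=1)
        res = (r[1:] - r[0]) - tdoa                # residuals
        # Jacobian of each range difference with respect to x
        J = (x - anchors[1:]) / r[1:, None] - (x - anchors[0]) / r[0]
        dx, *_ = np.linalg.lstsq(J, -res, rcond=None)
        x += dx
    return x
```

With four anchors and a reasonable initial guess (e.g. from the inertial dead reckoning it is fused with), the iteration converges in a handful of steps.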


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3127
Author(s):  
Giuseppe Loprencipe ◽  
Flavio Guilherme Vaz de Almeida Filho ◽  
Rafael Henrique de Oliveira ◽  
Salvatore Bruno

Road networks are monitored to evaluate their level of decay and their performance regarding ride comfort, vehicle rolling noise, fuel consumption, etc. In this study, a novel inertial sensor-based system is proposed, using a low-cost inertial measurement unit (IMU) and a global positioning system (GPS) module connected to a Raspberry Pi Zero W board and embedded inside a vehicle to indirectly monitor the road condition. To assess the level of pavement decay, the comfort index awz defined by the ISO 2631 standard was used. Considering 21 km of roads with different levels of pavement decay, validation measurements were performed using the novel sensor, a high-performance inertial navigation sensor, and a road surface profiler. Comparisons were then made between awz values determined from the accelerations measured by the two different inertial sensors; correlations between awz and typical pavement indicators, such as the international roughness index and ride number, were also computed. The results showed very good correlation between the awz values calculated with the two inertial devices (R2 = 0.98). In addition, the correlations between awz values and the typical pavement indices showed promising results (R2 = 0.83–0.90). The proposed sensor may be regarded as a reliable and easy-to-install method to assess pavement conditions in urban road networks, where the use of traditional systems is difficult and/or expensive.
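The awz index is, in essence, the RMS of frequency-weighted vertical acceleration. The sketch below substitutes a crude band-limiting mask for the Wk weighting filter that ISO 2631 actually prescribes, so it illustrates the shape of the computation only, not the standard-compliant value:

```python
import numpy as np

def weighted_rms(accel_z, fs, band=(0.5, 80.0)):
    """Frequency-weighted RMS of vertical acceleration, in the
    spirit of the ISO 2631 a_wz comfort index.

    NOTE: the standard defines a specific W_k weighting filter;
    as a stand-in, this sketch simply zeroes spectral content
    outside `band` (Hz) before taking the RMS.
    """
    a = np.asarray(accel_z, float)
    spec = np.fft.rfft(a)
    freqs = np.fft.rfftfreq(a.size, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    a_w = np.fft.irfft(spec * mask, n=a.size)   # weighted signal
    return float(np.sqrt(np.mean(a_w ** 2)))
```

Masking out DC also removes the gravity offset of a vehicle-mounted accelerometer, so the result reflects vibration alone.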


1993 ◽  
Vol 5 (4) ◽  
pp. 388-400
Author(s):  
Jun'ichi Takeno ◽  
Naoto Mizuguchi ◽  
Sakae Nishiyama ◽  
Kanehiro Sorimachi ◽  
...  

Of primary importance for mobile robots is smooth movement to the targeted destination. To achieve this, mobile robots must be able to detect a person in their environment, another mobile robot, or an object not described in the map, and avoid collision with it. Recognizing the strong need to provide robots with a visual system for evading obstacles, the authors first developed a real-time visual system to detect moving obstacles and then studied the possibility of avoiding collisions by mounting the system on a mobile robot. The visual sensor used in this system is a passive optical stereo sensor without any mechanical moving parts. Using a special slit pattern, the sensor is configured to split the two images obtained by individual cameras placed on the right and left and to project the split images onto one CCD sensor, providing approximately 200 auto-focusing subsystems. The subsystems operate independently of one another, enabling real-time processing. This paper reports on the visual sensor, a solution to the measurement accuracy problem in detecting moving obstacles with the sensor, and visual system experiments on real-time detection of an actually moving object.


2020 ◽  
Vol 143 (3) ◽  
Author(s):  
Michael J. Rose ◽  
Katherine A. McCollum ◽  
Michael T. Freehill ◽  
Stephen M. Cain

Abstract Overuse injuries in youth baseball players due to throwing are at an all-time high. Traditional methods of tracking player throwing load only count in-game pitches and therefore leave many throws unaccounted for. Miniature wearable inertial sensors can be used to capture motion data outside of the lab in a field setting. The objective of this study was to develop a protocol and algorithms to detect throws and classify throw intensity in youth baseball athletes using a single, upper arm-mounted inertial sensor. Eleven participants from a youth baseball team were recruited to participate in the study. Each participant was given an inertial measurement unit (IMU) and was instructed to wear the sensor during any baseball activity for the duration of a summer season of baseball. A throw identification algorithm was developed using data from a controlled data collection trial. In this report, we present the throw identification algorithm used to identify over 17,000 throws during the 2-month duration of the study. Data from a second controlled experiment were used to build a support vector machine model to classify throw intensity. Using this classification algorithm, throws from all participants were classified as being “low,” “medium,” or “high” intensity. The results demonstrate that there is value in using sensors to count every throw an athlete makes when assessing throwing load, not just in-game pitches.
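A throw detector of the kind described can be sketched as a thresholded event counter over the angular-velocity magnitude, with a refractory gap so one throw is not double-counted. This is a hypothetical simplification; the study's actual identification algorithm was developed from controlled data-collection trials:

```python
import numpy as np

def detect_events(gyro_mag, threshold, min_gap):
    """Flag high-rotation events (candidate throws) in a stream of
    angular-velocity magnitudes from an upper-arm IMU.

    An event starts when the magnitude crosses `threshold`;
    further crossings within `min_gap` samples are merged into
    the same event. Returns the sample indices of event starts.
    """
    events = []
    last = -min_gap
    for i, w in enumerate(gyro_mag):
        if w >= threshold and i - last >= min_gap:
            events.append(i)
            last = i
    return events
```

Counting every event over a season, rather than only in-game pitches, is exactly what lets such a sensor capture the full throwing load.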


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 4025 ◽  
Author(s):  
Xiaosu Xu ◽  
Xinghua Liu ◽  
Beichen Zhao ◽  
Bo Yang

In this paper, an extensible positioning system for mobile robots is proposed. The system includes a stereo camera module, an inertial measurement unit (IMU), and an ultra-wideband (UWB) network of five anchors, one of which has an unknown position. The system requires neither communication between the UWB anchors nor clock synchronization among them. By locating the mobile robot with the original system and then estimating the position of a new anchor from the ranging between the mobile robot and that anchor, the system can be extended by adding the new anchor to the original system. In an unfamiliar environment (such as a fire or other rescue site), the system can thus locate the mobile robot after extending itself. To add the new anchor to the positioning system, a recursive least squares (RLS) approach is used to estimate its position. A maximum correntropy Kalman filter (MCKF), based on the maximum correntropy criterion (MCC), is used to fuse data from the UWB network and the IMU. The initial attitude of the mobile robot relative to the navigation frame is calculated by comparing position vectors given by a visual simultaneous localization and mapping (SLAM) system and by the UWB system. As shown in the experiment section, the root mean square error (RMSE) of the positioning result given by the proposed system with all anchors is 0.130 m. In the unfamiliar environment, the RMSE is 0.131 m, close to the RMSE of the original system (0.137 m), with a difference of 0.006 m. In addition, the RMSE of the new anchor's estimated position, based on Euclidean distance, is 0.061 m.
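Estimating the new anchor's position from ranges taken at known robot positions becomes a linear problem after subtracting squared-range equations, which is what makes a least-squares (and hence RLS) solution possible. A batch sketch of that linearization follows; the paper's RLS filter processes the same rows recursively rather than all at once:

```python
import numpy as np

def estimate_anchor(robot_positions, ranges):
    """Estimate an unknown UWB anchor position from ranges measured
    at known robot positions.

    Subtracting the squared-range equation at the first position
    from the others cancels |a|^2 and linearizes the problem:
        2 (p_j - p_0)^T a = |p_j|^2 - |p_0|^2 - (r_j^2 - r_0^2)
    which is solved here in one batch least-squares step.
    """
    P = np.asarray(robot_positions, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (P[1:] - P[0])
    b = (np.sum(P[1:] ** 2, axis=1) - np.sum(P[0] ** 2)
         - (r[1:] ** 2 - r[0] ** 2))
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a
```

In 2D, three well-spread robot positions beyond the reference already determine the anchor; extra positions average out ranging noise.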

