A Mobile Robot Position Adjustment as a Fusion of Vision System and Wheels Odometry in Autonomous Track Driving

2021 ◽  
Vol 11 (10) ◽  
pp. 4496
Author(s):  
Jarosław Zwierzchowski ◽  
Dawid Pietrala ◽  
Jan Napieralski ◽  
Andrzej Napieralski

Autonomous mobile vehicles need advanced systems to determine their exact position in a given coordinate system. The GPS and vision systems are most often used for this purpose. Both have disadvantages: the GPS signal is unavailable indoors and may be inaccurate, while a vision system depends strongly on the intensity of the recorded light. This paper assumes that the primary system for determining the vehicle position is wheel odometry combined with an IMU (Inertial Measurement Unit) sensor, whose task is to measure changes in the robot's orientation, such as the yaw rate. However, relying only on the wheel-based results introduces an additive measurement error, most often caused by wheel slippage and IMU sensor drift. In the presented work, this error is reduced by a vision system that continuously measures the vehicle's distance to markers located in its surroundings. The paper also describes the fusion of signals from the vision system and the wheel odometry. Studies of the vehicle positioning accuracy with the vision system turned on and off are presented. The averaged laboratory positioning error was reduced from 0.32 m to 0.13 m, with the vehicle wheels experiencing no slippage. The paper also describes the performance of the system during a real track drive, in which the GPS geolocation system was deliberately not used. In this case, the vision system assisted in the vehicle positioning, and an accuracy of 0.2 m was achieved at the control points.
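
A minimal sketch, not the authors' implementation, of how wheel-odometry dead reckoning can be corrected with range/bearing observations of markers at known positions; the function names, the scalar blending gain and all values are illustrative assumptions:

```python
import numpy as np

def propagate_odometry(pose, v, yaw_rate, dt):
    """Dead-reckon pose = (x, y, yaw) from wheel speed v and IMU yaw rate."""
    x, y, yaw = pose
    yaw += yaw_rate * dt
    x += v * dt * np.cos(yaw)
    y += v * dt * np.sin(yaw)
    return np.array([x, y, yaw])

def correct_with_marker(pose, marker_xy, dist, bearing, gain=0.3):
    """Blend the dead-reckoned position with the position implied by a
    range/bearing observation of a marker at a known location."""
    x, y, yaw = pose
    # Vehicle position implied by the marker observation
    implied = np.array(marker_xy) - dist * np.array([np.cos(yaw + bearing),
                                                     np.sin(yaw + bearing)])
    blended = (1.0 - gain) * np.array([x, y]) + gain * implied
    return np.array([blended[0], blended[1], yaw])

# Example: drive forward for 10 s, then correct with a marker at (5, 0)
pose = np.zeros(3)
for _ in range(100):
    pose = propagate_odometry(pose, v=0.5, yaw_rate=0.0, dt=0.1)
pose = correct_with_marker(pose, marker_xy=(5.0, 0.0), dist=0.1, bearing=0.0)
print(pose)
```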

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2302
Author(s):  
Kai Zhu ◽  
Xuan Guo ◽  
Changhui Jiang ◽  
Yujingyang Xue ◽  
Yuanjun Li ◽  
...  

With the rapid development of autonomous vehicles, the demand for reliable positioning results is urgent. Currently, ground vehicles depend heavily on the Global Navigation Satellite System (GNSS) and the Inertial Navigation System (INS) to provide reliable and continuous navigation solutions. In dense urban areas, especially narrow streets with tall buildings, GNSS signals may be blocked by the surrounding buildings; under this condition the geometric distribution of the in-view satellites is very poor, and Non-Line-Of-Sight (NLOS) and Multipath (MP) effects heavily degrade the positioning accuracy. Furthermore, the INS positioning errors diverge quickly over time without GNSS corrections. Aiming to improve the positioning accuracy in signal-challenged environments, in this paper we developed a MIMU (Micro Inertial Measurement Unit)/Odometer integration system with vehicle state constraints (MO-C) for improving the vehicle positioning accuracy without GNSS. The MIMU/Odometer integration model and the constrained measurements are described in detail. Several field tests were carried out to evaluate and assess the MO-C system. The experiments were divided into two parts: first, field testing with data post-processing and real-time processing was carried out to fully assess the performance of the MO-C system; second, the MO-C was implemented in a BeiDou Navigation Satellite System (BDS)/INS integrated navigation system to evaluate the MO-C performance during BDS signal outages. The MIMU standalone positioning accuracy was compared with that of the MIMU/Odometer integration (MO), MO-C and MIMU with constraints (M-C) to assess the contribution of the odometer and of the constraints to reducing the positioning errors. The results showed that the latitude and longitude errors could be suppressed with odometer aiding, and the height errors were suppressed when the state constraints were included.
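
As a hedged illustration of how an odometer measurement and the usual zero lateral/vertical velocity (non-holonomic) constraints can be applied as a Kalman-style update; the paper's exact MO-C formulation is not reproduced, and the state, noise values and names below are assumptions:

```python
import numpy as np

def nhc_odometer_update(v_body, P, v_odo, r_odo=0.05**2, r_nhc=0.02**2):
    """Kalman-style measurement update on the body-frame velocity estimate.

    v_body : predicted velocity in the body frame [forward, right, down] (m/s)
    P      : 3x3 covariance of the velocity estimate
    v_odo  : odometer-measured forward speed (m/s)
    The non-holonomic constraint assumes the lateral and vertical body-frame
    velocities of a land vehicle are (nearly) zero.
    """
    H = np.eye(3)                       # measurement observes the velocity directly
    z = np.array([v_odo, 0.0, 0.0])     # odometer speed + NHC pseudo-measurements
    R = np.diag([r_odo, r_nhc, r_nhc])  # measurement noise
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    v_upd = v_body + K @ (z - H @ v_body)
    P_upd = (np.eye(3) - K @ H) @ P
    return v_upd, P_upd

# Example: the inertial prediction has a drifting sideways velocity; the update suppresses it
v, P = nhc_odometer_update(np.array([4.8, 0.3, -0.1]), np.eye(3) * 0.1, v_odo=5.0)
print(v)
```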


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 391
Author(s):  
Luca Bigazzi ◽  
Stefano Gherardini ◽  
Giacomo Innocenti ◽  
Michele Basso

In this paper, solutions for precise maneuvering of small (e.g., 350-class) autonomous Unmanned Aerial Vehicles (UAVs) are designed and implemented from smart modifications of inexpensive mass-market technologies. The considered class of vehicles can carry only a light load, and therefore only a limited number of sensors and computing devices can be installed on board. To make the prototype capable of moving autonomously along a fixed trajectory, a “cyber-pilot”, able to replace the human operator on demand, has been implemented on an embedded control board. This cyber-pilot overrides the commands thanks to a custom hardware signal mixer. The drone is able to localize itself in the environment without ground assistance by using a camera, possibly mounted on a 3 Degrees Of Freedom (DOF) gimbal suspension. A computer vision system processes the video stream, detecting land markers with known absolute position and orientation. This information is fused with accelerations from a 6-DOF Inertial Measurement Unit (IMU) to generate a “virtual sensor” which provides refined estimates of the pose, absolute position, speed and angular velocities of the drone. Due to the importance of this sensor, several fusion strategies have been investigated. The resulting data are finally fed to a control algorithm featuring a number of uncoupled digital PID controllers, which drive the displacement from the desired trajectory to zero.
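
A minimal sketch of one of the uncoupled digital PID loops mentioned above; the gains, sample time and error value are illustrative, not the authors' tuning:

```python
class DigitalPID:
    """Discrete PID controller driving one displacement channel toward zero."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One independent loop per axis; the error is the displacement from the
# desired trajectory as estimated by the vision/IMU "virtual sensor".
pid_x = DigitalPID(kp=0.8, ki=0.05, kd=0.2, dt=0.02)
command_x = pid_x.update(error=0.15)   # e.g. 15 cm lateral displacement
print(command_x)
```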


Aerospace ◽  
2020 ◽  
Vol 7 (12) ◽  
pp. 168
Author(s):  
Robert Głębocki ◽  
Mariusz Jacewicz

In a vertical cold launch, the missile leaves the launcher without its main engine running. Above the launcher, the attitude of the missile is controlled by a set of lateral thrusters. However, the quick turn may be disturbed by various uncertainties. This study discusses the influence of disturbances and of the repeatability of the lateral thrusters' ignition on the quality of the pitch maneuver. A generic 152.4 mm projectile equipped with small, solid-propellant lateral thrusters was used as a test platform. A six-degree-of-freedom mathematical model was developed to execute Monte-Carlo simulations of the launch phase and to prepare the flight test campaign. A parametric analysis was performed to investigate the influence of system uncertainties on the repeatability of the quick turn. A series of ground laboratory trials was carried out, and thirteen flight tests were completed on the missile test range. The flight parameters were measured using an onboard inertial measurement unit and a ground vision system. It was experimentally proved that the cold vertical launch maneuver could be performed properly with at least two lateral motors. It was found that the initial roll rate of the projectile and the uncertainties of the lateral thrusters' igniters affect the achieved pitch angle and must be minimized to reduce the projectile dispersion.
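
A schematic Monte-Carlo loop of the kind described above, with a simple placeholder pitch model standing in for the six-degree-of-freedom simulation; the distributions, sensitivities and parameter values are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pitch_after_maneuver(ignition_delay_s, initial_roll_rate_rad_s):
    """Placeholder for the 6-DOF simulation: returns the pitch angle (deg)
    reached after the lateral-thruster turn. Illustrative only."""
    nominal_pitch = 40.0
    # Later ignition and a larger roll rate degrade the achieved pitch angle.
    return nominal_pitch - 80.0 * ignition_delay_s - 2.0 * abs(initial_roll_rate_rad_s)

n_runs = 10_000
pitch = np.empty(n_runs)
for i in range(n_runs):
    delay = rng.normal(0.0, 0.01)        # igniter timing uncertainty (s)
    roll_rate = rng.normal(0.0, 0.5)     # initial roll rate uncertainty (rad/s)
    pitch[i] = pitch_after_maneuver(delay, roll_rate)

print(f"pitch angle: mean {pitch.mean():.1f} deg, std {pitch.std():.2f} deg")
```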


GEOMATICA ◽  
2016 ◽  
Vol 70 (1) ◽  
pp. 21-30 ◽  
Author(s):  
Chris Hugenholtz ◽  
Owen Brown ◽  
Jordan Walker ◽  
Thomas Barchyn ◽  
Paul Nesbit ◽  
...  

Mapping with unmanned aerial vehicles (UAVs) typically involves the deployment of ground control points (GCPs) to georeference the images and topographic model. An alternative approach is direct georeferencing, whereby the onboard Global Navigation Satellite System (GNSS) and inertial measurement unit are used without GCPs to locate and orient the data. This study compares the spatial accuracy of these approaches using two nearly identical UAVs. The onboard GNSS is the one difference between them, as one vehicle uses a survey-grade GNSS/RTK receiver (RTK UAV), while the other uses a lower-grade GPS receiver (non-RTK UAV). Field testing was performed at a gravel pit, with all ground measurements and aerial surveying completed on the same day. Three sets of orthoimages and DSMs were produced for comparing spatial accuracies: two sets were created by direct georeferencing of images from the RTK UAV and non-RTK UAV, and one set was created by using GCPs during the external orientation of the non-RTK UAV images. Spatial accuracy was determined from the horizontal (X, Y) and vertical (Z) residuals and root-mean-square errors (RMSE) relative to 17 horizontal and 180 vertical check points measured with a GNSS/RTK base station and rover. For the two direct georeferencing datasets, the horizontal and vertical accuracy improved substantially with the survey-grade GNSS/RTK receiver onboard the RTK UAV, effectively reducing the RMSE values in X, Y and Z by 1 to 2 orders of magnitude compared to the lower-grade GPS receiver onboard the non-RTK UAV. Importantly, the horizontal accuracy of the RTK UAV data processed via direct georeferencing was equivalent to the horizontal accuracy of the non-RTK UAV data processed with GCPs, but the vertical error of the DSM from the RTK UAV data was 2 to 3 times greater than that of the DSM from the non-RTK data with GCPs. Overall, the results suggest that direct georeferencing with the RTK UAV can achieve horizontal accuracy comparable to that obtained with a network of GCPs, but for topographic measurements requiring the highest achievable accuracy, researchers and practitioners should use GCPs.
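
For reference, a small sketch of the per-axis check-point RMSE metric used above; the coordinates are made-up illustrative values:

```python
import numpy as np

def rmse(estimated, reference):
    """Root-mean-square error of estimated coordinates against check points."""
    residuals = np.asarray(estimated) - np.asarray(reference)
    return np.sqrt(np.mean(residuals ** 2, axis=0))

# Illustrative check-point comparison (X, Y, Z in metres)
measured = [[100.02, 200.01, 50.10], [150.03, 180.00, 49.95]]
checked  = [[100.00, 200.00, 50.00], [150.00, 180.02, 50.00]]
print(rmse(measured, checked))   # per-axis RMSE in X, Y, Z
```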


2011 ◽  
Vol 308-310 ◽  
pp. 351-355
Author(s):  
Syed Ghafoor Shah ◽  
Gui Li Xu ◽  
Wei Ji Ni ◽  
Yong Qiang Ye

This paper proposes a new method for measuring the 3D coordinates of a point using a single-camera vision system. The contact point is determined using 3D force sensors. In addition, a force-limiting system has been incorporated to improve the accuracy of the results. A 3D point is captured when the touch probe senses a force above a certain limit, at which point recording of that point is initiated. The recorded points are then processed to compute the required features, such as the distance between planes, angles and radii. The IMU (Inertial Measurement Unit) provides an initial estimate of the target plane position, which enables the whole system to perform the required task quickly. Hence, this system can be used for continuous scanning of any surface.
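
A minimal sketch of the capture-and-compute idea described above: a probed point is recorded only when the contact force exceeds a limit, and fitted planes then yield features such as the plane-to-plane angle. The threshold, point values and helper names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

FORCE_LIMIT_N = 2.0   # illustrative contact-force threshold

def maybe_capture(force_n, point_xyz, captured):
    """Record the probed 3D point only when the contact force exceeds the limit."""
    if force_n >= FORCE_LIMIT_N:
        captured.append(np.asarray(point_xyz, dtype=float))

def fit_plane_normal(points):
    """Least-squares plane normal of a set of captured points (via SVD)."""
    pts = np.asarray(points)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    return vt[-1]                     # normal = direction of least variance

def angle_between_planes(points_a, points_b):
    na, nb = fit_plane_normal(points_a), fit_plane_normal(points_b)
    cos_t = abs(np.dot(na, nb)) / (np.linalg.norm(na) * np.linalg.norm(nb))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Example: two roughly perpendicular planes probed at three points each
plane_a, plane_b = [], []
for p in [(0, 0, 0), (1, 0, 0.01), (0, 1, 0)]:
    maybe_capture(2.5, p, plane_a)
for p in [(0, 0, 0), (0, 0.02, 1), (1, 0, 1)]:
    maybe_capture(2.5, p, plane_b)
print(angle_between_planes(plane_a, plane_b))   # close to 90 degrees
```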


2013 ◽  
Vol 278-280 ◽  
pp. 1237-1241
Author(s):  
Jun Wei Yu ◽  
Nan Liu ◽  
Gui Cai Wang ◽  
Xiao Bo Jin

A novel vision-aided navigation technique for autonomous aircraft is presented in this paper. The aircraft's position and pose are estimated from several control points. A saliency descriptor for corners is defined, and the control points are selected according to their saliency. The control points are tracked in sequential images based on the Fourier-Mellin transform. An unscented Kalman filter is used to fuse the aircraft state information provided by the vision system and the inertial navigation system. Experiments show that the accuracy, efficiency and robustness of the aircraft navigation system are improved with the proposed method.
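
As a hedged illustration, the snippet below estimates the translation between two frames by phase correlation, which is the core operation underlying Fourier-Mellin registration; the rotation and scale recovery via the log-polar transform, and the paper's saliency-based selection, are omitted:

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer-pixel translation between two image patches by
    phase correlation (the translation core of Fourier-Mellin registration)."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the patch back into the negative range
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

# Example: a synthetic patch shifted by (5, -3) pixels between two frames
rng = np.random.default_rng(1)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(5, -3), axis=(0, 1))
print(phase_correlation_shift(frame_b, frame_a))   # ~ (5, -3)
```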


2012 ◽  
Vol 226-228 ◽  
pp. 1958-1964
Author(s):  
Weian Wang ◽  
Shu Ying Xu ◽  
Gang Qiao

This paper investigates the geo-positioning accuracy of across-track QuickBird stereo imagery in Shanghai, China, where the terrain relief is very low (about 3 m) but buildings reach heights of up to 380 m. The rational function model (RFM) and the bias-compensated RFM with different parameters are employed for the accuracy analysis with different configurations of ground control points (GCPs). The systematic errors in the vendor-provided RPCs are revealed and discussed. The bias-compensated RFM results show that different strategies, in terms of the number of GCPs and the geometric correction method, should be considered in order to obtain better and more reasonable positioning accuracy in all three directions. The results also show that the best accuracy, 0.6 m in the horizontal direction and 0.8 m in the vertical direction, can be achieved with the second-order polynomial model when more than eight GCPs are used.
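
A sketch of the bias-compensation idea: residuals between measured and RPC-projected GCP image coordinates are fitted with a second-order polynomial in image space and used to correct other points. The coordinates and synthetic bias below are illustrative, not the paper's data:

```python
import numpy as np

def fit_bias_poly2(line_samp, residuals):
    """Fit a second-order 2-D polynomial bias model to RPC image residuals.
    line_samp : (n, 2) array of GCP image coordinates (line, sample)
    residuals : (n, 2) array of (measured - RPC-projected) image coordinates
    Returns one coefficient vector per image axis."""
    l, s = line_samp[:, 0], line_samp[:, 1]
    A = np.column_stack([np.ones_like(l), l, s, l * s, l ** 2, s ** 2])
    coeffs, *_ = np.linalg.lstsq(A, residuals, rcond=None)
    return coeffs

def apply_bias(coeffs, line_samp):
    l, s = line_samp[:, 0], line_samp[:, 1]
    A = np.column_stack([np.ones_like(l), l, s, l * s, l ** 2, s ** 2])
    return A @ coeffs

# Illustrative use with 9 GCPs (more than the 8 mentioned above)
rng = np.random.default_rng(2)
gcp_img = rng.uniform(0, 20000, size=(9, 2))
true_bias = 3.0 + 1e-4 * gcp_img[:, :1]              # synthetic systematic error
res = np.hstack([true_bias, 0.5 * np.ones((9, 1))])
coeffs = fit_bias_poly2(gcp_img, res)
print(apply_bias(coeffs, gcp_img[:2]))                # estimated bias at two points
```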


Author(s):  
B. Leroux ◽  
J. Cali ◽  
J. Verdun ◽  
L. Morel ◽  
H. He

Airborne LiDAR systems require Direct Georeferencing (DG) in order to compute the coordinates of the surveyed points in the mapping frame. A UAV platform is no exception, but its payload has to be much lighter than that carried by manned aircraft, so the manufacturer needs an alternative to heavy sensors and navigation systems. For the georeferencing of these data, a possible solution is to replace the Inertial Measurement Unit (IMU) with a camera and record the optical flow. The frames would then be processed photogrammetrically to extract the External Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drift, and the option to use Ground Control Points (GCPs), as in airborne photogrammetric surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from the VO outputs. The test bench consists of a trolley that carries a GNSS receiver, an IMU sensor and a camera. The LiDAR is replaced by a tacheometer in order to survey control points whose coordinates are already known. We have also developed a methodology, applied to this test bench, for the calibration of the external parameters and the computation of the surveyed point coordinates. Several tests have revealed a difference of about 2–3 centimetres between the measured control point coordinates and their known values.
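
A minimal sketch of one step needed for such an assessment: converting a camera rotation matrix obtained from the photogrammetric EOP into roll/pitch/yaw so it can be compared with the IMU attitude; the Z-Y-X axis convention and the example angles are assumptions:

```python
import numpy as np

def rotation_to_rpy(R):
    """Convert a rotation matrix (body-to-navigation, Z-Y-X convention assumed)
    into roll, pitch, yaw angles in degrees."""
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([roll, pitch, yaw])

# Example: a rotation of 10 deg yaw, 5 deg pitch, 2 deg roll
r, p, y = np.radians([2.0, 5.0, 10.0])
Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
R = Rz @ Ry @ Rx
print(rotation_to_rpy(R))   # ~ [2, 5, 10]
```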


Author(s):  
Chien-Hsun Chu ◽  
Kai-Wei Chiang

The early development of the mobile mapping system (MMS) was restricted to applications that permitted the determination of the elements of exterior orientation from existing ground control. Mobile mapping refers to a means of collecting geospatial data using mapping sensors mounted on a mobile platform. Research on mobile mapping dates back to the late 1980s and was mainly driven by the need for highway infrastructure mapping and transportation corridor inventories. In the early nineties, advances in satellite and inertial technology made it possible to think about mobile mapping in a different way. Instead of using ground control points as references for orienting the images in space, the trajectory and attitude of the imaging platform could now be determined directly. Cameras, along with navigation and positioning sensors, are integrated and mounted on a land vehicle for mapping purposes. Objects of interest can be directly measured and mapped from images that have been georeferenced using the navigation and positioning sensors. Direct georeferencing (DG) is the determination of the time-variable position and orientation parameters of a mobile digital imager. The most common technologies used for this purpose today are satellite positioning using the Global Navigation Satellite System (GNSS) and inertial navigation using an Inertial Measurement Unit (IMU). Although either technology used alone could in principle determine both position and orientation, they are usually integrated in such a way that the IMU is the main orientation sensor, while the GNSS receiver is the main position sensor. However, in GNSS-denied environments such as urban canyons, foliage, tunnels and indoor spaces, the GNSS signals are obstructed because of the limited number of visible satellites, causing GNSS gaps, or are interfered with by reflected signals that cause abnormal measurement residuals, both of which deteriorate the positioning accuracy. This study aims at developing a novel method that uses ground control points to maintain the positioning accuracy of the MMS in GNSS-denied environments. Finally, the study analyses the performance of the proposed method through the DG process using about 20 check points.
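
A sketch of the basic direct-georeferencing relation underlying the discussion above: the mapping-frame coordinate of an object point is the GNSS-derived platform position plus the IMU-rotated lever arm and scaled image ray; the symbols, lever arm and scale below are illustrative assumptions:

```python
import numpy as np

def direct_georeference(p_gnss, R_body_to_map, lever_arm, r_image_body, scale):
    """Basic DG relation:
    r_map = p_gnss + R_body_to_map @ (lever_arm + scale * r_image_body)

    p_gnss        : GNSS-derived platform position in the mapping frame
    R_body_to_map : attitude from the IMU (body-to-mapping rotation matrix)
    lever_arm     : camera offset from the IMU/GNSS centre, in the body frame
    r_image_body  : image ray of the object point expressed in the body frame
    scale         : scale factor along the ray (e.g. from stereo or a range sensor)
    """
    return p_gnss + R_body_to_map @ (lever_arm + scale * r_image_body)

# Illustrative numbers only
p = np.array([350000.0, 2765000.0, 45.0])     # platform position (m)
R = np.eye(3)                                 # level, north-aligned attitude
print(direct_georeference(p, R, np.array([0.5, 0.0, -0.2]),
                          np.array([0.1, 0.0, 1.0]), scale=12.0))
```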

