TOWARDS AN ACCURATE LOW-COST STEREO-BASED NAVIGATION OF UNMANNED PLATFORMS IN GNSS-DENIED AREAS

Author(s):  
Z. Shtain ◽  
S. Filin

Abstract. While lightweight stereo vision sensors provide detailed, high-resolution information that allows robust and accurate localization, the computational demands of such processing are double those of monocular sensors. In this paper, an alternative model for pose estimation of stereo sensors is introduced, providing an efficient and precise framework for investigating system configurations and maximizing pose accuracy. Using the proposed formulation, we examine the parameters that affect accurate pose estimation and their magnitudes, and show that for standard operational altitudes of ∼50 m, a five-fold improvement in localization is reached, from ∼0.4–0.5 m with a single sensor to less than 0.1 m, by taking advantage of the extended field of view of both cameras. Furthermore, this improvement is reached using cameras with smaller, more affordable sensors. Hence, a dual-camera setup not only improves the pose estimation but also enables the use of smaller sensors, reducing the overall system cost. Our analysis shows that even a slight modification of the camera directions further improves the positional accuracy and yields attitude angles as accurate as ±6′ (compared to ±20′). The proposed pose estimation method relieves the computational demands of traditional bundle adjustment processes and is easily integrated with other inertial sensors.
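As a rough illustration of why a dual-camera configuration helps (this is not the paper's model), the minimal sketch below applies the textbook stereo depth-error relation dZ ≈ Z²·d_px/(f·B) at a 50 m altitude; the focal length, baselines and matching error are assumed values for illustration only.

```python
# Back-of-the-envelope comparison of triangulation error for a narrow vs. an
# extended effective baseline; all numbers are illustrative assumptions.

def depth_error(altitude_m, baseline_m, focal_px, disparity_err_px=0.5):
    """Approximate 1-sigma depth error of stereo triangulation."""
    return (altitude_m ** 2) * disparity_err_px / (focal_px * baseline_m)

focal_px = 4000.0                      # assumed focal length in pixels
for baseline_m in (0.3, 1.5):          # assumed narrow vs. extended baselines (m)
    err = depth_error(50.0, baseline_m, focal_px)
    print(f"baseline {baseline_m:.1f} m -> ~{err:.2f} m error at 50 m altitude")
```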

Author(s):  
A. M. G. Tommaselli ◽  
M. B. Campos ◽  
L. F. Castanheiro ◽  
E. Honkavaara

Abstract. Low-cost imaging and positioning sensors are opening new frontiers for applications in near real-time photogrammetry. Omnidirectional cameras acquiring images with 360° coverage, when combined with information from GNSS (Global Navigation Satellite Systems) and an IMU (Inertial Measurement Unit), can efficiently estimate orientation and object-space structure. However, several challenges remain in the use of low-cost sensors and of image observations acquired by sensors with non-perspective inner geometry. The accuracy of measurements with low-cost sensors is affected by different error sources and by sensor stability, and microelectromechanical systems (MEMS) show a large gap between predicted and actual accuracy. This work presents a study on the performance of an integrated sensor orientation approach that estimates sensor orientation and a sparse 3D point cloud, using an incremental bundle adjustment strategy and data from a low-cost portable mobile terrestrial system composed of off-the-shelf navigation sensors and a poly-dioptric system (Ricoh Theta S). Experiments were performed in an outdoor area (a sidewalk), achieving a trajectory positional accuracy of 0.33 m and a meter-level 3D reconstruction.
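A much-simplified sketch of the idea of adding GNSS observations to the adjustment (not the authors' pipeline): an image-derived position and a GNSS fix are combined by inverse-variance weighting. The positions and sigmas below are assumed.

```python
import numpy as np

def fuse(pos_img, sigma_img, pos_gnss, sigma_gnss):
    """Weighted least-squares fusion of two independent position estimates."""
    w_img, w_gnss = 1.0 / sigma_img**2, 1.0 / sigma_gnss**2
    fused = (w_img * pos_img + w_gnss * pos_gnss) / (w_img + w_gnss)
    sigma_fused = (w_img + w_gnss) ** -0.5
    return fused, sigma_fused

pos_img = np.array([10.2, 5.1, 1.3])    # hypothetical image-only estimate (m)
pos_gnss = np.array([10.6, 5.4, 1.1])   # hypothetical GNSS fix (m)
print(fuse(pos_img, 0.5, pos_gnss, 0.3))
```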


Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5428 ◽  
Author(s):  
Yibin Wu ◽  
Xiaoji Niu ◽  
Junwei Du ◽  
Le Chang ◽  
Hailiang Tang ◽  
...  

The fully autonomous operation of multirotor unmanned aerial vehicles (UAVs) in many applications requires support for precision landing. Onboard cameras and fiducial markers have been widely used for this critical phase due to their low cost and high effectiveness. This paper proposes a six-degrees-of-freedom (DoF) pose estimation solution for UAV landing based on an artificial marker and a micro-electromechanical system (MEMS) inertial measurement unit (IMU). The position and orientation of the landing marker are measured in advance. The absolute position and heading of the UAV are estimated by detecting the marker and extracting corner points with the onboard monocular camera. To achieve continuous and reliable positioning when the marker is occasionally occluded, IMU data are fused by an extended Kalman filter (EKF). The error terms of the IMU sensor are modeled and estimated. Field experiments show that the positioning accuracy of the proposed system is at the centimeter level, and the heading error is less than 0.1 degrees. Compared to the marker-based approach, the roll and pitch angle errors decreased by 33% and 54% on average. Within five seconds of vision outage, the average drifts of the horizontal and vertical position were 0.41 and 0.09 m, respectively.
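The sketch below is a deliberately reduced, one-dimensional version of the fusion idea (not the paper's full 6-DoF EKF with IMU error-state modeling): IMU acceleration propagates the state between marker detections, and marker-based position fixes correct it. Noise values and inputs are assumed.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity state transition
B = np.array([[0.5 * dt**2], [dt]])       # acceleration input matrix
H = np.array([[1.0, 0.0]])                # marker measures position only
Q = np.eye(2) * 1e-4                      # assumed process noise
R = np.array([[1e-4]])                    # assumed marker measurement noise (m^2)

x = np.zeros((2, 1))                      # state: [position, velocity]
P = np.eye(2)

def predict(x, P, accel):
    """Propagate the state with an IMU-derived acceleration."""
    x = F @ x + B * accel
    return x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the state with a marker-based position measurement."""
    y = z - H @ x
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = predict(x, P, accel=0.2)                # IMU propagation step
x, P = update(x, P, z=np.array([[0.01]]))      # marker correction step
print(x.ravel())
```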


2010 ◽  
Vol 44-47 ◽  
pp. 3781-3784
Author(s):  
Rui Hua Chang ◽  
Xiao Dong Mu ◽  
Xiao Wei Shen

An attitude estimation method is presented for a robot using low-cost solid-state inertial sensors. The attitude estimates are obtained from a complementary filter that combines measurements from the integration of a tri-axis gyro with an aiding system mechanized using a tri-axis accelerometer and a tri-axis magnetometer. The results show that the estimation error is less than 1 degree compared to the reference attitude. It is a simple yet effective method for attitude estimation, suitable for real-time implementation on a robot.
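A minimal sketch of the complementary-filter idea described above, assuming synthetic sensor values: the integrated gyro rate supplies the short-term angle, the accelerometer-derived tilt supplies the long-term reference, and the two are blended with a fixed gain.

```python
import math

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend gyro integration (short-term) with accelerometer tilt (long-term)."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

angle, dt = 0.0, 0.01
for _ in range(100):
    gyro_rate = 0.5                      # assumed roll rate (rad/s)
    accel_angle = math.radians(28.0)     # assumed accelerometer-derived roll (rad)
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt)
print(math.degrees(angle))
```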


2015 ◽  
Vol 27 (4) ◽  
pp. 410-418 ◽  
Author(s):  
Masashi Yokozuka ◽  
◽  
Osamu Matsumoto

[Figure: Comparison of mapping results] This paper studies an accurate localization method for making maps for mobile robots using odometry and a global positioning system (GPS) without scan matching, and investigates the GPS accuracy required for map-making. To generate accurate maps, SLAM techniques such as scan matching are typically used to obtain accurate positions. Scan matching is unstable, however, in complex environments and has a high computational cost. To avoid these problems, we studied accurate localization without scan matching. Loop closing is an important property in generating consistent maps; inconsistencies in maps prevent correct routes to destinations from being generated. Our method adds scan data to a map along a trajectory given by odometry. Odometry accumulates errors due, e.g., to wheel slippage or wheel diameter variations. To remove this accumulated error, we used bundle adjustment with two types of processing. The first is a simple manual constraint in which the robot is moved to the same position at the start and the end, i.e., the robot returns to its start position when it finishes. The second uses a GPS device to improve map accuracy. Experimental results showed that an accurate map is generated using wheel-encoder odometry and a low-cost GPS device. Results were evaluated against a real-time kinematic (RTK) GPS device whose accuracy is within a few centimeters.
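A toy sketch of the loop-closing constraint (not the paper's bundle adjustment): when the robot is known to end where it started, the revealed odometry drift can be spread back along the trajectory. The trajectory below is synthetic.

```python
import numpy as np

traj = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.3], [1.0, 0.5], [0.2, 0.4]])
drift = traj[-1] - traj[0]                    # error revealed by returning to start

weights = np.linspace(0.0, 1.0, len(traj))[:, None]   # 0 at start, 1 at end
corrected = traj - weights * drift                     # distribute drift along path
print(corrected)
```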


Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 2165 ◽  
Author(s):  
Xichao Teng ◽  
Qifeng Yu ◽  
Jing Luo ◽  
Gang Wang ◽  
Xiaohu Zhang

A robust and accurate aircraft pose estimation method is proposed in this paper. The aircraft pose reflects the flight status of the aircraft, and accurate pose measurement is of great importance in many aerospace applications. This work aims to establish a universal framework for estimating aircraft pose based on generic geometric structure features. In our method, line features are extracted to describe the structure of an aircraft in single images, and the generic geometric features are exploited to form line groups for aircraft structure recognition. Parallel-line clustering is used to detect the fuselage reference line, and the bilateral symmetry of the aircraft provides an important constraint for extracting the wing edge lines under weak perspective projection. After identifying the main structure of the aircraft, a plane-intersection method is used to obtain the 3D pose parameters from the established line correspondences. The proposed method increases the measuring range of binocular vision sensors and has the advantage of not relying on 3D models, cooperative markers or other feature datasets. Experimental results show that the method obtains reliable and accurate pose information for different types of aircraft.
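A hedged sketch of the plane-intersection step: each image line together with its camera centre defines an interpretation plane, and the 3D edge is the intersection of the two planes from the two cameras. The plane parameters below are assumed for illustration.

```python
import numpy as np

def intersect_planes(n1, d1, n2, d2):
    """Line of intersection of the planes n1·x = d1 and n2·x = d2."""
    direction = np.cross(n1, n2)
    # A point on the line: solve the stacked system [n1; n2; direction] x = [d1, d2, 0]
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)

n1, d1 = np.array([0.0, 1.0, 0.0]), 2.0   # assumed interpretation plane, camera 1
n2, d2 = np.array([1.0, 0.0, 0.5]), 1.0   # assumed interpretation plane, camera 2
print(intersect_planes(n1, d1, n2, d2))
```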


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 428 ◽  
Author(s):  
Guichao Lin ◽  
Yunchao Tang ◽  
Xiangjun Zou ◽  
Juntao Xiong ◽  
Jinhui Li

Fruit detection in real outdoor conditions is necessary for automatic guava harvesting, and the branch-dependent pose of fruits is also crucial to guide a robot to approach and detach the target fruit without colliding with its mother branch. To conduct automatic, collision-free picking, this study investigates a fruit detection and pose estimation method by using a low-cost red–green–blue–depth (RGB-D) sensor. A state-of-the-art fully convolutional network is first deployed to segment the RGB image to output a fruit and branch binary map. Based on the fruit binary map and RGB-D depth image, Euclidean clustering is then applied to group the point cloud into a set of individual fruits. Next, a multiple three-dimensional (3D) line-segments detection method is developed to reconstruct the segmented branches. Finally, the 3D pose of the fruit is estimated using its center position and nearest branch information. A dataset was acquired in an outdoor orchard to evaluate the performance of the proposed method. Quantitative experiments showed that the precision and recall of guava fruit detection were 0.983 and 0.948, respectively; the 3D pose error was 23.43° ± 14.18°; and the execution time per fruit was 0.565 s. The results demonstrate that the developed method can be applied to a guava-harvesting robot.
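The sketch below shows a minimal version of the Euclidean clustering step used to split the fruit point cloud into individual fruits: region growing over a k-d tree with a fixed distance threshold. The point cloud and the 2 cm radius are assumed examples.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.02, min_size=5):
    """Group points whose neighbours lie within `radius` into clusters."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = {seed}, [seed]
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if nb in unvisited:
                    unvisited.remove(nb)
                    cluster.add(nb)
                    frontier.append(nb)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters

pts = np.random.rand(200, 3) * 0.5        # synthetic stand-in for fruit points (m)
print(len(euclidean_clusters(pts)))
```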


2021 ◽  
Vol 11 (9) ◽  
pp. 4241
Author(s):  
Jiahua Wu ◽  
Hyo Jong Lee

In bottom-up multi-person pose estimation, grouping joint candidates into correctly structured person instances is challenging. In this paper, a new bottom-up method, the Partitioned CenterPose (PCP) Network, is proposed to better cluster the detected joints. To achieve this goal, we propose a novel approach called Partition Pose Representation (PPR), which integrates a person instance and its body joints based on joint offsets. PPR uses the center of the human body and the offsets between that center point and the positions of the body's joints to encode human poses accurately. To strengthen the relationships between body joints, we divide the human body into five parts and generate a sub-PPR for each part. Based on this PPR, the PCP Network can detect people and their body joints simultaneously, then group all body joints according to joint offset. Moreover, an improved L1 loss is designed to measure joint offsets more accurately. On the COCO keypoints and CrowdPose datasets, the performance of the proposed method is on par with that of existing state-of-the-art bottom-up methods in terms of both accuracy and speed.
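A minimal sketch of the center-plus-offset idea behind PPR (not the PCP Network itself): joints are recovered by adding predicted per-joint offsets to the detected person-center point, and a plain L1 loss compares predicted and reference offsets. All values below are synthetic.

```python
import numpy as np

center = np.array([120.0, 200.0])              # hypothetical detected body center (px)
offsets = np.array([[-30.0, -80.0],            # hypothetical head offset
                    [-45.0,  -5.0],            # hypothetical left-wrist offset
                    [ 40.0,  -5.0]])           # hypothetical right-wrist offset

joints = center + offsets                      # group joints via their offsets

gt_offsets = offsets + np.array([[1.5, -2.0], [0.5, 1.0], [-1.0, 0.5]])
l1_loss = np.abs(offsets - gt_offsets).mean()  # plain L1 offset loss
print(joints, l1_loss)
```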


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 848
Author(s):  
Karla Miriam Reyes Leiva ◽  
Milagros Jaén-Vargas ◽  
Miguel Ángel Cuba ◽  
Sergio Sánchez Lara ◽  
José Javier Serrano Olmedo

The rehabilitation of a visually impaired person (VIP) is a systematic process in which the person is provided with tools for coping with the impairment and achieving personal autonomy and independence, such as training in the use of the long cane for orientation and mobility (O&M). This training must be delivered in person by specialists, which strains human, technological and structural resources in some regions, especially those in economically constrained circumstances. A system based on low-cost inertial sensors was developed to capture the motion of the long cane and the leg, providing quantitative parameters such as sweeping coverage and gait metrics that are currently assessed visually during rehabilitation. The system was tested with 10 blindfolded volunteers in laboratory conditions following the constant-contact, two-point-touch, and three-point-touch travel techniques. The results indicate that the quantification system is reliable for measuring grip rotation, safety zone, sweeping amplitude and hand position using orientation angles, with an accuracy of around 97.62%. However, a new method or improved hardware is needed for gait parameters, since the step-length measurement presented a mean accuracy of 94.62%. The system requires further development before it can serve as an aid in the VIP rehabilitation process; at present it is a simple, low-cost technological aid with the potential to improve current O&M practice.
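As a toy illustration of one of the quantities above, the sketch below estimates sweeping amplitude as the peak-to-peak range of the cane's yaw angle over one sweep; the yaw trace is synthetic, not data from the study.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 200)                    # one assumed 2-second sweep cycle
yaw_deg = 35.0 * np.sin(2.0 * np.pi * 0.5 * t)    # assumed cane yaw trace (deg)
sweeping_amplitude = yaw_deg.max() - yaw_deg.min()
print(f"sweeping amplitude ~ {sweeping_amplitude:.1f} deg")
```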


2018 ◽  
Vol 7 (4) ◽  
pp. 42 ◽  
Author(s):  
Salil Goel ◽  
Allison Kealy ◽  
Bharat Lohani

Precise localization is one of the key requirements for the deployment of UAVs (Unmanned Aerial Vehicles) in applications such as precision mapping, surveillance, assisted navigation, and search and rescue. The need for precise positioning is even more relevant with the increasing automation of UAVs and the growing interest in commercial applications such as transport and delivery. In the near future, the airspace is expected to be occupied by a large number of unmanned as well as manned aircraft, a majority of which are expected to operate autonomously. This paper develops a new cooperative localization prototype that utilizes information sharing among UAVs and static anchor nodes for precise positioning of the UAVs. The UAVs are retrofitted with low-cost sensors including a camera, a GPS receiver, a UWB (Ultra Wide Band) radio and low-cost inertial sensors. The performance of the low-cost prototype is evaluated in real-world conditions in partially obscured GNSS (Global Navigation Satellite Systems) environments, and is analyzed for both centralized and distributed cooperative network designs. It is demonstrated that the developed system is capable of achieving navigation-grade (2–4 m) accuracy in partially GNSS-denied environments, provided consistent communication within the cooperative network is available. Furthermore, this paper provides experimental validation that information sharing improves positioning performance even in ideal GNSS environments. The experiments demonstrate that the major challenges for low-cost cooperative networks are maintaining consistent connectivity among UAV platforms and sensor synchronization.
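A hedged sketch of one ingredient of such cooperative positioning: least-squares multilateration of a UAV from UWB ranges to known anchor nodes. The anchor layout, ranges and noise level are assumed for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0, 0.0],
                    [30.0, 0.0, 0.0],
                    [0.0, 30.0, 0.0],
                    [30.0, 30.0, 5.0]])            # assumed anchor positions (m)
true_pos = np.array([12.0, 18.0, 10.0])            # synthetic ground truth
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0.0, 0.1, 4)

def residuals(p):
    """Difference between predicted and measured UWB ranges."""
    return np.linalg.norm(anchors - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([15.0, 15.0, 5.0])).x
print(estimate)
```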

