Enhanced Lunar Topographic Mapping Using Multiple Stereo Images Taken by Yutu-2 Rover with Changing Illumination Conditions

2021 ◽  
Vol 87 (8) ◽  
pp. 567-576
Author(s):  
Wenhui Wan ◽  
Jia Wang ◽  
Kaichang Di ◽  
Jian Li ◽  
Zhaoqin Liu ◽  
...  

In a planetary-rover exploration mission, stereovision-based 3D reconstruction is widely applied to topographic mapping of the planetary surface using stereo cameras onboard the rover. In this study, we propose an enhanced topographic mapping method based on multiple stereo images taken at the same rover location under changing illumination conditions. Key steps of the method include dense matching of stereo images, 3D point-cloud generation, point-cloud co-registration, and fusion. The final point cloud has more complete coverage and more terrain detail than one conventionally generated from a single stereo pair. The effectiveness of the proposed method is verified by experiments with the Yutu-2 rover, in which two data sets were acquired by the navigation cameras at two locations under changing illumination conditions. The method, which does not involve complex operations, has great potential for application in planetary rover and lander missions.
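The co-registration step above amounts to estimating a rigid transform between point clouds before fusing them. A minimal sketch of that core operation, assuming point-to-point correspondences are already known (in practice an ICP-style loop estimates them iteratively), is the SVD-based (Kabsch) rigid alignment; the data here are synthetic stand-ins, not mission data:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t minimising ||R @ src_i + t - dst_i||
    via the Kabsch/SVD solution -- the core step of point-cloud co-registration."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Co-register a second cloud onto the first, then fuse by concatenation.
rng = np.random.default_rng(0)
cloud_a = rng.random((100, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
cloud_b = cloud_a @ R_true.T + np.array([0.5, -0.2, 0.1])
R, t = rigid_align(cloud_b, cloud_a)
fused = np.vstack([cloud_a, cloud_b @ R.T + t])   # fused, co-registered cloud
```

With noise-free correspondences the alignment is recovered to machine precision; real fusion would also handle duplicated surface regions rather than simply concatenating.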

Author(s):  
H. Hu ◽  
B. Wu

The Narrow-Angle Camera (NAC) on board the Lunar Reconnaissance Orbiter (LRO) comprises a pair of closely attached high-resolution push-broom sensors designed to improve the swath coverage. However, the two image sensors do not share the same lens and cannot be modelled geometrically using a single physical model. Thus, previous work on dense matching of stereo pairs of NAC images generally created two to four stereo models, each with an irregular, overlapping region of varying size. Semi-Global Matching (SGM) is a well-known dense matching method that has been widely used for image-based 3D surface reconstruction. SGM is a global matching algorithm relying on global inference in a larger context, rather than on individual pixels, to establish stable correspondences. The stereo configuration of LRO NAC images therefore poses severe problems for matching methods such as SGM that emphasize a global matching strategy. Aiming to use SGM for matching LRO NAC stereo pairs for precise 3D surface reconstruction, this paper presents a coupled epipolar rectification method for NAC stereo images, which merges the image pair in disparity space so that only one stereo model needs to be estimated. For a stereo pair (four NAC images), the method starts with boresight calibration, finding correspondences in the small overlapping strip between each pair of NAC images and bundle-adjusting the stereo pair in order to remove the vertical disparities. Then, the dominant direction of the images is estimated by iteratively projecting the center of the coverage area onto the reference image and back-projecting it onto the bounding-box plane determined by the image orientation parameters. The dominant direction defines an affine model, by which the pair of NAC images is warped into object space at a given ground resolution while a mask is produced indicating the owner image of each pixel.
SGM is then used to generate a disparity map for the stereo pair; each correspondence is transformed back to its owner image, and 3D points are derived through photogrammetric space intersection. Experimental results show that the proposed method reduces the gaps and inconsistencies caused by inaccurate boresight offsets between the two NAC cameras and by the irregular overlapping regions, and automatically generates precise, consistent 3D surface models from NAC stereo images.
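The cost-aggregation recurrence at the heart of SGM can be sketched in a few lines. This is a toy, single-path (left-to-right) version operating on random stand-in matching costs, not the paper's rigorous push-broom pipeline; a full SGM sums aggregated costs over 8 or 16 path directions:

```python
import numpy as np

# SGM recurrence: L(p,d) = C(p,d) + min( L(p-1,d),
#                                        L(p-1,d+/-1) + P1,
#                                        min_k L(p-1,k) + P2 ) - min_k L(p-1,k)
P1, P2 = 1.0, 8.0                    # small/large disparity-change penalties
rng = np.random.default_rng(1)
cost = rng.random((5, 20, 16))       # rows x cols x disparity candidates

aggr = cost.copy()
for x in range(1, cost.shape[1]):    # single left-to-right path, for brevity
    prev = aggr[:, x - 1, :]
    best_prev = prev.min(axis=1, keepdims=True)
    # Pad with +inf so d-1/d+1 neighbours do not wrap around.
    padded = np.pad(prev, ((0, 0), (1, 1)), constant_values=np.inf)
    jump1 = np.minimum(padded[:, :-2], padded[:, 2:]) + P1
    candidates = np.minimum(np.minimum(prev, jump1), best_prev + P2)
    aggr[:, x, :] = cost[:, x, :] + candidates - best_prev
disparity = aggr.argmin(axis=2)      # winner-take-all disparity per pixel
```

Subtracting `best_prev` keeps the aggregated values bounded, which is what lets SGM run over long scanlines without overflow.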


Author(s):  
M. Peng ◽  
W. Wan ◽  
Y. Xing ◽  
Y. Wang ◽  
Z. Liu ◽  
...  

An RGB-D camera captures depth and color information at high data rates, which makes it possible and beneficial to integrate depth and image sequences for planetary-rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data, and visual texture data is established based on the imaging principle of the RGB-D camera; then an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, with the interior and exterior orientation elements of the RGB image sequence available, dense matching is completed with the CMPMVS tool. Finally, according to the registration parameters obtained from ICP, the 3D scene from the RGB images is registered to the 3D scene from the depth images, and the fused point cloud is obtained. An experiment was performed in an outdoor field simulating the lunar surface, and the results demonstrate the feasibility of the proposed method.
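The "strict projection relationship" in step one is, at its core, pinhole back-projection of each depth pixel into 3D, which ties depth samples to colour pixels. A minimal sketch follows; the intrinsics (fx, fy, cx, cy) and the constant depth frame are illustrative values, not the paper's calibration:

```python
import numpy as np

# Assumed pinhole intrinsics (illustrative, typical of a VGA RGB-D sensor).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
depth = np.full((480, 640), 2.0)            # metres, stand-in depth frame

# Back-project every pixel (u, v, Z) into camera coordinates (X, Y, Z).
v, u = np.indices(depth.shape)
X = (u - cx) * depth / fx
Y = (v - cy) * depth / fy
xyz = np.dstack([X, Y, depth]).reshape(-1, 3)   # one 3D point per depth pixel
```

Because each 3D point keeps its pixel indices, the colour at (u, v) can be attached directly, which is what later lets the RGB-derived and depth-derived scenes be fused after registration.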


RSC Advances ◽  
2019 ◽  
Vol 9 (14) ◽  
pp. 7757-7766 ◽  
Author(s):  
Yao Wu ◽  
Xin-Ying Gao ◽  
Xin-Hui Chen ◽  
Shao-Long Zhang ◽  
Wen-Juan Wang ◽  
...  

Our study gives insight into the development of novel specific ABCG2 inhibitors and develops a comprehensive computational strategy for understanding protein-ligand interactions with the help of AlphaSpace, a fragment-centric topographic mapping tool.


Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 263
Author(s):  
Munan Yuan ◽  
Xiru Li ◽  
Longle Cheng ◽  
Xiaofeng Li ◽  
Haibo Tan

Alignment is a critical aspect of point cloud data (PCD) processing, and we propose a coarse-to-fine registration method based on bipartite graph matching in this paper. After data pre-processing, the registration process proceeds as follows. Firstly, a top-tail (TT) strategy is designed to normalize and estimate the scale factor of two given PCD sets, which can be combined flexibly with the coarse alignment process. Secondly, we utilize the 3D scale-invariant feature transform (3D SIFT) method to extract point features and adopt fast point feature histograms (FPFH) to describe the corresponding feature points. Thirdly, we construct a similarity weight matrix of the source and target point data sets with a bipartite graph structure. A similarity weight threshold is then used to reject erroneous matched point pairs, which determines the correspondences between the two data sets and completes the coarse alignment. Finally, we introduce the trimmed iterative closest point (TrICP) algorithm to perform fine registration. A series of extensive experiments validates that, compared with other ICP-based algorithms and several representative coarse-to-fine alignment methods, the registration accuracy and efficiency of our method are more stable and robust in various scenes, and the method is especially applicable to data with scale differences.
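The coarse-alignment core described above can be sketched as an assignment problem: feature points of the source and target clouds form the two sides of a bipartite graph, edges are weighted by descriptor similarity, the optimal matching is solved, and low-similarity pairs are rejected. The sketch below uses random stand-ins for FPFH descriptors and cosine similarity as the edge weight (the paper's exact weighting may differ):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
src_desc = rng.random((6, 33))               # 6 source points, 33-bin FPFH-like
# Target: the same 6 descriptors in reverse order, plus 2 unrelated points.
tgt_desc = np.vstack([src_desc[::-1], rng.random((2, 33))])

# Cosine similarity as the bipartite edge weight.
a = src_desc / np.linalg.norm(src_desc, axis=1, keepdims=True)
b = tgt_desc / np.linalg.norm(tgt_desc, axis=1, keepdims=True)
W = a @ b.T                                  # similarity weight matrix

# Solve the maximum-weight bipartite matching, then reject weak pairs.
rows, cols = linear_sum_assignment(W, maximize=True)
threshold = 0.99                             # assumed similarity threshold
pairs = [(r, c) for r, c in zip(rows, cols) if W[r, c] >= threshold]
```

The surviving `pairs` would then seed the coarse rigid transform, with TrICP refining the result.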


Author(s):  
A. W. Lyda ◽  
X. Zhang ◽  
C. L. Glennie ◽  
K. Hudnut ◽  
B. A. Brooks

Remote sensing via LiDAR (Light Detection And Ranging) has proven extremely useful in both Earth science and hazard-related studies. Surveys taken before and after an earthquake, for example, can provide decimeter-level, 3D near-field estimates of land deformation that offer better spatial coverage of the near-field rupture zone than other geodetic methods (e.g., InSAR, GNSS, or alignment arrays). In this study, we compare and contrast estimates of deformation obtained from different pre- and post-event airborne laser scanning (ALS) data sets of the 2014 South Napa Earthquake using two change detection algorithms, Iterative Closest Point (ICP) and Particle Image Velocimetry (PIV). The ICP algorithm is a closest-point-based registration algorithm that can iteratively acquire three-dimensional deformations from airborne LiDAR data sets. By employing a newly proposed partition scheme, a "moving window," to handle the large spatial-scale point cloud over the earthquake rupture area, the ICP process applies a rigid registration of data sets within an overlapped window to enhance the change detection results of the local, spatially varying surface deformation near the fault. The other algorithm, PIV, is a well-established, two-dimensional image co-registration and correlation technique developed in fluid mechanics research and later applied to geotechnical studies. Adapted here for an earthquake with little vertical movement, the 3D point cloud is interpolated into a 2D DTM image, and horizontal deformation is determined by assessing the cross-correlation of interrogation areas within the images to find the most likely deformation between two areas. Both the PIV process and the ICP algorithm further benefit from a novel use of urban geodetic markers. Analogous to the persistent scatterer technique employed with differential radar observations, this new LiDAR application exploits a classified point cloud dataset to assist the change detection algorithms.
Ground deformation results and statistics from these techniques are presented and discussed here with supplementary analyses of the differences between the techniques and the effects of temporal spacing between LiDAR datasets. Results show that both change detection methods provide consistent near-field deformation comparable to field-observed offsets. The deformation can vary in quality, but estimated standard deviations are always below thirty-one centimeters. This variation in quality differentiates the methods and shows that factors such as geodetic markers and temporal spacing play major roles in the outcomes of ALS change detection surveys.
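The PIV step described above reduces to finding the peak of a 2D cross-correlation between pre- and post-event interrogation windows. A minimal sketch with a synthetic DTM window and a known shift (not the Napa data):

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(3)
pre = rng.random((32, 32))                        # pre-event DTM window
post = np.roll(pre, shift=(3, -2), axis=(0, 1))   # post-event: known shift

# Mean-removed cross-correlation; the peak location gives the displacement.
c = correlate2d(post - post.mean(), pre - pre.mean(), mode="full")
peak = np.unravel_index(np.argmax(c), c.shape)
dy = peak[0] - (pre.shape[0] - 1)                 # row displacement (pixels)
dx = peak[1] - (pre.shape[1] - 1)                 # column displacement (pixels)
```

Multiplying the pixel offsets by the DTM ground-sample distance converts them to horizontal deformation; sub-pixel PIV implementations additionally fit a surface to the correlation peak.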


Author(s):  
W. C. Liu ◽  
B. Wu

High-resolution 3D modelling of the lunar surface is important for lunar scientific research and exploration missions. Photogrammetry is known for 3D mapping and modelling from a pair of stereo images based on dense image matching. However, dense matching may fail in poorly textured areas and in situations where the image pair has large illumination differences. As a result, the actual achievable spatial resolution of the 3D model from photogrammetry is limited by the performance of dense image matching. On the other hand, photoclinometry (i.e., shape from shading) is characterised by its ability to recover pixel-wise surface shapes based on image intensity and imaging conditions such as illumination and viewing directions. More robust shape reconstruction through photoclinometry can be achieved by incorporating images acquired under different illumination conditions (i.e., photometric stereo). Introducing photoclinometry into photogrammetric processing can therefore effectively increase the achievable resolution of the mapping result while maintaining its overall accuracy. This research presents an integrated photogrammetric and photoclinometric approach for pixel-resolution 3D modelling of the lunar surface. First, photoclinometry is integrated with stereo image matching to create robust and spatially well-distributed dense conjugate points. Then, based on the 3D point cloud derived from photogrammetric processing of the dense conjugate points, photoclinometry is further introduced to derive the 3D positions of the unmatched points and to refine the final point cloud. The approach is able to produce one 3D point for each image pixel within the overlapping area of the stereo pair, so as to obtain pixel-resolution 3D models. Experiments using the Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) images show the superior performance of the approach compared with traditional photogrammetric techniques.
The results and findings from this research contribute to optimal exploitation of image information for high-resolution 3D modelling of the lunar surface, which is of significance for the advancement of lunar and planetary mapping.
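The shading model that photoclinometry inverts can be illustrated in one dimension. Under a Lambertian surface with a known illumination angle and small slopes, brightness is approximately I = cos(theta) - p*sin(theta) with p = dz/dx, so the slope (and, by integration, the profile) is recoverable from intensity alone. The angle and terrain profile below are invented for illustration; real lunar photoclinometry uses 2D reflectance models and imaging geometry far beyond this sketch:

```python
import numpy as np

theta = np.deg2rad(30.0)               # solar incidence angle (assumed)
x = np.linspace(0.0, 10.0, 200)
z_true = 0.05 * np.sin(x)              # gentle synthetic terrain profile
p_true = np.gradient(z_true, x)        # surface slope dz/dx

# Forward: render brightness from the small-slope Lambertian model.
I = np.cos(theta) - p_true * np.sin(theta)

# Inverse (photoclinometry): recover slope from brightness, then integrate.
p_est = (np.cos(theta) - I) / np.sin(theta)
z_est = np.concatenate([[0.0],
                        np.cumsum((p_est[1:] + p_est[:-1]) / 2 * np.diff(x))])
```

The integration constant is unconstrained by shading, which is exactly why the paper anchors photoclinometry to the photogrammetric point cloud.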


Author(s):  
Q. Kang ◽  
G. Huang ◽  
S. Yang

Point cloud data is one of the most widely used data sources in the field of remote sensing. Key steps of point cloud pre-processing focus on gross-error elimination and quality control. Owing to the volume of point cloud data, existing gross-error elimination methods consume massive amounts of memory and time. This paper presents a method that constructs a Kd-tree index, searches each point's k nearest neighbours, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm removes gross errors from point cloud data while decreasing memory consumption and improving efficiency.
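The pipeline described (Kd-tree construction, k-nearest-neighbour search, thresholding) can be sketched as follows. The specific rejection rule (mean neighbour distance above mean + 2 standard deviations) is an assumed choice, not necessarily the paper's threshold:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
cloud = rng.random((500, 3))
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])   # one injected gross error

k = 8
tree = cKDTree(cloud)                            # Kd-tree index over the cloud
dists, _ = tree.query(cloud, k=k + 1)            # first neighbour is the point itself
mean_d = dists[:, 1:].mean(axis=1)               # mean distance to k neighbours

# Assumed rule: flag points whose mean neighbour distance is anomalously large.
thresh = mean_d.mean() + 2.0 * mean_d.std()
inliers = cloud[mean_d <= thresh]
```

The Kd-tree makes each query O(log n) on average, which is what keeps the memory and time cost manageable for large clouds.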


Author(s):  
Muhamad Alrajhi ◽  
Khurram Shahzad Janjua ◽  
Mohammad Afroz Khan ◽  
Abdalla Alobeid

The Kingdom of Saudi Arabia is one of the most dynamic countries in the world, and rapid urban development is altering the Kingdom's landscape on a daily basis. In recent years a substantial increase in urban populations has been observed, resulting in the formation of large cities. Considering this fast-paced growth, it has become necessary to monitor these changes while bearing in mind the challenges faced by aerial photography projects. Data obtained through aerial photography has a five-year lifecycle because of delays caused by extreme weather conditions and dust storms, which act as barriers during aerial imagery acquisition and have increased the cost of aerial survey projects. These circumstances require alternatives that provide easier and better image acquisition in a short span of time while achieving reliable accuracy and cost effectiveness. The approach of this study is to conduct an extensive comparison, for map updating, between data sets of different resolutions: orthophotos of 10 cm GSD, stereo images of 50 cm GSD, and stereo images of 1 m GSD. Different approaches have been applied to digitizing buildings, roads, tracks, an airport, roof-level changes, filling stations, buildings under construction, property boundaries, mosques, and parking places.


Author(s):  
O. Majgaonkar ◽  
K. Panchal ◽  
D. Laefer ◽  
M. Stanley ◽  
Y. Zaki

Abstract. Classifying objects within aerial Light Detection and Ranging (LiDAR) data is an essential task to which machine learning (ML) is applied increasingly. ML has been shown to be more effective on LiDAR than imagery for classification, but most efforts have focused on imagery because of the challenges presented by LiDAR data. LiDAR datasets are of higher dimensionality, discontinuous, heterogeneous, spatially incomplete, and often scarce. As such, there has been little examination of the fundamental properties of the training data required for acceptable performance of classification models tailored for LiDAR data. The quantity of training data is one such crucial property, because training on different sizes of data provides insight into a model's performance with differing data sets. This paper assesses the impact of training data size on the accuracy of PointNet, a widely used ML approach for point cloud classification. Subsets of ModelNet ranging from 40 to 9,843 objects were validated on a test set of 400 objects. Accuracy improved logarithmically; decelerating from 45 objects onwards, it slowed significantly at a training size of 2,000 objects, corresponding to 20,000,000 points. This work contributes to the theoretical foundation for the development of LiDAR-focused models by establishing a learning curve, suggesting the minimum quantity of manually labelled data necessary for satisfactory classification performance, and providing a path for further analysis of the effects of modifying training data characteristics.
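A logarithmic learning curve of the kind reported can be summarised by fitting accuracy against the logarithm of the training-set size. The (size, accuracy) pairs below are made-up stand-ins to illustrate the fit, not the paper's measurements:

```python
import numpy as np

# Hypothetical learning-curve samples: training-set size vs. test accuracy.
n = np.array([40, 100, 500, 2000, 9843], dtype=float)
acc = np.array([0.55, 0.68, 0.80, 0.86, 0.88])

# Least-squares line in log(n): acc ~= a + b * ln(n).
b, a = np.polyfit(np.log(n), acc, deg=1)
pred = a + b * np.log(n)
gain_per_doubling = b * np.log(2.0)    # accuracy gained per doubling of data
```

Reading the fitted curve backwards (solving a + b*ln(n) = target accuracy) is one way to suggest the minimum labelled-data quantity for a desired performance level.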


2017 ◽  
Vol 14 (5) ◽  
pp. 172988141773540 ◽  
Author(s):  
Robert A Hewitt ◽  
Alex Ellery ◽  
Anton de Ruiter

A classifier training methodology is presented for Kapvik, a micro-rover prototype. A simulated light detection and ranging scan is divided into a grid, with each cell having a variety of characteristics (such as number of points, point variance, and mean height) which act as inputs to classification algorithms. The training step avoids the need for time-consuming and error-prone manual classification through the use of a simulation that provides training inputs and target outputs. This simulation generates, in a random fashion, various terrains that could be encountered by a planetary rover, including untraversable ones. A sensor model for a three-dimensional light detection and ranging unit is used with ray tracing to generate realistic, noisy three-dimensional point clouds in which all points belonging to untraversable terrain are labelled explicitly. A neural network classifier and its training algorithm are presented, and its outputs, as well as those of other popular classifiers, show high accuracy on test data sets after training. The network is then tested on outdoor data to confirm it can accurately classify real-world light detection and ranging data. The results show the network is able to identify terrain correctly, falsely classifying just 4.74% of untraversable terrain.
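The grid-based feature extraction described (per-cell point count, mean height, height variance) can be sketched with simple binning. The scan below is synthetic and the grid resolution is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(5)
scan = rng.random((2000, 3)) * [10.0, 10.0, 0.3]   # x, y in metres; z up to 0.3 m

cell = 1.0                                          # assumed grid resolution (m)
ix = (scan[:, 0] // cell).astype(int)               # per-point grid column
iy = (scan[:, 1] // cell).astype(int)               # per-point grid row
n_cells = 10

count = np.zeros((n_cells, n_cells))
mean_h = np.zeros((n_cells, n_cells))
var_h = np.zeros((n_cells, n_cells))
for gx in range(n_cells):
    for gy in range(n_cells):
        z = scan[(ix == gx) & (iy == gy), 2]        # heights falling in this cell
        count[gx, gy] = z.size
        if z.size:
            mean_h[gx, gy] = z.mean()
            var_h[gx, gy] = z.var()
features = np.dstack([count, mean_h, var_h])        # classifier input per cell
```

Each cell's feature vector would then be fed to the neural network, with the simulation supplying the traversable/untraversable target label for that cell.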

