COMPUTER VISION IN THE TELEOPERATION OF THE YUTU-2 ROVER

Author(s):  
J. Wang ◽  
J. Li ◽  
S. Wang ◽  
T. Yu ◽  
Z. Rong ◽  
...  

Abstract. On January 3, 2019, the Chang'e-4 (CE-4) probe successfully landed in the Von Kármán crater inside the South Pole-Aitken (SPA) basin. With the support of the relay communication satellite "Queqiao", launched in 2018 and located at the Earth-Moon L2 libration point, the lander and the Yutu-2 rover carried out in-situ exploration and patrol surveys, respectively, and made a series of important scientific discoveries. Owing to the complexity and unpredictability of the lunar surface, teleoperation has become the most important control method for operating the rover, and computer vision is a key technology supporting that teleoperation. During the powered descent stage and lunar surface exploration, vision-based teleoperation can effectively overcome many technical challenges, such as fast positioning of the landing point, high-resolution seamless mapping of the landing site, localization of the rover in the complex lunar surface environment, terrain reconstruction, and path planning. All these processes helped achieve the first soft landing, roving, and in-situ exploration on the lunar farside. This paper presents a high-precision positioning technique for the landing point based on multi-source data, including orbital images and CE-4 descent images, together with its positioning results. The method and its results were successfully applied in an actual engineering mission for the first time in China, providing important support for the topographical analysis of the landing site and for mission planning in subsequent teleoperations. After landing, a 0.03 m resolution digital orthophoto map (DOM) was generated from the descent images and used as one of the base maps for overall rover path planning. Before each movement, the Yutu-2 rover used its hazard avoidance cameras (Hazcam), navigation cameras (Navcam), and panoramic cameras (Pancam) to capture stereo images of the lunar surface at different angles.
Local digital elevation models (DEMs) with a 0.02 m resolution were routinely produced at each waypoint using the Navcam and Hazcam images. These DEMs were then used to design an obstacle recognition method and to establish a model for calculating slope, aspect, roughness, and visibility. Finally, in combination with the Yutu-2 rover's mobility characteristics, a comprehensive cost map for path search was generated. By the end of the first 12 lunar days, the Yutu-2 rover had been working on the lunar farside for more than 300 days, greatly exceeding its projected service life. The rover overcame the complex terrain of the lunar farside and travelled a total distance of more than 300 m, achieving the "double three hundred" breakthrough. In China's future manned lunar landings and exploration of Mars, computer vision will play an integral role in supporting science target selection and scientific investigations, and will become an extremely important core technology for various engineering tasks.
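The DEM-to-cost-map step described above can be illustrated with a minimal sketch. The grid resolution matches the 0.02 m DEMs mentioned in the abstract, but the slope threshold, weights, and window size are illustrative assumptions, not the mission's actual parameters, and the real pipeline also models aspect and visibility.

```python
# Minimal sketch: slope and roughness from a local DEM, combined into a
# traversability cost map. Thresholds and weights are illustrative guesses.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def slope_deg(dem, cell=0.02):
    """Per-cell slope (degrees) from central-difference gradients."""
    gy, gx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

def roughness(dem, win=3):
    """Local roughness: std. dev. of elevation in a win x win window."""
    pad = win // 2
    padded = np.pad(dem, pad, mode="edge")
    windows = sliding_window_view(padded, (win, win))
    return windows.std(axis=(-1, -2))

def cost_map(dem, cell=0.02, max_slope=20.0, w_slope=0.7, w_rough=0.3):
    """Weighted cost; cells steeper than max_slope become obstacles (inf)."""
    s = slope_deg(dem, cell)
    r = roughness(dem)
    cost = w_slope * (s / max_slope) + w_rough * (r / (r.max() + 1e-9))
    cost[s > max_slope] = np.inf   # impassable for the path search
    return cost

# Flat terrain with one steep 0.5 m bump
dem = np.zeros((32, 32))
dem[10:14, 10:14] = 0.5
c = cost_map(dem)
print(np.isinf(c).any())  # True: the bump's edges exceed the slope limit
```

A graph search such as A* can then run over this cost grid, which is the role the comprehensive cost map plays in the rover's path planning.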

2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Jianjun Liu ◽  
Xin Ren ◽  
Wei Yan ◽  
Chunlai Li ◽  
He Zhang ◽  
...  

Abstract Chang’E-4 (CE-4) was the first mission to accomplish a successful soft landing on the lunar farside. The landing trajectory and the location of the landing site can be effectively reconstructed and determined using a series of images obtained during descent, in the absence of Earth-based radio tracking and telemetry data. Here we reconstructed the powered descent trajectory of CE-4 using photogrammetrically processed images from the CE-4 landing camera and navigation camera, together with Chang’E-2 terrain data. We confirmed that the precise location of the landing site is 177.5991°E, 45.4446°S, with an elevation of −5935 m. The landing location was accurately identified in lunar imagery and terrain data with spatial resolutions of 7 m/pixel, 5 m/pixel, 1 m/pixel, 10 cm/pixel and 5 cm/pixel. These results will provide geodetic data for the study of lunar control points, high-precision lunar mapping, and subsequent lunar exploration, such as by the Yutu-2 rover.
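The coarse-to-fine localization across resolution levels rests on registering a descent-image patch against a base map. A minimal sketch of that core matching primitive, using plain normalized cross-correlation on synthetic arrays, is below; the actual pipeline uses rigorous photogrammetric processing, and every array and function here is an illustrative stand-in.

```python
# Sketch: locate a small descent-image crop inside an orbital base map by
# exhaustive normalized cross-correlation. Purely illustrative.
import numpy as np

def ncc_match(map_img, patch):
    """Return (row, col) of the best normalized cross-correlation match."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    best, best_rc = -np.inf, (0, 0)
    for r in range(map_img.shape[0] - ph + 1):
        for c in range(map_img.shape[1] - pw + 1):
            w = map_img[r:r + ph, c:c + pw]
            w = (w - w.mean()) / (w.std() + 1e-9)
            score = (p * w).mean()
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

rng = np.random.default_rng(0)
orbital = rng.random((64, 64))        # stand-in for a 7 m/pixel orbital map
patch = orbital[20:28, 30:38].copy()  # stand-in for a descent-image crop
print(ncc_match(orbital, patch))      # → (20, 30)
```

Repeating this match at 7 m/pixel, then at each finer resolution within the previous match's neighborhood, mirrors the coarse-to-fine progression of base maps listed in the abstract.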


2021 ◽  
Vol 14 (1) ◽  
pp. 49
Author(s):  
Zongyu Yue ◽  
Ke Shi ◽  
Gregory Michael ◽  
Kaichang Di ◽  
Sheng Gou ◽  
...  

The Chang’e-4 (CE-4) lunar probe, the first spacecraft to soft-land on the far side of the Moon, successfully landed in the Von Kármán crater on 3 January 2019. Geological studies of the landing area have been conducted, and more intensive studies will be carried out with the in situ measured data. A chronological study of the mare basalts surrounding the CE-4 landing area is significant for these related studies. Currently, the crater size-frequency distribution (CSFD) technique is the most popular method for deriving absolute model ages (AMAs) of geological units where no returned sample is available, and it has been widely used in dating mare basalts on the lunar surface. In this research, we first create a mosaic from multi-orbit Chang’e-2 (CE-2) images as a base map. Coupled with elevation data and FeO content, nine representative basalt units surrounding the CE-4 landing area are outlined and their AMAs are derived. The dating results for the nine basalt units indicate that the basalts in this area erupted from 3.42 to 2.28 Ga ago, a period much longer than that derived by previous studies. The derived chronology of these basalt units establishes a foundation for geological analysis of the returned CE-4 data.
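The chronology step of CSFD dating can be sketched with the widely used Neukum (2001) lunar chronology function, which relates the cumulative density of craters with diameter ≥ 1 km, N(1) (per km²), to an absolute model age T (in Ga). The inversion below is a minimal illustration; production tools fit the full production function to the measured size-frequency distribution, and the example density is invented, not a value from this study.

```python
# Sketch: invert the Neukum (2001) chronology function to get an absolute
# model age from a crater density N(1). Example inputs are illustrative.
import numpy as np

def n1_of_age(T):
    """Neukum chronology function: N(1 km), per km^2, for age T in Ga."""
    return 5.44e-14 * (np.exp(6.93 * T) - 1.0) + 8.38e-4 * T

def age_of_n1(n1, lo=0.0, hi=4.5, tol=1e-6):
    """Invert n1_of_age by bisection (the function is monotonic in T)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if n1_of_age(mid) < n1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A unit with a measured density of N(1) = 2.7e-3 km^-2 (illustrative)
print(round(age_of_n1(2.7e-3), 2))  # roughly 3.1 Ga
```

The near-exponential term dominates before ~3.3 Ga, which is why AMAs in the 3.4 Ga range are well constrained while younger mare ages rely on the linear term.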


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 796
Author(s):  
Xiaoqiang Yu ◽  
Ping Wang ◽  
Zexu Zhang

Path planning is an essential technology for a lunar rover to achieve safe and efficient autonomous exploration missions. This paper proposes a learning-based end-to-end path planning algorithm for lunar rovers with safety constraints. First, a training environment integrating real lunar surface terrain data was built using the Gazebo simulator, and a lunar rover simulator was created within it to model the real lunar surface environment and the lunar rover system. Then, an end-to-end path planning algorithm based on deep reinforcement learning was designed, including the state space, action space, network structure, a reward function that accounts for slip behavior, and a training method based on proximal policy optimization (PPO). In addition, to improve generalization across different lunar surface topographies and environment scales, a variety of training scenarios were set up to train the network model following the idea of curriculum learning. Simulation results show that the proposed algorithm successfully achieves end-to-end path planning for the lunar rover, and the paths it generates carry a higher safety guarantee than those of classical path planning algorithms.
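A reward function "that accounts for slip behavior" could take many forms; the sketch below is one plausible shaping, with progress toward the goal rewarded and slip penalized. Every term, weight, and name here is an illustrative assumption, not the paper's actual formulation.

```python
# Illustrative per-step reward for an RL rover planner: progress minus a
# slip penalty, with terminal bonuses. All weights are invented examples.
def step_reward(dist_prev, dist_now, slip_ratio, collided,
                w_progress=1.0, w_slip=0.5,
                r_goal=10.0, r_crash=-10.0, goal_eps=0.1):
    if collided:
        return r_crash                       # terminal penalty
    if dist_now < goal_eps:
        return r_goal                        # terminal bonus at the goal
    progress = dist_prev - dist_now          # positive when moving closer
    return w_progress * progress - w_slip * abs(slip_ratio)

# Moved 0.2 m closer while slipping 10%:
print(round(step_reward(2.0, 1.8, slip_ratio=0.1, collided=False), 3))  # 0.15
```

Under PPO, this scalar signal is all the policy network sees of the terrain's hazard structure, which is why curriculum learning over progressively harder scenes helps generalization.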


Author(s):  
Haipeng Chen ◽  
Wenxing Fu ◽  
Yuze Feng ◽  
Jia Long ◽  
Kang Chen

In this article, we propose an efficient intelligent decision method for a bionic-motion unmanned system that simulates the formation changes occurring during the hunting process of wolves. Path planning is a central research focus for unmanned systems realizing formation change, and several traditional techniques have been designed to solve it; intelligent decision-making based on evolutionary algorithms is one of the best-known path planning approaches. However, time consumption remains a problem in the intelligent decision-making of unmanned systems. To address it, we simplify the multi-objective optimization to a single-objective optimization, where traditional methods had treated the problem as a multiple traveling salesman problem. In addition, we present an improved genetic algorithm, rather than a general evolutionary algorithm, to solve the intelligent decision problem. Once the unmanned system's intelligent decision is solved, bionic motion control, especially collision avoidance while the system moves, must be guaranteed. Accordingly, we design a novel bionic motion controller for the unmanned system's complex nonlinear dynamics. The control method effectively avoids collisions during system motion. Simulation results show that the proposed simplification, improved genetic algorithm, and bionic motion control method are stable and effective.
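A minimal sketch of the kind of genetic algorithm involved: a permutation-coded GA over waypoints with a single tour-length objective, using elitism, ordered crossover, and swap mutation. This is a generic baseline, not the article's improved algorithm, and all parameters are illustrative.

```python
# Sketch: single-objective GA over waypoint orderings (TSP-style fitness).
import math
import random

def tour_len(tour, pts):
    """Closed-tour length over 2-D points."""
    return sum(math.dist(pts[tour[i - 1]], pts[tour[i]]) for i in range(len(tour)))

def ordered_crossover(a, b):
    """Copy a slice of parent a, fill remaining genes in parent b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    return [fill.pop(0) if g is None else g for g in child]

def ga_tsp(pts, pop_size=40, gens=150, mut=0.2, seed=1):
    random.seed(seed)
    n = len(pts)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: tour_len(t, pts))
        nxt = pop[:4]                          # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:20], 2)  # truncation selection
            child = ordered_crossover(a, b)
            if random.random() < mut:          # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda t: tour_len(t, pts))

# Points on a ring: the optimal tour visits them in angular order
pts = [(math.cos(k * math.pi / 4), math.sin(k * math.pi / 4)) for k in range(8)]
best = ga_tsp(pts)
print(tour_len(best, pts))  # close to the optimal ring tour (~6.12)
```

Collapsing the multi-objective problem to one fitness like this is exactly what makes the search cheap: a single sort per generation replaces Pareto ranking.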


The Analyst ◽  
2012 ◽  
Vol 137 (6) ◽  
pp. 1363 ◽  
Author(s):  
Jobin Cyriac ◽  
Michael Wleklinski ◽  
Guangtao Li ◽  
Liang Gao ◽  
R. Graham Cooks

2020 ◽  
Vol 49 (5) ◽  
pp. 20190460
Author(s):  
Zhenchao Wang ◽  
Jiahang Liu ◽  
Qinghong Sheng ◽  
Yunzhao Wu

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 613
Author(s):  
David Safadinho ◽  
João Ramos ◽  
Roberto Ribeiro ◽  
Vítor Filipe ◽  
João Barroso ◽  
...  

The capability of drones to perform autonomous missions has led retail companies to use them for deliveries, saving time and human resources. In these services, the delivery depends on the Global Positioning System (GPS) to define an approximate landing point. However, the landscape can interfere with the satellite signal (e.g., tall buildings), reducing the accuracy of this approach. Changes in the environment can also invalidate the safety of a previously defined landing site (e.g., irregular terrain, a swimming pool). Therefore, the main goal of this work is to improve the process of delivering goods by drone, focusing on detection of the intended receiver. We developed a solution that was refined through an iterative assessment composed of five test scenarios. The prototype complements GPS with Computer Vision (CV) algorithms based on Convolutional Neural Networks (CNNs), running on a Raspberry Pi 3 with a Pi NoIR camera (i.e., No InfraRed, without an infrared filter). The experiments were performed with the Single Shot Detector (SSD) MobileNet-V2 and SSDLite-MobileNet-V2 models. The best results were obtained in the afternoon with the SSDLite architecture, for distances and heights between 2.5 m and 10 m, with recalls from 59% to 76%. The results confirm that a low-cost, low-compute system can perform aerial human detection and estimate the landing position without an additional visual marker.
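Recall figures like those above are conventionally computed by matching detections to ground-truth boxes via an intersection-over-union (IoU) threshold. A minimal sketch of that evaluation follows; the boxes are invented (x1, y1, x2, y2) tuples, not data from the paper.

```python
# Sketch: IoU-based recall for box detections. Example boxes are invented.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def recall(gt_boxes, det_boxes, thr=0.5):
    """Fraction of ground-truth boxes matched by some detection at IoU >= thr."""
    hits = sum(any(iou(g, d) >= thr for d in det_boxes) for g in gt_boxes)
    return hits / len(gt_boxes)

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
det = [(1, 1, 10, 10), (50, 50, 60, 60)]
print(recall(gt, det))  # 1 of 2 ground truths matched → 0.5
```

On a Raspberry Pi, such an evaluation would run offline on logged detections; only the SSD inference itself needs to run on the device.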

