The Mars 2020 Engineering Cameras and Microphone on the Perseverance Rover: A Next-Generation Imaging System for Mars Exploration

2020 ◽  
Vol 216 (8) ◽  
Author(s):  
J. N. Maki ◽  
D. Gruel ◽  
C. McKinney ◽  
M. A. Ravine ◽  
M. Morales ◽  
...  

Abstract. The Mars 2020 Perseverance rover is equipped with a next-generation engineering camera imaging system that represents an upgrade over previous Mars rover missions. These upgrades will improve the operational capabilities of the rover with an emphasis on drive planning, robotic arm operation, instrument operations, sample caching activities, and documentation of key events during entry, descent, and landing (EDL). There are a total of 16 cameras in the Perseverance engineering imaging system: 9 cameras for surface operations and 7 cameras for EDL documentation. There are 3 types of cameras designed for surface operations: Navigation Cameras (Navcams, quantity 2), Hazard Avoidance Cameras (Hazcams, quantity 6), and the Cachecam (quantity 1). The Navcams will acquire color stereo images of the surface with a $96^{\circ}\times 73^{\circ}$ field of view at 0.33 mrad/pixel. The Hazcams will acquire color stereo images of the surface with a $136^{\circ}\times 102^{\circ}$ field of view at 0.46 mrad/pixel. The Cachecam, a new camera type, will acquire images of Martian material inside the sample tubes during caching operations at a spatial scale of 12.5 microns/pixel. There are 5 types of EDL documentation cameras: the Parachute Uplook Cameras (PUCs, quantity 3), the Descent stage Downlook Camera (DDC, quantity 1), the Rover Uplook Camera (RUC, quantity 1), the Rover Descent Camera (RDC, quantity 1), and the Lander Vision System (LVS) Camera (LCAM, quantity 1). The PUCs are mounted on the parachute support structure and will acquire video of the parachute deployment event as part of a system to characterize parachute performance. The DDC is attached to the descent stage and pointed downward; it will characterize vehicle dynamics by capturing video of the rover as it descends from the skycrane.
The rover-mounted RUC, attached to the rover and looking upward, will capture similar video of the skycrane from the vantage point of the rover and will also acquire video of the descent stage flyaway event. The RDC, attached to the rover and looking downward, will document plume dynamics by imaging the Martian surface before, during, and after rover touchdown. The LCAM, mounted to the bottom of the rover chassis and pointed downward, will acquire $90^{\circ}\times 90^{\circ}$ FOV images during the parachute descent phase of EDL as input to an onboard map localization by the Lander Vision System (LVS). The rover also carries a microphone, mounted externally on the rover chassis, to capture acoustic signatures during and after EDL. The Perseverance rover launched from Earth on July 30th, 2020, and touchdown on Mars is scheduled for February 18th, 2021.
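The angular resolutions quoted above translate into a spatial scale on the ground via the small-angle approximation (scale ≈ range × angular pixel size). A minimal sketch of this arithmetic, where the ranges used are illustrative assumptions rather than mission values:

```python
# Approximate ground sample distance (m/pixel) from angular resolution
# (mrad/pixel) at a given range, using the small-angle approximation.

def spatial_scale_m(angular_res_mrad: float, range_m: float) -> float:
    """Metres per pixel at the given range."""
    return angular_res_mrad * 1e-3 * range_m

# Navcam: 0.33 mrad/pixel; Hazcam: 0.46 mrad/pixel (from the abstract).
# The ranges below are illustrative, not mission specifications.
navcam_at_10m = spatial_scale_m(0.33, 10.0)  # 0.0033 m, i.e. ~3.3 mm/pixel
hazcam_at_2m = spatial_scale_m(0.46, 2.0)    # 0.00092 m, i.e. ~0.92 mm/pixel
```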

Author(s):  
Sung-Yong Lim ◽  
Hyunseok Yang ◽  
SeungHon Yoo ◽  
Han Baek Lee ◽  
Young Do Choi

A disc cartridge for archive data storage consists of an array of slots, each holding a disc. However, misalignment between a slot and its disc can be caused by various disturbance sources. The archive data storage system must operate stably and be able to cope with this misalignment, because misalignment can cause the disc to crash against the slot walls and can cause mispositioning between the transfer robot and the disc slot. Therefore, a proper misalignment-detection method should be adopted in archive data storage. In this paper, we analyze the allowable misalignment and propose a dual sensing method based on a vision system. The proposed method can simultaneously observe the upper and lower parts of the slots using only one detector. The two images are split by intentionally changing each optical path through lens shifting in a 4f imaging system.
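The 4f lens-shifting optics themselves are beyond a short example, but the downstream processing step can be illustrated. The sketch below assumes, as a simplification of the authors' method, that the optics place the upper-slot and lower-slot views in the top and bottom halves of a single detector frame, and that misalignment is read off as the horizontal offset between intensity centroids of the two halves; the function names and the centroid criterion are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def half_centroids(frame: np.ndarray) -> tuple[float, float]:
    """Column-wise intensity centroids of the upper and lower halves.

    Assumes the dual-sensing optics have imaged the upper and lower
    slot regions onto the top and bottom halves of one detector frame.
    """
    h = frame.shape[0] // 2
    cols = np.arange(frame.shape[1])

    def centroid(img: np.ndarray) -> float:
        weights = img.sum(axis=0)
        return float((cols * weights).sum() / weights.sum())

    return centroid(frame[:h]), centroid(frame[h:])

def lateral_misalignment_px(frame: np.ndarray) -> float:
    """Horizontal offset (pixels) between upper- and lower-slot features."""
    top, bottom = half_centroids(frame)
    return top - bottom

# Synthetic frame: a bright feature at column 40 in the top half and
# at column 44 in the bottom half -> 4-pixel lateral misalignment.
frame = np.zeros((64, 100))
frame[:32, 40] = 1.0
frame[32:, 44] = 1.0
print(lateral_misalignment_px(frame))  # -4.0
```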


It is well known that when a camera or other imaging system captures an image, the vision system for which it is captured often cannot use it directly. There can be several reasons for this, such as random intensity variation in the image, illumination variation, or poor contrast. These drawbacks must be tackled at the earliest stages for optimum vision processing. This chapter discusses different filtering approaches for this purpose. It begins with the Gaussian filter, followed by a brief review of other frequently used approaches. The chapter also presents these filtering approaches together with their hardware architectures.
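As a concrete reference point for the Gaussian filter the chapter opens with, here is a minimal software sketch: a normalized 2D Gaussian kernel applied by direct convolution with edge padding (hardware architectures, as discussed in the chapter, would typically use separable or pipelined variants instead):

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Normalized 2D Gaussian kernel; size should be odd."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_filter(image: np.ndarray, size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Smooth a grayscale image by direct 2D convolution (edge padding)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = (padded[i:i + size, j:j + size] * k).sum()
    return out
```

Because the kernel is normalized, a constant image passes through unchanged, which is a quick sanity check on any implementation.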


2016 ◽  
Vol 42 (4) ◽  
pp. 251-259 ◽  
Author(s):  
I. G. Mitrofanov ◽  
A. S. Kozyrev ◽  
D. I. Lisov ◽  
A. A. Vostrukhin ◽  
D. V. Golovin ◽  
...  

Author(s):  
Yiran Wang ◽  
Bo Wu

Images from two sensors, the High-Resolution Imaging Science Experiment (HiRISE) and the Context Camera (CTX), both on-board the Mars Reconnaissance Orbiter (MRO), were used to generate high-quality DEMs (Digital Elevation Models) of the Martian surface. However, there were discrepancies between the DEMs generated from the images acquired by these two sensors due to various reasons, such as variations in boresight alignment between the two sensors during the flight in the complex environment. This paper presents a systematic investigation of the discrepancies between the DEMs generated from the HiRISE and CTX images. A combined adjustment algorithm is presented for the co-registration of HiRISE and CTX DEMs. Experimental analysis was carried out using the HiRISE and CTX images collected at the Mars Rover landing site and several other typical regions. The results indicated that there were systematic offsets between the HiRISE and CTX DEMs in the longitude and latitude directions. However, the offset in the altitude was less obvious. After combined adjustment, the offsets were eliminated and the HiRISE and CTX DEMs were co-registered to each other. The presented research is of significance for the synergistic use of HiRISE and CTX images for precision Mars topographic mapping.
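The paper's combined adjustment operates in object space, but the systematic longitude/latitude offsets it corrects can be illustrated with a much simpler toy: a grid search for the integer shift that minimizes the RMS elevation difference between two overlapping DEM rasters. This is a hedged sketch of the general co-registration idea, not the authors' algorithm:

```python
import numpy as np

def estimate_offset(dem_ref: np.ndarray, dem_mov: np.ndarray,
                    max_shift: int = 5) -> tuple[int, int]:
    """Integer (row, col) shift best aligning dem_mov to dem_ref.

    Toy grid search minimizing RMS elevation difference over the
    overlapping region; real co-registration (such as the paper's
    combined adjustment) works in object space at sub-pixel precision.
    """
    best, best_rms = (0, 0), np.inf
    h, w = dem_ref.shape
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            r0, r1 = max(0, dr), min(h, h + dr)
            c0, c1 = max(0, dc), min(w, w + dc)
            a = dem_ref[r0:r1, c0:c1]
            b = dem_mov[r0 - dr:r1 - dr, c0 - dc:c1 - dc]
            rms = np.sqrt(np.mean((a - b) ** 2))
            if rms < best_rms:
                best, best_rms = (dr, dc), rms
    return best

# Synthetic smooth DEM, then a copy shifted by 2 rows and -1 columns:
rng = np.random.default_rng(0)
dem = rng.standard_normal((40, 40)).cumsum(axis=0).cumsum(axis=1)
moved = np.roll(np.roll(dem, 2, axis=0), -1, axis=1)
print(estimate_offset(dem, moved))  # (-2, 1): shift undoing the roll
```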


Author(s):  
Z. Chen ◽  
B. Wu ◽  
W. C. Liu

Abstract. The paper presents our efforts on CNN-based 3D reconstruction of the Martian surface using monocular images. The Viking colorized global mosaic and the Mars Express HRSC blended DEM are used as training data. An encoder-decoder network is employed in the framework. The encoder section extracts features from the images and consists of convolution layers and reduction layers. The decoder section consists of deconvolution layers and integrates the features to convert the images into the desired DEMs. In addition, skip connections between the encoder and decoder sections are applied, which offer more low-level features to the decoder to improve its performance. Monocular Context Camera (CTX) images are used to test and verify the performance of the proposed CNN-based approach. Experimental results show promising performance: features in the images are well utilized, and topographic details in the images are successfully recovered in the DEMs. In most cases, the geometric accuracies of the generated DEMs are comparable to those generated by traditional photogrammetry using stereo images. These preliminary results show that the proposed CNN-based approach has great potential for 3D reconstruction of the Martian surface.
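The data flow of such an encoder-decoder with skip connections can be sketched without any learned weights: the toy below stands in average pooling for the reduction layers and nearest-neighbour upsampling for the deconvolution layers, and merges each encoder feature map back in at the matching decoder level. This is purely an illustration of the architecture's shape handling and skip-connection wiring, not the paper's trained network:

```python
import numpy as np

def avg_pool2(x: np.ndarray) -> np.ndarray:
    """Encoder reduction step: 2x2 average pooling."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x: np.ndarray) -> np.ndarray:
    """Decoder step: nearest-neighbour upsampling (stand-in for deconvolution)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def encoder_decoder(image: np.ndarray, depth: int = 3) -> np.ndarray:
    """Toy image-to-DEM pass: encode, decode, and merge skip connections.

    Each skip connection feeds the encoder feature map of matching
    resolution back into the decoder, restoring low-level detail that
    the reduction layers discarded.
    """
    skips, x = [], image
    for _ in range(depth):           # encoder: extract + reduce
        skips.append(x)
        x = avg_pool2(x)
    for skip in reversed(skips):     # decoder: upsample + skip merge
        x = upsample2(x) + skip
    return x

dem = encoder_decoder(np.random.default_rng(1).random((32, 32)))
print(dem.shape)  # (32, 32): output DEM matches the input image size
```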


2007 ◽  
Vol 16 (1) ◽  
pp. 1-15 ◽  
Author(s):  
Cagatay Basdogan

A planetary rover acquires a large collection of images while exploring its surrounding environment. For example, 2D stereo images of the Martian surface captured by the lander and the Sojourner rover during the Mars Pathfinder mission in 1997 were transmitted to Earth for scientific analysis and navigation planning. Due to the limited memory and computational power of the Sojourner rover, most of the images were captured by the lander and then transmitted to Earth directly for processing. If these images were merged together at the rover site to reconstruct a 3D representation of the rover's environment using its on-board resources, more information could potentially be transmitted to Earth in a compact manner. However, construction of a 3D model from multiple views is a highly challenging task to accomplish even for the new generation rovers (Spirit and Opportunity) running on the Mars surface at the time this article was written. Moreover, low transmission rates and communication intervals between Earth and Mars make the transmission of any data more difficult. We propose a robust and computationally efficient method for progressive transmission of multi-resolution 3D models of Martian rocks and soil reconstructed from a series of stereo images. For visualization of these models on Earth, we have developed a new multimodal visualization setup that integrates vision and touch. 
Our scheme for 3D reconstruction of Martian rocks from 2D images for visualization on Earth involves four main steps: a) acquisition of scans: depth maps are generated from stereo images, b) integration of scans: the scans are correctly positioned and oriented with respect to each other and fused to construct a 3D volumetric representation of the rocks using an octree, c) transmission: the volumetric data is encoded and progressively transmitted to Earth, d) visualization: a surface model is reconstructed from the transmitted data on Earth and displayed to a user through a new autostereoscopic visualization table and a haptic device for providing touch feedback. To test the practical utility of our approach, we first captured a sequence of stereo images of a rock surface from various viewpoints in JPL MarsYard using a mobile cart and then performed a series of 3D reconstruction experiments. In this paper, we discuss the steps of our reconstruction process, our multimodal visualization system, and the tradeoffs that have to be made to transmit multiresolution 3D models to Earth in an efficient manner under the constraints of limited computational resources, low transmission rate, and communication interval between Earth and Mars.
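The octree encoding in steps (b) and (c) is what makes the transmission progressive: serializing occupancy codes breadth-first sends a coarse volume first and refines it level by level. A minimal sketch of that idea under simplified assumptions (points normalized to the unit cube, one occupancy byte per node, no pruning or compression; the paper's actual encoding may differ):

```python
import numpy as np
from collections import deque

def octree_levels(points: np.ndarray, max_depth: int = 4) -> list[bytes]:
    """Breadth-first occupancy codes of an octree over unit-cube points.

    Each level refines the previous one, so streaming the levels in
    order gives a progressive, coarse-to-fine transmission of the
    volume. Assumes points lie in [0, 1)^3.
    """
    levels = []
    queue = deque([(points, np.zeros(3), 1.0)])  # (points, corner, size)
    for _ in range(max_depth):
        next_queue, codes = deque(), []
        for pts, corner, size in queue:
            half = size / 2.0
            code = 0
            for octant in range(8):
                off = np.array([(octant >> k) & 1 for k in range(3)]) * half
                lo, hi = corner + off, corner + off + half
                mask = np.all((pts >= lo) & (pts < hi), axis=1)
                if mask.any():
                    code |= 1 << octant          # mark octant occupied
                    next_queue.append((pts[mask], lo, half))
            codes.append(code)
        levels.append(bytes(codes))              # one byte per node
        queue = next_queue
    return levels  # stream levels[0], levels[1], ... to the receiver

pts = np.random.default_rng(2).random((500, 3))
levels = octree_levels(pts)
print([len(level) for level in levels])
```

By construction, each level's length equals the total number of occupied octants in the previous level, so a receiver can decode the stream incrementally without any side information beyond the root cell.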


Author(s):  
Muhammad Musaddique Ali Rafique

The development of rovers, and of the infrastructure that enables them to probe other planets such as Mars, has recently sparked a lot of interest, especially with increasing public attention to the Moon and Mars programs of the National Aeronautics and Space Administration. This is to be achieved by various means, such as advanced spectroscopy and artificial intelligence techniques such as deep learning and transfer learning, enabling the rover not only to map the surface of the planet but also to obtain detailed information about its chemical makeup in the layers beneath (deep learning) and in the areas around the point of observation (transfer learning). In this work, which is part of a proposal, the latter approach is explored. A systematic strategy is presented that uses the aforementioned techniques, developed for metallic glass matrix composites, as a benchmark and helps develop algorithms for chemistry mapping of the actual Martian surface on the Perseverance rover, launching shortly.


2019 ◽  
Vol 32 (4) ◽  
pp. 396-424 ◽  
Author(s):  
Linda Murphy ◽  
Jolien Huybrechts ◽  
Frank Lambrechts

Adopting an interpretive grounded theory approach, we find that key events in the early lives of next-generation family members fuel a sense of belonging and identity, which lies at the heart of their socioemotional wealth. As next-generation family members interact more with the family business, they interpret nonfinancial aspects of the firm as an answer to a larger variety of affective needs, which broadens and strengthens their interactive socioemotional wealth frame of mind. In line with our life course theory lens, we observe how key events that build up socioemotional wealth greatly influence the life paths of next-generation family members.

