SPOC: Deep Learning-based Terrain Classification for Mars Rover Missions

Author(s):  
Brandon Rothrock
Ryan Kennedy
Chris Cunningham
Jeremie Papon
Matthew Heverly
...
Author(s):  
Muhammad Musaddique Ali Rafique

The development of rovers, and of the infrastructure that enables them to probe other planets such as Mars, has sparked considerable interest recently, especially with growing public attention on the Moon and Mars programs of the National Aeronautics and Space Administration. This is designed to be achieved by various means, such as advanced spectroscopy and artificial intelligence techniques, including deep learning and transfer learning, which enable the rover not only to map the planet's surface but also to obtain detailed information about its chemical makeup in the layers beneath (deep learning) and in the areas around the point of observation (transfer learning). In this work, which is part of a proposal, the latter approach is explored. A systematic strategy is presented that uses the aforementioned techniques, developed for metallic glass matrix composites, as a benchmark and helps develop algorithms for chemistry mapping of the actual Martian surface on the Perseverance rover, launching shortly.
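
As a rough illustration of the transfer learning idea described above, the sketch below fine-tunes an ImageNet-pretrained backbone on labelled surface-composition classes. The ResNet-18 backbone, the class count, and the class labels are assumptions made for illustration; they are not taken from the proposal.

# Minimal transfer learning sketch (PyTorch): reuse a pretrained backbone
# and train only a new classification head on terrain/chemistry classes.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # illustrative number of surface-composition classes

# Load ImageNet weights, freeze the backbone, and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # keep pretrained features fixed
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Training would then loop over batches of labelled image patches,
# computing criterion(model(images), labels) and stepping the optimizer.

Freezing the backbone keeps the number of trainable parameters small, which is the usual motivation for transfer learning when labelled planetary imagery is scarce.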


Author(s):  
Ying Qu ◽  
Hairong Qi ◽  
Chiman Kwan

There are two mast cameras (Mastcams) onboard the Mars rover Curiosity. Both Mastcams are multispectral imagers, each with nine bands. The right Mastcam has three times higher resolution than the left. In this chapter, we apply some recently developed deep neural network models to enhance the left Mastcam images with help from the right Mastcam images. Actual Mastcam images were used to demonstrate the performance of the proposed algorithms.
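
The chapter does not spell out the specific networks here, so the following is only a minimal SRCNN-style sketch of image enhancement. The nine-band input and 3x upscaling factor mirror the Mastcam setup described above, but the architecture, layer widths, and class name are illustrative assumptions.

# Hedged sketch: residual super-resolution of a low-resolution multispectral frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultispectralSR(nn.Module):
    def __init__(self, bands: int = 9, scale: int = 3):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, bands, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # Bicubic upsampling to the higher-resolution grid, then a learned
        # residual correction of the upsampled image.
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        return x + self.net(x)

# Example: a 9-band 128x128 left-camera patch enhanced to 384x384.
# enhanced = MultispectralSR()(torch.rand(1, 9, 128, 128))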


2020
Author(s):  
Kaizad Raimalwala
Michele Faragalli
Melissa Battler
Evan Smal
Ewan Reid
...

Icarus
2022
Vol 371
pp. 114701
Author(s):  
Alexander M. Barrett
Matthew R. Balme
Mark Woods
Spyros Karachalios
Danilo Petrocelli
...

Author(s):  
S Julius Fusic ◽  
K Hariharan ◽  
R Sitharthan ◽  
S Karthikeyan

Autonomous transportation is a new paradigm of Industry 5.0 cyber-physical systems that offers many opportunities in smart logistics applications. The safety and reliability of deep learning-driven systems are still open research questions. The safety of an autonomous guided vehicle depends on the proper selection of sensors and the transmission of reflex data. Several researchers have addressed sensor-related difficulties by developing sensor correction systems and fine-tuning algorithms to regulate system efficiency and precision. In this paper, a vision sensor is introduced and scene terrain classification is performed with a deep learning algorithm on the proposed datasets under sensor failure conditions. The proposed classification technique identifies obstacles and obstacle-free paths for a mobile robot in smart logistics vehicle applications. To analyze the information in the acquired image datasets, the proposed classification algorithm employs segmentation techniques. The proposed dataset is validated with a U-shaped convolutional network (U-Net) architecture and a region-based convolutional neural network (Mask R-CNN) architecture. Based on the results, 1400 raw images are selected, trained, and validated using the semantic segmentation classifier models. Across the terrain dataset clusters, the Mask R-CNN classifier achieves the highest model accuracy of 93%, which is 23% higher than the U-Net classifier, whose model accuracy is the lowest at nearly 70%. As a result, the suggested Mask R-CNN technique has significant potential for use in autonomous vehicle applications.
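
To make the segmentation approach concrete, below is a compact U-Net-style sketch for per-pixel classification of obstacle versus free-path terrain. The layer widths, depth, and two-class output are illustrative assumptions and do not reproduce the paper's actual U-Net or Mask R-CNN configurations.

# Hedged sketch: a tiny U-Net-like encoder-decoder with one skip connection.
import torch
import torch.nn as nn

def block(c_in, c_out):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes: int = 2):  # obstacle vs. free path
        super().__init__()
        self.enc1, self.enc2 = block(3, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                     # full-resolution features
        e2 = self.enc2(self.pool(e1))         # downsampled features
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d)                   # per-pixel class logits

# Example: logits = TinyUNet()(torch.rand(1, 3, 256, 256))  # -> (1, 2, 256, 256)

The skip connection is what distinguishes the U-shaped design: decoder features are concatenated with same-resolution encoder features so that fine spatial detail survives the downsampling path.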


Author(s):  
Stellan Ohlsson