From 2D Images to 3D Tangible Models: Autostereoscopic and Haptic Visualization of Martian Rocks in Virtual Environments

2007 ◽  
Vol 16 (1) ◽  
pp. 1-15 ◽  
Author(s):  
Cagatay Basdogan

A planetary rover acquires a large collection of images while exploring its surrounding environment. For example, 2D stereo images of the Martian surface captured by the lander and the Sojourner rover during the Mars Pathfinder mission in 1997 were transmitted to Earth for scientific analysis and navigation planning. Due to the limited memory and computational power of the Sojourner rover, most of the images were captured by the lander and then transmitted directly to Earth for processing. If these images were instead merged at the rover site to reconstruct a 3D representation of the rover's environment using its on-board resources, more information could potentially be transmitted to Earth in a compact manner. However, constructing a 3D model from multiple views is a highly challenging task even for the new-generation rovers (Spirit and Opportunity) operating on the Martian surface at the time this article was written. Moreover, low transmission rates and limited communication intervals between Earth and Mars make transmitting any data more difficult. We propose a robust and computationally efficient method for progressive transmission of multi-resolution 3D models of Martian rocks and soil reconstructed from a series of stereo images. For visualization of these models on Earth, we have developed a new multimodal visualization setup that integrates vision and touch. Our scheme for 3D reconstruction of Martian rocks from 2D images for visualization on Earth involves four main steps: a) acquisition of scans: depth maps are generated from stereo images; b) integration of scans: the scans are correctly positioned and oriented with respect to each other and fused into a 3D volumetric representation of the rocks using an octree; c) transmission: the volumetric data is encoded and progressively transmitted to Earth; d) visualization: a surface model is reconstructed from the transmitted data on Earth and displayed to a user through a new autostereoscopic visualization table and a haptic device that provides touch feedback. To test the practical utility of our approach, we first captured a sequence of stereo images of a rock surface from various viewpoints in the JPL MarsYard using a mobile cart and then performed a series of 3D reconstruction experiments. In this paper, we discuss the steps of our reconstruction process, our multimodal visualization system, and the tradeoffs that must be made to transmit multi-resolution 3D models to Earth efficiently under the constraints of limited computational resources, low transmission rates, and limited communication intervals between Earth and Mars.
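To make the octree fusion and progressive encoding of steps (b) and (c) concrete, the following Python sketch illustrates one plausible scheme: surface samples (e.g., points back-projected from the depth maps) are inserted into an octree, and each tree level is serialized as one 8-bit occupancy mask per node so that coarser levels can be transmitted first. All names and the encoding itself are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: fuse 3D surface samples into an octree, then serialize it
# breadth-first so each level forms one progressive refinement chunk.
class OctreeNode:
    def __init__(self, center, half_size):
        self.center = center          # (x, y, z) of the cell center
        self.half_size = half_size    # half the cell edge length
        self.children = [None] * 8    # octants, created on demand
        self.occupied = False         # set when a surface sample falls inside

def child_index(center, p):
    """Octant of point p relative to the cell center (one bit per axis)."""
    return (p[0] > center[0]) | ((p[1] > center[1]) << 1) | ((p[2] > center[2]) << 2)

def insert(root, p, max_depth):
    """Descend to max_depth, creating cells along the way, marking them occupied."""
    node = root
    node.occupied = True
    for _ in range(max_depth):
        i = child_index(node.center, p)
        if node.children[i] is None:
            h = node.half_size / 2.0
            c = tuple(node.center[k] + h * (1 if (i >> k) & 1 else -1) for k in range(3))
            node.children[i] = OctreeNode(c, h)
        node = node.children[i]
        node.occupied = True

def progressive_chunks(root):
    """Yield one byte string per tree level: an 8-bit occupancy mask per node.
    Receiving level k lets Earth refine the model reconstructed from level k-1."""
    level = [root]
    while level:
        masks, nxt = [], []
        for node in level:
            mask = 0
            for i, ch in enumerate(node.children):
                if ch is not None and ch.occupied:
                    mask |= 1 << i
                    nxt.append(ch)
            masks.append(mask)
        yield bytes(masks)
        level = nxt
```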

Author(s):  
Z. Chen ◽  
B. Wu ◽  
W. C. Liu

Abstract. The paper presents our efforts on CNN-based 3D reconstruction of the Martian surface using monocular images. The Viking colorized global mosaic and the Mars Express HRSC blended DEM are used as training data. An encoder-decoder network is employed in the framework. The encoder section, which consists of convolution layers and reduction layers, extracts features from the images. The decoder section consists of deconvolution layers and integrates these features to convert the images into the desired DEMs. In addition, skip connections between the encoder and decoder sections are applied, offering more low-level features to the decoder section to improve its performance. Monocular Context Camera (CTX) images are used to test and verify the performance of the proposed CNN-based approach. Experimental results show promising performance of the proposed approach. Features in the images are well utilized, and topographical details in the images are successfully recovered in the DEMs. In most cases, the geometric accuracies of the generated DEMs are comparable to those generated by traditional photogrammetry using stereo images. These preliminary results show that the proposed CNN-based approach has great potential for 3D reconstruction of the Martian surface.
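As a rough illustration of the encoder-decoder design described above, the following PyTorch sketch shows a two-level network with a skip connection; the layer sizes, single-channel input, and DEM output head are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch of an encoder-decoder with a skip connection for image-to-DEM
# regression; channel counts and depths are illustrative assumptions.
import torch
import torch.nn as nn

class DEMNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: convolution + reduction (strided) layers extract features.
        self.enc1 = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Decoder: deconvolution (transposed conv) layers restore resolution.
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1)  # 64 = 32 decoded + 32 skipped

    def forward(self, img):
        f1 = self.enc1(img)                     # low-level features
        f2 = self.enc2(f1)                      # deeper features
        d2 = self.dec2(f2)
        d1 = self.dec1(torch.cat([d2, f1], 1))  # skip connection: concatenate encoder features
        return d1                               # per-pixel elevation (DEM)
```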


2017 ◽  
Vol 1 (2) ◽  
pp. 380-395 ◽  
Author(s):  
Fabrizio Ivan Apollonio ◽  
Federico Fallavollita ◽  
Elisabetta Caterina Giovannini ◽  
Riccardo Foschi ◽  
Salvatore Corso

Among the many cases that involve digital hypothetical 3D reconstruction, a particular one is constituted by never-realized projects and plans: designs that remained on paper and that, although documented by technical drawings, pose the same problems common to all other cases, from 3D reconstructions of transformed architectures to destroyed or lost buildings and parts of towns. These case studies start from original old drawings that have to be supplemented by documentary sources of different kinds, able to provide, by means of evidence, induction, deduction, and analogy, information characterized by different levels of uncertainty and related to different levels of accuracy. All methods adopted in a digital hypothetical 3D reconstruction process show that the goal of researchers is to make explicit, or at least intelligible, through a graphical system, the value of the reconstructive process that lies behind a particular result. The result of a reconstructive process contributes to defining three intimately related areas that together determine the digital consistency of the artifact under study: shape (geometry, size, spatial position); appearance (surface features); and constitutive elements (physical form, stratification of building/manufacturing systems). The paper proposes a general framework that uses 3D models as a means to document and communicate the shape and appearance of never-built architecture, as well as to depict temporal correspondence and allow the traceability of the uncertainty and accuracy that characterize each reconstructed element.
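One hypothetical way to make the per-element uncertainty and accuracy traceable in software is to attach a structured annotation to every reconstructed element; the Python record below is an illustrative sketch under that assumption, not a scheme proposed in the paper.

```python
# Hypothetical record for tracing uncertainty and accuracy per reconstructed
# element; field names and the uncertainty scale are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ReconstructedElement:
    name: str                       # e.g. "west portico column"
    source: str                     # documentary source: drawing, survey, analogy...
    inference: str                  # "evidence" | "induction" | "deduction" | "analogy"
    uncertainty: int                # e.g. 1 (documented) .. 5 (conjectural)
    shape: dict = field(default_factory=dict)        # geometry, size, spatial position
    appearance: dict = field(default_factory=dict)   # surface features
    constitutive: dict = field(default_factory=dict) # materials, building systems

portico = ReconstructedElement(
    name="west portico column",
    source="original project drawing, sheet 3",
    inference="analogy",
    uncertainty=4,
    shape={"height_m": 7.2, "base_diameter_m": 0.9},
)
```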


2018 ◽  
Vol 2 ◽  
pp. e26561
Author(s):  
Jiangning Wang ◽  
Jing Ren ◽  
Tianyu Xi ◽  
Siqin Ge ◽  
Liqiang Ji

With the continuous development of imaging technology, the amount of insect 3D data is increasing, but research on data management is still virtually non-existent. This paper discusses the specifications and standards relevant to the process of insect 3D data acquisition, processing, and analysis. The collection of insect 3D data includes specimen collection, sample preparation, image scanning specifications, and 3D model specifications. The specimen collection information uses existing biodiversity information standards such as Darwin Core. However, the 3D scanning process involves specifications for specimen preparation that are unique to the scanning equipment and necessary to achieve the best imaging results. Data processing of 3D images includes 3D reconstruction, tagging of morphological structures (such as muscle and skeleton), and 3D model building. Different algorithms are used in the 3D reconstruction process, but the processing results generally follow the DICOM (Digital Imaging and Communications in Medicine) standard. There is no available standard for marking morphological structures, because this process is currently carried out by individual researchers who create operational specifications according to their own needs. 3D models have specific file specifications, such as object files (https://en.wikipedia.org/wiki/Wavefront_.obj_file) and the 3D max format (https://en.wikipedia.org/wiki/.3ds), which are widely used at present. Only a few simple tools exist for the analysis of three-dimensional data, and there are no specific standards or specifications in Audubon Core (https://terms.tdwg.org/wiki/Audubon_Core), the TDWG standard for biodiversity-related multimedia. There are very few 3D databases of animals at this time. Most insect 3D data are created by individual entomologists and are not even stored in databases. Specifications for the management of insect 3D data need to be established step-by-step. Based on our attempt to construct a database of insect 3D data, we preliminarily discuss the necessary specifications.
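As a hypothetical illustration of how such specifications might tie together, the sketch below links a specimen's Darwin Core fields to its scan and model metadata; the keys and values are invented for illustration and are not a published standard.

```python
# Hypothetical metadata record linking a specimen's Darwin Core fields to its
# 3D scan and model files; the schema shown is illustrative only.
import json

record = {
    "specimen": {                      # Darwin Core (dwc:) terms
        "scientificName": "Apis mellifera",
        "catalogNumber": "IOZ-3D-000123",
        "recordedBy": "J. Wang",
    },
    "scan": {                          # acquisition specifics
        "device": "micro-CT",
        "voxelSizeMicrons": 5.0,
        "sliceFormat": "DICOM",        # reconstruction output follows DICOM
    },
    "model": {                         # derived 3D model
        "format": "OBJ",               # e.g. Wavefront .obj or .3ds
        "labelledStructures": ["muscle", "skeleton"],
    },
}
print(json.dumps(record, indent=2))
```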


2021 ◽  
Vol 13 (5) ◽  
pp. 839
Author(s):  
Zeyu Chen ◽  
Bo Wu ◽  
Wai Chung Liu

Three-dimensional (3D) surface models, e.g., digital elevation models (DEMs), are important for planetary exploration missions and scientific research. Current DEMs of the Martian surface are mainly generated by laser altimetry or photogrammetry, which have respective limitations: laser altimetry cannot produce high-resolution DEMs, while photogrammetry requires stereo images, and high-resolution stereo images of Mars are rare. An alternative is the convolutional neural network (CNN) technique, which implicitly learns features by associating corresponding inputs and outputs. In recent years, CNNs have exhibited promising performance in the 3D reconstruction of close-range scenes. In this paper, we present a CNN-based algorithm that is capable of generating DEMs from single images; the DEMs have the same resolutions as the input images. An existing low-resolution DEM is used to provide global information. Synthetic and real data, including Context Camera (CTX) images and DEMs from stereo High-Resolution Imaging Science Experiment (HiRISE) images, are used as training data. The performance of the proposed method is evaluated using single CTX images of representative landforms on Mars, and the generated DEMs are compared with those obtained from stereo HiRISE images. The experimental results show promising performance of the proposed method: the topographic details are well reconstructed, and the geometric accuracies achieve root-mean-square error (RMSE) values ranging from 2.1 m to 12.2 m (approximately 0.5 to 2 pixels in image space). These results indicate that the proposed CNN-based method has great potential for 3D surface reconstruction in planetary applications.
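The RMSE comparison against stereo HiRISE DEMs can be illustrated with a short sketch; the following Python assumes the generated and reference DEMs are already co-registered 2D arrays, and the synthetic example data are for illustration only.

```python
# Minimal sketch of the accuracy check reported above: RMSE between a generated
# DEM and a reference DEM, both given as aligned 2D arrays of elevations.
import numpy as np

def dem_rmse(generated, reference):
    """Root-mean-square error over valid (non-NaN) cells of two co-registered DEMs."""
    diff = generated - reference
    valid = ~np.isnan(diff)
    return float(np.sqrt(np.mean(diff[valid] ** 2)))

# Example with synthetic data: a reference surface plus ~2 m of noise.
rng = np.random.default_rng(0)
reference = rng.uniform(-1000.0, 1000.0, size=(512, 512))
generated = reference + rng.normal(0.0, 2.0, size=reference.shape)
print(f"RMSE: {dem_rmse(generated, reference):.2f} m")  # ~2 m, near the low end of 2.1-12.2 m
```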


Author(s):  
K. Zhan ◽  
Y. Song ◽  
D. Fritsch ◽  
G. Mammadov ◽  
J. Wagner

Abstract. Nowadays, various methods and sensors are available for 3D reconstruction tasks; however, it is still necessary to integrate the advantages of different technologies to optimize the quality of 3D models. Computed tomography (CT) is an imaging technique that takes a large number of radiographic measurements from different angles to generate slices of the object, but without colour information. The aim of this study is to put forward a framework for extracting colour information from photogrammetric images for the corresponding CT surface data with high precision. 3D models of the same object are generated with CT and photogrammetry respectively, and a transformation matrix is determined to align the extracted CT surface to the photogrammetric point cloud through a coarse-to-fine registration process. The pose of each image relative to the photogrammetric point cloud, which can be obtained from the standard image alignment procedure, therefore also applies to the aligned CT surface data. For each camera pose, a depth image of the CT data is calculated by projecting all the CT points onto the image plane. In principle, this depth image should agree with the corresponding photogrammetric image. Points that cannot be seen from the pose but are nevertheless projected onto the depth image are excluded from the colouring process. This is realized by comparing the range values of neighbouring pixels and finding the corresponding 3D points with larger range values. The same procedure is implemented for all the image poses to obtain the coloured CT surface. Thus, by using photogrammetric images, we achieve a coloured CT dataset with high precision, which combines the advantages of both methods. Rather than simply stitching different data together, we delve into the photogrammetric 3D reconstruction process and enrich the CT data with colour information. This process can also provide an initial route and more options for other data fusion processes.
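The occlusion test at the heart of the colouring step can be sketched as a simple z-buffer: project all CT points into the camera, keep the nearest point per pixel, and exclude the rest. The following Python is a minimal illustration assuming a pinhole camera model; it is not the authors' implementation.

```python
# Minimal z-buffer sketch: project CT points into a camera, record the nearest
# depth per pixel, and mark only the nearest points as visible for colouring.
import numpy as np

def visible_points(points_cam, K, width, height):
    """points_cam: (N, 3) CT points already transformed into the camera frame.
    K: 3x3 intrinsic matrix. Returns a boolean mask of points visible (not occluded)."""
    z = points_cam[:, 2]
    safe_z = np.where(z > 0, z, 1.0)          # avoid division by non-positive depth
    uv = (K @ points_cam.T).T                 # pinhole projection
    u = np.round(uv[:, 0] / safe_z).astype(int)
    v = np.round(uv[:, 1] / safe_z).astype(int)
    in_image = (z > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)

    depth = np.full((height, width), np.inf)  # z-buffer: nearest range per pixel
    idx = np.flatnonzero(in_image)
    for i in idx:                             # first pass: record the nearest depth
        if z[i] < depth[v[i], u[i]]:
            depth[v[i], u[i]] = z[i]

    visible = np.zeros(len(points_cam), dtype=bool)
    for i in idx:                             # second pass: keep points at that depth
        visible[i] = z[i] <= depth[v[i], u[i]] + 1e-6
    return visible
```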


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2695 ◽  
Author(s):  
Ana-Maria Loghin ◽  
Johannes Otepka-Schremmer ◽  
Norbert Pfeifer

High-resolution stereo and multi-view imagery are used for digital surface model (DSM) derivation over large areas for numerous applications in topography, cartography, geomorphology, and 3D surface modelling. Dense image matching is a key component in 3D reconstruction and mapping, although the 3D reconstruction process encounters difficulties for water surfaces, areas with no texture or with a repetitive pattern in the images, and very small objects. This study investigates the capabilities and limitations of space-borne very high resolution imagery, specifically Pléiades (0.70 m) and WorldView-3 (0.31 m) imagery, with respect to the automatic point cloud reconstruction of small isolated objects. For this purpose, single buildings, vehicles, and trees were analyzed. The main focus is to quantify their detectability in the photogrammetrically derived DSMs by estimating their heights as a function of object type and size. The estimated height was investigated with respect to the following parameters: building length and width, vehicle length and width, and tree crown diameter. Manually measured object heights from the oriented images were used as a reference. We demonstrate that the DSM-based estimated height of a single object strongly depends on its size, and we quantify this effect. Starting from very small objects, which are not elevated against their surroundings, and ending with large objects, we observed a gradual increase in the relative heights. For small vehicles, buildings, and trees (lengths <7 pixels, crown diameters <4 pixels), the Pléiades-derived DSM recovered less than 20%, or none, of the actual object's height. For large vehicles, buildings, and trees (lengths >14 pixels, crown diameters >7 pixels), the estimated heights were higher than 60% of the real values. In the case of the WorldView-3 derived DSM, the estimated height of small vehicles, buildings, and trees (lengths <16 pixels, crown diameters <8 pixels) was less than 50% of their actual height, whereas larger objects (lengths >33 pixels, crown diameters >16 pixels) were reconstructed to more than 90% of their height.
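A small sketch can make the size thresholds above concrete: converting an object's ground size into image pixels for a given ground sampling distance and bucketing it against the reported thresholds. The helper below is illustrative; only the pixel thresholds come from the abstract.

```python
# Illustrative helper: object size in metres -> size in pixels for a given GSD,
# bucketed against the small/large thresholds reported in the study.
def expected_height_fraction(object_size_m, gsd_m, small_px, large_px):
    """Rough bucket for the recovered-height fraction given object size in pixels."""
    size_px = object_size_m / gsd_m
    if size_px < small_px:
        return "below the small-object threshold (little or no height recovered)"
    if size_px > large_px:
        return "above the large-object threshold (most of the height recovered)"
    return "intermediate (partial height recovery)"

# Pléiades (0.70 m GSD): <7 px counts as small, >14 px as large.
print(expected_height_fraction(5.0, 0.70, 7, 14))   # ~7.1 px -> intermediate
# WorldView-3 (0.31 m GSD): <16 px small, >33 px large.
print(expected_height_fraction(5.0, 0.31, 16, 33))  # ~16.1 px -> intermediate
```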


Author(s):  
Jose-Maria Carazo ◽  
I. Benavides ◽  
S. Marco ◽  
J.L. Carrascosa ◽  
E.L. Zapata

Obtaining the three-dimensional (3D) structure of negatively stained biological specimens at a resolution of, typically, 2-4 nm is becoming a relatively common practice in an increasing number of laboratories. A combination of new conceptual approaches, new software tools, and faster computers has made this situation possible. However, all these 3D reconstruction processes are quite computer-intensive, and the medium-term future is full of proposals entailing an even greater need for computing power. Up to now, all published 3D reconstructions in this field have been performed on conventional (sequential) computers, but new parallel computer architectures offer the potential for order-of-magnitude increases in computing power and should therefore be considered for their possible application in the most computing-intensive tasks. We have studied both shared-memory-based computer architectures, like the BBN Butterfly, and local-memory-based architectures, mainly hypercubes implemented on transputers, where we have used the algorithmic mapping method proposed by Zapata et al. In this work we have developed the basic software tools needed to obtain a 3D reconstruction from non-crystalline specimens ("single particles") using the so-called Random Conical Tilt Series Method. We start from a pair of images presenting the same field, first tilted (by ≃55°) and then untilted. It is then assumed that we can supply the system with the image of the particle we are looking for (ideally, a 2D average from a previous study) and with a matrix describing the geometrical relationships between the tilted and untilted fields (this step is now accomplished by interactively marking a few pairs of corresponding features in the two fields). From here on, the 3D reconstruction process may be run automatically.
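The step of relating the tilted and untilted fields from a few manually marked feature pairs can be sketched as a linear least-squares fit of a 2D affine transform; the Python below is an illustrative sketch, not the original software.

```python
# Minimal sketch: estimate a 2D affine transform between the untilted and
# tilted fields from marked corresponding points, by linear least squares.
import numpy as np

def fit_affine(untilted_pts, tilted_pts):
    """Solve tilted ~ A @ untilted + t from (N, 2) arrays of matched points (N >= 3)."""
    n = len(untilted_pts)
    X = np.hstack([untilted_pts, np.ones((n, 1))])   # [x, y, 1] design matrix
    coeffs, *_ = np.linalg.lstsq(X, tilted_pts, rcond=None)
    A, t = coeffs[:2].T, coeffs[2]                   # 2x2 matrix and translation vector
    return A, t

# Example: three marked pairs relating the untilted and tilted fields.
untilted = np.array([[10.0, 12.0], [55.0, 40.0], [30.0, 80.0]])
tilted = np.array([[14.1, 11.8], [58.9, 39.5], [36.0, 79.2]])
A, t = fit_affine(untilted, tilted)
print(A, t)
```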


2018 ◽  
Vol 23 (6) ◽  
pp. 99-113
Author(s):  
Sha LIU ◽  
Feng YANG ◽  
Shunxi WANG ◽  
Yu CHEN
