Specifications and Standards for Insect 3D Data

2018 ◽  
Vol 2 ◽  
pp. e26561
Author(s):  
Jiangning Wang ◽  
Jing Ren ◽  
Tianyu Xi ◽  
Siqin Ge ◽  
Liqiang Ji

With the continuous development of imaging technology, the amount of insect 3D data is increasing, but research on data management is still virtually non-existent. This paper discusses the specifications and standards relevant to the process of insect 3D data acquisition, processing and analysis. The collection of 3D data of insects includes specimen collection, sample preparation, image scanning specifications and 3D model specifications. The specimen collection information uses existing biodiversity information standards such as Darwin Core. However, the 3D scanning process contains unique specifications for specimen preparation, depending on the scanning equipment, to achieve the best imaging results. Data processing of 3D images includes 3D reconstruction, tagging morphological structures (such as muscle and skeleton), and 3D model building. There are different algorithms in the 3D reconstruction process, but the processing results generally follow DICOM (Digital Imaging and Communications in Medicine) standards. There is no available standard for marking morphological structures, because this process is currently executed by individual researchers who create operational specifications according to their own needs. 3D models have specific file specifications, such as object files (https://en.wikipedia.org/wiki/Wavefront_.obj_file) and 3D max format (https://en.wikipedia.org/wiki/.3ds), which are widely used at present. There are only some simple tools for analysis of three-dimensional data, and there are no specific standards or specifications in Audubon Core (https://terms.tdwg.org/wiki/Audubon_Core), the TDWG standard for biodiversity-related multimedia. There are very few 3D databases of animals at this time. Most insect 3D data are created by individual entomologists and are not even stored in databases. Specifications for the management of insect 3D data need to be established step by step.
Based on our attempt to construct a database of 3D insect data, we preliminarily discuss the necessary specifications.

2009 ◽  
Vol 2009 ◽  
pp. 1-6 ◽  
Author(s):  
Mingquan Zhou ◽  
Qingsong Huo ◽  
Guohua Geng ◽  
Xiaojing Liu

As the number of 3D models available in many application fields grows, there is an increasing need for search methods that help people find them. Unfortunately, traditional search techniques are not always effective for 3D data. In this paper, we describe a novel method of interactive 3D model retrieval with building blocks. First, using a cube block as the base block in a 3D virtual space, the query model is constructed through human-computer interaction. Then, by retrieving against the polygon models of the database generated from voxel models, retrieval results are obtained in real time. Experiments are conducted to evaluate the performance of the proposed method.
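The block-based retrieval idea can be sketched in miniature: represent each model as a binary voxel grid, assemble the query from unit-cube blocks, and rank database models by volumetric overlap. The function names, grid size, and the intersection-over-union similarity below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def voxelize_blocks(blocks, grid=8):
    """Build a binary voxel grid from a list of unit-cube block positions."""
    v = np.zeros((grid, grid, grid), dtype=bool)
    for x, y, z in blocks:
        v[x, y, z] = True
    return v

def voxel_similarity(a, b):
    """Intersection-over-union between two binary voxel grids."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def retrieve(query, database, top_k=3):
    """Rank database models by similarity to the block-built query."""
    scores = [(name, voxel_similarity(query, v)) for name, v in database.items()]
    return sorted(scores, key=lambda s: -s[1])[:top_k]
```

A query built from two blocks then scores an identical model at 1.0 and a model sharing one block at 0.5, giving the real-time ranking the abstract describes.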


2020 ◽  
Vol 8 (3) ◽  
pp. 143-150
Author(s):  
Haqul Baramsyah ◽  
Less Rich

Digital single-lens reflex (DSLR) cameras have been widely adopted for slope-face photogrammetry in place of the expensive metric cameras used for aerial photogrammetry. 3D models generated from digital photogrammetry can approach those generated from terrestrial laser scanning in terms of scale and level of detail, while being cost-effective and offering equipment portability. This paper presents and discusses the applicability of close-range digital photogrammetry to produce 3D models of rock slope faces. Five image-capturing experiments were conducted to collect the photographs used as input data for processing. As a guideline, appropriate baseline lengths for capturing the slope face are around 1/6 to 1/8 of the target distance. A fine-quality 3D model is obtained from data processing using the strip method and the convergent method with 80% overlap between photographs. Random camera positions at different distances from the slope face can also generate a good 3D model, provided the entire target is captured in each photograph. Model accuracy is assessed by comparing the 3D models produced from photogrammetry with 3D data obtained from a laser scanner; it is quite satisfactory, with mean errors ranging from 0.008 to 0.018 m.
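The reported baseline rule of thumb (1/6 to 1/8 of the target distance) is simple enough to encode directly; the helper below is a minimal sketch of that guideline, with the function name being our own.

```python
def baseline_range(target_distance_m, lo_ratio=1/8, hi_ratio=1/6):
    """Suggested stereo baseline range (m) for a given camera-to-slope
    distance, following the 1/6 to 1/8 rule of thumb from the study."""
    return target_distance_m * lo_ratio, target_distance_m * hi_ratio
```

For a slope face 24 m away, this suggests baselines between 3 m and 4 m.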


Author(s):  
M. Mehranfar ◽  
H. Arefi ◽  
F. Alidoost

Abstract. This paper presents a projection-based method for 3D bridge modeling using dense point clouds generated from drone-based images. The proposed workflow consists of hierarchical steps including point cloud segmentation, modeling of individual elements, and merging of individual models to generate the final 3D model. First, a fuzzy clustering algorithm including the height values and geometrical-spectral features is employed to segment the input point cloud into the main bridge elements. In the next step, a 2D projection-based reconstruction technique is developed to generate a 2D model for each element. Next, the 3D models are reconstructed by extruding the 2D models orthogonally to the projection plane. Finally, the reconstruction process is completed by merging individual 3D models and forming an integrated 3D model of the bridge structure in a CAD format. The results demonstrate the effectiveness of the proposed method to generate 3D models automatically with a median error of about 0.025 m between the elements’ dimensions in the reference and reconstructed models for two different bridge datasets.
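The extrusion step of the workflow can be illustrated in a few lines: a 2D cross-section reconstructed in the projection plane is swept along the plane normal to form a 3D element. The code assumes the projection plane is z = 0 and extrudes along z; it is a simplified sketch, not the authors' pipeline.

```python
import numpy as np

def extrude_polygon(poly_2d, depth):
    """Extrude a 2D cross-section (in the projection plane, z = 0) along the
    plane normal to obtain the 3D vertices of a prism-like element."""
    poly = np.asarray(poly_2d, dtype=float)
    bottom = np.hstack([poly, np.zeros((len(poly), 1))])       # z = 0 face
    top = np.hstack([poly, np.full((len(poly), 1), depth)])    # z = depth face
    return np.vstack([bottom, top])
```

Extruding a unit square by the element's measured thickness yields the eight corner vertices of a box, which can then be merged with other elements into the final CAD model.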


2007 ◽  
Vol 16 (1) ◽  
pp. 1-15 ◽  
Author(s):  
Cagatay Basdogan

A planetary rover acquires a large collection of images while exploring its surrounding environment. For example, 2D stereo images of the Martian surface captured by the lander and the Sojourner rover during the Mars Pathfinder mission in 1997 were transmitted to Earth for scientific analysis and navigation planning. Due to the limited memory and computational power of the Sojourner rover, most of the images were captured by the lander and then transmitted to Earth directly for processing. If these images were merged together at the rover site to reconstruct a 3D representation of the rover's environment using its on-board resources, more information could potentially be transmitted to Earth in a compact manner. However, construction of a 3D model from multiple views is a highly challenging task to accomplish even for the new generation rovers (Spirit and Opportunity) running on the Mars surface at the time this article was written. Moreover, low transmission rates and communication intervals between Earth and Mars make the transmission of any data more difficult. We propose a robust and computationally efficient method for progressive transmission of multi-resolution 3D models of Martian rocks and soil reconstructed from a series of stereo images. For visualization of these models on Earth, we have developed a new multimodal visualization setup that integrates vision and touch. 
Our scheme for 3D reconstruction of Martian rocks from 2D images for visualization on Earth involves four main steps: a) acquisition of scans: depth maps are generated from stereo images, b) integration of scans: the scans are correctly positioned and oriented with respect to each other and fused to construct a 3D volumetric representation of the rocks using an octree, c) transmission: the volumetric data is encoded and progressively transmitted to Earth, d) visualization: a surface model is reconstructed from the transmitted data on Earth and displayed to a user through a new autostereoscopic visualization table and a haptic device for providing touch feedback. To test the practical utility of our approach, we first captured a sequence of stereo images of a rock surface from various viewpoints in JPL MarsYard using a mobile cart and then performed a series of 3D reconstruction experiments. In this paper, we discuss the steps of our reconstruction process, our multimodal visualization system, and the tradeoffs that have to be made to transmit multiresolution 3D models to Earth in an efficient manner under the constraints of limited computational resources, low transmission rate, and communication interval between Earth and Mars.
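The octree used in the integration step exploits the fact that large empty or solid regions collapse to single nodes, which is what makes progressive transmission compact. A minimal sketch of such an encoding, assuming a cubic boolean voxel grid and our own node convention (0/1 leaves, eight-element lists for internal nodes):

```python
import numpy as np

def build_octree(vox):
    """Recursively encode a cubic boolean voxel grid: a node is a leaf (0/1)
    if its region is uniform, otherwise a list of its eight octant subtrees."""
    if vox.all():
        return 1          # uniformly occupied region
    if not vox.any():
        return 0          # uniformly empty region
    h = vox.shape[0] // 2
    return [build_octree(vox[i:i + h, j:j + h, k:k + h])
            for i in (0, h) for j in (0, h) for k in (0, h)]
```

An empty grid encodes to a single leaf, and a grid with one occupied voxel encodes to one internal node whose children are mostly empty leaves, so sparse rock surfaces stay small on the downlink.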


Author(s):  
Z. Shao ◽  
C. Li ◽  
S. Zhong ◽  
B. Liu ◽  
H. Jiang ◽  
...  

Building fine 3D models that span outdoor and indoor spaces is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas. A 3D model should actually contain detailed descriptions of both a building's appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which provides a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be constructed automatically. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones, namely the Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area and the Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction and cultural tourism.


2017 ◽  
Vol 1 (2) ◽  
pp. 380-395 ◽  
Author(s):  
Fabrizio Ivan Apollonio ◽  
Federico Fallavollita ◽  
Elisabetta Caterina Giovannini ◽  
Riccardo Foschi ◽  
Salvatore Corso

Among the many cases concerning digital hypothetical 3D reconstruction, a particular one is constituted by never-realized projects and plans: projects that were designed but remained on paper and that, albeit documented by technical drawings, pose the same problems common to all other cases, from 3D reconstructions of transformed architectures to destroyed or lost buildings and parts of towns. These case studies start from original old drawings, which have to be complemented by different kinds of documentary sources able to provide, by means of evidence, induction, deduction and analogy, information characterized by different levels of uncertainty and related to different levels of accuracy. All methods adopted in a digital hypothetical 3D reconstruction process show that the goal of all researchers is to make explicit, or at least intelligible, through a graphical system, the value of the reconstructive process that lies behind a particular result. The result of a reconstructive process acts in the definition of three areas, intimately related to one another, which concur to define the digital consistency of the artifact under study: shape (geometry, size, spatial position); appearance (surface features); and constitutive elements (physical form, stratification of building/manufacturing systems). The paper sits within a general framework aimed at using 3D models as a means to document and communicate the shape and appearance of never-built architecture, as well as to depict temporal correspondence and allow the traceability of the uncertainty and accuracy that characterize each reconstructed element.


10.14311/672 ◽  
2005 ◽  
Vol 45 (1) ◽  
Author(s):  
J. Hodač

The development of methods for 3D data acquisition, together with progress in information technologies, raises the question of creating and using 3D models and 3D information systems (IS) of historical sites and buildings. This paper presents the current state of the “Live Theatre” project. The theme of the project is the proposal and realisation of a 3D IS of the baroque theatre at Český Krumlov castle (a UNESCO site). The project is divided into three main stages – creation of a 3D model, proposal of a conception for a 3D IS, and realisation of a functional prototype. 3D data was acquired by means of photogrammetric and surveying methods. An accurate 3D model (photo-realistic, textured) was built with the MicroStation CAD system. The proposal of a conception for the 3D IS was the main outcome of the author's dissertation. The essential feature of the proposed conception is the creation of subsystems targeted at three spheres – management, research and presentation of the site. The functionality of each subsystem is connected with its related sphere; however, each subsystem uses the same database. The present stage of the project involves making a functional prototype (with sample data). During this stage we are working on several basic technological topics. At present we are concerned with 3D data, its formats, format conversions (e.g. DGN to VRML) and its connection to other types of data. After that, we will seek a convenient technical solution based on network technologies (Internet) and an appropriate layout for the data (database). The project is being carried out in close co-operation with the castle administration and other partners. This stage of the project will be completed in December 2005. A functional prototype, and the information acquired by testing it, will form the basis for the final proposal of a complex IS of a historical site. The final proposal and appropriate technology will be the outcome of the project.
The realisation of a complex 3D IS will then follow. The results will be exploitable both for site management and for organisations working in the area of presenting historical sites and creating multimedia shows. 


Mathematics ◽  
2021 ◽  
Vol 9 (18) ◽  
pp. 2288
Author(s):  
Rohan Tahir ◽  
Allah Bux Sargano ◽  
Zulfiqar Habib

In recent years, learning-based approaches to 3D reconstruction have gained much popularity due to their encouraging results. However, unlike 2D images, 3D data has no canonical representation that makes it computationally lean and memory-efficient. Moreover, generating a 3D model directly from a single 2D image is even more challenging because of the limited detail the image provides for 3D reconstruction. Existing learning-based techniques still lack the resolution, efficiency, and smoothness of 3D models required for many practical applications. In this paper, we propose two models for voxel-based 3D object reconstruction (V3DOR) from a single 2D image for better accuracy: one using autoencoders (AE) and another using variational autoencoders (VAE). The encoder part of both models learns a suitable compressed latent representation from a single 2D image, and a decoder generates the corresponding 3D model. Our contribution is twofold. First, to the best of the authors' knowledge, this is the first time that variational autoencoders (VAE) have been employed for the 3D reconstruction problem. Second, the proposed models extract a discriminative set of features and generate smoother, higher-resolution 3D models. To evaluate the efficacy of the proposed method, experiments were conducted on the benchmark ShapeNet data set. The results confirm that the proposed method outperforms state-of-the-art methods.
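The component that distinguishes the VAE variant is the reparameterization trick: the encoder outputs a mean and log-variance, and the latent code is sampled as z = mu + sigma * eps so that sampling stays differentiable. A minimal NumPy sketch of that standard step (the function name and array shapes are our own; the paper's network layers are not reproduced):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample a latent code z = mu + sigma * eps (the VAE reparameterization
    trick), keeping the sampling step differentiable w.r.t. mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps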


Author(s):  
K. Zhan ◽  
Y. Song ◽  
D. Fritsch ◽  
G. Mammadov ◽  
J. Wagner

Abstract. Nowadays various methods and sensors are available for 3D reconstruction tasks; however, it is still necessary to integrate the advantages of different technologies to optimize the quality of 3D models. Computed tomography (CT) is an imaging technique that takes a large number of radiographic measurements from different angles in order to generate slices of the object, although without colour information. The aim of this study is to put forward a framework for extracting colour information from photogrammetric images for the corresponding computed tomography (CT) surface data with high precision. 3D models of the same object are generated with the CT and photogrammetry methods respectively, and a transformation matrix is determined to align the extracted CT surface to the photogrammetric point cloud through a coarse-to-fine registration process. The estimated pose information of the images relative to the photogrammetric point cloud, obtained from the standard image alignment procedure, also applies to the aligned CT surface data. For each camera pose, a depth image of the CT data is calculated by projecting all the CT points onto the image plane. The depth image should, in principle, agree with the corresponding photogrammetric image. Points that cannot be seen from the pose but are nevertheless projected onto the depth image are excluded from the colouring process. This is realized by comparing the range values of neighbouring pixels and finding the corresponding 3D points with larger range values. The same procedure is applied for all image poses to obtain the coloured CT surface. Thus, by using photogrammetric images, we achieve a coloured CT dataset with high precision, which combines the advantages of both methods. Rather than simply stitching different data together, we deep-dive into the photogrammetric 3D reconstruction process and optimize the CT data with colour information.
This process can also provide an initial route and more options for other data fusion processes.
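The occlusion handling described above amounts to a z-buffer: project every 3D point through the camera intrinsics and keep only the nearest point per pixel, so hidden points are excluded from colouring. The sketch below assumes a simple pinhole model with intrinsic matrix K and points already in the camera frame; it is an illustration of the principle, not the authors' code.

```python
import numpy as np

def depth_image(points, K, size):
    """Project 3D points (camera frame) into a depth image, keeping only the
    nearest point per pixel so occluded points can be excluded."""
    h, w = size
    depth = np.full((h, w), np.inf)
    for X, Y, Z in points:
        if Z <= 0:                      # behind the camera
            continue
        u = int(round(K[0, 0] * X / Z + K[0, 2]))
        v = int(round(K[1, 1] * Y / Z + K[1, 2]))
        if 0 <= u < w and 0 <= v < h and Z < depth[v, u]:
            depth[v, u] = Z             # nearer point wins the pixel
    return depth
```

Two CT points falling on the same pixel resolve to the nearer range value; the farther one would then be skipped when colours are sampled from the photogrammetric image.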


Author(s):  
A. Zingoni ◽  
M. Diani ◽  
G. Corsini ◽  
A. Masini

We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method needs only a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing accuracy under the real-time constraint. A test was performed monitoring a construction yard, with very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. Given these characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
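Two-view reconstruction of this kind ultimately rests on triangulation from a rectified pair: depth follows from the focal length, the baseline between the two acquisitions, and the measured disparity, Z = f * B / d. The helper below states only that standard relation; the paper's actual real-time algorithm is not reproduced here.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With a 1000-pixel focal length and a 0.5 m baseline, a 10-pixel disparity corresponds to a point 50 m away, which also shows why accuracy degrades for distant objects: a one-pixel disparity error shifts the estimate by several metres.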

