Digital Human Project: 3D Photogrammetry for Human Cadaveric Pelvic Specimens: An Innovation in Colorectal Anatomical Education

2021 ◽  
Vol 108 (Supplement_2) ◽  
Author(s):  
J Fletcher ◽  
T Heinze ◽  
T Wedel ◽  
D Miskovic

Abstract Introduction Cadaveric dissection remains an essential aspect of anatomical education but is not readily available to the majority of surgical trainees. 3D photogrammetry is the process of creating a 3D model from a series of 2D images and has tremendous potential in anatomical education. We describe a novel low-cost single-camera 3D photogrammetry technique to reconstruct cadaveric specimens as digital models. Method A formalin-preserved hemipelvis was mounted on a turntable. Photos were taken sequentially at 5° increments through 360° at three different fixed viewing angles (n = 216 photos) using a mirrorless camera with a 12-60 mm f/3.5-5.6 kit lens. Four surrounding LED standing lights were used to ensure diffuse ambient lighting of the specimen. Photos were imported into Agisoft Metashape software to generate a point cloud and produce the final virtual model composed of a polygon mesh. Results The specimen was successfully reconstructed and can be visualised at https://sketchfab.com/3d-models/pelvic-sidewall-b76450b787824c968f864791d47318f2. The total processing time was 20 hours. Conclusions Through this technique, we can produce accurate, interactive, and accessible 3D prosection models for surgical education. The method could be employed to establish a digital library of human anatomy for surgical training worldwide.
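The capture geometry described above — one photo every 5° through 360°, repeated at three fixed viewing angles — can be sketched as a simple enumeration. The elevation angles of 0°, 30° and 60° below are assumptions for illustration; the abstract does not state them:

```python
def turntable_capture_plan(increment_deg=5, viewing_angles_deg=(0, 30, 60)):
    """Enumerate (turntable_angle, camera_elevation) pairs for a single-camera
    turntable session: one photo every increment_deg through 360 degrees,
    repeated at each fixed camera elevation."""
    stops = int(360 / increment_deg)          # 72 stops per revolution
    return [(i * increment_deg, elev)
            for elev in viewing_angles_deg
            for i in range(stops)]

plan = turntable_capture_plan()
print(len(plan))  # 216 photos: 72 stops x 3 viewing angles, as in the abstract
```

The photo count n = 216 in the abstract follows directly from this geometry.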

Author(s):  
M. Abdelaziz ◽  
M. Elsayed

<p><strong>Abstract.</strong> Underwater photogrammetry in archaeology in Egypt is a completely new experience, applied for the first time on the submerged archaeological site of the lighthouse of Alexandria, situated on the eastern extremity of the ancient island of Pharos at the foot of Qaitbay Fort at a depth of 2 to 9 metres. In 2009/2010, the CEAlex launched a 3D photogrammetry data-gathering programme for the virtual reassembly of broken artefacts. In 2013 and the beginning of 2014, with the support of the Honor Frost Foundation, methods were developed and refined to acquire manual photographic data of the entire underwater site of Qaitbay using a DSLR camera and simple, low-cost materials, to obtain a digital surface model (DSM) of the submerged site of the lighthouse, and also to create 3D models of the objects themselves, such as statues, bases of statues and architectural elements. In this paper we present the methodology used for underwater data acquisition, data processing and modelling in order to generate a DSM of the submerged site of Alexandria’s ancient lighthouse. Until 2016, only about 7200 m<sup>2</sup> of the submerged site, which exceeds 13000 m<sup>2</sup>, was covered. One of our main objectives in this project is to georeference the site, since this would allow for a very precise 3D model and for correcting the orientation of the site with respect to real-world space.</p>
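Georeferencing of the kind described — tying model coordinates to surveyed real-world coordinates via control points — amounts to estimating a similarity transform. A minimal 2D sketch using the Umeyama least-squares method; the control-point values are hypothetical, not from the survey:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R and translation t so that dst ≈ s*R@src + t
    (least-squares, Umeyama's method). src, dst: (N, 2) control-point arrays."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))   # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / A.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

src = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1]])   # model coordinates
dst = np.array([[10.0, 5], [10, 7], [8, 5], [8, 7]]) # surveyed coordinates
s, R, t = similarity_transform(src, dst)
print(round(s, 6))  # 2.0: the model is half real-world scale in this example
```

A real georeferencing job would use 3D control points measured on the seabed, but the estimation principle is the same.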


Author(s):  
Agnieszka Chmurzynska ◽  
Karolina Hejbudzka ◽  
Andrzej Dumalski

In recent years, software and applications that can produce 3D models using low-cost methods have become very popular, and they can successfully compete with classical methods. Until now, the best-known technology for creating 3D models has been laser scanning; however, it remains expensive because of the price of the device and software, so the universality and accessibility of this method are very limited. Hence, new low-cost methods of obtaining the data needed to generate 3D models have appeared on the market, and creating 3D models has become much easier and accessible to a wider group of people. Because of their advantages, these methods can compete with laser scanning. One of them uses digital photos to create 3D models: available software allows us to reconstruct a model and its object geometry. The Kinect sensor, a device very popular in the gaming environment, can also be successfully used as an alternative method to create 3D models. This article presents basic issues of 3D modelling and the application of various devices that are commonly used in everyday life and can also be used to generate a 3D model. Their results are compared with a model derived from laser scanning. The acquired results, with graphic presentations and possible applications, are also presented in this paper.


2015 ◽  
Vol 4 (2) ◽  
pp. 48-57
Author(s):  
Naci Yastikli ◽  
Zehra Erisir ◽  
Pelin Altintas ◽  
Tugba Cak

Reverse engineering applications have gained great momentum in industrial production with developments in the fields of computer vision and computer-aided design (CAD). The goals of industrial reverse engineering include the reproduction of an existing product or spare part, the reproduction of an existing surface, and the elimination of defects in or improvement of an available product. The first and most important step in reverse engineering applications is the generation of a three-dimensional (3D) metric model of an existing product in a computer environment. After this stage, many operations, such as the preparation of moulds for mass production, performance testing, and the comparison of the existing product with other products and prototypes available on the market, are performed using the generated 3D models. In reverse engineering applications, laser scanner systems or digital terrestrial photogrammetry, also called contactless methods, are preferred for the generation of the 3D models. In particular, terrestrial photogrammetry has become a popular method since it requires only photographs for 3D line drawing, dense point cloud generation using image matching algorithms, and orthoimage generation, and has a low cost. In this paper, an industrial application of 3D information modelling is presented, concerning the measurement and 3D metric modelling of a ship model. The possible use of terrestrial photogrammetry in reverse engineering applications is investigated based on a low-cost photogrammetric system. The main aim was the generation of a dense point cloud and a 3D line drawing of the ship model using terrestrial photogrammetry, for the production of the ship at real size as a reverse engineering application. For this purpose, images were recorded with a digital SLR camera and orientations were performed. Then the 3D line drawing, point cloud and orthoimage generation were accomplished using PhotoModeler software. As a result of the proposed terrestrial photogrammetric steps, a dense point cloud with 0.5 mm spacing and an orthoimage were generated. The results obtained from the experimental study are discussed, and the possible use of the proposed methods for reverse engineering applications is evaluated.
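The point spacing achievable from images is bounded by the ground sample distance (GSD), the object-space footprint of one pixel. A back-of-the-envelope sketch; the pixel pitch, focal length and shooting distance below are illustrative values, not taken from the study:

```python
def ground_sample_distance(pixel_pitch_mm, focal_length_mm, distance_mm):
    """Object-space size of one pixel, by similar triangles:
    GSD = pixel_pitch * distance / focal_length (all lengths in mm)."""
    return pixel_pitch_mm * distance_mm / focal_length_mm

# e.g. a DSLR with ~5 micron pixels and a 50 mm lens, photographing at 2 m:
gsd = ground_sample_distance(0.005, 50, 2000)
print(gsd)  # 0.2 mm per pixel
```

A sub-millimetre GSD of this order is what makes a 0.5 mm spaced dense point cloud plausible from close-range DSLR imagery.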


2019 ◽  
Vol 10 (20) ◽  
pp. 70
Author(s):  
Gabriela Lorenzo ◽  
Luciano Lopez ◽  
Reinaldo A. Moralejo ◽  
Luis M. Del Papa

<p>Photogrammetry has recently been incorporated into archaeological research, replacing much more expensive techniques while still generating high resolution results. This technique converts two-dimensional (2D) images into three-dimensional (3D) models, allowing for the complex analysis of geometric and spatial information. It has become one of the most used methods for the 3D recording of cultural heritage objects. Among its possible archaeological uses are: digitally documenting an archaeological dig at low cost, aiding the decision-making process (Dellepiane et al., 2013); spatial surveying of archaeological sites; 3D model generation of archaeological objects and digitisation of archaeological collections (Adami et al., 2018; Aparicio Resco et al., 2014; Cots et al., 2018; Iturbe et al., 2018; Moyano, 2017).</p><p>The objective of this paper is to show the applicability of 3D models based on SfM (Structure from Motion) photogrammetry for archaeofauna analyses. We created 3D models of four camelid (Lama glama) bone elements (skull, radius-ulna, metatarsus and proximal phalange), aiming to demonstrate the advantages of 3D models over the 2D osteological guides usually used to perform anatomical and systematic determination of specimens.</p><p>Photographs were taken with a 16 Megapixel Nikon D5100 DSLR camera mounted on a tripod, with the distance to the object ranging between 1 and 3 m and using a 50 mm fixed lens. Each bone element was placed on a 1 m tall stool, with a green, high-contrast background. Photographs were shot at regular intervals of 10-15°, moving in a circle. Sets of around 30 pictures were taken from three circumferences at vertical angles of 0°, 45° and 60°. In addition, some detailed and overhead shots were taken from the dorsal and ventral sides of each bone element. Each set of dorsal and ventral photos was imported into Agisoft Photoscan Professional. A workflow (Fig. 4) of alignment, tie point matching, high resolution 3D dense point cloud construction, and creation of a triangular mesh covered with a photographic texture was performed. Finally, the dorsal and ventral models were aligned and merged, and the 3D model was accurately scaled. In order to determine the accuracy of the models, linear measurements were performed and compared to digital gauge measurements of the physical bones, obtaining a difference of less than 0.5 mm.</p><p>Furthermore, five archaeological specimens were selected to compare our 3D models with the most commonly used 2D camelid atlas (Pacheco Torres et al., 1986; Sierpe, 2015). In the particular case of archaeofaunal analyses, where anatomical and systematic determination of the specimens is the key, digital photogrammetry has proven to be more effective than traditional 2D documentation methods. This is due to the fact that 2D osteological guides based on drawings or pictures lack the necessary viewing angles to perform an adequate and complete diagnosis of the specimens. Using new technology can deliver better results, producing more comprehensive information about the bone element, with great detail and geometrical precision, not limited to pictures or drawings at particular angles. In this paper we show how 3D modelling with SfM-MVS (Structure from Motion-Multi View Stereo) allows the observation of an element from multiple angles. The possibility of zooming and rotating the models (Figs. 6g, 6h, 7d, 8c) improves the determination of the archaeological specimens.</p><p>Information on how the 3D model was produced is essential. A metadata file must include data on each bone element (anatomical and taxonomic) plus information on photographic quantity and quality. This file must also record the software used to produce the model and the parameters and resolution of each step of the workflow (number of 3D points, mesh vertices, texture resolution and quantification of the error of the model). In short, 3D models make excellent osteological guides.</p>
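The scaling step described above — fitting an SfM model, which comes out in arbitrary units, to a digital gauge measurement — reduces to multiplying by the ratio of true to measured reference length. A minimal sketch; the landmark coordinates and the 120 mm reference length are hypothetical:

```python
import numpy as np

def scale_model(points, measured_ref, true_ref_mm):
    """Scale an SfM model (arbitrary units) so that a known reference
    distance matches its real-world value."""
    return points * (true_ref_mm / measured_ref)

# hypothetical model-space landmarks on a bone; the first two span a
# distance known from the digital gauge to be 120 mm
pts = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 0.9, 0.0]])
ref = np.linalg.norm(pts[1] - pts[0])          # 1.5 model units
scaled = scale_model(pts, ref, 120.0)
print(np.linalg.norm(scaled[1] - scaled[0]))   # 120.0
```

Validation then consists of measuring other landmark pairs on the scaled model and comparing against gauge values, as the authors did to obtain their sub-0.5 mm differences.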


Author(s):  
G. Kontogianni ◽  
R. Chliverou ◽  
A. Koutsoudis ◽  
G. Pavlidis ◽  
A. Georgopoulos

The 3D digitisation of small artefacts is a very complicated procedure because of their complex morphological features, concavities, rich decoration, high-frequency colour changes in texture, increased accuracy requirements, etc. Image-based methods present a low-cost, fast and effective alternative, because laser scanning generally does not meet the accuracy requirements. A shallow Depth of Field (DoF) affects image-based 3D reconstruction and especially the point matching procedure. This is visible not only in the total number of corresponding points but also in the resolution of the produced 3D model. Extending the DoF is therefore an important task that should be incorporated into data collection to attain a better-quality image set and a better 3D model. An extension of the DoF can be achieved with many methods, in particular the focus stacking technique. In this paper, the focus stacking technique was tested in a real-world experiment to digitise a museum artefact in 3D. The experimental conditions included the use of a full-frame camera equipped with a normal lens (50 mm), with the camera placed close to the object. The artefact had already been digitised with a structured light system, and that model served as the reference against which the 3D models were compared; the results are presented.
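Focus stacking merges a bracket of frames taken at different focus distances into one all-in-focus image by keeping, per pixel, the value from the sharpest frame. A minimal grayscale sketch using Laplacian magnitude as the sharpness measure — a simplification of what dedicated stacking software does (edges wrap via np.roll, which is acceptable for a sketch):

```python
import numpy as np

def focus_stack(images):
    """Merge a focus bracket (k, h, w grayscale frames) into one image:
    for each pixel, keep the value from the frame with the highest local
    sharpness, measured as the absolute discrete Laplacian."""
    stack = np.asarray(images, dtype=float)
    lap = np.abs(
        np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1) +
        np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2) - 4 * stack
    )
    best = lap.argmax(axis=0)                  # index of sharpest frame per pixel
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
```

Real pipelines also align the frames first (focus breathing shifts the image slightly) and smooth the per-pixel selection map to avoid seams.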


Author(s):  
B. Sirmacek ◽  
R. Lindenbergh

Low-cost sensor-generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud on a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen example showcase, we classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ<sub>1</sub> = 0.44 m, σ<sub>1</sub> = 0.071 m) and (μ<sub>2</sub> = 0.025 m, σ<sub>2</sub> = 0.037 m) for the iPhone and TLS point clouds respectively. Our experimental results indicate the possible use of the proposed automatic 3D model generation framework for 3D urban map updating, fusion, detail enhancement, and quick or real-time change detection. However, further insight is needed into the circumstances required to guarantee successful point cloud generation from smartphone images.
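The reported figures — an outlier percentage and a mean point-to-point distance to the TLS cloud — are nearest-neighbour statistics. A brute-force sketch; real clouds would need a KD-tree, and the threshold and points below are illustrative, not the paper's:

```python
import numpy as np

def cloud_to_cloud_stats(source, reference, outlier_threshold):
    """For each source point, distance to its nearest reference point
    (brute force, O(N*M)). Returns (outlier percentage, mean distance
    of the inlier points)."""
    d = np.linalg.norm(source[:, None, :] - reference[None, :, :], axis=2).min(axis=1)
    outliers = d > outlier_threshold
    return 100.0 * outliers.mean(), d[~outliers].mean()

src = np.array([[0.0, 0, 0], [0, 0, 0.1], [5, 5, 5]])   # e.g. iPhone cloud
ref = np.array([[0.0, 0, 0]])                            # e.g. TLS cloud
pct, mean_d = cloud_to_cloud_stats(src, ref, outlier_threshold=1.0)
print(round(pct, 2), round(mean_d, 3))  # 33.33 0.05
```

The pairwise-distance broadcast is memory-hungry; for clouds of realistic size, a spatial index such as scipy's cKDTree would replace the inner min.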


Author(s):  
A. Dlesk ◽  
K. Vach ◽  
P. Holubec

Abstract. This paper shows the possibilities of using low-cost photogrammetry for interior mapping as a tool to gather fast and accurate data for 3D modelling and BIM. Creating a 3D model of a building interior with a high level of detail requires techniques such as laser scanning and photogrammetry. In the case of photogrammetry, it is possible to use standard cameras and SfM software to create an accurate point cloud which can be used for 3D modelling and then for BIM. Images captured indoors are often taken under low-light conditions, so the use of different exposures when capturing images of a building interior was tested. The frequently plain walls of a building interior mean that the images usually lack features, which makes their photogrammetric processing much more difficult; in some cases, the results of photogrammetric processing are poor and inaccurate. In this paper, an experiment on creating a 3D model of a building interior through photogrammetric processing of images was carried out. For this experiment, a digital camera with two different lenses (a 16 mm lens and a fisheye lens) was used, and different software packages were chosen for the photogrammetric processing. All the results were compared to each other and to laser scanning data of the interior. At the end of the paper, the advantages and disadvantages of the presented method are discussed.


Author(s):  
V. Katsichti ◽  
G. Kontogianni ◽  
A. Georgopoulos

Abstract. In archaeological excavations, many small fragments or artefacts are revealed whose fine details sometimes should be captured in 3D. In general, 3D documentation methods fall into two main categories: range-based modelling and image-based modelling. In range-based modelling, a laser scanner (time of flight, structured light, etc.) is used for raw data acquisition in order to create the 3D model of an object. This method is accurate enough but is still very expensive in terms of equipment. Image-based modelling, on the other hand, is affordable because the equipment required is merely a camera with an appropriate lens, and possibly a turntable and a tripod. In this case, the 3D model of an object is created by suitable processing of images taken around the object with large overlap. In this paper, emphasis is placed on the effectiveness of 3D models of frail archaeological finds originating from the palatial site of Ayios Vasileios in Laconia in the south-eastern Peloponnese, produced using low-cost equipment and methods. The 3D model is also produced using various, mainly freeware and hence low-cost, software packages, and the results are compared to those from a well-established commercial one.


Author(s):  
M. Canciani ◽  
E. Conigliaro ◽  
M. Del Grasso ◽  
P. Papalini ◽  
M. Saccone

The development of close-range photogrammetry has opened up many new possibilities for studying cultural heritage. 3D data acquired with conventional and low-cost cameras can be used to document and investigate the full appearance, materials and conservation status of a monument, to support the restoration process, and to identify intervention priorities. At the same time, 3D surveys collect a large amount of three-dimensional data for researchers to analyse, but there are very few options for 3D output. Augmented reality is one such output, using very low-cost technology yet producing very interesting results. Using simple mobile technology (for iPad and Android tablets) and shareware software (in the case presented, “Augment”), it is possible to share and visualise a large number of 3D models on one’s own device. The case study presented is part of an architecture graduate thesis carried out in Rome at the Department of Architecture of Roma Tre University. We developed a photogrammetric survey to study the Aurelian Wall at Castra Praetoria in Rome. The survey of 8000 square metres of surface allowed us to identify the stratigraphy and construction phases of a complex portion of the Aurelian Wall, especially around the northern door of the Castra. During this study, the data from the 3D survey (photogrammetric and topographic) were stored and used to create a reverse 3D model, or virtual reconstruction, of the northern door of the Castra. This virtual reconstruction shows the door in the Tiberian period; nowadays it is totally hidden by a curtain wall, but small and significant architectural details reveal its original appearance. The 3D model of the ancient walls was mapped with the exact type of bricks and mortar, and oriented and scaled according to the existing wall, for use in augmented reality. Finally, two kinds of application were developed: one on site, where the virtual reconstruction can be seen superimposed on the existing walls using image recognition, and one off-site, created with a poster to show the results during the graduation day.


Author(s):  
M. Mehranfar ◽  
H. Arefi ◽  
F. Alidoost

Abstract. This paper presents a projection-based method for 3D bridge modeling using dense point clouds generated from drone-based images. The proposed workflow consists of hierarchical steps including point cloud segmentation, modeling of individual elements, and merging of individual models to generate the final 3D model. First, a fuzzy clustering algorithm including the height values and geometrical-spectral features is employed to segment the input point cloud into the main bridge elements. In the next step, a 2D projection-based reconstruction technique is developed to generate a 2D model for each element. Next, the 3D models are reconstructed by extruding the 2D models orthogonally to the projection plane. Finally, the reconstruction process is completed by merging individual 3D models and forming an integrated 3D model of the bridge structure in a CAD format. The results demonstrate the effectiveness of the proposed method to generate 3D models automatically with a median error of about 0.025 m between the elements’ dimensions in the reference and reconstructed models for two different bridge datasets.
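The core of the projection-based reconstruction is the extrusion step: each segmented element is reduced to a 2D outline in its projection plane, then swept orthogonally to that plane to form a solid. A minimal sketch of the extrusion; the pier cross-section and depth are hypothetical, and a full implementation would also emit faces, not just vertices:

```python
import numpy as np

def extrude_2d_outline(outline_2d, depth):
    """Sweep a 2D outline (drawn in the z = 0 projection plane) orthogonally
    to that plane, producing the (2N, 3) vertices of the front and back
    faces of the resulting prism."""
    outline = np.asarray(outline_2d, dtype=float)
    front = np.column_stack([outline, np.zeros(len(outline))])
    back = np.column_stack([outline, np.full(len(outline), depth)])
    return np.vstack([front, back])

# a hypothetical 2 m x 1 m pier cross-section extruded 0.5 m:
verts = extrude_2d_outline([[0, 0], [2, 0], [2, 1], [0, 1]], 0.5)
print(verts.shape)  # (8, 3)
```

Merging the per-element prisms, each extruded in its own projection plane, then yields the integrated CAD model described in the abstract.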

