UNDERWATER PHOTOGRAMMETRY DIGITAL SURFACE MODEL (DSM) OF THE SUBMERGED SITE OF THE ANCIENT LIGHTHOUSE NEAR QAITBAY FORT IN ALEXANDRIA, EGYPT

Author(s):  
M. Abdelaziz
M. Elsayed

Abstract. Underwater photogrammetry in archaeology in Egypt is a completely new experience, applied for the first time on the submerged archaeological site of the lighthouse of Alexandria, situated on the eastern extremity of the ancient island of Pharos at the foot of Qaitbay Fort, at a depth of 2 to 9 metres. In 2009/2010, the CEAlex launched a 3D photogrammetry data-gathering programme for the virtual reassembly of broken artefacts. In 2013 and the beginning of 2014, with the support of the Honor Frost Foundation, methods were developed and refined to acquire manual photographic data of the entire underwater site of Qaitbay using a DSLR camera and simple, low-cost materials, in order to obtain a digital surface model (DSM) of the submerged site of the lighthouse and to create 3D models of the objects themselves, such as statues, bases of statues and architectural elements. In this paper we present the methodology used for underwater data acquisition, data processing and modelling in order to generate a DSM of the submerged site of Alexandria's ancient lighthouse. Until 2016, only about 7200 m² of the submerged site, which extends over more than 13,000 m², had been covered. One of our main objectives in this project is to georeference the site, since this would allow for a very precise 3D model and for correcting the orientation of the site with respect to real-world space.
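
The georeferencing objective mentioned at the end of the abstract is typically met by tying the local photogrammetric model to control points surveyed in a real-world coordinate system. The paper does not reproduce its implementation; the sketch below, with entirely hypothetical control-point coordinates, shows how a seven-parameter similarity (Helmert) transform can be estimated with NumPy using Umeyama's closed-form solution.

```python
# Illustrative sketch (not the authors' workflow): georeference a local
# photogrammetric model by fitting a similarity (Helmert) transform between
# control points in the model's local frame and the same points surveyed in
# real-world coordinates. All coordinates below are hypothetical placeholders.
import numpy as np

def similarity_transform(local_pts, world_pts):
    """Return scale s, rotation R (3x3), translation t so that
    world ≈ s * R @ local + t (Umeyama's least-squares solution)."""
    X = np.asarray(local_pts, dtype=float)
    Y = np.asarray(world_pts, dtype=float)
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mu_x, Y - mu_y
    cov = Yc.T @ Xc / len(X)                  # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # keep a proper rotation (det = +1)
    R = U @ S @ Vt
    var_x = (Xc ** 2).sum() / len(X)
    s = np.trace(np.diag(D) @ S) / var_x
    t = mu_y - s * R @ mu_x
    return s, R, t

# Hypothetical control points: local model frame vs. surveyed coordinates (m).
local = [[0.0, 0.0, 0.0], [10.2, 0.1, -0.3], [0.4, 9.8, -0.2], [10.0, 10.1, -0.5]]
world = [[435120.0, 3457890.0, -4.1], [435130.1, 3457890.5, -4.4],
         [435120.6, 3457899.7, -4.3], [435130.3, 3457900.2, -4.6]]

s, R, t = similarity_transform(local, world)
residuals = np.asarray(world) - (s * (R @ np.asarray(local).T).T + t)
print("scale:", s, "RMS residual (m):", np.sqrt((residuals ** 2).mean()))
```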

2021
Vol 108 (Supplement_2)
Author(s):
J Fletcher
T Heinze
T Wedel
D Miskovic

Abstract Introduction Cadaveric dissection remains an essential aspect of anatomical education but is not readily available to the majority of surgical trainees. 3D photogrammetry is the process of creating a 3D model from a series of 2D images and has tremendous potential in anatomical education. We describe a novel low-cost single-camera 3D photogrammetry technique to reconstruct cadaveric specimens as digital models. Method A formalin-preserved hemipelvis was mounted on a turntable. Photos were taken sequentially at 5° increments through 360° at three different fixed viewing angles (n = 216 photos) using a mirrorless camera with a 12–60 mm f/3.5–5.6 kit lens. Four surrounding LED standing lights were used to ensure diffuse ambient lighting of the specimen. Photos were imported into Agisoft Metashape software in order to generate a point cloud and produce the final virtual model composed of a polygon mesh. Results The specimen was successfully reconstructed and can be visualised at https://sketchfab.com/3d-models/pelvic-sidewall-b76450b787824c968f864791d47318f2. The total processing time was 20 hours. Conclusions Through this technique, we can produce accurate, interactive, and accessible 3D prosection models for surgical education. The method could be employed to establish a digital library of human anatomy for surgical training worldwide.
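
The abstract names Agisoft Metashape but not the processing script. For readers who want to automate a comparable photos-to-textured-mesh pipeline, a minimal sketch using the Metashape Professional Python API might look as follows; the paths and downscale factors are assumptions, and some method names differ between Metashape versions.

```python
# Minimal batch sketch of a photos -> textured mesh pipeline with the
# Agisoft Metashape Professional Python API. Paths and parameters are
# assumptions; e.g. Metashape 2.x renamed buildDenseCloud() to buildPointCloud().
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("turntable_photos/*.JPG")))  # e.g. 216 images

chunk.matchPhotos(downscale=1, generic_preselection=True)     # tie points
chunk.alignCameras()                                          # camera poses + sparse cloud
chunk.buildDepthMaps(downscale=2)
chunk.buildDenseCloud()      # on Metashape >= 2.0 use chunk.buildPointCloud()
chunk.buildModel()           # polygon mesh (default source varies by version)
chunk.buildUV()
chunk.buildTexture()

doc.save("hemipelvis.psx")
chunk.exportModel("hemipelvis.obj")   # e.g. for upload to Sketchfab
```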


Author(s):  
Agnieszka Chmurzynska
Karolina Hejbudzka
Andrzej Dumalski

During the last years, software and applications that can produce 3D models using low-cost methods have become very popular. What is more, they can successfully compete with the classical methods. The most well-known technology used to create 3D models has so far been laser scanning. However, it is still expensive because of the price of the device and software, so the universality and accessibility of this method are very limited. Hence, new low-cost methods of obtaining the data needed to generate 3D models have appeared on the market, and creating 3D models has become much easier and accessible to a wider group of people. Because of their advantages, they can compete with laser scanning. One of the methods uses digital photos to create 3D models: available software allows us to create a model of the object's geometry. The Kinect sensor, a device very popular in the gaming environment, can also be successfully used as a different method to create 3D models. This article presents basic issues of 3D modelling and the application of various devices which are commonly used in everyday life and can also be used to generate a 3D model. Their results are compared with a model derived from laser scanning. The acquired results, with graphic presentations and possible ways of application, are also presented in this paper.
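
As background to the Kinect-based approach mentioned above: a depth sensor of this class delivers a depth image that can be back-projected into a 3D point cloud with the pinhole camera model. A minimal NumPy sketch follows, using rough, assumed Kinect-like intrinsics rather than calibrated values.

```python
# Back-project a depth image (millimetres) into a 3D point cloud with the
# pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
# The intrinsics are assumed, Kinect-like values, not a real calibration.
import numpy as np

def depth_to_point_cloud(depth_mm, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(float) / 1000.0           # metres
    valid = z > 0                                  # 0 = no depth reading
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)   # (N, 3)

# Example with a synthetic 480x640 depth frame (flat wall 1.5 m away).
depth = np.full((480, 640), 1500, dtype=np.uint16)
cloud = depth_to_point_cloud(depth)
print(cloud.shape)   # roughly 307200 points
```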


2019
Vol 10 (20)
pp. 70
Author(s):
Gabriela Lorenzo
Luciano Lopez
Reinaldo A. Moralejo
Luis M. Del Papa

Photogrammetry has recently been incorporated into archaeological research, replacing much more expensive techniques while still generating high-resolution results. This technique converts two-dimensional (2D) images into three-dimensional (3D) models, allowing for the complex analysis of geometric and spatial information. It has become one of the most used methods for the 3D recording of cultural heritage objects. Among its possible archaeological uses are: digitally documenting an archaeological dig at low cost, aiding the decision-making process (Dellepiane et al., 2013); spatial surveying of archaeological sites; 3D model generation of archaeological objects; and digitisation of archaeological collections (Adami et al., 2018; Aparicio Resco et al., 2014; Cots et al., 2018; Iturbe et al., 2018; Moyano, 2017).

The objective of this paper is to show the applicability of 3D models based on SfM (Structure from Motion) photogrammetry for archaeofauna analyses. We created 3D models of four camelid (Lama glama) bone elements (skull, radius-ulna, metatarsus and proximal phalange), aiming to demonstrate the advantages of 3D models over the 2D osteological guides that are usually used to perform anatomical and systematic determination of specimens.

Photographs were taken with a 16-megapixel Nikon D5100 DSLR camera mounted on a tripod, with the distance to the object ranging between 1 and 3 m and using a 50 mm fixed lens. Each bone element was placed on a 1 m tall stool, with a green, high-contrast background. Photographs were shot at regular intervals of 10–15°, moving in a circle. Sets of around 30 pictures were taken from three circumferences at vertical angles of 0°, 45° and 60°. In addition, some detailed and overhead shots were taken from the dorsal and ventral sides of each bone element. Each set of dorsal and ventral photos was imported into Agisoft Photoscan Professional. A workflow (Fig. 4) of alignment, tie point matching, high-resolution 3D dense point cloud construction, and creation of a triangular mesh covered with a photographic texture was performed. Finally, the dorsal and ventral models were aligned and merged, and the 3D model was accurately scaled. In order to determine the accuracy of the models, linear measurements were performed and compared to digital gauge measurements of the physical bones, obtaining a difference of less than 0.5 mm.

Furthermore, five archaeological specimens were selected to compare our 3D models with the most commonly used 2D camelid atlases (Pacheco Torres et al., 1986; Sierpe, 2015). In the particular case of archaeofaunal analyses, where anatomical and systematic determination of the specimens is the key, digital photogrammetry has proven to be more effective than traditional 2D documentation methods. This is because 2D osteological guides based on drawings or pictures lack the viewing angles necessary to perform an adequate and complete diagnosis of the specimens. Using new technology can deliver better results, producing more comprehensive information on the bone element, with great detail and geometrical precision, and not limited to pictures or drawings at particular angles. In this paper we show how 3D modelling with SfM-MVS (Structure from Motion-Multi View Stereo) allows the observation of an element from multiple angles. The possibility of zooming and rotating the models (Figs. 6g, 6h, 7d, 8c) improves the determination of the archaeological specimens.

Information on how the 3D model was produced is essential. A metadata file must include data on each bone element (anatomical and taxonomic) plus information on photograph quantity and quality. This file must also record the software used to produce the model and the parameters and resolution of each step of the workflow (number of 3D points, mesh vertices, texture resolution and quantification of the error of the model). In short, 3D models are excellent tools for osteological guides.
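
The dorsal and ventral partial models are aligned and merged inside Agisoft Photoscan; that step is not scripted in the paper. Purely as an open-source illustration of the same operation, two roughly pre-aligned partial meshes could be registered with point-to-point ICP in Open3D (file names, sampling density and the correspondence threshold are assumptions).

```python
# Illustrative alternative to Photoscan's chunk alignment: register the
# ventral partial model onto the dorsal one with ICP and merge them.
# Assumes the two models are already roughly pre-aligned; file names,
# sampling density and the ICP search radius are assumptions.
import open3d as o3d

dorsal = o3d.io.read_triangle_mesh("dorsal.ply")
ventral = o3d.io.read_triangle_mesh("ventral.ply")

# Sample point clouds from the meshes for registration.
src = ventral.sample_points_uniformly(number_of_points=50000)
dst = dorsal.sample_points_uniformly(number_of_points=50000)

result = o3d.pipelines.registration.registration_icp(
    src, dst,
    max_correspondence_distance=2.0,   # search radius in model units (assumed mm)
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

ventral.transform(result.transformation)   # bring ventral into the dorsal frame
merged = dorsal + ventral                  # concatenate the two meshes
o3d.io.write_triangle_mesh("merged_bone.ply", merged)
print("ICP fitness:", result.fitness, "RMSE:", result.inlier_rmse)
```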


Author(s):  
G. Kontogianni
R. Chliverou
A. Koutsoudis
G. Pavlidis
A. Georgopoulos

The 3D digitisation of small artefacts is a very complicated procedure because of their complex morphological features, concavities, rich decorations, high frequency of colour changes in texture, increased accuracy requirements, etc. Image-based methods present a low-cost, fast and effective alternative, because laser scanning does not in general meet the accuracy requirements. A shallow Depth of Field (DoF) affects image-based 3D reconstruction and especially the point matching procedure. This is visible not only in the total number of corresponding points but also in the resolution of the produced 3D model. Extending the DoF is therefore a very important task that should be incorporated into the data collection to attain a better-quality image set and a better 3D model. An extension of the DoF can be achieved with many methods, and especially with the focus stacking technique. In this paper, the focus stacking technique was tested in a real-world experiment to digitise a museum artefact in 3D. The experimental conditions included the use of a full-frame camera equipped with a normal lens (50 mm), with the camera placed close to the object. The artefact had already been digitised with a structured light system, and that model served as the reference against which the 3D models were compared; the results are presented.
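
The focus stacking itself is done before reconstruction; its core idea (for every pixel, keep the contribution from the frame in which it is sharpest) can be sketched with OpenCV as below. This is a naive merge that assumes the frames are already aligned; dedicated stacking tools also align and blend them.

```python
# Naive focus-stacking sketch: for each pixel, take the value from the
# exposure in which the local Laplacian response (a sharpness proxy) is
# highest. Assumes the input frames are already aligned; file names are
# placeholders.
import glob
import cv2
import numpy as np

frames = [cv2.imread(p) for p in sorted(glob.glob("stack/*.jpg"))]

sharpness = []
for img in frames:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
    # Smooth the absolute response so the per-pixel choice is locally consistent.
    sharpness.append(cv2.GaussianBlur(np.abs(lap), (9, 9), 0))

best = np.argmax(np.stack(sharpness), axis=0)   # index of sharpest frame per pixel
stack = np.stack(frames)                        # (N, H, W, 3)
rows, cols = np.indices(best.shape)
fused = stack[best, rows, cols]                 # pick the sharpest pixel values

cv2.imwrite("focus_stacked.jpg", fused)
```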


Author(s):  
V. Katsichti
G. Kontogianni
A. Georgopoulos

Abstract. In archaeological excavations, many small fragments or artefacts are revealed whose fine details sometimes need to be captured in 3D. In general, 3D documentation methods fall into two main categories: range-based modelling and image-based modelling. In range-based modelling, a laser scanner (time of flight, structured light, etc.) is used for the raw data acquisition in order to create the 3D model of an object. This method is accurate enough but is still very expensive in terms of equipment. On the other hand, image-based modelling is affordable, because the required equipment is merely a camera with an appropriate lens, and possibly a turntable and a tripod. In this case, the 3D model of an object is created by suitable processing of images taken around the object with a large overlap. In this paper, emphasis is given to the effectiveness of 3D models of frail archaeological finds originating from the palatial site of Ayios Vasileios in Laconia in the south-eastern Peloponnese, produced using low-cost equipment and methods. The 3D model is also produced using various, mainly freeware, hence low-cost, software packages, and the results are compared to those from a well-established commercial one.
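
The comparison between the freeware-derived models and the commercial reference reduces to cloud-to-cloud distances once the models share a coordinate frame and scale. A minimal Open3D sketch is given below (file names are placeholders, and prior registration of the two models is assumed).

```python
# Quantify how far a low-cost reconstruction deviates from a reference model:
# sample both meshes and compute nearest-neighbour distances.
# Assumes the two models are already in the same coordinate frame and scale;
# file names are placeholders.
import numpy as np
import open3d as o3d

test = o3d.io.read_triangle_mesh("freeware_model.ply").sample_points_uniformly(100000)
ref = o3d.io.read_triangle_mesh("commercial_model.ply").sample_points_uniformly(100000)

dists = np.asarray(test.compute_point_cloud_distance(ref))   # in model units
print(f"mean {dists.mean():.4f}, RMS {np.sqrt((dists**2).mean()):.4f}, "
      f"95th percentile {np.percentile(dists, 95):.4f}")
```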


Author(s):  
M. Canciani
E. Conigliaro
M. Del Grasso
P. Papalini
M. Saccone

The development of close-range photogrammetry has opened up many new possibilities for studying cultural heritage. 3D data acquired with conventional and low-cost cameras can be used to document and investigate full appearance, materials and conservation status, to help the restoration process and to identify intervention priorities. At the same time, a 3D survey collects a great deal of three-dimensional data that researchers can analyse, but there are very few possibilities for 3D output. Augmented reality is one such output, a very low-cost technology with very interesting results. Using simple mobile technology (for iPad and Android tablets) and shareware software ("Augment", in the case presented) it is possible to share and visualize a large number of 3D models on one's own device. The case study presented is part of an architecture graduate thesis carried out at the Department of Architecture of Roma Tre University in Rome. We developed a photogrammetric survey to study the Aurelian Wall at Castra Praetoria in Rome. The survey of 8000 square metres of surface made it possible to identify the stratigraphy and construction phases of a complex portion of the Aurelian Wall, especially around the northern door of the Castra. During this study, the data coming from the 3D survey (photogrammetric and topographic) were stored and used to create a reverse 3D model, or virtual reconstruction, of the northern door of the Castra. This virtual reconstruction shows the door in the Tiberian period; nowadays it is totally hidden by a curtain wall, but small and significant architectural details reveal its original features. The 3D model of the ancient walls was mapped with the exact type of bricks and mortar, and oriented and scaled according to the existing wall in order to use augmented reality. Finally, two kinds of application were developed: one on site, where the virtual reconstruction can be seen superimposed on the existing walls using image recognition; and, to show the results during the graduation day as well, the same application was created for off-site conditions using a poster.


2016
Vol 41 (2)
pp. 210-214
Author(s):
Amaia Hernandez
Edward Lemaire

Background and Aim: Prosthetic CAD/CAM systems require accurate 3D limb models; however, difficulties arise when working from the person's socket, since current 3D scanners struggle to scan socket interiors. While dedicated scanners exist, they are expensive and their cost may be prohibitive for a limited number of scans per year. A low-cost and accessible photogrammetry method for socket interior digitization is proposed, using a smartphone camera and cloud-based photogrammetry services. Technique: Fifteen two-dimensional images of the socket's interior are captured using a smartphone camera. A 3D model is generated using cloud-based software. Linear measurements were compared between the sockets and the related 3D models. Discussion: 3D reconstruction accuracy averaged 2.6 ± 2.0 mm and 0.086 ± 0.078 L, which was less accurate than models obtained by high-quality 3D scanners. However, this method provides a viable 3D digital socket reproduction that is accessible and low-cost, after processing in prosthetic CAD software. Clinical relevance: The described method provides a low-cost and accessible means to digitize a socket interior for use in prosthetic CAD/CAM systems, employing a smartphone camera and cloud-based photogrammetry software.
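
The reported accuracy figures come from comparing linear and volumetric measures of each socket with its 3D model. As an illustration only, not the authors' procedure, such measures could be extracted from the reconstructed mesh with the trimesh library; the file name and the assumption of a watertight, millimetre-scaled mesh are mine.

```python
# Extract simple validation measures (overall extents and enclosed volume)
# from a reconstructed socket mesh for comparison against physical
# measurements of the socket. The file name is a placeholder; volume is
# only meaningful if the mesh is watertight (e.g. after capping the opening).
import trimesh

mesh = trimesh.load("socket_interior.stl")

extents = mesh.bounding_box.extents          # axis-aligned width/depth/height
print("extents (model units):", extents)

if mesh.is_watertight:
    litres = mesh.volume / 1e6               # mm^3 -> litres, assuming mm units
    print(f"enclosed volume: {litres:.3f} L")
else:
    print("mesh is not watertight; volume not computed")
```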


2021
pp. 000348942110240
Author(s):
Peng You
Yi-Chun Carol Liu
Rodrigo C. Silva

Objective: Microtia reconstruction is technically challenging due to the intricate contours of the ear. It is common practice to use a two-dimensional tracing of the patient’s normal ear as a template for the reconstruction of the affected side. Recent advances in three-dimensional (3D) surface scanning and printing have expanded the ability to create surgical models preoperatively. This study aims to describe a simple and affordable process to fabricate patient-specific 3D ear models for use in the operating room. Study design: Applied basic research on a novel 3D optical scanning and fabrication pathway for microtia reconstruction. Setting: Tertiary care university hospital. Methods: Optical surface scanning of the patient’s normal ear was completed using a smartphone with facial recognition capability. The Heges application used the phone’s camera to capture the 3D image. The 3D model was digitally isolated and mirrored using the Meshmixer software and printed with a 3D printer (Monoprice Select Mini V2) using polylactic acid filaments. Results: The 3D model of the ear served as a helpful intraoperative reference and an adjunct to the traditional 2D template. Collectively, time for image acquisition, editing, and fabrication was approximately 3.5 hours. The upfront cost was around $210, and the recurring cost was approximately $0.35 per ear model. Conclusion: A novel, low-cost approach to fabricate customized 3D models of the ear is introduced. It is feasible to create individualized 3D models using currently available consumer technology. The low barrier to entry raises the possibility for clinicians to incorporate 3D printing into various clinical applications.
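
The digital mirroring performed in Meshmixer amounts to reflecting the scanned ear across the sagittal plane and restoring consistent face orientation. A sketch of the equivalent operation with the trimesh library follows, offered only to illustrate the step; the file names are assumed.

```python
# Mirror a scanned ear mesh across the YZ plane (negate X) so the normal ear
# becomes a template for the contralateral side, then fix the face winding
# that the reflection inverts. File names are placeholders; the authors
# performed this step in the Meshmixer GUI.
import numpy as np
import trimesh

ear = trimesh.load("normal_ear_scan.stl")

reflect_x = np.diag([-1.0, 1.0, 1.0, 1.0])   # 4x4 reflection across the YZ plane
ear.apply_transform(reflect_x)
ear.invert()                                  # reflection flips winding; restore outward normals

ear.export("mirrored_ear_template.stl")       # ready for slicing and 3D printing
```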


Author(s):  
E. Prado
M. Gómez-Ballesteros
A. Cobo
F. Sánchez
A. Rodriguez-Basalo
...  

Abstract. 3D reconstruction and virtual reality (VR) technology provide many opportunities for the documentation and dissemination of underwater cultural heritage. Advances in the development of underwater exploration technology have allowed, for the first time, an accurate reconstruction of a complete 3D model of the cargo ship Río Miera in the Cantabrian Sea. Sunk on 6 December 1951 after a violent collision, the Río Miera rests on a sandy bottom about 40 metres deep, very close to the Cantabrian coast. Located in an area of strong currents, it is a classic destination for the region's most experienced divers. The survey was carried out during the summer aboard the R/V Ramón Margalef of the IEO, acquiring acoustic data with multibeam echo sounders and hundreds of images captured by a remotely piloted underwater vehicle. The campaign is part of the PhotoMARE project (Underwater Photogrammetry for MArine Renewable Energy). This work describes the workflow regarding the survey, image and acoustic data acquisition, data processing, colour enhancement of the optical 3D point cloud, and the procedure for merging the acoustic and optical datasets to obtain a complete 3D model of the wreck of the Río Miera in the Cantabrian Sea. Through this project, the Spanish Institute of Oceanography (IEO) has advanced, by combining acoustic and image methods, in the generation of 3D models of archaeological sites and submerged structures.
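
The colour enhancement applied to the optical data is described but not specified. One common choice for greenish, low-contrast underwater imagery is grey-world white balance plus contrast-limited adaptive histogram equalisation (CLAHE) on the lightness channel, sketched below as a generic illustration rather than the PhotoMARE pipeline.

```python
# Generic underwater-image contrast enhancement: grey-world white balance
# followed by CLAHE on the L channel in CIELAB space, so chroma is preserved.
# Offered as an illustration of this preprocessing step, not as the project's
# actual pipeline; file names are placeholders.
import cv2
import numpy as np

img = cv2.imread("rov_frame.jpg")

# Grey-world white balance: scale each channel so its mean matches the global mean.
means = img.reshape(-1, 3).mean(axis=0)
img = np.clip(img * (means.mean() / means), 0, 255).astype(np.uint8)

# CLAHE on the lightness channel only.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.merge([clahe.apply(l), a, b])

cv2.imwrite("rov_frame_enhanced.jpg", cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR))
```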


Author(s):  
A.-M. Boutsi
C. Ioannidis
S. Soile

Abstract. In the last decade, 3D datasets in the Cultural Heritage field have become extremely rich and highly detailed due to the evolution of the technologies they derive from. However, their online deployment, both for scientific and general-public purposes, is usually deficient in user interaction and multimedia integration. A single solution that efficiently addresses these issues is presented in this paper. The developed framework provides an interactive and lightweight visualization of high-resolution 3D models in a web browser. It is based on the 3D Heritage Online Presenter (3DHOP) and the Three.js library, implemented on top of the WebGL API. 3DHOP's capabilities are fully exploited and enhanced with new, high-level functionalities. The approach is especially suited to complex geometry and is adapted to archaeological and architectural environments. Thus, the multi-dimensional documentation of the archaeological site of Meteora, in central Greece, is chosen as the case study. Various navigation paradigms are implemented and the data structure is enriched with the incorporation of multiple 3D model viewers. Furthermore, a metadata repository, comprising ortho-images, photographic documentation, video and text, is accessed straightforwardly through the inspection of the main 3D scene of Meteora by a system of interconnections.

