Fotogrametría SFM aplicada a la determinación taxonómica de restos arqueofaunísticos (SfM photogrammetry applied to the taxonomic determination of archaeofaunal remains)

2019 ◽  
Vol 10 (20) ◽  
pp. 70
Author(s):  
Gabriela Lorenzo ◽  
Luciano Lopez ◽  
Reinaldo A. Moralejo ◽  
Luis M. Del Papa

Photogrammetry has recently been incorporated into archaeological research, replacing much more expensive techniques while still generating high-resolution results. This technique converts two-dimensional (2D) images into three-dimensional (3D) models, allowing for the complex analysis of geometric and spatial information. It has become one of the most used methods for the 3D recording of cultural heritage objects. Among its possible archaeological uses are: digitally documenting an archaeological dig at low cost, aiding the decision-making process (Dellepiane et al., 2013); spatial surveying of archaeological sites; 3D model generation of archaeological objects and digitisation of archaeological collections (Adami et al., 2018; Aparicio Resco et al., 2014; Cots et al., 2018; Iturbe et al., 2018; Moyano, 2017).

The objective of this paper is to show the applicability of 3D models based on SfM (Structure from Motion) photogrammetry for archaeofauna analyses. We created 3D models of four camelid (Lama glama) bone elements (skull, radius-ulna, metatarsus and proximal phalange), aiming to demonstrate the advantages of 3D models over 2D osteological guides, which are usually used to perform anatomical and systematic determination of specimens.

Photographs were taken with a 16-megapixel Nikon D5100 DSLR camera mounted on a tripod, with the distance to the object ranging between 1 and 3 m and using a 50 mm fixed lens. Each bone element was placed on a 1 m tall stool against a green, high-contrast background. Photographs were shot at regular intervals of 10–15°, moving in a circle. Sets of around 30 pictures were taken from three circumferences at vertical angles of 0°, 45° and 60°. In addition, some detailed and overhead shots were taken from the dorsal and ventral sides of each bone element. Each set of dorsal and ventral photos was imported into Agisoft Photoscan Professional. A workflow (Fig. 4) of alignment, tie point matching, high-resolution 3D dense point cloud construction, and creation of a triangular mesh covered with a photographic texture was performed. Finally, the dorsal and ventral models were aligned and merged, and the 3D model was accurately scaled. In order to determine the accuracy of the models, linear measurements were performed and compared to digital gauge measurements of the physical bones, obtaining a difference of less than 0.5 mm.

Furthermore, five archaeological specimens were selected to compare our 3D models with the most commonly used 2D camelid atlases (Pacheco Torres et al., 1986; Sierpe, 2015). In the particular case of archaeofaunal analyses, where anatomical and systematic determination of the specimens is the key, digital photogrammetry has proven to be more effective than traditional 2D documentation methods. This is because 2D osteological guides based on drawings or pictures lack the necessary viewing angles to perform an adequate and complete diagnosis of the specimens. Using new technology can deliver better results, producing more comprehensive information about the bone element, with great detail and geometrical precision, not limited to pictures or drawings at particular angles. In this paper we show how 3D modelling with SfM-MVS (Structure from Motion-Multi View Stereo) allows the observation of an element from multiple angles. The possibility of zooming and rotating the models (Figs. 6g, 6h, 7d, 8c) improves the determination of the archaeological specimens.

Information on how the 3D model was produced is essential. A metadata file must include data on each bone element (anatomical and taxonomic) plus information on photographic quantity and quality. This file must also record the software used to produce the model and the parameters and resolution of each step of the workflow (number of 3D points, mesh vertices, texture resolution and quantification of the error of the model), as sketched below. In short, 3D models are excellent tools for osteological guides.
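
The paper does not prescribe a file format for this metadata, so the following Python sketch is only an illustration of the kind of record described above; the JSON layout, field names and all numeric values are assumptions, not the authors' schema.

```python
# Illustrative sketch only: the field names and values are invented to show
# one possible way to record the metadata described in the abstract.
import json

record = {
    "specimen": {
        "element": "proximal phalange",      # anatomical determination
        "taxon": "Lama glama",               # taxonomic determination
    },
    "photography": {
        "camera": "Nikon D5100 (16 MP)",
        "lens_mm": 50,
        "n_images": 95,                      # hypothetical count
    },
    "processing": {
        "software": "Agisoft Photoscan Professional",
        "dense_cloud_points": 4_200_000,     # hypothetical values
        "mesh_vertices": 850_000,
        "texture_resolution_px": 4096,
        "scale_error_mm": 0.4,               # vs. digital gauge measurement
    },
}

with open("phalange_metadata.json", "w", encoding="utf-8") as fh:
    json.dump(record, fh, indent=2, ensure_ascii=False)
```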

2016 ◽  
Vol 41 (2) ◽  
pp. 210-214 ◽  
Author(s):  
Amaia Hernandez ◽  
Edward Lemaire

Background and Aim: Prosthetic CAD/CAM systems require accurate 3D limb models; however, difficulties arise when working from the person’s socket since current 3D scanners have difficulties scanning socket interiors. While dedicated scanners exist, they are expensive and the cost may be prohibitive for a limited number of scans per year. A low-cost and accessible photogrammetry method for socket interior digitization is proposed, using a smartphone camera and cloud-based photogrammetry services. Technique: Fifteen two-dimensional images of the socket’s interior are captured using a smartphone camera. A 3D model is generated using cloud-based software. Linear measurements were compared between the sockets and the related 3D models. Discussion: 3D reconstruction accuracy averaged 2.6 ± 2.0 mm and 0.086 ± 0.078 L, which is less accurate than models obtained by high-quality 3D scanners. However, this method provides a viable 3D digital socket reproduction that is accessible and low-cost, after processing in prosthetic CAD software. Clinical relevance: The described method provides a low-cost and accessible means to digitize a socket interior for use in prosthetic CAD/CAM systems, employing a smartphone camera and cloud-based photogrammetry software.
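
A minimal sketch of the kind of accuracy check described above: paired linear measurements taken on the physical socket and on the reconstructed 3D model, summarized as mean ± SD. The measurement values are invented for illustration; the paper reports 2.6 ± 2.0 mm.

```python
# Paired comparison of physical vs. reconstructed linear measurements
# (hypothetical numbers; not data from the paper).
import numpy as np

socket_mm = np.array([120.4, 98.7, 143.2, 110.5])    # gauge values on the socket
model_mm = np.array([122.1, 97.0, 146.8, 108.9])      # same dimensions on the 3D model

diff = np.abs(model_mm - socket_mm)
print(f"mean error: {diff.mean():.1f} mm, SD: {diff.std(ddof=1):.1f} mm")
```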


Author(s):  
M. Abdelaziz ◽  
M. Elsayed

Abstract. Underwater photogrammetry in archaeology in Egypt is a completely new experience, applied for the first time on the submerged archaeological site of the lighthouse of Alexandria, situated on the eastern extremity of the ancient island of Pharos at the foot of Qaitbay Fort at a depth of 2 to 9 metres. In 2009/2010, the CEAlex launched a 3D photogrammetry data-gathering programme for the virtual reassembly of broken artefacts. In 2013 and the beginning of 2014, with the support of the Honor Frost Foundation, methods were developed and refined to acquire manual photographic data of the entire underwater site of Qaitbay using a DSLR camera and simple, low-cost materials, to obtain a digital surface model (DSM) of the submerged site of the lighthouse, and also to create 3D models of the objects themselves, such as statues, bases of statues and architectural elements. In this paper we present the methodology used for underwater data acquisition, data processing and modelling in order to generate a DSM of the submerged site of Alexandria’s ancient lighthouse. Until 2016, only about 7200 m² of the submerged site, which exceeds 13000 m², had been covered. One of our main objectives in this project is to georeference the site, since this would allow for a very precise 3D model and for correcting the orientation of the site with respect to real-world space.
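
Georeferencing a local SfM model is commonly done by estimating a 7-parameter similarity (Helmert) transform from control points surveyed in the target coordinate system. The sketch below shows that generic step with a least-squares (Umeyama-style) solution in NumPy; it is not necessarily the workflow used by the authors, and the coordinates are invented.

```python
# Generic similarity-transform georeferencing sketch (not the authors' method).
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale s, rotation R, translation t with dst ≈ s*R@src + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - src_c, dst - dst_c
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:          # avoid a reflection
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    t = dst_c - s * R @ src_c
    return s, R, t

local = np.array([[0.0, 0.0, 0.0], [5.1, 0.2, -0.3], [2.4, 6.8, 0.1], [7.0, 7.2, -1.0]])
world = local * 1.02 + np.array([331200.0, 3456700.0, -5.0])   # fake surveyed coords
s, R, t = similarity_transform(local, world)
georeferenced = (s * (R @ local.T)).T + t
print(np.abs(georeferenced - world).max())   # residual check
```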


Author(s):  
J. Kang ◽  
I. Lee

Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and these spaces are increasingly connected to public transport such as subway and train stations. As a result, many outdoor activities are shifting into indoor spaces. Constant technological development has also raised people's expectations of services such as location-awareness services in indoor spaces. It is therefore necessary to develop a low-cost system to create 3D models of indoor spaces for services based on indoor models. In this paper, we introduce a rotating stereo frame camera system that carries two cameras and use it to generate an indoor 3D model. First, we selected a test site and acquired images eight times during one day at different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of the data and choose several suitable combinations as input. Next, we generated a 3D model of the test site using commercial software with the previously chosen input data. The last part of the process was to evaluate the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and to generate a 3D model from the images acquired by the system. Through these experiments, we confirm that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.
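
A generic sketch of an accuracy check against total-station checkpoints, in the spirit of the evaluation described above (not the authors' exact procedure). The checkpoint coordinates are invented for illustration.

```python
# Per-axis and overall 3D RMSE at checkpoints (hypothetical coordinates).
import numpy as np

surveyed = np.array([[1.000, 2.000, 0.500],
                     [4.250, 2.100, 1.200],
                     [4.300, 6.950, 0.450]])     # total-station coordinates (m)
modelled = np.array([[1.012, 1.991, 0.507],
                     [4.238, 2.118, 1.189],
                     [4.311, 6.941, 0.462]])     # same points in the 3D model (m)

residuals = modelled - surveyed
rmse_xyz = np.sqrt((residuals ** 2).mean(axis=0))        # per-axis RMSE
rmse_3d = np.sqrt((residuals ** 2).sum(axis=1).mean())   # overall 3D RMSE
print(f"RMSE x/y/z: {rmse_xyz}, 3D: {rmse_3d:.3f} m")
```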


Author(s):  
Agnieszka Chmurzynska ◽  
Karolina Hejbudzka ◽  
Andrzej Dumalski

During the last years, software and applications that can produce 3D models using low-cost methods have become very popular, and they can successfully compete with classical methods. The most well-known and widely applied technology for creating 3D models has so far been laser scanning. However, it is still expensive because of the price of the device and the software, which greatly limits the universality and accessibility of this method. Hence, new low-cost methods of obtaining the data needed to generate 3D models have appeared on the market, and creating 3D models has become much easier and accessible to a wider group of people. Because of their advantages, they can compete with laser scanning. One of the methods uses digital photos to create 3D models; available software allows us to reconstruct the model and the object geometry. The Kinect sensor, a device very popular in the gaming environment, can also be successfully used as an alternative way to create 3D models. This article presents basic issues of 3D modelling and the application of various devices which are commonly used in everyday life and can also be used to generate a 3D model. Their results are compared with the model derived from laser scanning. The acquired results, with graphic presentations and possible ways of application, are also presented in this paper.
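
One common way to compare a low-cost model against a laser-scanning reference is to compute nearest-neighbour cloud-to-cloud distances. The sketch below shows that generic technique with SciPy; it is not necessarily the comparison used in the article, and the point clouds are random stand-ins.

```python
# Cloud-to-cloud distance sketch (generic technique; stand-in data).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
reference = rng.random((5000, 3))                           # laser-scan cloud (stand-in)
test = reference[:2000] + rng.normal(0, 0.002, (2000, 3))   # low-cost model (stand-in)

tree = cKDTree(reference)
dist, _ = tree.query(test)               # distance from each test point to nearest reference point
print(f"mean: {dist.mean()*1000:.2f} mm, 95th percentile: {np.percentile(dist, 95)*1000:.2f} mm")
```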


2019 ◽  
Vol 11 (1) ◽  
pp. 65 ◽  
Author(s):  
Marek W. Ewertowski ◽  
Aleksandra M. Tomczyk ◽  
David J. A. Evans ◽  
David H. Roberts ◽  
Wojciech Ewertowski

This study presents an operational framework for rapid, very high-resolution mapping of glacial geomorphology using budget unmanned aerial vehicles (UAVs) and a structure-from-motion approach. The proposed workflow comprises seven stages: (1) preparation and selection of the appropriate platform; (2) transport; (3) preliminary on-site activities (including optional ground-control-point collection); (4) pre-flight setup and checks; (5) conducting the mission; (6) data processing; and (7) mapping and change detection. The application of the proposed framework is illustrated by a mapping case study on the glacial foreland of Hørbyebreen, Svalbard, Norway. A consumer-grade quadcopter (DJI Phantom) was used to collect the data, while images were processed using the structure-from-motion approach. The resultant orthomosaic (1.9 cm ground sampling distance, GSD) and digital elevation model (7.9 cm GSD) were used to map the glacial landforms in detail. This demonstrated the applicability of the proposed framework to map, and potentially monitor, detailed changes in a rapidly evolving proglacial environment using a low-cost approach. Its coverage of multiple aspects ensures that the proposed framework is universal and can be applied in a broader range of settings.
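
The orthomosaic and DEM resolutions above are expressed as ground sampling distance (GSD). A standard pre-flight estimate follows from the sensor pixel pitch, focal length and flying height; the camera values in the sketch below are illustrative and are not taken from the paper.

```python
# Standard GSD estimate: GSD = pixel_pitch * altitude / focal_length.
def gsd_cm(pixel_pitch_um: float, focal_length_mm: float, altitude_m: float) -> float:
    """Return the ground sampling distance in centimetres per pixel."""
    return (pixel_pitch_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3) * 100.0

# e.g. a small-sensor quadcopter camera (~1.6 µm pixels, 3.6 mm lens) flown at 40 m
print(f"GSD ≈ {gsd_cm(1.6, 3.6, 40.0):.1f} cm/px")
```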


Author(s):  
G. Kontogianni ◽  
R. Chliverou ◽  
A. Koutsoudis ◽  
G. Pavlidis ◽  
A. Georgopoulos

The 3D digitisation of small artefacts is a very complicated procedure because of their complex morphological feature structures, concavities, rich decorations, high frequency of colour changes in texture, increased accuracy requirements, etc. Image-based methods present a low-cost, fast and effective alternative, because laser scanning does not in general meet the accuracy requirements. A shallow Depth of Field (DoF) affects image-based 3D reconstruction and especially the point matching procedure. This is visible not only in the total number of corresponding points but also in the resolution of the produced 3D model. Extending the DoF is therefore a very important task that should be incorporated in the data collection to attain a better-quality image set and a better 3D model. An extension of the DoF can be achieved with many methods, and especially with the focus stacking technique. In this paper, the focus stacking technique was tested in a real-world experiment to digitise a museum artefact in 3D. The experimental conditions included the use of a full-frame camera equipped with a normal lens (50 mm), with the camera placed close to the object. The artefact had already been digitised with a structured light system, and that model served as the reference to which the 3D models were compared; the results are presented.
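
Focus stacking merges a focus-bracketed series into one image by keeping, per pixel, the sharpest contribution. The sketch below illustrates the basic idea with a Laplacian sharpness measure in OpenCV; it assumes the frames are already aligned, uses hypothetical filenames, and is not the stacking software used in the paper.

```python
# Minimal focus-stacking sketch: pick, per pixel, the frame with the
# strongest local Laplacian response (a common sharpness proxy).
import cv2
import numpy as np

def focus_stack(paths):
    frames = [cv2.imread(p) for p in paths]
    sharpness = np.stack([
        np.abs(cv2.Laplacian(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), cv2.CV_64F, ksize=5))
        for f in frames
    ])                                   # (n_frames, H, W)
    best = sharpness.argmax(axis=0)      # index of sharpest frame per pixel
    stack = np.stack(frames)             # (n_frames, H, W, 3)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]       # (H, W, 3) fused image

# fused = focus_stack(["f01.jpg", "f02.jpg", "f03.jpg"])   # hypothetical filenames
# cv2.imwrite("fused.jpg", fused)
```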


Author(s):  
V. Katsichti ◽  
G. Kontogianni ◽  
A. Georgopoulos

Abstract. In archaeological excavations, many small fragments or artefacts are revealed whose fine details sometimes need to be captured in 3D. In general, 3D documentation methods fall into two main categories: Range-Based modelling and Image-Based modelling. In Range-Based modelling, a laser scanner (Time of Flight, Structured Light, etc.) is used for the raw data acquisition in order to create the 3D model of an object. This method is accurate enough but is still very expensive in terms of equipment. On the other hand, Image-Based modelling is affordable, because the equipment required is merely a camera with the appropriate lens, and possibly a turntable and a tripod. In this case, the 3D model of an object is created by suitable processing of images taken around the object with a large overlap. In this paper, emphasis is placed on the effectiveness of 3D models of fragile archaeological finds originating from the palatial site of Ayios Vasileios in Laconia in the south-eastern Peloponnese, produced using low-cost equipment and methods. The 3D models are produced using various, mainly freeware and hence low-cost, software packages, and the results are compared to those from a well-established commercial one.


Author(s):  
M. Canciani ◽  
E. Conigliaro ◽  
M. Del Grasso ◽  
P. Papalini ◽  
M. Saccone

The development of close-range photogrammetry has opened up many new possibilities for studying cultural heritage. 3D data acquired with conventional and low-cost cameras can be used to document and investigate the full appearance, materials and conservation status of an object, to help the restoration process and to identify intervention priorities. At the same time, 3D surveys collect a large amount of three-dimensional data that researchers analyse, yet there are very few options for 3D output. Augmented reality is one such output, a very low-cost technology that yields very interesting results. Using simple mobile technology (for iPad and Android tablets) and shareware software (in the case presented, "Augment"), it is possible to share and visualize a large number of 3D models on one's own device. The case study presented is part of an architecture graduate thesis carried out in Rome at the Department of Architecture of Roma Tre University. We developed a photogrammetric survey to study the Aurelian Wall at Castra Praetoria in Rome. The survey of 8000 square meters of surface allowed us to identify the stratigraphy and construction phases of a complex portion of the Aurelian Wall, especially around the northern door of the Castra. During this study, the data coming out of the 3D survey (photogrammetric and topographic) were stored and used to create a reverse 3D model, or virtual reconstruction, of the northern door of the Castra. This virtual reconstruction shows the door in the Tiberian period; nowadays it is totally hidden by a curtain wall, but small and significant architectural details allow us to know its original appearance. The 3D model of the ancient walls was mapped with the exact type of bricks and mortar, and oriented and scaled according to the existing structure for use in augmented reality. Finally, two kinds of application were developed: one on site, where the virtual reconstruction can be seen superimposed on the existing walls using image recognition, and another, created to show the results during the graduation day as well, for off-site use with a poster.


2019 ◽  
Vol 7 (1) ◽  
pp. 45-66 ◽  
Author(s):  
Ankit Kumar Verma ◽  
Mary Carol Bourke

Abstract. We have generated sub-millimetre-resolution DEMs of weathered rock surfaces using structure-from-motion (SfM) photogrammetry. We apply a close-range SfM method in the field and use it to generate high-resolution topographic data for weathered boulders and bedrock. The method was pilot tested on extensively weathered Triassic Moenkopi sandstone outcrops near Meteor Crater in Arizona. Images were taken in the field using a consumer-grade DSLR camera and were processed in commercially available software to build dense point clouds. The point clouds were registered to a local 3-D coordinate system (x, y, z), which was established using a specially designed triangle-coded control target, and then exported as digital elevation models (DEMs). The accuracy of the DEMs was validated under controlled experimental conditions. A number of checkpoints were used to calculate errors. We also evaluated the effects of image and camera parameters on the accuracy of our DEMs. We report a horizontal error of 0.5 mm and a vertical error of 0.3 mm in our experiments. Our approach provides a low-cost method for obtaining very high-resolution topographic data on weathered rock surfaces (area < 10 m²). The results from our case study confirm the efficacy of the method at this scale and show that the data acquisition equipment is sufficiently robust and portable. This is particularly important for field conditions in remote locations or steep terrain where portable and efficient methods are required.
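
The export from a registered point cloud to a gridded DEM can be illustrated with a minimal binning step: average the elevations of all points falling in each grid cell. This is a generic sketch of that step only (the authors used commercial software), and the point cloud below is a random stand-in.

```python
# Grid a registered (x, y, z) point cloud into a DEM by averaging z per cell.
import numpy as np

def grid_dem(points, cell=0.0005):
    """points: (N, 3) array in metres; cell: grid spacing (0.5 mm here)."""
    x, y, z = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    dem = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    counts = np.zeros_like(dem)
    sums = np.zeros_like(dem)
    np.add.at(counts, (iy, ix), 1)
    np.add.at(sums, (iy, ix), z)
    filled = counts > 0
    dem[filled] = sums[filled] / counts[filled]
    return dem                      # NaN where no points fell in a cell

cloud = np.random.default_rng(1).random((100_000, 3)) * [0.5, 0.5, 0.01]  # stand-in cloud
dem = grid_dem(cloud)
print(dem.shape)
```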


2018 ◽  
pp. 1338-1380
Author(s):  
Marco Gaiani

This chapter presents a framework and some solutions for color acquisition, management, rendering and assessment in the construction of Architectural Heritage (AH) 3D models from reality-based data. The aim is to illustrate easy, low-cost and rapid procedures that produce high visual accuracy of the image/model while being accessible to non-specialized users and unskilled operators, typically Heritage architects. The processing presented is designed to render reflectance properties with perceptual fidelity on many types of display and has two main features: it is based on an accurate color management system from acquisition to visualization and on more accurate reflectance modeling, and the color pipeline can be used inside well-established 3D acquisition pipelines based on laser scanning and/or photogrammetry. Moreover, it can be completely integrated into a Structure From Motion pipeline, allowing simultaneous processing of color and shape data.
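
One elementary step in a color-managed acquisition pipeline of this kind is correcting the camera capture against a known neutral reference. The sketch below shows a generic gray-card white balance performed in linearized RGB; it is only an illustration of the principle, not the chapter's specific pipeline, and the patch location and image are hypothetical.

```python
# Gray-card white balance in linear RGB (generic illustration).
import numpy as np

def srgb_to_linear(img):
    """Undo the sRGB transfer curve (img values in [0, 1])."""
    return np.where(img <= 0.04045, img / 12.92, ((img + 0.055) / 1.055) ** 2.4)

def gray_card_balance(img_srgb, patch):
    """patch: (row_slice, col_slice) covering a neutral gray card in the frame."""
    lin = srgb_to_linear(img_srgb)
    card_rgb = lin[patch].reshape(-1, 3).mean(axis=0)       # average card colour
    gains = card_rgb.mean() / card_rgb                       # force the card to neutral
    return np.clip(lin * gains, 0.0, 1.0)                    # balanced, linear RGB

# usage sketch with a synthetic image and a hypothetical card location
img = np.random.default_rng(2).random((480, 640, 3))
balanced = gray_card_balance(img, (slice(200, 240), slice(300, 340)))
```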

