Multimodal and Multiview Wound Monitoring with Mobile Devices

Photonics ◽  
2021 ◽  
Vol 8 (10) ◽  
pp. 424
Author(s):  
Evelyn Gutierrez ◽  
Benjamín Castañeda ◽  
Sylvie Treuillet ◽  
Ivan Hernandez

Along with geometric and color indicators, thermography is another valuable source of information for wound monitoring. The interaction of geometry with thermography can provide predictive indicators of wound evolution; however, existing approaches rely on high-cost devices in a static configuration, which restricts the scanning of large surfaces. In this study, we propose the use of commercial devices, such as mobile devices and portable thermal cameras, to integrate information from different wavelengths onto the surface of a 3D model. A handheld acquisition is proposed in which color images are used to create a 3D model via Structure from Motion (SfM), and thermography is incorporated into the 3D surface through a pose estimation refinement based on optimizing the temperature correlation between multiple views. Thermal and color 3D models were successfully created for six patients with multiple views from a low-cost commercial device. The results show the successful application of the proposed methodology, in which thermal mapping on 3D models is not limited in scanning area and provides consistent information across multiple thermal camera views. Further work will focus on studying the quantitative metrics obtained from the multi-view 3D models created with the proposed methodology.
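The pose refinement described above optimizes the temperature correlation between overlapping thermal views. As a minimal, illustrative sketch of that idea (not the paper's actual algorithm), the fragment below brute-forces a 2D pixel shift that maximizes normalized cross-correlation between two overlapping thermal images; the function names and the integer-shift simplification are assumptions.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equally sized patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def refine_shift(ref, moving, max_shift=5):
    # Toy stand-in for pose refinement: search the integer pixel shift
    # that maximizes the temperature correlation between two views.
    best_score, best_shift = -np.inf, (0, 0)
    crop = (slice(max_shift, -max_shift), slice(max_shift, -max_shift))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            score = ncc(ref[crop], shifted[crop])
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score
```

A real implementation would optimize a full 6-DoF camera pose against the 3D surface rather than a 2D image shift, but the objective, maximizing inter-view temperature correlation, is the same.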

2015 ◽  
Vol 6 (12) ◽  
pp. 58 ◽  
Author(s):  
José L. Caro ◽  
Salvador Hansen

The importance of new technologies and their rapid growth on mobile devices are well known. Today, in the study and dissemination of cultural heritage (including archaeology), digital 3D models and associated technologies are a tool to increase recording quality and, consequently, a better basis for interpretation and dissemination for cultural tourism, education and research. Within this area, photogrammetry is gaining ground over other technologies due to its low cost: from photographs, a set of algorithms can generate very accurate models with very realistic textures. In this paper we propose the use of game engines to incorporate one element of dissemination: the ability to navigate the 3D model realistically. As a case study we use the dolmen of Menga, which serves as a study and demonstration of the techniques employed.


Author(s):  
M. Abdelaziz ◽  
M. Elsayed

Abstract. Underwater photogrammetry in archaeology in Egypt is a completely new experience, applied for the first time on the submerged archaeological site of the lighthouse of Alexandria, situated on the eastern extremity of the ancient island of Pharos at the foot of Qaitbay Fort, at a depth of 2 to 9 metres. In 2009/2010, the CEAlex launched a 3D photogrammetry data-gathering programme for the virtual reassembly of broken artefacts. In 2013 and the beginning of 2014, with the support of the Honor Frost Foundation, methods were developed and refined to acquire manual photographic data of the entire underwater site of Qaitbay using a DSLR camera and simple, low-cost materials, to obtain a digital surface model (DSM) of the submerged site of the lighthouse, and also to create 3D models of the objects themselves, such as statues, bases of statues and architectural elements. In this paper we present the methodology used for underwater data acquisition, data processing and modelling in order to generate a DSM of the submerged site of Alexandria's ancient lighthouse. Until 2016, only about 7200 m² of the submerged site, which exceeds 13000 m², was covered. One of our main objectives in this project is to georeference the site, since this would allow for a very precise 3D model and for correcting the orientation of the site with respect to real-world space.
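Georeferencing a photogrammetric model, as intended above, amounts to estimating a scale, rotation and translation from matched control points. A minimal sketch of one standard way to do this, the Umeyama similarity-transform fit; the function name and any numbers are illustrative, not from the paper:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate s, R, t such that dst ≈ s * R @ src_i + t (Umeyama method).
    src, dst: (N, 3) arrays of matched control points (model vs. world)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    # Cross-covariance between world and model coordinates.
    H = B.T @ A / len(src)
    U, S, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation (det(R) = +1).
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_src = (A ** 2).sum() / len(src)
    s = (S * np.diag(D)).sum() / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying the recovered transform to every vertex of the DSM would both scale it to real-world units and correct its orientation.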


Author(s):  
Agnieszka Chmurzynska ◽  
Karolina Hejbudzka ◽  
Andrzej Dumalski

In recent years, software and applications that can produce 3D models using low-cost methods have become very popular; moreover, they can successfully compete with classical methods. The best-known and most widely applied technology for creating 3D models has so far been laser scanning. However, it is still expensive because of the price of the device and software, so the universality and accessibility of this method are very limited. Hence, new low-cost methods of obtaining the data needed to generate 3D models have appeared on the market, and creating 3D models has become much easier and accessible to a wider group of people. Because of their advantages, these methods can compete with laser scanning. One of them uses digital photos to create 3D models: available software allows us to reconstruct the model and object geometry. The Kinect sensor, a device very popular in the gaming environment, can also be successfully used as an alternative method to create 3D models. This article presents the basic issues of 3D modelling and the application of various devices that are commonly used in everyday life and can also be used to generate a 3D model. Their results are compared with a model derived from laser scanning. The acquired results, with graphic presentations and possible applications, are also presented in this paper.
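Comparing a low-cost model against a laser-scan reference, as this article does, is commonly reported as cloud-to-cloud distances. A minimal, assumed sketch (brute-force nearest neighbours, suitable only for small demo clouds; a real comparison would use a KD-tree and a registration step first):

```python
import numpy as np

def cloud_to_cloud_rmse(test_pts, ref_pts):
    """RMSE of nearest-neighbour distances from each point of a low-cost
    model (test_pts) to a laser-scan reference cloud (ref_pts), both (N, 3)."""
    # Full pairwise distance matrix, shape (n_test, n_ref).
    d = np.linalg.norm(test_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.sqrt((nearest ** 2).mean()))
```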


2019 ◽  
Vol 10 (20) ◽  
pp. 70
Author(s):  
Gabriela Lorenzo ◽  
Luciano Lopez ◽  
Reinaldo A. Moralejo ◽  
Luis M. Del Papa

Photogrammetry has recently been incorporated into archaeological research, replacing much more expensive techniques while still generating high resolution results. This technique converts two-dimensional (2D) images into three-dimensional (3D) models, allowing for the complex analysis of geometric and spatial information. It has become one of the most used methods for the 3D recording of cultural heritage objects. Among its possible archaeological uses are: digitally documenting an archaeological dig at low cost, aiding the decision-making process (Dellepiane et al., 2013); spatial surveying of archaeological sites; 3D model generation of archaeological objects; and digitisation of archaeological collections (Adami et al., 2018; Aparicio Resco et al., 2014; Cots et al., 2018; Iturbe et al., 2018; Moyano, 2017).

The objective of this paper is to show the applicability of 3D models based on SfM (Structure from Motion) photogrammetry for archaeofauna analyses. We created 3D models of four camelid (Lama glama) bone elements (skull, radius-ulna, metatarsus and proximal phalange), aiming to demonstrate the advantages of 3D models over 2D osteological guides, which are usually used to perform anatomical and systematic determination of specimens.

Photographs were taken with a 16 Megapixel Nikon D5100 DSLR camera mounted on a tripod, with the distance to the object ranging between 1 and 3 m, using a 50 mm fixed lens. Each bone element was placed on a 1 m tall stool, with a green, high-contrast background. Photographs were shot at regular intervals of 10-15º, moving in a circle. Sets of around 30 pictures were taken from three circumferences at vertical angles of 0º, 45º and 60º. In addition, some detailed and overhead shots were taken from the dorsal and ventral sides of each bone element. Each set of dorsal and ventral photos was imported to Agisoft Photoscan Professional. A workflow (Fig. 4) of alignment, tie point matching, high resolution 3D dense point cloud construction, and creation of a triangular mesh covered with a photographic texture was performed. Finally, the dorsal and ventral models were aligned and merged, and the 3D model was accurately scaled. In order to determine the accuracy of the models, linear measurements were performed and compared to digital gauge measurements of the physical bones, obtaining a difference of less than 0.5 mm.

Furthermore, five archaeological specimens were selected to compare our 3D models with the most commonly used 2D camelid atlases (Pacheco Torres et al., 1986; Sierpe, 2015). In the particular case of archaeofaunal analyses, where anatomical and systematic determination of the specimens is the key, digital photogrammetry has proven to be more effective than traditional 2D documentation methods. This is due to the fact that 2D osteological guides based on drawings or pictures lack the necessary viewing angles to perform an adequate and complete diagnosis of the specimens. Using new technology can deliver better results, producing more comprehensive information on the bone element, with great detail and geometrical precision, not limited to pictures or drawings at particular angles. In this paper we show how 3D modelling with SfM-MVS (Structure from Motion-Multi View Stereo) allows the observation of an element from multiple angles. The possibility of zooming and rotating the models (Figs. 6g, 6h, 7d, 8c) improves the determination of the archaeological specimens.

Information on how the 3D model was produced is essential. A metadata file must include data on each bone element (anatomical and taxonomic) plus information on photograph quantity and quality. This file must also record the software used to produce the model and the parameters and resolution of each step of the workflow (number of 3D points, mesh vertices, texture resolution and quantification of the model's error). In short, 3D models are excellent tools for osteological guides.
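The scaling and validation steps described above can be sketched in a few lines: scale the arbitrary-unit SfM cloud using one gauge-measured reference distance, then check other measurements against the gauge. This is a simplified illustration; the point indices and tolerances are assumptions, not the paper's workflow.

```python
import numpy as np

def scale_model(points, idx_a, idx_b, true_len_mm):
    # Scale an arbitrary-unit SfM point cloud so that the distance between
    # two reference points matches a digital-gauge measurement in mm.
    model_len = np.linalg.norm(points[idx_a] - points[idx_b])
    return points * (true_len_mm / model_len)

def measurement_error(points, idx_a, idx_b, gauge_mm):
    # Absolute difference (mm) between a linear measurement on the model
    # and the gauge measurement of the physical bone.
    return abs(np.linalg.norm(points[idx_a] - points[idx_b]) - gauge_mm)
```

On the real models this check is what yielded the reported sub-0.5 mm agreement with the physical bones.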


Author(s):  
G. Kontogianni ◽  
R. Chliverou ◽  
A. Koutsoudis ◽  
G. Pavlidis ◽  
A. Georgopoulos

The 3D digitisation of small artefacts is a very complicated procedure because of their complex morphological features, concavities, rich decorations, high frequency of colour changes in texture, increased accuracy requirements, etc. Image-based methods present a low-cost, fast and effective alternative, because laser scanning generally does not meet the accuracy requirements. A shallow Depth of Field (DoF) affects image-based 3D reconstruction, and especially the point matching procedure. This is visible not only in the total number of corresponding points but also in the resolution of the produced 3D model. Extending the DoF is therefore a very important task that should be incorporated into data collection to attain a better-quality image set and a better 3D model. An extension of the DoF can be achieved with many methods, notably the focus stacking technique. In this paper, the focus stacking technique was tested in a real-world experiment to digitise a museum artefact in 3D. The experimental conditions included the use of a full-frame camera equipped with a normal lens (50 mm), with the camera placed close to the object. The artefact had already been digitised with a structured light system, and that model served as the reference against which the 3D models were compared; the results are presented.
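Focus stacking merges a bracket of images focused at different depths by keeping, per pixel, the value from the sharpest frame. A minimal sketch of this idea using a Laplacian sharpness measure (a simplified illustration, not the software used in the paper; production tools also align frames and blend seams):

```python
import numpy as np

def laplacian(img):
    # Discrete Laplacian of a 2D grayscale image as a per-pixel
    # sharpness measure (edge-padded so the shape is preserved).
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def focus_stack(frames):
    """Merge a focus bracket: for every pixel, keep the value from the
    frame whose local Laplacian response (sharpness) is largest."""
    stack = np.stack(frames)                              # (n, h, w)
    sharp = np.abs(np.stack([laplacian(f) for f in frames]))
    best = sharp.argmax(axis=0)                           # (h, w) frame index
    return np.take_along_axis(stack, best[None], axis=0)[0]
```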


Author(s):  
V. Katsichti ◽  
G. Kontogianni ◽  
A. Georgopoulos

Abstract. In archaeological excavations, many small fragments or artefacts are revealed whose fine details sometimes need to be captured in 3D. In general, 3D documentation methods fall into two main categories: Range-Based modelling and Image-Based modelling. In Range-Based modelling, a laser scanner (Time of Flight, Structured Light, etc.) is used for the raw data acquisition in order to create the 3D model of an object. This method is accurate enough but still very expensive in terms of equipment. On the other hand, Image-Based modelling is affordable, because the equipment required is merely a camera with the appropriate lens, and possibly a turntable and a tripod. In this case, the 3D model of an object is created by suitable processing of images taken around the object with a large overlap. In this paper, emphasis is placed on the effectiveness of 3D models of fragile archaeological finds originating from the palatial site of Ayios Vasileios in Laconia, in the south-eastern Peloponnese, using low-cost equipment and methods. The 3D model is also produced using various, mainly freeware and hence low-cost, software packages, and the results are compared to those from a well-established commercial one.


Author(s):  
M. Canciani ◽  
E. Conigliaro ◽  
M. Del Grasso ◽  
P. Papalini ◽  
M. Saccone

The development of close-range photogrammetry has opened many new possibilities for studying cultural heritage. 3D data acquired with conventional, low-cost cameras can be used to document and investigate the full appearance, materials and conservation status of a monument, to support the restoration process and to identify intervention priorities. At the same time, although a 3D survey yields a large amount of three-dimensional data for researchers to analyse, there are very few options for 3D output. Augmented reality is one such output: a very low-cost technology with very interesting results. Using simple mobile technology (iPad and Android tablets) and shareware software (in the case presented, "Augment"), it is possible to share and visualize a large number of 3D models on one's own device. The case study presented is part of an architecture graduate thesis carried out at the Department of Architecture of Roma Tre University in Rome. We developed a photogrammetric survey to study the Aurelian Wall at Castra Praetoria in Rome. Surveying 8000 square metres of surface allowed us to identify the stratigraphy and construction phases of a complex portion of the Aurelian Wall, especially around the northern door of the Castra. During this study, the data from the 3D survey (photogrammetric and topographic) were stored and used to create a reverse 3D model, or virtual reconstruction, of the northern door of the Castra. This virtual reconstruction shows the door in the Tiberian period; today it is totally hidden by a curtain wall, but small yet significant architectural details reveal its original features. The 3D model of the ancient walls was mapped with the exact type of bricks and mortar, oriented and scaled according to the existing wall for use in augmented reality. Finally, two kinds of application were developed: one on site, where the virtual reconstruction can be seen superimposed on the existing walls using image recognition, and one off-site, using a poster, created to show the results during the graduation day.


2016 ◽  
Vol 41 (2) ◽  
pp. 210-214 ◽  
Author(s):  
Amaia Hernandez ◽  
Edward Lemaire

Background and Aim: Prosthetic CAD/CAM systems require accurate 3D limb models; however, difficulties arise when working from the person's socket, since current 3D scanners have difficulty scanning socket interiors. While dedicated scanners exist, they are expensive, and the cost may be prohibitive for a limited number of scans per year. A low-cost and accessible photogrammetry method for socket interior digitization is proposed, using a smartphone camera and cloud-based photogrammetry services. Technique: 15 two-dimensional images of the socket's interior are captured using a smartphone camera. A 3D model is generated using cloud-based software. Linear measurements were compared between the sockets and the related 3D models. Discussion: 3D reconstruction accuracy averaged 2.6 ± 2.0 mm and 0.086 ± 0.078 L, which is less accurate than models obtained by high-quality 3D scanners. However, after processing in prosthetic CAD software, this method would provide a viable, accessible and low-cost 3D digital socket reproduction. Clinical relevance: The described method provides a low-cost and accessible means to digitize a socket interior for use in prosthetic CAD/CAM systems, employing a smartphone camera and cloud-based photogrammetry software.


2021 ◽  
pp. 000348942110240
Author(s):  
Peng You ◽  
Yi-Chun Carol Liu ◽  
Rodrigo C. Silva

Objective: Microtia reconstruction is technically challenging due to the intricate contours of the ear. It is common practice to use a two-dimensional tracing of the patient’s normal ear as a template for the reconstruction of the affected side. Recent advances in three-dimensional (3D) surface scanning and printing have expanded the ability to create surgical models preoperatively. This study aims to describe a simple and affordable process to fabricate patient-specific 3D ear models for use in the operating room. Study design: Applied basic research on a novel 3D optical scanning and fabrication pathway for microtia reconstruction. Setting: Tertiary care university hospital. Methods: Optical surface scanning of the patient’s normal ear was completed using a smartphone with facial recognition capability. The Heges application used the phone’s camera to capture the 3D image. The 3D model was digitally isolated and mirrored using the Meshmixer software and printed with a 3D printer (Monoprice Select Mini V2) using polylactic acid filaments. Results: The 3D model of the ear served as a helpful intraoperative reference and an adjunct to the traditional 2D template. Collectively, time for imaging acquisition, editing, and fabrication was approximately 3.5 hours. The upfront cost was around $210, and the recurring cost was approximately $0.35 per ear model. Conclusion: A novel, low-cost approach to fabricate customized 3D models of the ear is introduced. It is feasible to create individualized 3D models using currently available consumer technology. The low barrier to entry raises the possibility for clinicians to incorporate 3D printing into various clinical applications.
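Mirroring the scanned normal ear to obtain a template for the affected side, as done above in Meshmixer, reduces to reflecting the vertices across a plane and reversing each face's winding so the surface normals still point outward. A minimal sketch, assuming a simple vertex-array-plus-triangle-index mesh representation (not Meshmixer's actual internals):

```python
import numpy as np

def mirror_mesh(vertices, faces):
    """Mirror a triangle mesh across the x = 0 (sagittal) plane.
    vertices: (V, 3) float array; faces: (F, 3) int array.
    Reversing the vertex order of each face keeps normals outward."""
    mirrored_v = vertices * np.array([-1.0, 1.0, 1.0])
    flipped_f = faces[:, ::-1].copy()
    return mirrored_v, flipped_f
```

Without the winding flip, the mirrored mesh would be inside-out, which many slicers and printers reject.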


Author(s):  
A. Masiero ◽  
F. Fissore ◽  
M. Piragnolo ◽  
A. Guarnieri ◽  
F. Pirotti ◽  
...  

Abstract. The worldwide spread of relatively low-cost mobile devices embedded with dual rear cameras enables the possibility of exploiting smartphone stereo vision to produce 3D models. Although this idea is quite attractive, the small baseline between the two cameras restricts the depth discrimination ability of this kind of stereo vision system. This paper presents the results obtained with a smartphone stereo vision system using two rear cameras with different focal lengths: this operating condition clearly reduces the matchable area. Nevertheless, 3D reconstruction is still possible, and the obtained results are evaluated for several camera-object distances.
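The restricted depth discrimination of a small-baseline stereo pair follows from first-order error propagation of the disparity equation Z = f·B/d. A small sketch with illustrative numbers (the focal length and baselines below are assumptions, not measurements from the phone tested in the paper):

```python
def depth_error(Z, f_px, baseline_m, disp_err_px=0.5):
    """First-order depth uncertainty of a stereo pair.
    From Z = f*B/d, propagating a disparity error dd gives
    dZ ≈ Z**2 / (f*B) * dd, so error grows quadratically with depth
    and inversely with the baseline B."""
    return Z ** 2 / (f_px * baseline_m) * disp_err_px
```

With the same disparity noise, shrinking the baseline from a 10 cm rig to a 1 cm smartphone camera pair inflates the depth uncertainty tenfold, which is why close camera-object distances are evaluated in the paper.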

