Textured Models
Recently Published Documents

TOTAL DOCUMENTS: 31 (five years: 5)
H-INDEX: 6 (five years: 1)

2021 · Vol. 11 (18) · pp. 8750
Author(s): Styliani Verykokou, Argyro-Maria Boutsi, Charalabos Ioannidis

Mobile Augmented Reality (MAR) is designed to keep pace with high-end mobile devices and their powerful sensors. This evolution excludes users with low-end devices and network constraints. This article presents ModAR, a hybrid Android prototype that extends the MAR experience to this target group. It combines feature-based image matching and pose estimation with fast rendering of 3D textured models. Planar objects in the real environment are used as pattern images for overlaying users' meshes or the app's default ones. Since ModAR is based on the OpenCV C++ library in the Android NDK and the OpenGL ES 2.0 graphics API, there are no dependencies on additional software, operating system version or model-specific hardware. The developed 3D graphics engine implements optimized vertex-data rendering with a combination of data grouping, synchronization, sub-texture compression and instancing for limited CPU/GPU resources and a single-threaded approach. It achieves up to 3× speed-up compared to standard index rendering, and AR overlay of a 50 K-vertex 3D model in less than 30 s. Several deployment scenarios for pose estimation demonstrate that the oriented FAST detector with an upper threshold on features per frame, combined with the ORB descriptor, yields the best results in terms of robustness and efficiency, achieving a 90% reduction in image matching time compared to the AGAST detector with the BRISK descriptor, at a pattern recognition accuracy above 90% over a wide range of scale changes, regardless of in-plane rotations and partial occlusions of the pattern.
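The detector/descriptor comparison above hinges on matching binary descriptors by Hamming distance. As a minimal illustration (not ModAR's actual code, which uses OpenCV's C++ matchers), a brute-force Hamming matcher over ORB-style 256-bit descriptors can be sketched like this:

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_distance=64):
    """Brute-force match two sets of 256-bit binary descriptors
    (32 uint8 bytes each, as produced by ORB or BRISK) by Hamming
    distance. Returns (index_a, index_b, distance) for the best
    match of each descriptor in desc_a under max_distance."""
    # Popcount lookup table for one byte (0..255).
    popcount = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)
    matches = []
    for i, da in enumerate(desc_a):
        # XOR exposes differing bits; popcount sums them per byte.
        dists = popcount[np.bitwise_xor(desc_b, da)].sum(axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_distance:
            matches.append((i, j, int(dists[j])))
    return matches
```

The `max_distance` threshold plays the same role as the feature-count thresholding described in the abstract: it discards weak correspondences before pose estimation.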


2020 · Vol. 83 · pp. 101943
Author(s): Andrea Maggiordomo, Federico Ponchio, Paolo Cignoni, Marco Tarini

Sensors · 2020 · Vol. 20 (6) · pp. 1610
Author(s): Xiaoyu Liu, Shirley J. Dyke, Chul Min Yeum, Ilias Bilionis, Ali Lenjani, et al.

Image data remains an important tool for post-event building assessment and documentation. After each natural hazard event, significant efforts are made by teams of engineers to visit the affected regions and collect useful image data. In general, a global positioning system (GPS) can provide useful spatial information for localizing image data. However, it is challenging to collect such information when images are captured in places where GPS signals are weak or interrupted, such as the indoor spaces of buildings. The inability to document the images’ locations hinders the analysis, organization, and documentation of these images as they lack sufficient spatial context. In this work, we develop a methodology to localize images and link them to locations on a structural drawing. A stream of images can readily be gathered along the path taken through a building using a compact camera. These images may be used to compute a relative location of each image in a 3D point cloud model, which is reconstructed using a visual odometry algorithm. The images may also be used to create local 3D textured models for building-components-of-interest using a structure-from-motion algorithm. A parallel set of images that are collected for building assessment is linked to the image stream using time information. By projecting the point cloud model to the structural drawing, the images can be overlaid onto the drawing, providing clear context information necessary to make use of those images. Additionally, components- or damage-of-interest captured in these images can be reconstructed in 3D, enabling detailed assessments having sufficient geospatial context. The technique is demonstrated by emulating post-event building assessment and data collection in a real building.
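The time-based linking of the assessment images to the localization stream can be sketched as a nearest-timestamp lookup. This is a hypothetical illustration of that one step, not the authors' implementation:

```python
import bisect

def link_by_time(stream_times, assessment_times):
    """For each assessment-image timestamp, find the index of the
    closest frame in the localization stream. Both inputs are in
    seconds; stream_times must be sorted ascending."""
    linked = []
    for t in assessment_times:
        i = bisect.bisect_left(stream_times, t)
        # Compare the neighbours on either side of the insertion point.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(stream_times)]
        linked.append(min(candidates, key=lambda j: abs(stream_times[j] - t)))
    return linked
```

Each assessment image then inherits the 3D pose computed by visual odometry for its linked stream frame, which is what allows it to be projected onto the structural drawing.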


Author(s): Angel-Iván García-Moreno

Abstract: Three-dimensional urban reconstruction requires combining data from different sensors, such as cameras, inertial systems, GPS, and laser sensors. This technical report presents a complete system for the generation of textured volumetric global maps (deep vision). Our acquisition platform is terrestrial and digitizes the different urban environments it moves through. The report focuses on the three main problems identified in this type of work: (1) the acquisition of three-dimensional data with high precision, (2) the extraction of texture and its correlation with the 3D data, and (3) the generation of the surfaces that describe the components of the urban environment. It also describes the methods implemented to extrinsically calibrate the acquisition platform, the methods developed to eliminate radial and tangential image distortion, and the subsequent generation of a panoramic image. Procedures are developed for sampling and smoothing the 3D data. Subsequently, the process of generating textured global maps with negligible uncertainty is developed and the results are presented. Finally, the process of surface generation and the post-processing step of eliminating certain holes/occlusions in the meshes are reported. Results are shown in each section. Using the methods presented here for geometric and photorealistic reconstruction of urban environments, high-quality 3D models are generated, achieving the stated objective: global textured models that preserve the geometry of the scanned scenes.
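The radial and tangential distortion removal mentioned in the report typically follows the Brown-Conrady model. A minimal sketch, assuming that model (the function names and the fixed-point inversion are illustrative, not the report's code):

```python
import numpy as np

def distort(points, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to
    normalized image coordinates (N x 2 array), Brown-Conrady model."""
    x, y = points[:, 0], points[:, 1]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([xd, yd], axis=1)

def undistort(points, k1, k2, p1, p2, iters=10):
    """Invert the distortion by fixed-point iteration: start from the
    distorted coordinates and subtract the model residual."""
    und = points.copy()
    for _ in range(iters):
        err = distort(und, k1, k2, p1, p2) - points
        und = und - err
    return und
```

The iteration converges quickly because the distortion is a small perturbation of the identity for typical lens coefficients.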


Author(s): M. Gaiani, F. I. Apollonio, F. Fantini

Abstract. Smartphone camera technology has made significant improvements in sensor quality and camera software performance in recent years. Devices such as the Apple iPhone X and the Samsung Galaxy S9 Plus reach levels of image resolution, sharpness and color accuracy very close to prosumer SLR cameras, also enabling on-the-fly processing and procedures that were considered impossible to achieve until a few years ago. Starting from these premises, a series of issues and opportunities concerning the application of smartphones to artifact documentation is discussed, in particular the consistency and reliability of both shape and color representation achievable for small-to-medium artifacts belonging to exhibitions and museum collections. A low-cost, easy-to-use workflow based on widespread devices is compared to consolidated digitization pipelines. The contribution focuses on the color accuracy of textured models achievable through smartphones by means of an internally developed application for highly reliable development of raw (.DNG) files from the Apple iPhone X. Color consistency is calculated as the mean camera chroma relative to the mean ideal chroma in the CIE color metric (ΔE*00) as defined in 2000 by the CIE on the CIEXYZ chromaticity diagram.
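The chroma and colour-difference quantities involved can be illustrated with the simpler CIE76 metric; the paper itself uses the more elaborate CIEDE2000 (ΔE*00) formula, which adds lightness, chroma and hue weighting terms omitted in this sketch:

```python
import math

def chroma(lab):
    """CIELAB chroma C*_ab = sqrt(a*^2 + b*^2)."""
    _, a, b = lab
    return math.hypot(a, b)

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space.
    CIEDE2000 refines this with perceptual weighting functions."""
    return math.dist(lab1, lab2)

def mean_chroma_ratio(measured, reference):
    """Mean camera chroma relative to the mean ideal chroma, the
    summary statistic the abstract describes for colour-target tests."""
    return sum(map(chroma, measured)) / sum(map(chroma, reference))
```

A ratio near 1.0 indicates the camera neither over- nor under-saturates the target patches on average.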


Minerals · 2018 · Vol. 8 (11) · pp. 518
Author(s): Javier Fernández-Lozano, Alberto González-Díez, Gabriel Gutiérrez-Alonso, Rosa Carrasco, Javier Pedraza, et al.

This contribution discusses the potential of UAV-assisted (unmanned aerial vehicle) photogrammetry for the study and preservation of mining heritage sites, using the example of Roman gold-mining infrastructure in northwestern Spain. The study area was the largest gold-mining area of Roman times and comprises seven mining elements of interest that characterize the most representative examples of such ancient works. UAV technology provides a non-invasive procedure valuable for acquiring digital information in areas that are remote, difficult to access or at risk of destruction. The proposed approach is a cost-effective, robust and rapid method for image processing in remote areas where no traditional surveying technologies are available. It is based on a combination of aerial orthoimagery and LiDAR (Light Detection and Ranging) data to improve the accuracy of the UAV-derived data. The results provide high-resolution orthomosaics, DEMs and 3D textured models for the documentation of ancient mining scenarios, supplying high-resolution digital information that improves the identification, description and interpretation of mining elements such as the hydraulic infrastructure, open-cast mines exemplifying the different exploitation methods, and settlements. Beyond the scientific and technical information provided by the data, the 3D documentation of ancient mining scenarios is a powerful tool for effective and wider public dissemination, ensuring visualization, preservation and awareness of the importance and conservation of world mining heritage sites.
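One of the products mentioned, a DEM (digital elevation model), can be derived from a photogrammetric point cloud by simple rasterization. This is a hypothetical sketch of that step, not the authors' processing chain:

```python
import numpy as np

def grid_dem(points, cell=1.0):
    """Rasterize a point cloud (N x 3 array, columns x/y/z) into a DEM
    by keeping the maximum elevation per grid cell; empty cells are NaN.
    Keeping the maximum yields a surface model; ground filtering would
    be needed first for a bare-earth terrain model."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    xy -= xy.min(axis=0)              # shift so indices start at 0
    w, h = xy.max(axis=0) + 1
    dem = np.full((h, w), np.nan)
    for (cx, cy), z in zip(xy, points[:, 2]):
        if np.isnan(dem[cy, cx]) or z > dem[cy, cx]:
            dem[cy, cx] = z
    return dem
```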


Author(s): Javier Gutiérrez Velayos, Manuel Rodríguez Martín, Fernando Herráez Garrido, Javier Velázquez Saornil

2018 · pp. 1072-1090
Author(s): Tony Tung, Takashi Matsuyama

Visual tracking of humans or objects in motion is a challenging problem when the observed data undergo appearance changes (e.g., due to illumination variations, occlusion, or cluttered backgrounds). Moreover, tracking systems are usually initialized with predefined target templates or trained beforehand on known datasets, so they are not always able to detect and track objects whose appearance changes over time. In this paper, we propose a multimodal framework based on particle filtering for visual tracking of objects under challenging conditions (e.g., tracking various human body parts from multiple views). In particular, we integrate various cues such as color, motion and depth in a global formulation. The Earth Mover's Distance is used to compare color models in a global fashion, and constraints on motion-flow features prevent the common drifting effects due to error propagation. In addition, the model features an online mechanism that adaptively updates a subspace of multimodal templates to cope with appearance changes. Furthermore, the proposed model is integrated in a practical detection and tracking process, and multiple instances can run in real time. Experimental results are obtained on challenging real-world videos with poorly textured models and arbitrary non-linear motions.
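The particle-filtering backbone of such a tracker is a predict/update/resample cycle; the observation function is where the colour, motion and depth cues would be fused (e.g., as a product of per-cue likelihoods). This is an illustrative one-dimensional bootstrap-filter sketch, not the authors' multimodal implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observe, motion_std=1.0):
    """One predict/update/resample cycle of a bootstrap particle filter
    over scalar states. `observe(x)` returns the observation likelihood
    of state x; cue fusion happens inside it."""
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: reweight by the observation likelihood and renormalize.
    weights = weights * np.array([observe(x) for x in particles])
    weights = weights / weights.sum()
    # Resample: multinomial for brevity (systematic resampling
    # would reduce variance); weights reset to uniform.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Over repeated steps the particle cloud concentrates around states with high observation likelihood, which is what lets the tracker follow a target without a fixed template.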


Author(s): L. Inzerillo, F. Di Paola

In recent years there has been increasing use of digital techniques for conservation and restoration purposes. Among these, a dominant role is played by digital photogrammetry packages (Agisoft Photoscan, 3DF Zephyr), which make it possible to obtain, in a few steps, 3D textured models of real objects. Combined with digital documentation technologies, digital fabrication technologies can be employed in a variety of ways to assist in heritage documentation, conservation and dissemination.

This paper gives practitioners an overview of the state-of-the-art technologies and a feasible workflow for optimizing point-cloud and polygon-mesh datasets for fabrication using 3D printing. The goal is to contribute to automating the whole process. We sought to identify a workflow applicable to several types of cases with only minor precautions. In our experimentation we used a DELTA WASP 2040 printer with PLA Easyfil.
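A key step when optimizing a photogrammetric mesh for 3D printing is verifying that it is watertight, i.e., that every edge is shared by exactly two faces. A minimal check, offered as a hypothetical illustration rather than the authors' workflow:

```python
from collections import Counter

def is_watertight(triangles):
    """Return True if every undirected edge of the triangle mesh is
    shared by exactly two faces. `triangles` is a list of (i, j, k)
    vertex-index triples. Boundary edges (count 1) or non-manifold
    edges (count > 2) make a mesh unprintable without repair."""
    edges = Counter()
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            edges[tuple(sorted((a, b)))] += 1
    return all(count == 2 for count in edges.values())
```

Slicers generally refuse or mis-slice meshes that fail this test, which is why hole-filling is part of most print-preparation pipelines.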

