A Benchmark of Popular Indoor 3D Reconstruction Technologies: Comparison of ARCore and RTAB-Map

Electronics, 2020, Vol 9 (12), pp. 2091
Author(s): Ádám Wolf, Péter Troll, Stefan Romeder-Finger, Andreas Archenti, Károly Széll, et al.

The fast evolution in computational and sensor technologies brings previously niche solutions to a wider userbase. As such, 3D reconstruction technologies are reaching new use-cases in scientific and everyday areas where they were not present before. Cost-effective and easy-to-use solutions include camera-based 3D scanning techniques, such as photogrammetry. This paper provides an overview of the available solutions and discusses in detail the depth-image-based Real-time Appearance-based Mapping (RTAB-Map) technique as well as a smartphone-based solution that utilises ARCore, the Augmented Reality (AR) framework of Google. To quantitatively compare the two 3D reconstruction technologies, a simple length-measurement-based method was applied with a purpose-designed reference object. The captured data were then analysed by a processing algorithm. In addition to the experimental results, specific case studies are briefly discussed, evaluating the applicability based on the capabilities of the technologies. As such, the paper presents the use-case of interior surveying in an automated laboratory as well as an example of using the discussed techniques for landmark surveying. The major findings are that point clouds created with these technologies provide a direction- and shape-accurate model, but they contain mesh continuity errors, and the estimated scale factor has a large standard deviation.
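
For orientation, the length-measurement comparison reduces to computing a per-measurement scale factor between distances on the reference object and the corresponding distances in the reconstructed point cloud, then inspecting its spread. A minimal NumPy sketch with purely illustrative values (the paper's reference object and processing algorithm are not reproduced here):

    import numpy as np

    # Hypothetical paired measurements: true lengths on the reference object (metres)
    # and the corresponding lengths measured in the reconstructed point cloud.
    reference_lengths = np.array([0.100, 0.200, 0.300, 0.400])      # ground truth
    reconstructed_lengths = np.array([0.097, 0.205, 0.296, 0.410])  # from the point cloud

    # Per-measurement scale factor: how much the reconstruction must be
    # scaled to match the reference object.
    scale_factors = reference_lengths / reconstructed_lengths

    print("estimated scale factor:", scale_factors.mean())
    print("standard deviation:    ", scale_factors.std(ddof=1))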

Sensors, 2021, Vol 21 (14), pp. 4628
Author(s): Xiaowen Teng, Guangsheng Zhou, Yuxuan Wu, Chenglong Huang, Wanjing Dong, et al.

Three-dimensional reconstruction with an RGB-D camera offers a good balance between hardware cost and point cloud quality. However, due to limitations of the sensor structure and imaging principle, the acquired point clouds suffer from heavy noise and are difficult to register. This paper proposes a 3D reconstruction method using the Azure Kinect to address these inherent problems. Color, depth and near-infrared images of the target are captured from six perspectives with an Azure Kinect sensor against a black background. The binarized 8-bit infrared image is multiplied with the RGB-D alignment result provided by Microsoft, which removes ghosting and most of the background noise. A neighborhood extreme filtering method is proposed to remove abrupt points in the depth image, eliminating floating noise points and most outliers before the point cloud is generated; a pass-through filter then removes the remaining outliers. An improved method based on the classic iterative closest point (ICP) algorithm is presented to merge the multi-view point clouds: by repeatedly reducing both the size of the down-sampling grid and the distance threshold between corresponding points, the point clouds of each view are registered three times until a complete color point cloud is obtained. Experiments on rapeseed plants show a registration success rate of 92.5%, a point cloud accuracy of 0.789 mm and a complete scan time of 302 seconds, with good color restoration. Compared with a laser scanner, the proposed method achieves comparable reconstruction accuracy at a significantly higher reconstruction speed, while the hardware cost of building an automatic scanning system is much lower. This research demonstrates a low-cost, high-precision 3D reconstruction technology with the potential to be widely used for non-destructive phenotype measurement of rapeseed and other crops.
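
The coarse-to-fine registration step, shrinking both the down-sampling grid and the correspondence distance threshold over three passes, can be sketched with off-the-shelf point-to-point ICP in Open3D. This is not the authors' implementation; the voxel sizes and distance thresholds below are illustrative placeholders:

    import numpy as np
    import open3d as o3d

    def coarse_to_fine_icp(source, target,
                           voxel_sizes=(10.0, 5.0, 2.0),      # mm, illustrative
                           max_distances=(20.0, 10.0, 4.0)):  # mm, illustrative
        """Register `source` to `target` in three passes, shrinking both the
        down-sampling grid and the correspondence distance threshold."""
        transform = np.eye(4)
        for voxel, max_dist in zip(voxel_sizes, max_distances):
            src = source.voxel_down_sample(voxel)
            tgt = target.voxel_down_sample(voxel)
            result = o3d.pipelines.registration.registration_icp(
                src, tgt, max_dist, transform,
                o3d.pipelines.registration.TransformationEstimationPointToPoint())
            transform = result.transformation  # feed the coarser estimate to the next pass
        return transform

    # Usage: merge one view into the reference view.
    # view_a = o3d.io.read_point_cloud("view_0.ply")
    # view_b = o3d.io.read_point_cloud("view_1.ply")
    # view_b.transform(coarse_to_fine_icp(view_b, view_a))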


Author(s): Rafael Radkowski

This paper introduces a 3D object tracking method for an augmented reality (AR) assembly assistance application. The tracking method relies on point clouds; it uses 3D feature descriptors and point cloud matching with the iterative closest point (ICP) algorithm. The feature descriptors identify an object in a point cloud; ICP aligns a reference object with this point cloud. The challenge is to achieve high fidelity while maintaining camera frame rates. The sampling density of the point cloud and of the reference object is one of the key factors in meeting this challenge. In this research, three point-sampling methods and two point-cloud search algorithms were compared to assess their fidelity when tracking typical products of mechanical engineering. The results indicate that uniform sampling maintains the best fidelity at camera frame rates.
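
A common open-source analogue of such a pipeline (not the author's implementation) pairs a descriptor-based global alignment with an ICP refinement, for example FPFH features with RANSAC in Open3D; the voxel size controls the sampling density of both the scene and the reference object:

    import open3d as o3d

    def estimate_normals_and_fpfh(pcd, voxel):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    def track_object(scene, reference, voxel=0.005):  # voxel size is illustrative
        scene_d, scene_f = estimate_normals_and_fpfh(scene, voxel)
        ref_d, ref_f = estimate_normals_and_fpfh(reference, voxel)
        # Global alignment from feature correspondences (identifies the object).
        coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            ref_d, scene_d, ref_f, scene_f, True, voxel * 1.5,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
            [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
            o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
        # ICP refinement (aligns the reference object with the scene point cloud).
        fine = o3d.pipelines.registration.registration_icp(
            ref_d, scene_d, voxel * 0.4, coarse.transformation,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        return fine.transformation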


Sensors, 2021, Vol 21 (5), pp. 1581
Author(s): Xiaolong Chen, Jian Li, Shuowen Huang, Hao Cui, Peirong Liu, et al.

Cracks are one of the main distresses that occur on concrete surfaces. Traditional methods for detecting cracks based on two-dimensional (2D) images can be hampered by stains, shadows, and other artifacts, while various three-dimensional (3D) crack-detection techniques using point clouds are less affected in this regard but are limited by the measurement accuracy of the 3D laser scanner. In this study, we propose an automatic crack-detection method that fuses 3D point clouds and 2D images based on an improved Otsu algorithm, which consists of the following four major procedures. First, a high-precision registration of a depth image projected from the 3D point clouds and the 2D images is performed. Second, pixel-level image fusion is performed, which fuses the depth and gray information. Third, a rough crack image is obtained from the fusion image using the improved Otsu method. Finally, connected-domain labeling and morphological methods are used to finely extract the cracks. Experimentally, the proposed method was tested at multiple scales and with various types of concrete cracks. The results demonstrate that the proposed method can achieve an average precision of 89.0%, recall of 84.8%, and F1 score of 86.7%, performing significantly better than the single-image (average F1 score of 67.6%) and single-point-cloud (average F1 score of 76.0%) methods. Accordingly, the proposed method has high detection accuracy and universality, indicating its wide potential application as an automatic method for concrete-crack detection.
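
The thresholding and post-processing stages can be illustrated with the standard Otsu method in OpenCV; the paper's improved Otsu variant and its registration and fusion steps are not reproduced, and the file name and area threshold below are placeholders:

    import cv2
    import numpy as np

    # `fusion` is assumed to be the 8-bit pixel-level fusion of the registered
    # depth image and gray image (darker pixels are more likely to be crack).
    fusion = cv2.imread("fusion_image.png", cv2.IMREAD_GRAYSCALE)

    # Rough crack map via (standard) Otsu thresholding; cracks assumed darker.
    _, rough = cv2.threshold(fusion, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Morphological closing bridges small gaps along the crack.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(rough, cv2.MORPH_CLOSE, kernel)

    # Connected-domain labeling; keep only sufficiently large components as cracks.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    cracks = np.zeros_like(closed)
    for i in range(1, num):                   # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] > 100:  # area threshold is illustrative
            cracks[labels == i] = 255

    cv2.imwrite("crack_mask.png", cracks)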


Sensors, 2020, Vol 20 (14), pp. 3848
Author(s): Xinyue Zhang, Gang Liu, Ling Jing, Siyao Chen

The heart girth parameter is an important indicator reflecting the growth and development of pigs and provides critical guidance for the optimization of healthy pig breeding. To overcome the heavy workloads and poor adaptability of the traditional measurement methods currently used in pig breeding, this paper proposes an automated pig heart girth measurement method using two Kinect depth sensors. First, a two-view pig depth image acquisition platform is established for data collection; the two-view point clouds, after preprocessing, are registered and fused by a feature-based improved 4-Point Congruent Set (4PCS) method. Second, the fused point cloud is pose-normalized, and the axillary contour is used to automatically extract the heart girth measurement point. Finally, this point is taken as the starting point to intercept the circumference perpendicular to the ground from the pig point cloud, and the complete heart girth point cloud is obtained by mirror symmetry. The heart girth is measured along this point cloud using the shortest path method. Using the proposed method, experiments were conducted on two-view data from 26 live pigs. The results showed that the absolute errors of the heart girth measurements were all less than 4.19 cm and the average relative error was 2.14%, indicating the high accuracy and efficiency of this method.
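
Once the heart girth point cloud is extracted, the measurement itself amounts to the length of a closed curve through a thin vertical slice of the pose-normalized cloud. The sketch below is a simplification that orders slice points by polar angle and sums consecutive distances rather than computing the paper's shortest path; the axis convention and slice thickness are assumptions:

    import numpy as np

    def girth_from_slice(points, x0, thickness=0.01):
        """Approximate the girth at body-axis position x0 (metres) from a
        pose-normalized point cloud (N x 3, x = body axis, z = up)."""
        # Take a thin slice perpendicular to the body axis; keep (y, z).
        band = points[np.abs(points[:, 0] - x0) < thickness / 2][:, 1:3]
        center = band.mean(axis=0)
        # Order the slice points by polar angle around the slice centre and
        # sum the distances between consecutive points along the closed loop.
        angles = np.arctan2(band[:, 1] - center[1], band[:, 0] - center[0])
        ring = band[np.argsort(angles)]
        loop = np.vstack([ring, ring[:1]])
        return np.linalg.norm(np.diff(loop, axis=0), axis=1).sum()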


Author(s): Gilles Simon

It is generally accepted that Jan van Eyck was unaware of perspective. However, an a-contrario analysis of the vanishing points in five of his paintings, realized between 1432 and 1439, unveils a recurring fishbone-like pattern that could only emerge from the use of a polyscopic perspective machine with two degrees of freedom. A 3D reconstruction of the Arnolfini Portrait compliant with this pattern suggests that van Eyck's device answered both an aesthetic and a scientific question about how to represent space as closely as possible to human vision. This discovery makes van Eyck the father of today's immersive and nomadic creative media such as augmented reality and synthetic holography.
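
For context, a vanishing point for a pencil of image line segments is commonly estimated as their least-squares intersection; the sketch below shows that standard computation, not the a-contrario detector used in the study:

    import numpy as np

    def vanishing_point(segments):
        """Least-squares intersection of 2D line segments given as
        ((x1, y1), (x2, y2)) pairs, in homogeneous image coordinates."""
        lines = []
        for p, q in segments:
            a = np.array([*p, 1.0])
            b = np.array([*q, 1.0])
            lines.append(np.cross(a, b))  # homogeneous line through p and q
        A = np.vstack(lines)
        # The vanishing point v minimizes |A v|; take the smallest right
        # singular vector (points near infinity are not handled here).
        _, _, vt = np.linalg.svd(A)
        v = vt[-1]
        return v[:2] / v[2]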


2006, Vol 45 (10), pp. 1450-1464
Author(s): Sandra E. Yuter, David E. Kingsmill, Louisa B. Nance, Martin Löffler-Mang

Abstract Ground-based measurements of particle size and fall speed distributions using a Particle Size and Velocity (PARSIVEL) disdrometer are compared among samples obtained in mixed precipitation (rain and wet snow) and rain in the Oregon Cascade Mountains and in dry snow in the Rocky Mountains of Colorado. Coexisting rain and snow particles are distinguished using a classification method based on their size and fall speed properties. The bimodal distribution of the particles’ joint fall speed–size characteristics at air temperatures from 0.5° to 0°C suggests that wet-snow particles quickly make a transition to rain once melting has progressed sufficiently. As air temperatures increase to 1.5°C, the reduction in the number of very large aggregates with a diameter > 10 mm coincides with the appearance of rain particles larger than 6 mm. In this setting, very large raindrops appear to be the result of aggregates melting with minimal breakup rather than formation by coalescence. In contrast to dry snow and rain, the fall speed for wet snow has a much weaker correlation between increasing size and increasing fall speed. Wet snow has a larger standard deviation of fall speed (120%–230% relative to dry snow) for a given particle size. The average fall speed for observed wet-snow particles with a diameter ≥ 2.4 mm is 2 m s−1 with a standard deviation of 0.8 m s−1. The large standard deviation is likely related to the coexistence of particles of similar physical size with different percentages of melting. These results suggest that different particle sizes are not required for aggregation since wet-snow particles of the same size can have different fall speeds. Given the large standard deviation of fall speeds in wet snow, the collision efficiency for wet snow is likely larger than that of dry snow. For particle sizes between 1 and 10 mm in diameter within mixed precipitation, rain constituted 1% of the particles by volume within the isothermal layer at 0°C and 4% of the particles by volume for the region just below the isothermal layer where air temperatures rise from 0° to 0.5°C. As air temperatures increased above 0.5°C, the relative proportions of rain versus snow particles shift dramatically and raindrops become dominant. The value of 0.5°C for the sharp transition in volume fraction from snow to rain is slightly lower than the range from 1.1° to 1.7°C often used in hydrological models.
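
A size-fall-speed classification of this kind can be sketched by comparing each particle's measured fall speed against an empirical raindrop fall-speed curve; the sketch below uses the Atlas et al. (1973) relation and an illustrative tolerance band rather than the thresholds of the study:

    import numpy as np

    def raindrop_fall_speed(d_mm):
        """Empirical terminal fall speed of raindrops (m/s), Atlas et al. (1973)."""
        return 9.65 - 10.3 * np.exp(-0.6 * d_mm)

    def classify_particles(diam_mm, speed_ms, tolerance=0.4):
        """Label a particle 'rain' if its fall speed lies within a fractional
        tolerance of the raindrop curve, otherwise 'snow'. The tolerance is
        illustrative, not the threshold used in the paper."""
        expected = raindrop_fall_speed(np.asarray(diam_mm))
        is_rain = np.asarray(speed_ms) > (1.0 - tolerance) * expected
        return np.where(is_rain, "rain", "snow")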


Author(s): L. Zhang, P. van Oosterom, H. Liu

Abstract. Point clouds have become one of the most popular sources of data in geospatial fields due to their availability and flexibility. However, because of the large amount of data and the limited resources of mobile devices, the use of point clouds in mobile Augmented Reality applications is still quite limited. Many current mobile AR applications of point clouds lack fluent interaction with users. In our paper, a cLoD (continuous level-of-detail) method is introduced to considerably filter the number of points to be rendered, together with an adaptive point size rendering strategy, thus improving the rendering performance and removing visual artifacts of mobile AR point cloud applications. Our method uses a cLoD model that has an ideal distribution over LoDs, with which unnecessary points can be removed without the sudden changes in density present in the commonly used discrete level-of-detail approaches. In addition, the camera position, the camera orientation and the distance from the camera to the point cloud model are taken into consideration as well. With our method, good interactive visualization of point clouds can be realized in the mobile AR environment, with both good visual quality and reasonable resource consumption.
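
The continuous-LoD idea can be sketched on the CPU as a per-point test: each point carries a continuous level value, and a threshold derived from its distance to the camera decides whether it is rendered and at what size. The constants and the linear fall-off below are illustrative; the actual method uses a specific cLoD distribution and runs in the rendering pipeline:

    import numpy as np

    def clod_filter(points, levels, cam_pos, near=1.0, far=30.0,
                    max_level=12.0, base_point_size=8.0):
        """Keep points whose continuous LoD level is below a distance-dependent
        threshold, and assign larger point sizes to nearer points.
        `levels` holds a per-point continuous LoD value in [0, max_level];
        all constants are illustrative."""
        dist = np.linalg.norm(points - cam_pos, axis=1)
        # Nearby points get a high level threshold (dense), far points a low one.
        t = np.clip((dist - near) / (far - near), 0.0, 1.0)
        threshold = max_level * (1.0 - t)
        keep = levels <= threshold
        point_size = base_point_size * (1.0 - t[keep]) + 1.0  # adaptive point size
        return points[keep], point_size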


2021, Vol 11 (1)
Author(s): Gregor Luetzenburg, Aart Kroon, Anders A. Bjørk

Abstract. Traditionally, topographic surveying in the earth sciences requires high financial investment, elaborate logistics, complicated training of staff and extensive data processing. Recently, off-the-shelf drones with optical sensors have already reduced the cost of obtaining a high-resolution dataset of the Earth's surface considerably. Nevertheless, the costs and complexity associated with topographic surveying are still high. In 2020, Apple Inc. released the iPad Pro 2020 and the iPhone 12 Pro with novel built-in LiDAR sensors. Here we investigate the basic technical capabilities of the LiDAR sensors and test their application at a coastal cliff in Denmark. The results are compared to state-of-the-art Structure from Motion Multi-View Stereo (SfM MVS) point clouds. The LiDAR sensors create accurate high-resolution models of small objects with a side length > 10 cm with an absolute accuracy of ± 1 cm. 3D models of a coastal cliff with dimensions of up to 130 × 15 × 10 m are compiled with an absolute accuracy of ± 10 cm. Overall, the versatility in handling outweighs the range limitations, making the Apple LiDAR devices cost-effective alternatives to established techniques in remote sensing, with possible fields of application in a wide range of geo-scientific areas and in teaching.
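
A comparison of this kind typically reduces to nearest-neighbour cloud-to-cloud distances between the LiDAR model and the SfM MVS reference; a minimal Open3D sketch with illustrative file names:

    import numpy as np
    import open3d as o3d

    lidar = o3d.io.read_point_cloud("ipad_lidar_cliff.ply")  # illustrative file names
    sfm = o3d.io.read_point_cloud("sfm_mvs_cliff.ply")

    # Nearest-neighbour distance from every LiDAR point to the SfM MVS reference.
    d = np.asarray(lidar.compute_point_cloud_distance(sfm))
    print(f"mean: {d.mean():.3f} m, RMS: {np.sqrt((d**2).mean()):.3f} m, "
          f"95th percentile: {np.percentile(d, 95):.3f} m")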


Author(s): E. Sánchez-García, A. Balaguer-Beser, R. Taborda, J. E. Pardo-Pascual

Beach and fluvial systems are highly dynamic environments, constantly modified by the action of different natural and anthropic phenomena. To understand their behaviour and to support a sustainable management of these fragile environments, it is very important to have access to cost-effective tools. These methods should be supported by cutting-edge technologies that allow monitoring the dynamics of natural systems with high periodicity and repeatability at different temporal and spatial scales, instead of the tedious and expensive fieldwork that has been carried out to date. The work presented herein analyses the potential of terrestrial photogrammetry to describe beach morphology. Data processing and the generation of high-resolution 3D point clouds and derived DEMs are supported by the commercial Agisoft PhotoScan software. Model validation is done by comparing the elevation differences between the photogrammetric point cloud and GPS data along different beach profiles. The results demonstrate the potential of photogrammetric 3D modelling to monitor morphological changes and natural events, with differences between 6 and 25 cm. Furthermore, the usefulness of these techniques for monitoring the layout of a fluvial system is tested through modelling experiments in a hydraulic pilot channel.
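
Validation against GPS profiles amounts to sampling the photogrammetric DEM at each GPS position and differencing the elevations; a minimal sketch using rasterio, with illustrative file names and an assumed column layout (easting, northing, height):

    import numpy as np
    import rasterio

    # GPS check points along a beach profile: (easting, northing, height).
    gps = np.loadtxt("gps_profile.csv", delimiter=",")  # illustrative file

    with rasterio.open("beach_dem.tif") as dem:         # photogrammetric DEM
        dem_z = np.array([z[0] for z in dem.sample(gps[:, :2])])

    diff = dem_z - gps[:, 2]
    print(f"mean diff: {diff.mean():.3f} m, std: {diff.std():.3f} m, "
          f"max |diff|: {np.abs(diff).max():.3f} m")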


2021, Vol 11 (18), pp. 8590
Author(s): Zhihan Lv, Jing-Yan Wang, Neeraj Kumar, Jaime Lloret

Augmented Reality is a key technology that will facilitate a major paradigm shift in the way users interact with data and has only just recently been recognized as a viable solution for solving many critical needs [...]

