SENSOR EVALUATION FOR CRACK DETECTION IN CONCRETE BRIDGES

Author(s):  
D. Merkle
A. Schmitt
A. Reiterer

Abstract. Bridges are among the most critical traffic infrastructure objects, and it is therefore necessary to monitor them at regular intervals. Today, this monitoring is performed manually by visual inspection. In recent projects, the authors have been developing automated crack detection systems to support the inspector. In this pre-study, different sensors, namely several camera systems for photogrammetry, a laser scanner, and a laser triangulation system, are evaluated for crack detection against a defined minimum required crack width of 0.2 mm. The test object is a blasted concrete plate, 70 cm × 70 cm × 5 cm in size, placed in an outdoor environment. Each sensor's data acquisition yields a point cloud, which makes the results comparable. The point cloud from the chosen laser scanner does not resolve the required crack width, even at a low scanning speed of 1 m/s. The RGB or intensity information of the photogrammetric point clouds, even those based on a low-cost smartphone camera, contains the targeted cracks. The authors advise against using only the 3D information of the photogrammetric point clouds for crack detection, due to noise. The laser triangulation system delivers the best results in both intensity and 3D information. The low weight of camera systems makes photogrammetry the preferred method for an unmanned aerial vehicle (UAV). In the future, the authors aim at crack detection based on the 2D images, automated using machine learning, and crack localisation using structure from motion (SfM) or a positioning system.
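
As a rough, back-of-the-envelope illustration of the 0.2 mm requirement stated above (not taken from the paper), the following Python sketch estimates the ground sampling distance a camera would need to resolve such a crack; the camera parameters and the two-pixel sampling criterion are assumptions.

```python
# Minimal sketch (not from the paper): estimate whether a camera setup can
# resolve a 0.2 mm crack, assuming the crack must span at least ~2 pixels.
# All camera parameters below are hypothetical placeholders.

def ground_sampling_distance(pixel_pitch_mm: float, focal_length_mm: float,
                             distance_mm: float) -> float:
    """Object-space size of one pixel (GSD) for a pinhole camera."""
    return pixel_pitch_mm * distance_mm / focal_length_mm

MIN_CRACK_WIDTH_MM = 0.2   # requirement stated in the abstract
PIXELS_PER_CRACK = 2.0     # assumed sampling criterion

# Hypothetical smartphone-class camera: 1.4 um pixels, 4.3 mm focal length,
# imaging the concrete plate from a 30 cm stand-off distance.
gsd = ground_sampling_distance(pixel_pitch_mm=0.0014,
                               focal_length_mm=4.3,
                               distance_mm=300.0)
print(f"GSD at 0.3 m: {gsd:.3f} mm/pixel")
print("0.2 mm crack resolvable:", gsd * PIXELS_PER_CRACK <= MIN_CRACK_WIDTH_MM)
```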

Sensors
2020
Vol 21 (1)
pp. 201
Author(s):
Michael Bekele Maru
Donghwan Lee
Kassahun Demissie Tola
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding, while also aiding in the visualization, of how a structure reacts to any disturbance. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the crucial ways by which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering techniques, which are commonly used in image processing, to the point cloud data in order to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the outputs from a linear variable differential transformer sensor, which was mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above, because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
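
The bilateral filtering step mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation; it applies a one-dimensional bilateral filter to point heights sampled along a beam axis and derives a simple mid-span deflection estimate. All kernel widths, noise levels, and geometry are assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' code): bilateral filtering of point-cloud
# heights along a beam axis, followed by a simple mid-span deflection estimate.

def bilateral_filter_1d(x, z, sigma_s=0.05, sigma_r=0.002):
    """Smooth heights z sampled at positions x while preserving sharp steps.

    sigma_s: spatial kernel width in metres (assumed)
    sigma_r: range kernel width in metres (assumed)
    """
    z_f = np.empty_like(z)
    for i in range(len(z)):
        w = np.exp(-(x - x[i])**2 / (2 * sigma_s**2)
                   - (z - z[i])**2 / (2 * sigma_r**2))
        z_f[i] = np.sum(w * z) / np.sum(w)
    return z_f

# Synthetic beam profiles before/after loading (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 400)                        # beam axis [m]
z_unloaded = rng.normal(0, 0.0005, x.size)            # 0.5 mm sensor noise
z_loaded = -0.003 * np.sin(np.pi * x / 2.0) + rng.normal(0, 0.0005, x.size)

mid = np.argmin(np.abs(x - 1.0))                      # mid-span sample
deflection = (bilateral_filter_1d(x, z_unloaded)[mid]
              - bilateral_filter_1d(x, z_loaded)[mid])
print(f"Estimated mid-span deflection: {deflection * 1000:.2f} mm")
```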


2019
Vol 93 (3)
pp. 411-429
Author(s):
Maria Immacolata Marzulli
Pasi Raumonen
Roberto Greco
Manuela Persia
Patrizia Tartarino

Abstract Methods for the three-dimensional (3D) reconstruction of forest trees have been suggested for data from active and passive sensors. Laser scanner technologies have become popular in the last few years, despite their high costs. With the improvements in photogrammetric algorithms (e.g. structure from motion, SfM), photographs have become a new low-cost source of 3D point clouds. In this study, we use images captured by a smartphone camera to calculate dense point clouds of a forest plot using SfM. Eighteen point clouds were produced by changing the densification parameters (Image scale, Point density, Minimum number of matches) in order to investigate their influence on point cloud quality. In order to estimate diameter at breast height (d.b.h.) and stem volumes, we developed an automatic method that extracts the stems from the point cloud and then models them with cylinders. The results show that Image scale is the most influential parameter in terms of identifying and extracting trees from the point clouds. The best performance of cylinder modelling from point clouds, compared to field data, had an RMSE of 1.9 cm and 0.094 m³ for d.b.h. and volume, respectively. Thus, for forest management and planning purposes, it is possible to use our photogrammetric and modelling methods to measure d.b.h., stem volume and possibly other forest inventory metrics, rapidly and without felling trees. The proposed methodology significantly reduces working time in the field, using ‘non-professional’ instruments and automating estimates of dendrometric parameters.
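
One step of such a pipeline can be sketched as follows (this is not the authors' code): estimating d.b.h. by slicing the stem point cloud at breast height and fitting a circle to the slice. The algebraic Kåsa fit used here is an assumed stand-in for the cylinder modelling described above; the breast height, slice thickness, and synthetic stem are assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): estimate d.b.h. from a
# stem point cloud by slicing it at breast height (1.3 m, assumed) and
# fitting a circle to the x/y coordinates with the algebraic Kasa method.

def fit_circle(xy):
    """Least-squares (Kasa) circle fit; returns centre (cx, cy) and radius."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

def dbh_from_points(points, breast_height=1.3, slice_thickness=0.1):
    """Diameter at breast height [m] from an Nx3 stem point cloud."""
    z = points[:, 2]
    mask = np.abs(z - breast_height) < slice_thickness / 2
    _, _, r = fit_circle(points[mask, :2])
    return 2 * r

# Synthetic stem: 0.15 m radius cylinder with 5 mm noise (illustrative only).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
z = rng.uniform(0, 3, 2000)
pts = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta), z])
pts[:, :2] += rng.normal(0, 0.005, (2000, 2))
print(f"Estimated d.b.h.: {dbh_from_points(pts) * 100:.1f} cm")  # approx. 30 cm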


Author(s):  
J. Chen
O. E. Mora
K. C. Clarke

Abstract. In recent years, growing public interest in three-dimensional technology has led to the emergence of affordable platforms that can capture 3D scenes for use in a wide range of consumer applications. These platforms are often widely available, inexpensive, and can potentially find dual use in taking measurements of indoor spaces for creating indoor maps. Their affordability, however, usually comes at the cost of reduced accuracy and precision, which becomes more apparent when these instruments are pushed to their limits to scan an entire room. The point cloud measurements they produce often exhibit systematic drift and random noise that can make performing comparisons with accurate data difficult, akin to trying to compare a fuzzy trapezoid to a perfect square with sharp edges. This paper outlines a process for assessing the accuracy and precision of these imperfect point clouds in the context of indoor mapping by integrating techniques such as the extended Gaussian image, iterative closest point registration, and histogram thresholding. A case study is provided at the end to demonstrate the use of this process for evaluating the performance of the Scanse Sweep 3D, an ultra-low cost panoramic laser scanner.
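
A minimal sketch of this kind of assessment is given below; it is not the authors' pipeline. It assumes Open3D as the point cloud library and placeholder file names, registers the scan to a reference cloud with ICP, and applies a simple histogram threshold to the point-to-reference distances.

```python
import numpy as np
import open3d as o3d  # assumed library choice, not named in the paper

# Minimal sketch of the kind of accuracy assessment described above (not the
# authors' pipeline): register a noisy scan to a reference model with ICP,
# then threshold the histogram of point-to-reference distances to separate
# gross errors/drift from random noise. File names are placeholders.

scan = o3d.io.read_point_cloud("sweep_scan.ply")          # hypothetical file
reference = o3d.io.read_point_cloud("reference_room.ply")  # hypothetical file

# (An extended-Gaussian-image step, clustering estimated normals to find the
#  dominant wall planes, could precede ICP; it is omitted in this sketch.)

# Point-to-point ICP with an identity initial guess and a 5 cm (assumed)
# maximum correspondence distance.
result = o3d.pipelines.registration.registration_icp(
    scan, reference, 0.05, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
scan.transform(result.transformation)

# Distance of each registered scan point to the reference cloud.
d = np.asarray(scan.compute_point_cloud_distance(reference))

# Histogram thresholding: points beyond the chosen bin edge are treated as
# outliers; the remainder characterises the sensor's precision.
hist, edges = np.histogram(d, bins=50)
idx = np.argmax(np.cumsum(hist) > 0.95 * d.size)   # 95th percentile, assumed
threshold = edges[idx + 1]
print(f"ICP fitness: {result.fitness:.3f}, RMSE: {result.inlier_rmse * 1000:.1f} mm")
print(f"Precision (std of inliers): {d[d < threshold].std() * 1000:.1f} mm")
```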


Author(s):  
E. Lachat
T. Landes
P. Grussenmeyer

The combination of data coming from multiple sensors is increasingly applied in remote sensing (multi-sensor imagery), but also in cultural heritage and robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a higher level of detail in the final model, an adapted acquisition protocol may also provide further benefits, such as time savings. The paper aims to analyse whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.
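
A minimal sketch of one part of such a combination (not the authors' workflow): estimating the rigid transform that places a Kinect v2 point cloud into the TLS coordinate frame from a few corresponding target points, using the standard SVD-based (Kabsch/Horn) solution. The target coordinates below are hypothetical.

```python
import numpy as np

# Minimal sketch (not the authors' workflow): estimate the rigid transform
# that maps Kinect v2 coordinates into the TLS frame from a few corresponding
# target points, using the SVD-based Kabsch/Horn solution.

def rigid_transform(src, dst):
    """Return R (3x3) and t (3,) such that R @ src_i + t approximates dst_i."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # correct a possible reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical corresponding targets measured in both frames (metres).
kinect_targets = np.array([[0.1, 0.2, 1.5], [0.9, 0.1, 1.4],
                           [0.5, 0.8, 1.6], [0.2, 0.9, 1.3]])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
tls_targets = kinect_targets @ R_true.T + np.array([2.0, 3.0, 0.5])

R, t = rigid_transform(kinect_targets, tls_targets)
# To merge the clouds: kinect_in_tls = (R @ kinect_cloud.T).T + t
print("Rotation:\n", np.round(R, 3), "\nTranslation:", np.round(t, 3))
```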


Author(s):  
J. Kern
M. Weinmann
S. Wursthorn

After scanning or reconstructing the geometry of objects, we need to inspect the result of our work. Are there any parts missing? Is every detail covered in the desired quality? We typically do this by looking at the resulting point clouds or meshes of our objects on-screen. What if we could see the information visualized directly on the object itself? Augmented reality is the generic term for bringing virtual information into our real environment. In our paper, we show how we can project any 3D information, such as thematic visualizations or specific monitoring information referenced to our object, onto the object's surface itself, thus augmenting it with additional information. For small objects that could, for instance, be scanned in a laboratory, we propose a low-cost method involving a projector-camera system to solve this task. The user only needs a calibration board with coded fiducial markers to calibrate the system and to estimate the projector's pose later on for projecting textures with information onto the object's surface. Changes within the projected 3D information or of the projector's pose are applied in real time. Our results clearly show that such a simple setup delivers good quality of the augmented information.
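
The pose estimation and projection steps can be sketched as follows; this is not the authors' system, and OpenCV, the intrinsics, and the marker-corner correspondences below are assumptions. The sketch recovers the projector pose from known board points with solvePnP and re-projects a 3D annotation point into projector pixels.

```python
import numpy as np
import cv2  # OpenCV is assumed here; the paper does not name its software

# Minimal sketch (not the authors' system): estimate the pose of a calibrated
# projector relative to a calibration board with known marker corner
# positions, then project 3D annotation points into projector pixels.
# Marker detection is omitted; image_points stand in for the detected 2D
# corners of the coded fiducial markers (hypothetical values).

# Intrinsics of the (pre-calibrated) projector, hypothetical values.
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

# Known 3D marker corner positions on the board (board frame, metres)
# and their corresponding 2D detections (hypothetical).
object_points = np.array([[0.00, 0.00, 0.0], [0.20, 0.00, 0.0],
                          [0.20, 0.20, 0.0], [0.00, 0.20, 0.0],
                          [0.10, 0.05, 0.0], [0.05, 0.15, 0.0]])
image_points = np.array([[420.0, 310.0], [1480.0, 330.0],
                         [1460.0, 900.0], [430.0, 880.0],
                         [960.0, 470.0], [690.0, 760.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)

# Re-project an arbitrary 3D annotation point (e.g. the anchor of a
# monitoring value on the object surface, given in the board frame).
annotation_3d = np.array([[0.10, 0.10, 0.02]])
pixels, _ = cv2.projectPoints(annotation_3d, rvec, tvec, K, dist)
print("Project annotation at pixel:", pixels.reshape(-1, 2))
```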


Author(s):  
Atticus E. L. Stovall
Jeff W Atkins

The increasingly affordable price point of terrestrial laser scanners has led to a democratization of instrument availability, but the most common low-cost instruments have yet to be compared in terms of their consistency in measuring forest structural attributes. Here, we compared two low-cost terrestrial laser scanners (TLS): the Leica BLK360 and the Faro Focus 120 3D. We evaluate the instruments in terms of point cloud quality, forest inventory estimates, tree-model reconstruction, and foliage profile reconstruction. Our direct comparison of the point clouds showed reduced noise in filtered Leica data. Tree diameter and height were consistent across instruments (4.4% and 1.4% error, respectively). Volumetric tree models were less consistent across instruments, with ~29% bias, depending on model reconstruction quality. In the process of comparing foliage profiles, we conducted a sensitivity analysis of factors affecting foliage profile estimates, showing a minimal effect from instrument maximum range (for forests less than ~50 m in height) and surprisingly little impact from degraded scan resolution. Filtered unstructured TLS point clouds must be artificially re-gridded to provide accurate foliage profiles. The factors evaluated in this comparison point towards necessary considerations for future low-cost laser scanner development and application in detecting forest structural parameters.
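
One of the quantities compared above, the vertical foliage profile, can be sketched as follows (this is not the authors' method): the point cloud is first re-gridded into voxels so that the range-dependent density of the unstructured TLS data does not bias the profile, and occupied voxels are then counted per height bin. Voxel size, bin size, and the synthetic canopy are assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' method): a simple vertical foliage profile
# from an Nx3 TLS point cloud. The cloud is re-gridded into voxels so that
# the unstructured, range-dependent point density does not bias the result;
# the profile is the normalised count of occupied voxels per height bin.

def foliage_profile(points, voxel=0.25, z_bin=1.0):
    """Occupied-voxel fractions per height bin (sizes in metres, assumed)."""
    # Re-grid: keep one sample per occupied voxel.
    keys = np.unique(np.floor(points / voxel).astype(np.int64), axis=0)
    z_voxel = (keys[:, 2] + 0.5) * voxel
    top = np.ceil(z_voxel.max() / z_bin) * z_bin
    counts, edges = np.histogram(z_voxel, bins=np.arange(0.0, top + z_bin, z_bin))
    return counts / max(counts.sum(), 1), edges

# Illustrative synthetic canopy: understory/stems plus a crown layer at 10-20 m.
rng = np.random.default_rng(1)
pts = np.vstack([
    rng.uniform([0, 0, 0], [30, 30, 3], (20000, 3)),
    rng.uniform([0, 0, 10], [30, 30, 20], (80000, 3)),
])
profile, edges = foliage_profile(pts)
for p, lo in zip(profile, edges[:-1]):
    print(f"{lo:4.0f}-{lo + 1:.0f} m: {p:.3f}")
```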


Author(s):  
S. Hosseinyalamdary
A. Yilmaz

Laser scanner point clouds have been emerging in photogrammetry and computer vision for high-level tasks such as object tracking, object recognition, and scene understanding. However, low-cost laser scanners are noisy, sparse, and prone to systematic errors. This paper proposes a novel 3D super-resolution approach to reconstruct the surface of the objects in the scene. The method works on sparse, unorganized point clouds and has superior performance over other surface recovery approaches. Since the proposed approach uses an anisotropic diffusion equation, it does not deteriorate object boundaries and it preserves the topology of the object.
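
As a simplified stand-in for the point-cloud formulation described above (not the paper's method), the following sketch applies classic Perona-Malik anisotropic diffusion to a 2D range image: the edge-stopping conductance suppresses noise while leaving depth discontinuities, i.e. object boundaries, intact. All parameters and the synthetic depth image are assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's method): Perona-Malik anisotropic diffusion
# on a 2D range (depth) image as a simplified stand-in for the point-cloud
# formulation. Edge-stopping diffusion smooths noise but keeps depth steps.

def anisotropic_diffusion(img, iterations=20, kappa=0.05, dt=0.2):
    """kappa: edge threshold in depth units (assumed); dt: time step."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # Perona-Malik conductance
    for _ in range(iterations):
        # Differences to the four neighbours (periodic borders via np.roll,
        # adequate for this illustration).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Illustrative noisy depth image with a step edge (two objects at 1 m / 2 m).
rng = np.random.default_rng(2)
depth = np.where(np.arange(200)[None, :] < 100, 1.0, 2.0) \
        + rng.normal(0, 0.01, (200, 200))
smoothed = anisotropic_diffusion(depth)
print("Noise std before/after:",
      depth[:, :90].std().round(4), smoothed[:, :90].std().round(4))
print("Step preserved (m):",
      round(float(smoothed[:, 150].mean() - smoothed[:, 50].mean()), 3))
```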


Materials
2021
Vol 14 (18)
pp. 5187
Author(s):
Víctor Meana
Eduardo Cuesta
Braulio J. Álvarez

To ensure that measurements can be made with non-contact metrology technologies, it is necessary to use verification and calibration procedures based on precision artefacts as reference elements. In this context, increasingly accurate but also more cost-effective calibration artefacts are a clear demand in industry. The aim of this work is to demonstrate the feasibility of using low-cost precision spheres as reference artefacts in calibration and verification procedures for non-contact metrological equipment. Specifically, low-cost precision stainless steel spheres are used as reference artefacts. Obviously, for such spheres to be used as standard artefacts, it is necessary to change their optical behavior by removing their high brightness. For this purpose, the spheres are subjected to a manual sandblasting process, which is also a very low-cost process. The equipment used to validate the experiment is a laser triangulation sensor mounted on a Coordinate Measuring Machine (CMM). The CMM touch probe, which is much more accurate, is used as a device for measuring the influence of sandblasting on the spheres. Subsequently, the influence of this post-processing is also checked with the laser triangulation sensor. Ultimately, the improvement in the quality of the point clouds captured by the laser sensor is tested after removing the brightness, which distorts the point clouds and reduces both the number of points and their quality. In addition to the number of points obtained, the parameters used to study the effect of sandblasting on each sphere, both in contact probing and laser scanning, are the measured diameter, the form error, and the standard deviation of the point cloud with respect to the best-fit sphere.
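
The evaluation parameters listed above can be sketched in a few lines (this is not the authors' software): fit a least-squares sphere to the captured points and report the measured diameter, the form error as the peak-to-valley of the radial residuals, and the standard deviation of those residuals about the best-fit sphere. The synthetic sphere data below are illustrative only.

```python
import numpy as np

# Minimal sketch (not the authors' software): algebraic least-squares sphere
# fit and the evaluation parameters named above (diameter, form error,
# standard deviation of the residuals about the best-fit sphere).

def fit_sphere(points):
    """Algebraic least-squares sphere fit; returns centre (3,) and radius."""
    x, y, z = points.T
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    b = x**2 + y**2 + z**2
    (cx, cy, cz, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2 + cz**2)
    return np.array([cx, cy, cz]), r

def sphere_metrics(points):
    centre, r = fit_sphere(points)
    residuals = np.linalg.norm(points - centre, axis=1) - r
    return {"diameter_mm": 2 * r * 1000,
            "form_error_mm": (residuals.max() - residuals.min()) * 1000,
            "std_dev_mm": residuals.std() * 1000}

# Illustrative synthetic scan of a 25 mm sphere cap with 10 um noise.
rng = np.random.default_rng(3)
u = rng.uniform(0, 2 * np.pi, 50000)
v = rng.uniform(0.2, np.pi / 2, 50000)
pts = 0.0125 * np.column_stack([np.cos(u) * np.sin(v),
                                np.sin(u) * np.sin(v),
                                np.cos(v)])
pts += rng.normal(0, 10e-6, pts.shape)
print(sphere_metrics(pts))
```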


Author(s):  
A. Murtiyoso
P. Grussenmeyer
S. Guillemin
G. Prilaux

The Battle of Vimy Ridge was a military engagement between the Canadian Corps and the German Empire during the Great War (1914-1918). In this battle, Canadian troops fought as a single unit and won the day. It marked an important point in Canada's history as a nation. The year 2017 marks the centenary of this battle. In commemoration of this event, the Pas-de-Calais Departmental Council financed a 3D recording mission for one of the underground tunnels (souterraines) used as a refuge by the Canadian soldiers several weeks prior to the battle. A combination of terrestrial laser scanning (TLS) and close-range photogrammetry techniques was employed in order to document not only the souterraine, but also the various carvings and graffiti created by the soldiers on its walls. The resulting point clouds were registered to the French national geodetic system, and then meshed and textured in order to create a precise 3D model of the souterraine. In this paper, the workflow followed during the project, as well as several results, is discussed. Finally, the resulting 3D model was used to create derivative products such as maps, section profiles, and virtual visit videos. The latter helps the dissemination of the 3D information and thus aids in the preservation of the memory of the Great War for Canada.
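
As an illustration of the meshing step mentioned above (not the project's actual toolchain, which the abstract does not name), the following sketch turns a registered point cloud into a triangle mesh with Poisson surface reconstruction in Open3D; file names and parameters are placeholders.

```python
import numpy as np
import open3d as o3d  # assumed tooling; the paper does not name its software

# Minimal sketch (not the project's actual pipeline): mesh a registered point
# cloud of the souterraine by Poisson surface reconstruction, then drop the
# low-density triangles produced in poorly covered areas.

pcd = o3d.io.read_point_cloud("souterraine_registered.ply")   # hypothetical file
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)  # octree depth, assumed

# Remove vertices supported by very few points (e.g. occluded recesses).
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.02))

o3d.io.write_triangle_mesh("souterraine_mesh.ply", mesh)
```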

