Concrete crack detection using context‐aware deep semantic segmentation network

2019, Vol 34 (11), pp. 951-971
Author(s): Xinxiang Zhang, Dinesh Rajan, Brett Story

Sensors, 2019, Vol 19 (19), pp. 4251
Author(s): M. M. Manjurul Islam, Jong-Myon Kim

Visual inspection of massive civil infrastructure is common practice for maintaining its reliability and structural health. However, this procedure relies on human inspectors, requires long inspection times, and depends on the inspectors' subjective and empirical knowledge. To address these limitations, a machine vision-based autonomous crack detection method is proposed using a deep convolutional neural network (DCNN) technique. It consists of a fully convolutional neural network (FCN) with an encoder-decoder framework for semantic segmentation, which performs pixel-wise classification to accurately detect cracks. The main idea is to capture the global context of a scene and determine whether cracks are in the image while also providing a reduced and essential picture of the crack locations. The visual geometry group network (VGGNet), a variant of the DCNN, is employed as the backbone in the proposed FCN for end-to-end training. The efficacy of the proposed FCN method is tested on a publicly available benchmark dataset of concrete crack images. The experimental results indicate that the proposed method is highly effective for concrete crack classification, achieving scores of approximately 92% for both the average recall and the average F1.
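The abstract evaluates pixel-wise classification with recall and F1. As an illustration only (this is not the authors' code, and `pixel_metrics` is a hypothetical helper name), these metrics can be computed from a predicted and a ground-truth binary crack mask like so:

```python
import numpy as np

def pixel_metrics(pred: np.ndarray, truth: np.ndarray):
    """Precision, recall, and F1 for binary crack masks.

    pred, truth: boolean arrays of the same shape; True marks a crack pixel.
    """
    tp = np.logical_and(pred, truth).sum()   # crack pixels found
    fp = np.logical_and(pred, ~truth).sum()  # false alarms
    fn = np.logical_and(~pred, truth).sum()  # missed crack pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

In a segmentation benchmark these per-image scores would then be averaged over the test set to produce figures such as the ~92% reported above.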


Electronics, 2021, Vol 10 (12), pp. 1402
Author(s): Taehee Lee, Yeohwan Yoon, Chanjun Chun, Seungki Ryu

Poor road-surface conditions pose a significant safety risk to vehicle operation, especially in the case of autonomous vehicles. Hence, maintenance of road surfaces will become even more important in the future. With the development of deep learning-based computer image processing technology, artificial intelligence models that evaluate road conditions are being actively researched. However, as the lighting conditions of the road surface vary depending on the weather, model performance may degrade for an image whose brightness falls outside the range of the training images, even for the same road. In this study, a semantic segmentation model with an autoencoder structure was developed for detecting road-surface cracks, along with a CNN-based image-preprocessing model. This setup improves road-surface crack detection by adjusting the image brightness before it is input into the road-crack detection model. When the preprocessing model was applied, the road-crack segmentation model exhibited consistent performance even under varying brightness values.
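The abstract's preprocessing model is a learned CNN; as a much simpler stand-in (an assumption for illustration, not the authors' method), the idea of pulling image brightness back into the range the segmentation model was trained on can be sketched as a linear rescale toward a target mean intensity:

```python
import numpy as np

def normalize_brightness(image: np.ndarray, target_mean: float = 128.0) -> np.ndarray:
    """Linearly rescale a uint8 grayscale image so its mean intensity
    matches target_mean. A crude proxy for the learned CNN preprocessor
    described in the abstract.
    """
    mean = image.mean()
    if mean == 0:
        # Fully black input: return a flat image at the target level.
        return np.full_like(image, int(target_mean))
    scaled = image.astype(np.float64) * (target_mean / mean)
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

A learned preprocessor can adapt locally (shadows, glare) where this global rescale cannot, which is presumably why the authors trained a CNN for the task.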


Sensors, 2021, Vol 21 (5), pp. 1581
Author(s): Xiaolong Chen, Jian Li, Shuowen Huang, Hao Cui, Peirong Liu, ...

Cracks are one of the main distresses that occur on concrete surfaces. Traditional methods for detecting cracks based on two-dimensional (2D) images can be hampered by stains, shadows, and other artifacts, while various three-dimensional (3D) crack-detection techniques using point clouds are less affected in this regard but are limited by the measurement accuracy of the 3D laser scanner. In this study, we propose an automatic crack-detection method that fuses 3D point clouds and 2D images based on an improved Otsu algorithm, which consists of the following four major procedures. First, a high-precision registration of a depth image projected from 3D point clouds and 2D images is performed. Second, pixel-level image fusion is performed, which fuses the depth and gray information. Third, a rough crack image is obtained from the fusion image using the improved Otsu method. Finally, connected-domain labeling and morphological methods are used to finely extract the cracks. Experimentally, the proposed method was tested at multiple scales and on various types of concrete cracks. The results demonstrate that the proposed method can achieve an average precision of 89.0%, recall of 84.8%, and F1 score of 86.7%, performing significantly better than the single-image (average F1 score of 67.6%) and single-point-cloud (average F1 score of 76.0%) methods. Accordingly, the proposed method has high detection accuracy and universality, indicating its wide potential application as an automatic method for concrete-crack detection.
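The third step applies an improved Otsu method to the fused depth/gray image. The abstract does not specify the improvement, but the classic Otsu algorithm it builds on (choose the threshold that maximizes between-class variance of the intensity histogram) can be sketched as follows; treat this as a baseline reference, not the paper's variant:

```python
import numpy as np

def otsu_threshold(image: np.ndarray) -> int:
    """Classic Otsu threshold for a uint8 grayscale image
    (e.g. a fused depth/gray image): returns the level t that
    maximizes the between-class variance of {<= t, > t}.
    """
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                 # class-0 probability up to level t
    mu = np.cumsum(prob * np.arange(256))   # cumulative intensity mean
    mu_t = mu[-1]                           # global mean
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # empty classes contribute nothing
    return int(np.argmax(sigma_b))
```

Pixels above the returned threshold form the rough crack image, which the paper then refines with connected-domain labeling and morphological operations.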


Sensors, 2020, Vol 20 (17), pp. 4980
Author(s): Tung-Ching Su

The techniques of concrete crack detection, as well as assessments based on thermography coupled with ultrasound, have been presented in many works; however, they have generally needed an additional source of thermal infrared (TIR) radiance and have only been applied in laboratories. Considering the accessibility of thermal infrared cameras, a TIR camera (NEC F30W) was employed to detect cracking in the concrete wall of a historic house with a western architectural style in Kinmen, Taiwan, based on the TIR radiances of cracking. An operation procedure involving a series of image processing and statistical analysis processes was designed to evaluate the performance of the TIR camera in the assessment of the cracking width. This procedure using multiple measurements was implemented from March to August 2019, and the t-tests indicated that the temperature differences between the inside and outside of the concrete cracks remained insignificant as the temperature or relative humidity (RH) in the subtropical climate rose. The experimental results of the operation procedure indicated that the maximum focusing range, which is related to the size of the sensor array, and the minimum detectable crack width of a TIR camera should be 1.0 m and 6.0 mm, respectively, in order to derive a linear regression model with a determination coefficient R2 of 0.733 to estimate the cracking widths based on the temperature gradients. The validation results showed that there was an approximate R2 value of 0.8 and a total root mean square error of ±2.5 mm between the cracking width estimations and the observations.
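The core of this assessment is an ordinary least-squares line relating temperature gradient to crack width, judged by the determination coefficient R2. A minimal sketch of that fit (illustrative names and units assumed; not the author's code or data) looks like this:

```python
import numpy as np

def fit_width_model(gradients: np.ndarray, widths: np.ndarray):
    """Fit width = a * gradient + b by least squares and report R^2.

    gradients: temperature gradients at the crack (illustrative units);
    widths: observed crack widths in mm.
    """
    a, b = np.polyfit(gradients, widths, 1)      # slope and intercept
    pred = a * gradients + b
    ss_res = np.sum((widths - pred) ** 2)        # residual sum of squares
    ss_tot = np.sum((widths - widths.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                   # determination coefficient
    return a, b, r2
```

An R2 of 0.733, as reported above, means the fitted line explains roughly 73% of the variance in the observed crack widths.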

