Deep learning–based autonomous concrete crack evaluation through hybrid image scanning

2019 ◽  
Vol 18 (5-6) ◽  
pp. 1722-1737 ◽  
Author(s):  
Keunyoung Jang ◽  
Namgyu Kim ◽  
Yun-Kyu An

This article proposes a deep learning–based autonomous concrete crack detection technique using hybrid images. The hybrid images, which combine vision and infrared thermography images, improve crack detectability while minimizing false alarms. In particular, large-scale concrete infrastructure such as bridges and dams can be effectively inspected by spatially scanning an unmanned vehicle–mounted hybrid imaging system comprising a vision camera, an infrared camera, and a continuous-wave line laser. However, the expert-dependent decision-making for crack identification that is widely used in industrial practice is often cumbersome, time-consuming, and unreliable. As the target concrete structure gets larger, automated decision-making becomes more desirable from a practical point of view. The proposed technique achieves automated crack identification and visualization by transfer learning of a well-trained deep convolutional neural network, GoogLeNet, while retaining the advantages of the hybrid images. The proposed technique is experimentally validated using a lab-scale concrete specimen with cracks of various sizes. The test results reveal that macro- and microcracks are automatically visualized while minimizing false alarms.
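As an illustration of the transfer-learning step described above, the following is a minimal sketch assuming PyTorch and torchvision; the frozen-backbone strategy, two-class head, and hyperparameters are illustrative choices rather than the authors' actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load GoogLeNet pretrained on ImageNet and replace the classifier head
# with a two-class output (crack / no crack).
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.aux_logits = False                         # skip auxiliary classifiers during fine-tuning
for param in model.parameters():
    param.requires_grad = False                  # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, 2)    # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on a batch of hybrid-image patches (hypothetical inputs)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```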

Author(s):  
Limu Chen ◽  
Ye Xia ◽  
Dexiong Pan ◽  
Chengbin Wang

Deep learning–based navigational object detection is discussed with respect to an active monitoring system for vessel–bridge anti-collision. The motion-based object detection methods widely used in existing anti-collision monitoring systems cannot cope with complicated and changeable waterways because of their limitations in accuracy, robustness, and efficiency. The proposed video surveillance system contains six modules: image acquisition, detection, tracking, prediction, risk evaluation, and decision-making; the detection module is discussed in detail. A vessel-exclusive dataset with a large number of image samples is established for neural network training, and an SSD (Single Shot MultiBox Detector)-based object detection model with both universality and pertinence is obtained through sample filtering, data augmentation, and large-scale optimization, making it capable of stable and intelligent vessel detection. Comparison with conventional methods indicates that the proposed deep learning method shows remarkable advantages in robustness, accuracy, efficiency, and intelligence. An in-situ test was carried out at the Songpu Bridge in Shanghai, and the results show that the method is suitable for long-term monitoring and for providing information to support further analysis and decision-making.
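For the detection module, a hedged inference sketch is given below using torchvision's stock SSD implementation; the paper's vessel-specific weights and class list are not public, so COCO-pretrained weights and a generic score threshold stand in.

```python
import torch
from PIL import Image
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights
from torchvision.transforms.functional import to_tensor

# COCO-pretrained SSD as a stand-in for the vessel-exclusive model.
model = ssd300_vgg16(weights=SSD300_VGG16_Weights.COCO_V1).eval()

def detect_objects(frame_path, score_threshold=0.5):
    """Return boxes, labels, and scores for detections above the threshold."""
    image = to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]       # torchvision returns one dict per image
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]
```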


2020 ◽  
pp. 147592172094006
Author(s):  
Lingxin Zhang ◽  
Junkai Shen ◽  
Baijie Zhu

Cracking is an important indicator for evaluating the damage level of concrete structures. However, traditional crack detection algorithms are complex to implement and generalize poorly. Existing crack detection algorithms based on deep learning are mostly window-level algorithms with low pixel precision. In this article, the CrackUnet model based on deep learning is proposed to solve these problems. First, crack images collected from the lab, earthquake sites, and the Internet are resized, labeled manually, and augmented to make a dataset (1200 sub-images of 256 × 256 × 3 resolution in total). Then, an improved Unet-based method called CrackUnet is proposed for automated pixel-level crack detection. A new loss function, the generalized dice loss, is adopted to detect cracks more accurately. How the dataset size and model depth affect training time, detection accuracy, and speed is also investigated. The proposed methods are evaluated on the test dataset and a previously published dataset. The highest results reach 91.45%, 88.67%, and 90.04% on the test dataset and 98.72%, 92.84%, and 95.44% on the CrackForest Dataset for precision, recall, and F1 score, respectively. Comparison of detection accuracy, training time, and dataset characteristics shows that the CrackUnet model outperforms the other methods. Furthermore, six images with complicated noise are used to investigate the robustness and generalization of CrackUnet models.
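A minimal sketch of a generalized Dice loss of the kind named in the abstract is shown below, assuming PyTorch and one-hot targets; it follows the common formulation (per-class weights inversely proportional to squared class volume) and is not taken from the paper's code.

```python
import torch

def generalized_dice_loss(probs, targets, eps=1e-6):
    """
    probs:   (N, C, H, W) softmax probabilities.
    targets: (N, C, H, W) one-hot ground truth.
    """
    dims = (0, 2, 3)                                # sum over batch and spatial dimensions
    weights = 1.0 / (targets.sum(dims) ** 2 + eps)  # rare classes (cracks) receive large weights
    intersection = (weights * (probs * targets).sum(dims)).sum()
    union = (weights * (probs + targets).sum(dims)).sum()
    return 1.0 - 2.0 * intersection / (union + eps)
```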


2019 ◽  
Vol 10 (2) ◽  
pp. 226-250 ◽  
Author(s):  
Tony Cox

Managing large-scale, geographically distributed, and long-term risks arising from diverse underlying causes – ranging from poverty to underinvestment in protecting against natural hazards or failures of sociotechnical, economic, and financial systems – poses formidable challenges for any theory of effective social decision-making. Participants may have different and rapidly evolving local information and goals, perceive different opportunities and urgencies for actions, and be differently aware of how their actions affect each other through side effects and externalities. Six decades ago, political economist Charles Lindblom viewed “rational-comprehensive decision-making” as utterly impracticable for such realistically complex situations. Instead, he advocated incremental learning and improvement, or “muddling through,” as both a positive and a normative theory of bureaucratic decision-making when costs and benefits are highly uncertain. But sparse, delayed, uncertain, and incomplete feedback undermines the effectiveness of collective learning while muddling through, even if all participant incentives are aligned; it is no panacea. We consider how recent insights from machine learning – especially, deep multiagent reinforcement learning – formalize aspects of muddling through and suggest principles for improving human organizational decision-making. Deep learning principles adapted for human use can not only help participants in different levels of government or control hierarchies manage some large-scale distributed risks, but also show how rational-comprehensive decision analysis and incremental learning and improvement can be reconciled and synthesized.


2018 ◽  
Author(s):  
Anisha Keshavan ◽  
Jason D. Yeatman ◽  
Ariel Rokem

Research in many fields has become increasingly reliant on large and complex datasets. “Big Data” holds untold promise to rapidly advance science by tackling new questions that cannot be answered with smaller datasets. While powerful, research with Big Data poses unique challenges, as many standard lab protocols rely on experts examining each one of the samples. This is not feasible for large-scale datasets because manual approaches are time-consuming and hence difficult to scale. Meanwhile, automated approaches lack the accuracy of examination by highly trained scientists, and this may introduce major errors, sources of noise, and unforeseen biases into these large and complex datasets. Our proposed solution is to 1) start with a small, expertly labelled dataset, 2) amplify labels through web-based tools that engage citizen scientists, and 3) train machine learning on amplified labels to emulate expert decision making. As a proof of concept, we developed a system to quality control a large dataset of three-dimensional magnetic resonance images (MRI) of human brains. An initial dataset of 200 brain images labeled by experts was amplified by citizen scientists to label 722 brains, with over 80,000 ratings done through a simple web interface. A deep learning algorithm was then trained to predict data quality, based on a combination of the citizen scientist labels that accounts for differences in the quality of classification by different citizen scientists. In an ROC analysis (on left-out test data), the deep learning network performed as well as a state-of-the-art, specialized algorithm (MRIQC) for quality control of T1-weighted images, each with an area under the curve of 0.99. Finally, as a specific practical application of the method, we explore how brain image quality relates to the replicability of a well-established relationship between brain volume and age over development. Combining citizen science and deep learning can generalize and scale expert decision making; this is particularly important in emerging disciplines where specialized, automated tools do not already exist.
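One plausible way to weight citizen-scientist ratings by rater quality is sketched below; the paper's actual aggregation model is not reproduced here, and the column names and pass/fail encoding are assumptions for illustration.

```python
import pandas as pd

def aggregate_ratings(ratings: pd.DataFrame, gold: pd.DataFrame) -> pd.Series:
    """
    ratings: columns [image_id, rater_id, rating], rating in {0 = fail, 1 = pass}.
    gold:    columns [image_id, label] for the expert-labelled subset.
    Returns a quality-weighted pass probability per image.
    """
    merged = ratings.merge(gold, on="image_id", how="inner")
    # Rater quality = agreement rate with the expert labels.
    quality = (merged["rating"] == merged["label"]).groupby(merged["rater_id"]).mean()
    weights = ratings["rater_id"].map(quality).fillna(quality.mean())
    weighted = ratings.assign(weight=weights).groupby("image_id").apply(
        lambda g: (g["rating"] * g["weight"]).sum() / g["weight"].sum()
    )
    return weighted
```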


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4945 ◽  
Author(s):  
Xiangyang Xu ◽  
Hao Yang

The health monitoring of tunnel structures is vital to the safe operation of railway transportation systems. With increasing tunnel mileage, regular inspection and health monitoring are urgently needed, especially regarding deformation and damage. However, traditional methods of tunnel inspection are time-consuming, expensive, and highly dependent on human subjectivity. In this paper, an automatic tunnel monitoring method is investigated based on image data collected by a moving vision measurement unit consisting of a camera array. Furthermore, geometric modelling and crack inspection algorithms are proposed, in which a robust three-dimensional tunnel model is reconstructed using a B-spline method and crack identification is conducted by means of a Mask R-CNN network. The innovation of this investigation is that we combine robust modelling, which can be applied to deformation analysis, with crack detection, in which a deep learning method is employed to recognize tunnel cracks intelligently from image sensor data. In this study, experiments were conducted on a subway tunnel structure several kilometers long; a robust three-dimensional model was generated and the cracks were identified automatically from the image data. The advantage of this proposal is that the comprehensive information on geometric deformation and crack damage ensures the reliability and improves the accuracy of health monitoring.
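A hedged sketch of adapting an off-the-shelf Mask R-CNN to a single crack class is shown below, following the standard torchvision head-replacement pattern; the backbone, class count, and hidden size are illustrative, not the paper's configuration.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_crack_maskrcnn(num_classes=2):   # background + crack
    """Replace the box and mask heads of a COCO-pretrained Mask R-CNN."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    in_channels_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels_mask, 256, num_classes)
    return model
```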


2020 ◽  
Vol 12 (18) ◽  
pp. 2997 ◽  
Author(s):  
Tianwen Zhang ◽  
Xiaoling Zhang ◽  
Xiao Ke ◽  
Xu Zhan ◽  
Jun Shi ◽  
...  

Ship detection in synthetic aperture radar (SAR) images is becoming a research hotspot. In recent years, with the rise of artificial intelligence, deep learning has come to dominate the SAR ship detection community because of its higher accuracy, faster speed, and reduced human intervention. However, there is still no reliable deep learning SAR ship detection dataset that supports the practical transfer of ship detection to large-scene spaceborne SAR images. To solve this problem, this paper releases a Large-Scale SAR Ship Detection Dataset-v1.0 (LS-SSDD-v1.0) from Sentinel-1, for small ship detection under large-scale backgrounds. LS-SSDD-v1.0 contains 15 large-scale SAR images whose ground truths are labeled by SAR experts with support from the Automatic Identification System (AIS) and Google Earth. To facilitate network training, the large-scale images are directly cut into 9000 sub-images without further processing, which also makes it convenient to present detection results on the original large-scale SAR images. Notably, LS-SSDD-v1.0 has five advantages: (1) large-scale backgrounds, (2) small ship detection, (3) abundant pure backgrounds, (4) a fully automatic detection flow, and (5) numerous, standardized research baselines. Finally, exploiting the abundant pure backgrounds, we also propose a Pure Background Hybrid Training mechanism (PBHT-mechanism) to suppress false alarms from land areas in large-scale SAR images. Experimental results of an ablation study verify the effectiveness of the PBHT-mechanism. LS-SSDD-v1.0 can inspire related scholars to conduct extensive research into SAR ship detection methods with engineering application value, contributing to the progress of SAR intelligent interpretation technology.
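The sub-image cutting step can be pictured with the short sketch below; the 800 × 800 tile size and non-overlapping grid are assumptions for illustration, not a statement of the dataset's exact preprocessing.

```python
import numpy as np

def tile_scene(scene: np.ndarray, tile: int = 800):
    """Yield (row, col, sub_image) for non-overlapping tiles covering a large SAR scene."""
    height, width = scene.shape[:2]
    for r in range(0, height - tile + 1, tile):
        for c in range(0, width - tile + 1, tile):
            yield r, c, scene[r:r + tile, c:c + tile]
```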


2022 ◽  
Vol 4 ◽  
Author(s):  
Lasitha Vidyaratne ◽  
Adam Carpenter ◽  
Tom Powers ◽  
Chris Tennant ◽  
Khan M. Iftekharuddin ◽  
...  

This work investigates the efficacy of deep learning (DL) for classifying C100 superconducting radio-frequency (SRF) cavity faults in the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. CEBAF is a large, high-power continuous-wave recirculating linac that utilizes 418 SRF cavities to accelerate electrons up to 12 GeV. Recent upgrades to CEBAF include the installation of 11 new cryomodules (88 cavities) equipped with a low-level RF system that records RF time-series data from each cavity at the onset of an RF failure. Typically, subject matter experts (SMEs) analyze this data to determine the fault type and identify the cavity of origin. This information is subsequently used to identify failure trends and to implement corrective measures on the offending cavity. Manual inspection of the large-scale time-series data generated by frequent system failures is tedious and time-consuming, which motivates the use of machine learning (ML) to automate the task. This study extends work on a previously developed system based on traditional ML methods (Tennant, Carpenter, Powers, Shabalina Solopova, Vidyaratne, and Iftekharuddin, Phys. Rev. Accel. Beams, 2020, 23, 114601) and investigates the effectiveness of deep learning approaches. The transition to a DL model is driven by the goal of developing a system with inference fast enough that it could be used to predict a fault event and provide actionable information before its onset (on the order of a few hundred milliseconds). Because features are learned rather than explicitly computed, DL offers a potential advantage over traditional ML. Specifically, two seminal DL architecture types are explored: deep recurrent neural networks (RNN) and deep convolutional neural networks (CNN). We provide a detailed analysis of the performance of individual models using an RF waveform dataset built from past operational runs of CEBAF. In particular, the performance of RNN models incorporating long short-term memory (LSTM) is analyzed along with the CNN performance. Furthermore, comparing these DL models with a state-of-the-art fault ML model shows that the DL architectures obtain similar performance for cavity identification, do not perform quite as well for fault classification, but provide an advantage in inference speed.
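As a rough sketch of the RNN branch, the following LSTM classifier with separate cavity and fault-type heads assumes PyTorch; the number of input signals, hidden sizes, and class counts are placeholders rather than the CEBAF system's actual values.

```python
import torch
import torch.nn as nn

class FaultLSTM(nn.Module):
    """Two-head LSTM classifier for multichannel RF waveform windows."""
    def __init__(self, n_signals=16, hidden=64, n_fault_types=8, n_cavities=8):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_signals, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.cavity_head = nn.Linear(hidden, n_cavities)     # which cavity faulted
        self.fault_head = nn.Linear(hidden, n_fault_types)   # what kind of fault

    def forward(self, x):                  # x: (batch, time_steps, n_signals)
        _, (h_n, _) = self.lstm(x)
        features = h_n[-1]                 # final hidden state of the last layer
        return self.cavity_head(features), self.fault_head(features)
```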


2021 ◽  
Author(s):  
Bin Wu ◽  
Yuhong Fan ◽  
Li Mao

To address the uncertainty and complexity of object decision-making and the differences in decision makers' reliabilities, an object decision-making method based on deep learning theory is proposed. However, traditional deep learning approaches optimize their parameters in an "end-to-end" mode, annotating large amounts of data and propagating errors backwards. Such learning can be regarded as a "black box" and is weak in explainability. Explainability means that an algorithm gives a clear summary of a particular task and connects it to defined principles in the human world. This paper proposes an explainable attention model consisting of a channel attention module and a spatial attention module. The proposed module derives attention maps along the channel dimension and the spatial dimension, respectively, and the input features are then selectively learned according to their importance. For the channel attention, a higher weight indicates a stronger correlation and therefore a channel that deserves more attention. The main function of the spatial attention is to capture the most informative part of the local feature map, complementing the channel attention. We evaluate the proposed module on ImageNet-1K and CIFAR-100. Experimental results show that our algorithm is superior in both accuracy and robustness compared with the state of the art.
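The channel and spatial attention modules described above can be sketched as follows in a CBAM-style layout, assuming PyTorch; the reduction ratio and kernel size are illustrative, and the code is not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):                               # x: (N, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))              # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))               # global max pooling branch
        weights = torch.sigmoid(avg + mx)               # per-channel importance
        return x * weights[:, :, None, None]

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)               # channel-averaged map
        mx = x.amax(dim=1, keepdim=True)                # channel-max map
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask                                 # emphasize informative regions
```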

