Applying Deep Learning to Continuous Bridge Deflection Detected by Fiber Optic Gyroscope for Damage Detection

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 911 ◽  
Author(s):  
Sheng Li ◽  
Xiang Zuo ◽  
Zhengying Li ◽  
Honghai Wang

Improving the accuracy and efficiency of bridge structure damage detection is one of the main challenges in engineering practice. This paper addresses this issue by monitoring continuous bridge deflection with a fiber optic gyroscope and applying a deep-learning algorithm to perform structural damage detection. Using a scale-down bridge model, three damage scenarios and an intact benchmark were simulated. A supervised learning model based on deep convolutional neural networks was proposed. After training under ten-fold cross-validation, the model reached an accuracy of 96.9%, significantly outperforming the four traditional machine learning methods used for comparison (random forest, support vector machine, k-nearest neighbor, and decision tree). Furthermore, the proposed model demonstrated its ability to distinguish damage at structurally symmetrical locations.
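As a rough illustration of this kind of comparison, the sketch below trains a small 1D convolutional network on deflection-curve segments under ten-fold cross-validation alongside the four baseline classifiers named in the abstract. The file names, window shape, and network hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Hedged sketch: a small 1D CNN for classifying deflection-curve segments into
# four classes (intact + three damage scenarios), evaluated with 10-fold CV,
# next to the four baseline classifiers mentioned in the abstract.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
import tensorflow as tf

X = np.load("deflection_segments.npy")   # (n_samples, n_points), hypothetical file
y = np.load("damage_labels.npy")         # (n_samples,), labels 0..3

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Baseline classifiers on the flattened deflection curves.
for name, clf in [("RF", RandomForestClassifier()), ("SVM", SVC()),
                  ("kNN", KNeighborsClassifier()), ("DT", DecisionTreeClassifier())]:
    print(name, cross_val_score(clf, X, y, cv=cv).mean())

def build_cnn(n_points, n_classes=4):
    # A compact 1D convolutional network over the deflection curve.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_points, 1)),
        tf.keras.layers.Conv1D(16, 9, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(32, 9, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

accs = []
for train_idx, test_idx in cv.split(X, y):
    model = build_cnn(X.shape[1])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X[train_idx, :, None], y[train_idx], epochs=50, verbose=0)
    accs.append(model.evaluate(X[test_idx, :, None], y[test_idx], verbose=0)[1])
print("CNN 10-fold accuracy:", np.mean(accs))
```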

2020 ◽  
pp. 147592172093405
Author(s):  
Zilong Wang ◽  
Young-Jin Cha

This article proposes an unsupervised deep learning-based approach to detect structural damage. Supervised deep learning methods have been proposed in recent years, but they require data from an intact structure and from various damage scenarios of the monitored structures for their training processes. However, labeling the training data is typically time-consuming and costly, and collecting sufficient training data from various damage scenarios of infrastructures in service is sometimes impractical. The proposed unsupervised deep learning method, based on a deep auto-encoder with a one-class support vector machine, only uses measured acceleration response data acquired from intact or baseline structures as training data, which enables future structural damage to be detected. The major contributions and novelties of the proposed method are as follows. First, an appropriate deep auto-encoder is carefully designed through comparative studies on the depth of neural networks. Second, the designed deep auto-encoder is used as an extractor to obtain damage-sensitive features from the measured acceleration response data, and a one-class support vector machine is used as a damage detector. Third, experimental and numerical studies validate the high accuracy of the proposed method for damage detection: a 97.4% mean accuracy for a 12-story numerical building model and a 91.0% accuracy for a laboratory-scaled steel bridge. Fourth, the proposed method also detects light damage (i.e. a 10% reduction in stiffness) with 96.9% to 99.0% accuracy, which shows its superior performance compared with the current state of the art. Fifth, it provides stable and more robust damage detection performance with fewer tuning parameters.
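A minimal sketch of the general idea (assuming windowed acceleration data stored in hypothetical .npy files): a deep auto-encoder is trained only on baseline windows, its bottleneck is reused as a feature extractor, and a one-class SVM fitted on those features flags potential damage. Layer sizes, window length, and the nu parameter are illustrative, not the authors' design.

```python
# Hedged sketch: auto-encoder feature extractor + one-class SVM damage detector,
# trained only on intact/baseline acceleration windows.
import numpy as np
import tensorflow as tf
from sklearn.svm import OneClassSVM

X_baseline = np.load("baseline_accel_windows.npy")  # (n_windows, 1024), hypothetical

inp = tf.keras.Input(shape=(X_baseline.shape[1],))
h = tf.keras.layers.Dense(256, activation="relu")(inp)
h = tf.keras.layers.Dense(64, activation="relu")(h)
code = tf.keras.layers.Dense(16, activation="relu", name="bottleneck")(h)
h = tf.keras.layers.Dense(64, activation="relu")(code)
h = tf.keras.layers.Dense(256, activation="relu")(h)
out = tf.keras.layers.Dense(X_baseline.shape[1])(h)

autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_baseline, X_baseline, epochs=100, batch_size=64, verbose=0)

# Reuse the bottleneck as a damage-sensitive feature extractor.
encoder = tf.keras.Model(inp, code)
detector = OneClassSVM(kernel="rbf", nu=0.05)
detector.fit(encoder.predict(X_baseline, verbose=0))

# At inference, windows flagged as -1 are treated as potential damage.
X_new = np.load("new_accel_windows.npy")             # hypothetical monitoring data
flags = detector.predict(encoder.predict(X_new, verbose=0))
print("damaged windows:", np.sum(flags == -1))
```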


Author(s):  
Dang Viet Hung ◽  
Ha Manh Hung ◽  
Pham Hoang Anh ◽  
Nguyen Truong Thang

Timely monitoring of large-scale civil structures is a tedious task demanding expert experience and significant economic resources. Towards a smart monitoring system, this study proposes a hybrid deep learning algorithm for structural damage detection that not only reduces required resources, including computational complexity and data storage, but also handles different damage levels. The technique combines the ability of a Convolutional Neural Network to capture local connectivity with the well-known capacity of a Long Short-Term Memory network to account for long-term dependencies, in a single end-to-end architecture that operates directly on raw acceleration time series without requiring any signal preprocessing step. The proposed approach is applied to a series of experimentally measured vibration data from a three-story frame and successfully provides accurate damage identification results. Furthermore, parametric studies are carried out to demonstrate the robustness of this hybrid deep learning method when facing data corrupted by random noise, which is unavoidable in reality. Keywords: structural damage detection; deep learning algorithm; vibration; sensor; signal processing.
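The sketch below shows one plausible form of such a CNN-LSTM hybrid operating directly on raw multi-channel acceleration windows; the layer sizes, window length, channel count, and class count are assumptions for illustration rather than the architecture used in the paper.

```python
# Hedged sketch of a CNN-LSTM hybrid: 1D convolutions capture local patterns in
# raw acceleration windows, an LSTM models longer-range dependencies, and a
# softmax head outputs the damage class.
import tensorflow as tf

def build_cnn_lstm(window_len, n_channels, n_classes):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window_len, n_channels)),
        tf.keras.layers.Conv1D(32, 7, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.LSTM(64),                     # long-term dependencies
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_cnn_lstm(window_len=2048, n_channels=4, n_classes=5)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(raw_accel_windows, damage_labels, epochs=..., validation_split=0.2)
```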


2021 ◽  
Author(s):  
Karthik Gopalakrishnan ◽  
V. John Mathews

Abstract Machine learning-based health monitoring techniques for damage detection have been widely studied. Most such approaches suffer from two main problems: time-varying environmental and operating conditions, and the difficulty of acquiring training data from damaged structures. Recently, our group presented an unsupervised learning algorithm using support vector data description (SVDD) and an autoencoder to detect damage in time-varying environments without training on data from damaged structures. Though the preliminary experiments produced promising results, the algorithm was computationally expensive. This paper presents an iterative algorithm that learns the state of a structure in time-varying environments online in a computationally efficient manner. The algorithm combines the fast, incremental SVDD (FISVDD) algorithm with signal features based on wavelet packet decomposition (WPD) to create a method that is efficient and detects smaller damage more accurately than the autoencoder-based method. The use of FISVDD makes online learning and adaptive damage detection possible in time-varying environmental and operating conditions (EOC). The WPD-based features also have the potential to provide explainability for the learning algorithm.
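A hedged sketch of the feature side of this approach is given below: wavelet packet decomposition (via PyWavelets) converts each acceleration window into normalised sub-band energies. Since FISVDD is not available in a standard library, a one-class SVM stands in for the boundary learner purely for illustration; the wavelet, decomposition level, and nu value are assumptions.

```python
# Hedged sketch: WPD sub-band energy features + a one-class boundary learner.
# The one-class SVM below is only a stand-in for FISVDD, which would update its
# data description incrementally as new baseline windows arrive.
import numpy as np
import pywt
from sklearn.svm import OneClassSVM

def wpd_energy_features(signal, wavelet="db4", level=4):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return energies / (np.sum(energies) + 1e-12)      # normalised sub-band energies

X_baseline = np.load("baseline_windows.npy")          # (n_windows, n_points), hypothetical
features = np.vstack([wpd_energy_features(x) for x in X_baseline])

boundary = OneClassSVM(kernel="rbf", nu=0.02).fit(features)

x_new = np.load("new_window.npy")                     # hypothetical monitoring window
is_damaged = boundary.predict(wpd_energy_features(x_new).reshape(1, -1))[0] == -1
print("potential damage detected:", is_damaged)
```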


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrew P. Creagh ◽  
Florian Lipsmeier ◽  
Michael Lindemann ◽  
Maarten De Vos

Abstract The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out-of-clinic. Deep Convolutional Neural Networks (DCNN) may capture a richer representation of healthy and MS-related ambulatory characteristics from the raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations associated with remotely generated health data, such as low subject numbers, sparsity, and heterogeneous data, a transfer learning (TL) model from similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. It was demonstrated that fine-tuning TL DCNN HAR models towards MS disease recognition tasks outperformed previous Support Vector Machine (SVM) feature-based methods, as well as DCNN models trained end-to-end, by upwards of 8–15%. A lack of transparency of “black-box” deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Ensuing work therefore aimed to visualise DCNN decisions attributed by relevance heatmaps using Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that were reflective of those who are healthy versus people with MS (PwMS) could begin to be established and understood. Interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics that distinguished MS disability from healthy participants. Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, inform better disease management techniques, and enable the development of better therapeutic interventions.
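The following sketch illustrates the fine-tuning step in general terms: a DCNN pre-trained on a HAR task (a hypothetical saved checkpoint here) is reused as a frozen feature extractor and a new head is trained for the healthy-versus-PwMS task. The checkpoint name, layer split, and learning rate are assumptions, not the authors' pipeline.

```python
# Hedged sketch of transfer learning from a HAR-pretrained DCNN to an MS
# recognition task: freeze the convolutional feature extractor, train a new head.
import tensorflow as tf

har_model = tf.keras.models.load_model("har_dcnn_pretrained.h5")  # hypothetical checkpoint

# Reuse everything up to the penultimate layer as a fixed feature extractor.
feature_extractor = tf.keras.Model(har_model.input, har_model.layers[-2].output)
feature_extractor.trainable = False

inputs = tf.keras.Input(shape=feature_extractor.input_shape[1:])
x = feature_extractor(inputs, training=False)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # healthy vs PwMS
ms_model = tf.keras.Model(inputs, outputs)

ms_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                 loss="binary_crossentropy", metrics=["AUC"])
# ms_model.fit(smartphone_imu_windows, labels, epochs=..., validation_split=0.2)
# Optionally unfreeze the top convolutional blocks afterwards and fine-tune
# with a lower learning rate.
```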


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Tianqi Tu ◽  
Xueling Wei ◽  
Yue Yang ◽  
Nianrong Zhang ◽  
Wei Li ◽  
...  

Abstract Background: Common subtypes seen in Chinese patients with membranous nephropathy (MN) include idiopathic membranous nephropathy (IMN) and hepatitis B virus-related membranous nephropathy (HBV-MN). However, the morphologic differences are not visible under the light microscope in certain renal biopsy tissues. Methods: We propose a deep learning-based framework for processing hyperspectral images of renal biopsy tissue to distinguish IMN from HBV-MN based on the composition of their immune complex deposits. Results: The proposed framework achieves an overall classification accuracy of 95.04%, outperforming support vector machine (SVM)-based algorithms. Conclusion: IMN and HBV-MN can be correctly separated via the deep learning framework using hyperspectral imagery. Our results suggest the potential of the deep learning algorithm as a new method to aid in the diagnosis of MN.
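Purely as an illustration of one way such a framework could be organised (not the authors' implementation), the sketch below defines a small 1D CNN over the spectral dimension that classifies pixel spectra as IMN or HBV-MN; the band count and architecture are assumptions.

```python
# Hedged sketch, not the authors' framework: a small CNN over the spectral
# dimension classifying hyperspectral pixel spectra as IMN or HBV-MN.
import tensorflow as tf

n_bands = 128                                         # assumed number of spectral bands
spectral_cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_bands, 1)),
    tf.keras.layers.Conv1D(16, 7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # IMN vs HBV-MN
])
spectral_cnn.compile(optimizer="adam", loss="binary_crossentropy",
                     metrics=["accuracy"])
# spectral_cnn.fit(pixel_spectra[..., None], subtype_labels, epochs=...)
```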


2020 ◽  
Author(s):  
Zongchen Li ◽  
Wenzhuo Zhang ◽  
Guoxiong Zhou

Abstract Aiming at the difficult problem of extracting tree images from complex backgrounds, we took tree species as the research object and proposed a fast tree image recognition system based on the Caffe platform and deep learning. Within the Caffe framework, an improved Dual-Task CNN model (DCNN) is applied to train the image extractor and classifier to accomplish the dual tasks of image cleaning and tree classification. Compared with traditional classification methods represented by the Support Vector Machine (SVM) and a Single-Task CNN model, the Dual-Task CNN model demonstrates superior classification performance. Then, to further improve recognition accuracy for similar species, Gabor kernels were introduced to extract frequency-domain features of images at different scales and orientations, so as to enhance the texture features of leaf images and improve recognition. The improved model was tested on data sets of similar species. As demonstrated by the results, the improved deep Gabor convolutional neural network (GCNN) is advantageous for tree recognition and similar-tree classification when compared with the Dual-Task CNN classification method. Finally, the recognition results can be displayed on the application's graphical interface. In the graphical interface designed for the Ubuntu system, it is capable of performing such functions as quick reading of and searching for picture files, snapshot, one-key recognition, one-key e
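The Gabor step can be sketched as follows: a bank of Gabor kernels at several scales and orientations is applied to a grayscale leaf image to produce frequency-domain response maps that enhance texture. The kernel sizes and filter parameters below are illustrative assumptions, and the image path is hypothetical.

```python
# Hedged sketch: a Gabor filter bank applied to a leaf image to enhance texture
# features before classification.
import cv2
import numpy as np

def gabor_feature_maps(gray_image, scales=(7, 11, 15), n_orientations=4):
    responses = []
    for ksize in scales:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                        lambd=10.0, gamma=0.5, psi=0,
                                        ktype=cv2.CV_32F)
            responses.append(cv2.filter2D(gray_image, cv2.CV_32F, kernel))
    return np.stack(responses, axis=-1)   # (H, W, scales * orientations)

img = cv2.imread("leaf_sample.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical image
features = gabor_feature_maps(img)
print(features.shape)
# These frequency-domain response maps can then be fed to the convolutional
# classifier alongside (or instead of) the raw image.
```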


2020 ◽  
pp. 35
Author(s):  
M. Campos-Taberner ◽  
F.J. García-Haro ◽  
B. Martínez ◽  
M.A. Gilabert

The use of deep learning techniques for remote sensing applications has recently increased. These algorithms have proven to be successful in the estimation of parameters and the classification of images. However, little effort has been made to make them understandable, leading to their implementation as “black boxes”. This work aims to evaluate the performance and clarify the operation of a deep learning algorithm based on a bidirectional recurrent network of long short-term memory units (2-BiLSTM). Land use classification in the Valencian Community based on Sentinel-2 image time series, in the framework of the common agricultural policy (CAP), is used as an example. It is verified that the accuracy of the deep learning technique is superior (98.6% overall accuracy) to that of other algorithms such as decision trees (DT), k-nearest neighbours (k-NN), neural networks (NN), support vector machines (SVM) and random forests (RF). The performance of the classifier has been studied as a function of time and of the predictors used. It is concluded that, in the study area, the most relevant information used by the network in the classification comes from the images corresponding to summer and from the spectral and spatial information derived from the red and near-infrared bands. These results open the door to new studies in the field of explainable deep learning in remote sensing applications.
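A minimal sketch of a 2-BiLSTM classifier of the kind evaluated is given below: two stacked bidirectional LSTM layers over a per-parcel Sentinel-2 time series followed by a softmax over land-use classes. The sequence length, number of input features, class count, and layer widths are assumptions for illustration.

```python
# Hedged sketch: stacked bidirectional LSTM classifier for Sentinel-2 time series.
import tensorflow as tf

def build_bilstm(n_dates, n_features, n_classes):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_dates, n_features)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_bilstm(n_dates=24, n_features=10, n_classes=15)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(s2_time_series, land_use_labels, epochs=..., validation_split=0.2)
```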


Author(s):  
N. Kerle ◽  
F. Nex ◽  
D. Duarte ◽  
A. Vetrivel

Abstract. Structural disaster damage detection and characterisation is one of the oldest remote sensing challenges, and the utility of virtually every type of active and passive sensor deployed on various air- and spaceborne platforms has been assessed. The proliferation and growing sophistication of UAVs in recent years have opened up many new opportunities for damage mapping, owing to the high spatial resolution, the resulting stereo images and derivatives, and the flexibility of the platform. We have addressed the problem in the context of two European research projects, RECONASS and INACHUS. In this paper we synthesize and evaluate the progress of six years of research focused on advanced image analysis that was driven by progress in computer vision, photogrammetry and machine learning, but also by constraints imposed by the needs of first responders and other civil protection end users. The projects focused on damage to individual buildings caused by seismic activity as well as explosions, and our work centred on the processing of 3D point cloud information acquired from stereo imagery. Initially focusing on the development of both supervised and unsupervised damage detection methods built on advanced texture features and basic classifiers such as Support Vector Machine and Random Forest, the work moved on to the use of deep learning. In particular, the coupling of image-derived features and 3D point cloud information in a Convolutional Neural Network (CNN) proved successful in detecting even subtle damage features. In addition to the detection of standard rubble and debris, CNN-based methods were developed to detect typical façade damage indicators, such as cracks and spalling, including with a focus on multi-temporal and multi-scale feature fusion. We further developed a processing pipeline and mobile app to facilitate near-real-time damage mapping. The solutions were tested in a number of pilot experiments and evaluated by a variety of stakeholders.


2021 ◽  
Vol 5 (11) ◽  
pp. 303
Author(s):  
Kian K. Sepahvand

Damage detection using vibrational properties, such as eigenfrequencies, is an efficient and straightforward method for detecting damage in structures, components, and machines. The method, however, is very inefficient when the natural frequencies of damaged and undamaged specimens exhibit only slight differences. This is particularly the case with lightweight structures, such as fiber-reinforced composites. The nonlinear support vector machine (SVM) provides enhanced results under such conditions by transforming the original features into a new space or applying a kernel trick. In this work, the natural frequencies of damaged and undamaged components are used for classification, employing the nonlinear SVM. The proposed methodology assumes that the frequencies are identified sequentially from an experimental modal analysis; for the purposes of this study, however, the training data are generated from FEM simulations of damaged and undamaged samples. It is shown that the nonlinear SVM with a kernel function yields a clear classification boundary between damaged and undamaged specimens, even for minor variations in natural frequencies.
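A compact sketch of this classification step, assuming natural-frequency features from FEM simulations stored in hypothetical .npy files, is shown below; the RBF kernel matches the nonlinear SVM described, while the C and gamma values are illustrative.

```python
# Hedged sketch: an RBF-kernel SVM separating damaged from undamaged specimens
# using their first few natural frequencies as features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

freqs = np.load("natural_frequencies.npy")   # (n_samples, n_modes), hypothetical FEM data
labels = np.load("damage_labels.npy")        # 0 = undamaged, 1 = damaged

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(freqs, labels)

# Frequencies identified from an experimental modal analysis can then be
# classified directly:
new_freqs = np.load("measured_frequencies.npy")      # hypothetical measurement
print(clf.predict(new_freqs))
```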


GEOMATICA ◽  
2021 ◽  
pp. 1-23
Author(s):  
Roholah Yazdan ◽  
Masood Varshosaz ◽  
Saied Pirasteh ◽  
Fabio Remondino

Automatic detection and recognition of traffic signs from images is an important topic in many applications. At first, we segmented the images using a classification algorithm to delineate the areas where the signs are more likely to be found. In this regard, shadows, objects with similar colours, and extreme illumination changes can significantly affect the segmentation results. We propose a new shape-based algorithm to improve the accuracy of the segmentation. The algorithm works by incorporating the sign geometry to filter out wrongly classified pixels. We performed several tests to compare the performance of our algorithm against that of popular techniques such as Support Vector Machine (SVM), K-Means, and K-Nearest Neighbours. In these tests, to overcome unwanted illumination effects, the images were transformed into the Hue-Saturation-Intensity (HSI), YUV, normalized RGB, and Gaussian colour spaces. Among the traditional techniques used in this study, the best results were obtained with SVM applied to the images transformed into the Gaussian colour space. The comparison results also suggested that adding the geometric constraints proposed in this study improves the quality of sign image segmentation by 10%–25%. We also compared the SVM classifier enhanced by incorporating the geometry of signs with a U-shaped deep learning algorithm. Results suggested the performance of both techniques is very close. Perhaps the deep learning results could be improved if a more comprehensive data set were provided.
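The two-stage idea can be sketched as follows: pixels are classified in an illumination-robust colour space with an SVM, and candidate regions are then filtered by a simple geometric constraint. The colour space (HSV here, as a stand-in for the transforms mentioned), thresholds, circularity test, and file names are illustrative assumptions rather than the authors' exact criteria.

```python
# Hedged sketch: SVM pixel classification in a transformed colour space,
# followed by a simple shape-based filter on candidate regions.
import cv2
import numpy as np
from sklearn.svm import SVC

bgr = cv2.imread("road_scene.jpg")                    # hypothetical image
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)            # illumination-robust colour space

train_pixels = np.load("sign_pixel_samples_hsv.npy")  # (n, 3), hypothetical labelled pixels
train_labels = np.load("sign_pixel_labels.npy")       # 1 = sign colour, 0 = background
svm_pixels = SVC(kernel="rbf").fit(train_pixels, train_labels)

h, w, _ = hsv.shape
mask = svm_pixels.predict(hsv.reshape(-1, 3)).reshape(h, w).astype(np.uint8)

# Geometric filtering: keep only sufficiently large, roughly compact regions.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
clean = np.zeros_like(mask)
for c in contours:
    area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
    if area > 100 and 4 * np.pi * area / (perim ** 2 + 1e-6) > 0.5:
        cv2.drawContours(clean, [c], -1, 1, thickness=-1)
```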

