The Teager-Kaiser Energy Cepstral Coefficients as an Effective Structural Health Monitoring Tool

2019
Vol 9 (23)
pp. 5064
Author(s):  
Marco Civera ◽  
Matteo Ferraris ◽  
Rosario Ceravolo ◽  
Cecilia Surace ◽  
Raimondo Betti

Recently, features and techniques from speech processing have started to gain increasing attention in the Structural Health Monitoring (SHM) community, in the context of vibration analysis. In particular, Cepstral Coefficients (CCs) proved effective in discerning the response of a damaged structure from that of a given undamaged baseline. Previous works relied on the Mel-Frequency Cepstral Coefficients (MFCCs). This approach, while efficient and still very common in applications such as speech and speaker recognition, has since been followed by more advanced and competitive techniques for the same aims. The Teager-Kaiser Energy Cepstral Coefficients (TECCs) are one such alternative. These features are closely related to MFCCs but offer useful additional properties, such as improved robustness to noise. The goal of this paper is to introduce the use of TECCs for damage detection purposes, by highlighting their competitiveness with closely related features. Promising results were obtained from both numerical and experimental data.
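The abstract does not spell out the TECC computation; as a minimal sketch, the discrete Teager-Kaiser energy operator and a real cepstrum built on top of it could look as follows (the pipeline and function names are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def cepstral_coefficients(signal, n_coeffs=13):
    """Real cepstrum of the TK energy track (illustrative, not the paper's exact TECC recipe)."""
    energy = teager_kaiser(signal)
    spectrum = np.abs(np.fft.rfft(energy)) + 1e-12  # small floor avoids log(0)
    cepstrum = np.fft.irfft(np.log(spectrum))       # inverse FFT of the log-magnitude spectrum
    return cepstrum[:n_coeffs]
```

For a pure sinusoid x[n] = A sin(ωn + φ), the operator returns the constant A² sin²(ω), which is why it tracks instantaneous signal energy robustly.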

2020
pp. 147592172092155
Author(s):  
Mattia Francesco Bado ◽  
Joan Ramon Casas ◽  
Judit Gómez

Distributed optical fiber sensors are measuring tools whose potential for civil engineering has only been recognized in recent years, thanks to their reduced dimensions, easy installation, lower installation costs, high reading accuracy, and distributed monitoring capability. Yet, as numerous in situ monitoring campaigns (on bridges and historical structures, among others) and laboratory experiments make clear, distributed optical fiber sensor outputs tend to include a certain amount of anomalous readings (out-of-scale and unreliable measurements). These can be both isolated events and disturbances spread over the entire monitoring period. Their presence strongly affects the results, both altering the data in the affected sections and distorting the overall trend of the strain evolution profiles; hence the importance of detecting, eliminating, and substituting them with correct values. Since this issue is intrinsic to the raw output of the monitoring tool itself, the only solution is computer-aided post-processing of the strain data. This article discusses simple algorithms for removing such disruptive anomalies, comparing two methods previously used in the literature with a novel polynomial-based one offered at different levels of sophistication and accuracy. The viability and performance of each are tested on two case studies: an experimental laboratory test on two reinforced concrete tensile elements and an in situ tunnel monitoring campaign. The outcome of this analysis provides the reader with clear indications both on how to purge a distributed optical fiber sensor data set of anomalies and on which method is best suited to their needs.
This marriage of computing and a cutting-edge structural health monitoring tool not only improves the viability of distributed optical fiber sensors but also gives civil and infrastructure engineers a reliable means of reaching previously unattainable levels of accuracy and monitoring coverage.
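A polynomial-based anomaly purge of the kind the abstract describes can be sketched as below; the function name, the polynomial order, and the MAD-based threshold are assumptions of this sketch, not the paper's calibrated algorithm:

```python
import numpy as np

def purge_anomalies(strain, order=3, k=3.0):
    """Flag readings far from a fitted polynomial trend and replace them with the fit.

    The polynomial order and the threshold multiplier k are illustrative
    assumptions, not values from the paper.
    """
    x = np.arange(len(strain))
    trend = np.polyval(np.polyfit(x, strain, order), x)
    resid = strain - trend
    # Robust spread estimate: 1.4826 * median absolute deviation ~ std for Gaussian noise
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    mask = np.abs(resid) > k * sigma
    cleaned = np.where(mask, trend, strain)  # substitute flagged readings with the trend value
    return cleaned, mask
```

The robust (median-based) spread estimate matters here: the anomalies themselves would inflate an ordinary standard deviation and hide all but the largest spikes.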


2020
Vol 12 (1)
pp. 9
Author(s):  
Ali Ozdagli ◽  
Xenofon Koutsoukos

In recent years, machine learning (ML) algorithms have gained considerable interest within the structural health monitoring (SHM) community. Many of these approaches assume that the training and test data come from similar distributions. However, real-world applications in which an ML model is trained on numerical simulation data and tested on experimental data are prone to fail at detecting damage, as the two domains are collected under different conditions and do not share the same underlying features. This paper proposes domain adaptation as a solution for SHM problems in which the classifier has access to a labeled training (source) domain and an unlabeled test (target) domain. The proposed domain adaptation method forms a feature space that matches the latent features of the source and target domains. To evaluate the performance of this approach, we present a case study in which we train three neural network-based classifiers on a three-story test structure: (i) Classifier A uses labeled simulation data from the numerical model of the test structure; (ii) Classifier B utilizes labeled experimental data from the test structure; and (iii) Classifier C implements domain adaptation by training on labeled simulation data (source) and unlabeled experimental data (target). The performance of each classifier is evaluated by computing its discrimination accuracy against labeled experimental data. Overall, the results demonstrate that domain adaptation can be regarded as a valid approach for SHM applications where access to labeled experimental data is limited.
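The abstract does not specify how the shared feature space is formed; as one illustrative stand-in (not the paper's method), a correlation-alignment (CORAL-style) mapping matches the second-order statistics of the source domain to the target domain before a classifier is trained:

```python
import numpy as np

def coral_align(source, target, eps=1e-6):
    """CORAL-style second-order alignment (illustrative stand-in, not the paper's method).

    Whitens the source features, then re-colors them with the target covariance
    so both domains share second-order statistics and the target mean.
    """
    def mat_pow(m, p):
        # Symmetric matrix power via eigendecomposition
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(vals ** p) @ vecs.T

    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    whitened = (source - source.mean(axis=0)) @ mat_pow(cs, -0.5)
    return whitened @ mat_pow(ct, 0.5) + target.mean(axis=0)
```

A classifier trained on `coral_align(source, target)` with the source labels then operates on features whose distribution (up to second order) matches the unlabeled target domain.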

