Health Monitoring of Large-Scale Civil Structures: An Approach Based on Data Partitioning and Classical Multidimensional Scaling

Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1646
Author(s):  
Alireza Entezami ◽  
Hassan Sarmadi ◽  
Behshid Behkamal ◽  
Stefano Mariani

A major challenge in structural health monitoring (SHM) is the efficient handling of big data, namely of high-dimensional datasets, when damage detection under environmental variability is being assessed. To address this issue, a novel data-driven approach to early damage detection is proposed here. The approach is based on an efficient partitioning of the dataset gathering the sensor recordings, and on classical multidimensional scaling (CMDS). The partitioning procedure aims at moving towards a low-dimensional feature space; the CMDS algorithm is then exploited to set the coordinates in that low-dimensional space, and to define damage indices as norms of those coordinates. The proposed approach is shown to efficiently and robustly address the challenges linked to high-dimensional datasets and environmental variability. Results for two large-scale test cases are reported: the ASCE structure and the Z24 bridge. A high sensitivity to damage and a limited (if any) number of false alarms and false detections are reported, testifying to the efficacy of the proposed data-driven approach.
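The abstract outlines the pipeline only at a high level; the following is a minimal, hedged sketch of the two ingredients it names, data partitioning and classical multidimensional scaling, with a damage index taken as the norm of the low-dimensional coordinates. The partition count, the per-partition feature, the distance metric, and the number of retained dimensions are all assumptions for illustration, not the authors' settings.

```python
import numpy as np

def cmds(distance_matrix, n_components=2):
    """Classical MDS: embed points so Euclidean distances approximate the input distances."""
    n = distance_matrix.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n               # centering matrix
    B = -0.5 * J @ (distance_matrix ** 2) @ J         # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:n_components]  # keep the largest eigenvalues
    L = np.sqrt(np.clip(eigvals[order], 0, None))
    return eigvecs[:, order] * L                      # coordinates, shape (n, n_components)

def damage_indices(recordings, n_partitions=10, n_components=2):
    """Partition the sensor recordings, build a pairwise distance matrix between
    partitions, and score each partition by the norm of its CMDS coordinates."""
    segments = np.array_split(recordings, n_partitions, axis=0)
    features = np.stack([seg.mean(axis=0) for seg in segments])  # crude per-partition feature (assumption)
    D = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    coords = cmds(D, n_components)
    return np.linalg.norm(coords, axis=1)             # one damage index per partition

# Example: 10,000 time steps recorded by 8 sensors
indices = damage_indices(np.random.randn(10_000, 8))
```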

2021 ◽  
pp. 147592172097395
Author(s):  
Alireza Entezami ◽  
Hassan Sarmadi ◽  
Masoud Salar ◽  
Carlo De Michele ◽  
Ali Nadir Arslan

Dealing with large volumes of high-dimensional features and detecting damage under ambient vibration are critical challenges in structural health monitoring. To address them, this article proposes a novel data-driven method for early damage detection of civil engineering structures based on robust multidimensional scaling. The proposed method consists of several simple but effective computational steps: a segmentation process, a pairwise distance calculation, an iterative algorithm for robust multidimensional scaling, a matrix vectorization procedure, and a Euclidean norm computation. AutoRegressive Moving Average models are fitted to vibration time-domain responses caused by ambient excitations, and the model residuals are extracted as high-dimensional features. To increase the reliability of damage detection and avoid false alarms, extreme value theory is used to determine a reliable threshold limit. However, the selection of an appropriate extreme value distribution is crucial and nontrivial. To cope with this limitation, this article introduces the generalized extreme value distribution and its shape parameter for choosing the best extreme value model among the Gumbel, Fréchet, and Weibull distributions. The main contributions of this article are a novel data-driven strategy for early damage detection and a way to address the limitations of using high-dimensional features. Experimental datasets of two well-known civil structures are used to validate the proposed method, along with some comparative studies. Results demonstrate that the proposed data-driven method, in conjunction with extreme value theory, can reliably detect damage under ambient vibration even when high-dimensional features are used.
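As a rough illustration of the thresholding step described above (and only of that step), the sketch below fits a generalized extreme value (GEV) distribution to damage indices from the healthy/training state, reads off the family (Gumbel, Fréchet, or Weibull) from the fitted shape parameter, and sets the alarm threshold at a high quantile. Fitting the GEV directly to the indices rather than to block maxima, and the 0.99 quantile, are simplifying assumptions for the example, not the paper's procedure.

```python
import numpy as np
from scipy import stats

def gev_threshold(training_indices, quantile=0.99):
    # Note: scipy's genextreme shape parameter c is the negative of the usual GEV shape xi
    c, loc, scale = stats.genextreme.fit(training_indices)
    xi = -c
    if abs(xi) < 1e-3:
        family = "Gumbel"
    elif xi > 0:
        family = "Frechet"
    else:
        family = "Weibull"
    threshold = stats.genextreme.ppf(quantile, c, loc=loc, scale=scale)
    return threshold, family

# Example with synthetic healthy-state damage indices
rng = np.random.default_rng(0)
healthy = rng.gumbel(loc=1.0, scale=0.2, size=500)
thr, fam = gev_threshold(healthy)
print(f"family={fam}, threshold={thr:.3f}")  # new indices above thr would flag damage
```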


2012 ◽  
Author(s):  
Michael Ghil ◽  
Mickael D. Chekroun ◽  
Dmitri Kondrashov ◽  
Michael K. Tippett ◽  
Andrew Robertson ◽  
...  

Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and tightening input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing simulation data sets on the fly. We present a method that evaluates the importance of different regions of simulation data, and a data-driven approach that uses this method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions, accelerating the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method's efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how the in-transit co-processing of simulations can be accelerated. The results show that the proposed method can expeditiously identify regions of interest, even when multiple metrics are used. Our approach achieved a speedup of 1.29× in a lossless scenario, and data decompression was sped up by 2× compared to using a single compression method uniformly.
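A minimal sketch of the underlying idea, not the authors' in-transit pipeline: score each block of a simulation field with an importance metric (here, mean gradient magnitude) and apply a lossless codec to important blocks and a cheaper lossy stand-in elsewhere. The block size, the metric, the thresholding rule, and the two codecs are all assumptions chosen for illustration.

```python
import numpy as np
import zlib

def block_importance(field, block=32):
    """Mean gradient magnitude per block as a simple importance metric."""
    gx, gy = np.gradient(field)
    mag = np.hypot(gx, gy)
    scores = {}
    for i in range(0, field.shape[0], block):
        for j in range(0, field.shape[1], block):
            scores[(i, j)] = mag[i:i + block, j:j + block].mean()
    return scores

def compress_adaptively(field, block=32, threshold=None):
    """Compress important blocks losslessly, the rest with a crude lossy scheme."""
    scores = block_importance(field, block)
    if threshold is None:
        threshold = np.median(list(scores.values()))
    payload = {}
    for (i, j), score in scores.items():
        tile = field[i:i + block, j:j + block]
        if score >= threshold:
            payload[(i, j)] = ("lossless", zlib.compress(tile.astype(np.float32).tobytes()))
        else:
            # lossy stand-in: quantize to float16 before entropy coding
            payload[(i, j)] = ("lossy", zlib.compress(tile.astype(np.float16).tobytes()))
    return payload

data = np.random.rand(256, 256)   # placeholder for one 2D slice of a simulation field
compressed = compress_adaptively(data)
```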


2021 ◽  
Vol 10 (1) ◽  
pp. e001087
Author(s):  
Tarek F Radwan ◽  
Yvette Agyako ◽  
Alireza Ettefaghian ◽  
Tahira Kamran ◽  
Omar Din ◽  
...  

A quality improvement (QI) scheme was launched in 2017, covering a large group of 25 general practices serving a deprived registered population. The aim was to improve the measurable quality of care in a population where type 2 diabetes (T2D) care had previously proved challenging. A complex set of QI interventions was co-designed by a team of primary care clinicians, educationalists and managers. These interventions included organisation-wide goal setting, using a data-driven approach, ensuring staff engagement, implementing an educational programme for pharmacists, facilitating web-based QI learning at scale, and using methods which ensured sustainability. This programme was used to optimise the management of T2D through improving the eight care processes and three treatment targets which form part of the annual national diabetes audit for patients with T2D. With the implemented improvement interventions, there was significant improvement in all care processes and all treatment targets for patients with diabetes. Achievement of all eight care processes improved by 46.0% (p<0.001), while achievement of all three treatment targets improved by 13.5% (p<0.001). The QI programme provides an example of a data-driven, large-scale, multicomponent intervention delivered in primary care in ethnically diverse and socially deprived areas.


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 452
Author(s):  
Qun Yang ◽  
Dejian Shen

Natural hazards have caused damage to structures and economic losses worldwide. Post-hazard responses require accurate and fast damage detection and assessment. Owing to advances in deep learning models, data-driven damage detection has emerged within the structural health monitoring research community. Most data-driven models for damage detection focus on classifying different damage states and hence cannot effectively quantify damage. To address this deficiency, we propose a sequence-to-sequence (Seq2Seq) model that quantifies a probability of damage. The model is trained to learn damage representations from undamaged signals only, and then quantifies the probability of damage when damaged signals are fed into it. We tested the validity of our proposed Seq2Seq model with a signal dataset collected from a two-story timber building subjected to shake table tests. Our results show that the Seq2Seq model can reliably distinguish damage representations and quantify the probability of damage by highlighting the regions of interest.
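To make the train-on-undamaged-only idea concrete, here is a hedged sketch, not the paper's architecture: a small sequence-to-sequence LSTM autoencoder fitted to undamaged acceleration windows, with the reconstruction error of a new window mapped to a damage probability. The layer sizes, window length, and the error-to-probability mapping are assumptions introduced for the example.

```python
import torch
import torch.nn as nn

class Seq2SeqAE(nn.Module):
    def __init__(self, n_channels=1, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_channels, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_channels)

    def forward(self, x):                                 # x: (batch, time, channels)
        _, (h, _) = self.encoder(x)
        ctx = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)  # repeat encoder context over time
        dec, _ = self.decoder(ctx)
        return self.out(dec)

def damage_probability(model, window, scale):
    """Map reconstruction error to (0, 1); 'scale' is the typical error on undamaged data."""
    with torch.no_grad():
        err = torch.mean((model(window) - window) ** 2).item()
    return 1.0 - float(torch.exp(torch.tensor(-err / scale)))

# Training sketch on undamaged windows only (placeholder data: 64 windows, 200 samples each)
model = Seq2SeqAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
undamaged = torch.randn(64, 200, 1)
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(undamaged), undamaged)
    loss.backward()
    opt.step()

# Use the mean training error as the 'scale' when scoring new windows
with torch.no_grad():
    scale = torch.mean((model(undamaged) - undamaged) ** 2).item()
p = damage_probability(model, torch.randn(1, 200, 1), scale)
```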


PLoS Genetics ◽  
2021 ◽  
Vol 17 (1) ◽  
pp. e1009315
Author(s):  
Ardalan Naseri ◽  
Junjie Shi ◽  
Xihong Lin ◽  
Shaojie Zhang ◽  
Degui Zhi

Inference of relationships from whole-genome genetic data of a cohort is a crucial prerequisite for genome-wide association studies. Typically, relationships are inferred by computing the kinship coefficients (ϕ) and the genome-wide probability of zero IBD sharing (π0) among all pairs of individuals. Current leading methods are based on pairwise comparisons, which may not scale up to very large cohorts (e.g., sample size >1 million). Here, we propose an efficient relationship inference method, RAFFI. RAFFI leverages the efficient RaPID method to call IBD segments first, then estimates ϕ and π0 from the detected IBD segments. This inference is achieved by a data-driven approach that adjusts the estimation based on phasing quality and genotyping quality. Using simulations, we showed that RAFFI is robust against phasing/genotyping errors, admixture events, and varying marker densities, and achieves higher accuracy than KING, the current leading method, especially for more distant relatives. When applied to the phased UK Biobank data with ~500K individuals, RAFFI is approximately 18 times faster than KING. We expect RAFFI will offer fast and accurate relatedness inference for even larger cohorts.
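For readers unfamiliar with the quantities involved, the sketch below shows one simple way ϕ and π0 can be derived from detected IBD segments for a single pair of individuals; it illustrates the relationship between the quantities, not RAFFI's data-driven estimator. The segment lists, the genome length, and the function name are placeholders; a real pipeline would take segments from an IBD caller such as RaPID.

```python
def kinship_from_ibd(ibd1_segments, ibd2_segments, genome_cm):
    """ibd1_segments / ibd2_segments: lists of (start_cM, end_cM) shared on one / both haplotypes."""
    ibd1 = sum(end - start for start, end in ibd1_segments)
    ibd2 = sum(end - start for start, end in ibd2_segments)
    p_ibd1 = ibd1 / genome_cm
    p_ibd2 = ibd2 / genome_cm
    pi0 = 1.0 - p_ibd1 - p_ibd2          # genome-wide probability of zero IBD sharing
    phi = 0.25 * p_ibd1 + 0.5 * p_ibd2   # kinship coefficient
    return phi, pi0

# Hypothetical pair sharing ~1700 cM IBD1 and ~80 cM IBD2 on a ~3400 cM genome
phi, pi0 = kinship_from_ibd([(0, 1700)], [(0, 80)], genome_cm=3400)
print(f"phi={phi:.3f}, pi0={pi0:.3f}")
```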


Stat ◽  
2016 ◽  
Vol 5 (1) ◽  
pp. 200-212
Author(s):  
Hyokyoung G. Hong ◽  
Lan Wang ◽  
Xuming He
