Feature Selecting Hierarchical Neural Network for Industrial System Health Monitoring

2018 ◽  
Vol 10 (1) ◽  
Author(s):  
Gabriel Michau ◽  
Manuel Arias Chao ◽  
Olga Fink

Industrial system health monitoring usually relies on the monitoring of well-designed features. This requires both the engineering of reliable features and a good methodology for their analysis. While features were traditionally engineered based on the physics of the system, recent advances in machine learning have demonstrated that features can be automatically learned and monitored. In particular, Hierarchical Extreme Learning Machines (HELM), based on random features, have already achieved very good results for health monitoring when trained on healthy data only. Yet, although very useful and mathematically sound, random features have little popularity, as they contradict intuition and seem to rely on luck. This tends to increase the "blackbox" effect often associated with machine learning. To mitigate this, in this paper, we propose to modify the traditional HELM architecture such that, while still relying on random features, only the most useful features among a large population are selected. A traditional HELM consists of stacked contractive auto-encoders with ℓ1- or ℓ2-regularisation, with a classifier as the last layer. To achieve our objective, we propose to opt for expanding auto-encoders instead, trained with a strong Group-LASSO regularisation. This Group-LASSO regularisation fosters the selection of as few features as possible, making the auto-encoder contractive in practice (i.e., at testing time). This deterministic selection provides useful features for health monitoring, without the need to learn or manually engineer them. The proposed approach demonstrates better fault detection and isolation performance on case studies developed for HELM evaluation.
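Group-LASSO regularisation penalises the L2 norm of whole groups of weights, which drives entire groups to exactly zero and is what makes the resulting feature selection deterministic. A minimal NumPy sketch (illustrative only, not the authors' HELM implementation; treating each encoder-weight row as one group per input feature is an assumption):

```python
import numpy as np

def group_lasso_penalty(W):
    """Group-LASSO penalty: the sum of the L2 norms of the weight groups.

    Here each row of the encoder weight matrix is one group, so driving a
    whole row to zero removes the corresponding input feature entirely.
    """
    return float(np.sum(np.linalg.norm(W, axis=1)))

def selected_features(W, tol=1e-8):
    """Indices of input features whose weight group was not zeroed out."""
    return np.flatnonzero(np.linalg.norm(W, axis=1) > tol)

# Toy encoder weights after training: row 1 was driven to zero by the penalty.
W = np.array([[0.5, -0.2],
              [0.0,  0.0],
              [0.3,  0.1]])
print(selected_features(W))  # only features 0 and 2 remain active
```

Unlike ℓ1-regularisation, which zeroes individual weights, the group penalty removes or keeps a feature as a whole, which is what makes the expanded auto-encoder effectively contractive at test time.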

Author(s):  
B. A. Dattaram ◽  
N. Madhusudanan

Flight delay is a major issue faced by airline companies. Delays in aircraft take-off can lead to penalties and extra payments to airport authorities, resulting in revenue loss. The causes of delays can be weather, traffic queues, or component issues. In this paper, we focus on the problem of delays due to component issues in the aircraft. In particular, this paper explores the analysis of aircraft delays based on health monitoring data from the aircraft. This paper analyzes and establishes the relationship between health monitoring data and aircraft delays using exploratory analytics, stochastic approaches, and machine learning techniques.


2021 ◽  
Vol 11 (12) ◽  
pp. 5727
Author(s):  
Sifat Muin ◽  
Khalid M. Mosalam

Machine learning (ML)-aided structural health monitoring (SHM) can rapidly evaluate the safety and integrity of aging infrastructure following an earthquake. The conventional damage features used in ML-based SHM methodologies face the curse of dimensionality. This paper introduces low-dimensional, cumulative absolute velocity (CAV)-based features to enable the use of ML for rapid damage assessment. A computer experiment is performed to identify the appropriate features and ML algorithm using data from a simulated single-degree-of-freedom system. A comparative analysis of five ML models (logistic regression (LR), ordinal logistic regression (OLR), artificial neural networks with 10 and 100 neurons (ANN10 and ANN100), and support vector machines (SVM)) is performed. Two test sets were used, where Set-1 originated from the same distribution as the training set and Set-2 came from a different distribution. The results showed that the combination of the CAV and the relative CAV with respect to the linear response, i.e., RCAV, performed the best among the different feature combinations. Among the ML models, OLR showed good generalization capabilities compared to the SVM and ANN models. Subsequently, OLR is successfully applied to assess the damage of two numerical multi-degree-of-freedom (MDOF) models and an instrumented building with CAV and RCAV as features. For the MDOF models, the damage state was identified with accuracy ranging from 84% to 97% and the damage location with accuracy ranging from 93% to 97.5%. The features and the OLR models successfully captured the damage information for the instrumented structure as well. The proposed methodology is capable of ensuring rapid decision-making and improving community resiliency.
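The CAV feature itself is just the time integral of the absolute acceleration, which is why it is so low-dimensional. A toy NumPy sketch on synthetic signals (the ratio form of RCAV used here is an assumption, not necessarily the paper's exact definition):

```python
import numpy as np

def cav(acc, dt):
    """Cumulative absolute velocity: the time integral of |acceleration|."""
    return float(np.sum(np.abs(acc)) * dt)

# Toy responses sampled at 100 Hz.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
acc_measured = 0.3 * np.sin(2 * np.pi * t)  # recorded (possibly nonlinear) response
acc_linear = 0.2 * np.sin(2 * np.pi * t)    # response of the linear model

cav_m = cav(acc_measured, dt)
cav_l = cav(acc_linear, dt)
rcav = cav_m / cav_l  # CAV relative to the linear response (ratio form assumed)
```

A structure behaving linearly would give an RCAV near 1, so deviations of the measured CAV from the linear-response CAV carry the damage information.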


Computer ◽  
2016 ◽  
Vol 49 (11) ◽  
pp. 38-48 ◽  
Author(s):  
Shurouq Hijazi ◽  
Alex Page ◽  
Burak Kantarci ◽  
Tolga Soyata

2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Runzhi Zhang ◽  
Alejandro R. Walker ◽  
Susmita Datta

Abstract Background The composition of microbial communities can be location-specific, and the differential abundance of taxa across locations could help us unravel city-specific signatures and accurately predict sample origin locations. In this study, whole genome shotgun (WGS) metagenomics data from samples across 16 cities around the world and samples from another 8 cities were provided as the main and mystery datasets, respectively, as part of the CAMDA 2019 MetaSUB "Forensic Challenge". Feature selection, normalization, three machine learning methods, PCoA (principal coordinates analysis) and ANCOM (analysis of composition of microbiomes) were applied to both the main and mystery datasets. Results Feature selection, combined with the machine learning methods, revealed that the combination of the common features was effective for predicting the origin of the samples. Average error rates of 11.93% and 30.37% across the three machine learning methods were obtained for the main and mystery datasets, respectively. Using the samples from the main dataset to predict the labels of samples from the mystery dataset, nearly 89.98% of the test samples could be correctly labeled as "mystery" samples. PCoA showed that nearly 60% of the total variability of the data could be explained by the first two PCoA axes. Although many cities overlapped, the separation of some cities was evident in the PCoA. The results of ANCOM, combined with importance scores from the Random Forest, indicated that the common "family" and "order" taxa of the main dataset and the common "order" taxa of the mystery dataset provided the most efficient information for prediction, respectively. Conclusions The classification results suggested that the composition of the microbiomes was distinctive across the cities, which could be used to identify sample origins. This was also supported by the results from ANCOM and the importance scores from the RF. In addition, the accuracy of the prediction could be improved by more samples and better sequencing depth.
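The classification pipeline described above (normalise taxa counts, train a classifier, read off importance scores for the most informative taxa) can be sketched with scikit-learn on synthetic data; the city profiles, count model, and hyperparameters are all illustrative assumptions, not the study's actual data or settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic taxa-count table: two "cities" with shifted abundance profiles.
n_taxa = 20
city_a = rng.poisson(lam=np.linspace(5, 50, n_taxa), size=(100, n_taxa))
city_b = rng.poisson(lam=np.linspace(50, 5, n_taxa), size=(100, n_taxa))
X = np.vstack([city_a, city_b]).astype(float)
y = np.array([0] * 100 + [1] * 100)

# Normalise raw counts to per-sample relative abundances.
X /= X.sum(axis=1, keepdims=True)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
top = np.argsort(clf.feature_importances_)[::-1][:5]  # most city-informative taxa
```

The `feature_importances_` ranking plays the same role as the RF importance scores used alongside ANCOM to identify which taxonomic levels carry the city signal.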


Increased attention to the environmental effects of aging, deterioration, and extreme events on civil infrastructure has created the need for more advanced damage detection tools and structural health monitoring (SHM). Today, these tasks are performed by signal processing and visual inspection techniques, along with the traditional, well-known impedance-based electromechanical impedance (EMI) health monitoring technique. New research areas have been explored that improve damage detection both at the incipient stage and when the damage is substantial. Addressing these issues at an early stage prevents catastrophic situations and protects human lives. To improve existing damage detection, newly developed techniques in conjunction with EMI, including innovative new sensors, signal processing, and soft computing techniques, are discussed in detail in this paper. The advanced techniques (soft computing, signal processing, vision-based, embedded IoT) are employed as global methods to predict, identify, locate, and optimize the damaged area and deterioration, as well as the amount, severity, and multiplicity of cracks, on civil infrastructure such as concrete and RC structures (beams and bridges), using these techniques along with the EMI technique and PZT transducers. In addition, the paper surveys advanced, innovative signal processing and machine learning techniques for civil infrastructure connected to the IoT, which can make infrastructure smart and increase its efficiency, with the aim of socioeconomic, environmental, and sustainable development.


2019 ◽  
Author(s):  
Zhenzhen Du ◽  
Yujie Yang ◽  
Jing Zheng ◽  
Qi Li ◽  
Denan Lin ◽  
...  

BACKGROUND Predictions of cardiovascular disease risks based on health records have long attracted broad research interest. Despite extensive efforts, the prediction accuracy has remained unsatisfactory. This raises the question as to whether data insufficiency, statistical and machine-learning methods, or intrinsic noise have hindered the performance of previous approaches, and how these issues can be alleviated. OBJECTIVE Based on a large population of patients with hypertension in Shenzhen, China, we aimed to establish a high-precision coronary heart disease (CHD) prediction model through big data and machine learning. METHODS Data from a large cohort of 42,676 patients with hypertension, including 20,156 patients with CHD onset, were investigated from electronic health records (EHRs) 1-3 years prior to CHD onset (for CHD-positive cases) or during a disease-free follow-up period of more than 3 years (for CHD-negative cases). The population was divided evenly into independent training and test datasets. Various machine-learning methods were applied to the training set to obtain high-accuracy prediction models, and the results were compared with traditional statistical methods and well-known risk scales. Comparison analyses were performed to investigate the effects of training sample size, factor sets, and modeling approaches on prediction performance. RESULTS An ensemble method, XGBoost, achieved high accuracy in predicting 3-year CHD onset on the independent test dataset, with an area under the receiver operating characteristic curve (AUC) value of 0.943. Comparison analysis showed that nonlinear models (K-nearest neighbors, AUC 0.908; random forest, AUC 0.938) outperform linear models (logistic regression, AUC 0.865) on the same datasets, and machine-learning methods significantly surpassed traditional risk scales or fixed models (eg, Framingham cardiovascular disease risk models). Further analyses revealed that using time-dependent features obtained from multiple records, including both statistical variables and changing-trend variables, helped to improve performance compared to using only static features. Subpopulation analysis showed that feature design had a more significant effect on model accuracy than population size. Marginal effect analysis showed that both traditional and EHR factors exhibited highly nonlinear characteristics with respect to the risk scores. CONCLUSIONS We demonstrated that accurate risk prediction of CHD from EHRs is possible given a sufficiently large training population. Sophisticated machine-learning methods played an important role in tackling the heterogeneity and nonlinear nature of disease prediction. Moreover, EHR data accumulated over multiple time points provided additional features that were valuable for risk prediction. Our study highlights the importance of accumulating big data from EHRs for accurate disease prediction.
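The finding that nonlinear models outperform linear ones on this kind of risk data can be illustrated on a toy nonlinear risk rule; this is synthetic data with scikit-learn's GradientBoostingClassifier standing in for XGBoost, not the study's EHR cohort:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy nonlinear risk rule: the outcome depends on an interaction of two
# factors, which no linear decision boundary can capture.
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

auc = {}
for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("boosting", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    auc[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
# The boosted trees recover the interaction; the linear model cannot.
```

The same mechanism explains the AUC gap the study reports between XGBoost or random forests and logistic regression: tree ensembles capture nonlinear interactions among risk factors that a linear score misses.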


2021 ◽  
Vol 9 ◽  
Author(s):  
Huanhuan Zhao ◽  
Xiaoyu Zhang ◽  
Yang Xu ◽  
Lisheng Gao ◽  
Zuchang Ma ◽  
...  

Hypertension is a widespread chronic disease. Risk prediction of hypertension is an intervention that contributes to the early prevention and management of hypertension. The implementation of such an intervention requires an effective and easy-to-implement hypertension risk prediction model. This study evaluated and compared the performance of four machine learning algorithms in predicting the risk of hypertension based on easy-to-collect risk factors. A dataset of 29,700 samples collected through physical examinations was used for model training and testing. First, we identified easy-to-collect risk factors of hypertension through univariate logistic regression analysis. Then, based on the selected features, 10-fold cross-validation was utilized to optimize four models (random forest (RF), CatBoost, an MLP neural network, and logistic regression (LR)) and find the best hyperparameters on the training set. Finally, the performance of the models was evaluated by AUC, accuracy, sensitivity, and specificity on the test set. The experimental results showed that the RF model outperformed the other three models, achieving an AUC of 0.92, an accuracy of 0.82, a sensitivity of 0.83, and a specificity of 0.81. In addition, body mass index (BMI), age, family history, and waist circumference (WC) are the four primary risk factors of hypertension. These findings reveal that it is feasible to use machine learning algorithms, especially RF, to predict hypertension risk without clinical or genetic data. The technique can provide a non-invasive and economical way for the prevention and management of hypertension in a large population.
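The screening-then-tuning workflow described above can be sketched with scikit-learn; the synthetic features stand in for the easy-to-collect risk factors, `SelectKBest` with an F-test stands in for the univariate logistic regression screening, and the hyperparameters are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for easy-to-collect risk factors (BMI, age, WC, ...).
X, y = make_classification(n_samples=1000, n_features=12, n_informative=4,
                           random_state=0)

# Univariate screening followed by a random forest, scored with 10-fold CV.
model = make_pipeline(SelectKBest(f_classif, k=6),
                      RandomForestClassifier(n_estimators=200, random_state=0))
auc_scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
mean_auc = auc_scores.mean()
```

Wrapping the screening step inside the pipeline ensures feature selection is re-fit within each cross-validation fold, avoiding the leakage that would inflate the reported AUC.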


2021 ◽  
Vol 82 (8) ◽  
pp. 1293-1320
Author(s):  
P. A. Mukhachev ◽  
T. R. Sadretdinov ◽  
D. A. Pritykin ◽  
A. B. Ivanov ◽  
S. V. Solov’ev
