Improved Accuracy in Predicting the Best Sensor Fusion Architecture for Multiple Domains

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7007
Author(s):  
Erik Molino-Minero-Re ◽  
Antonio A. Aguileta ◽  
Ramon F. Brena ◽  
Enrique Garcia-Ceja

Multi-sensor fusion intends to boost the general reliability of a decision-making procedure or to allow one sensor to compensate for others’ shortcomings. This field has been so prominent that authors have proposed many different fusion approaches, or “architectures” as we call them when they are structurally different, so it is now challenging to prescribe which one is better for a specific collection of sensors and a particular application environment, other than by trial and error. We propose an approach capable of predicting the best fusion architecture (from predefined options) for a given dataset. This method involves the construction of a meta-dataset in which statistical characteristics extracted from the original dataset serve as features. One challenge is that each dataset has a different number of variables (columns). Previous work took the first k components from principal component analysis (PCA) to make the meta-dataset columns coherent and trained machine learning classifiers to predict the best fusion architecture. In this paper, we take a new route to build the meta-dataset. We use the Sequential Forward Floating Selection algorithm to reduce the features and a T transform to match them to a given number. Our findings indicate that our proposed method could improve the accuracy in predicting the best sensor fusion architecture for multiple domains.
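The feature-reduction step named above can be sketched as follows. This is a minimal, simplified Sequential Forward Floating Selection (SFFS) loop, not the authors' implementation: the scoring function, the feature weights, and the termination conditions are illustrative assumptions.

```python
# Simplified SFFS sketch: greedily add the best feature, then allow one
# "floating" backward step that drops a feature if that strictly improves
# the criterion. `score` is any subset-evaluation callable (assumption:
# higher is better); the real criterion in the paper is not shown here.

def sffs(n_features, score, k):
    """Select k feature indices that (locally) maximize score(subset)."""
    selected = []
    while len(selected) < k:
        # Forward step: add the single best remaining feature.
        best = max((f for f in range(n_features) if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
        # Floating step: drop the least useful feature, but never the one
        # just added (the standard SFFS guard against cycling).
        if len(selected) > 2:
            worst = max(selected,
                        key=lambda f: score([x for x in selected if x != f]))
            without = [x for x in selected if x != worst]
            if worst != best and score(without) > score(selected):
                selected.remove(worst)
    return sorted(selected)
```

With a toy additive criterion such as `lambda s: sum(weights[f] for f in s)`, the loop simply picks the k highest-weight features; the floating step only matters when features interact.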

2021 ◽  
Vol 11 (14) ◽  
pp. 6370
Author(s):  
Elena Quatrini ◽  
Francesco Costantino ◽  
David Mba ◽  
Xiaochuan Li ◽  
Tat-Hean Gan

The water purification process is becoming increasingly important to ensure the continuity and quality of subsequent production processes, and it is particularly relevant in pharmaceutical contexts. However, in this context, the difficulties arising during the monitoring process are manifold. On the one hand, the monitoring process reveals various discontinuities due to different characteristics of the input water. On the other hand, the monitoring process is discontinuous and random itself, thus not guaranteeing continuity of the parameters and hindering a straightforward analysis. Consequently, further research on water purification processes is paramount to identify the most suitable techniques able to guarantee good performance. Against this background, this paper proposes an application of kernel principal component analysis for fault detection in a process with the above-mentioned characteristics. Based on the temporal variability of the process, the paper suggests the use of past and future matrices as input for fault detection as an alternative to the original dataset. In this manner, the temporal correlation between process parameters and machine health is accounted for. The proposed approach confirms the possibility of obtaining very good monitoring results in the analyzed context.
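The "past and future matrices" idea above amounts to stacking time-lagged copies of each sample before the (kernel) PCA model sees them, so temporal correlation becomes part of the input. A minimal sketch, with an illustrative lag parameter not taken from the paper:

```python
# Build a lagged data matrix from a univariate series: each row stacks the
# current sample with its `lag` past and `lag` future samples, so a PCA or
# kernel PCA fitted on these rows captures temporal correlation.

def lagged_matrix(series, lag):
    """Rows of [x[t-lag], ..., x[t], ..., x[t+lag]] for all valid t."""
    n = len(series)
    return [series[t - lag : t + lag + 1]
            for t in range(lag, n - lag)]
```

Each row loses `lag` samples at both ends of the series, which is the usual price of this construction.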


2018 ◽  
Vol 7 (8) ◽  
pp. 223 ◽  
Author(s):  
Zhidong Zhao ◽  
Yang Zhang ◽  
Yanjun Deng

Continuous monitoring of the fetal heart rate (FHR) signal has been widely used to allow obstetricians to obtain detailed physiological information about newborns. However, visual interpretation of FHR traces causes inter-observer and intra-observer variability. Therefore, this study proposed novel computerized analysis software for the FHR signal (CAS-FHR), aimed at providing medical decision support. First, to the best of our knowledge, the software extracted the most comprehensive set of features (47) from different domains, including the morphological, time, frequency, and nonlinear domains. Then, for the intelligent assessment of fetal state, three representative machine learning algorithms (decision tree (DT), support vector machine (SVM), and adaptive boosting (AdaBoost)) were chosen to execute the classification stage. To improve the performance, feature selection/dimensionality reduction methods (statistical test (ST), area under the curve (AUC), and principal component analysis (PCA)) were designed to determine informative features. Finally, the experimental results showed that AdaBoost had stronger classification ability, and the performance of the feature set selected using ST was better than that of the original dataset, with accuracies of 92% and 89%, sensitivities of 92% and 89%, specificities of 90% and 88%, and F-measures of 95% and 92%, respectively. In summary, the results proved the effectiveness of our proposed approach, involving comprehensive analysis of the FHR signal, for the accurate intelligent prediction of fetal asphyxia in clinical practice.
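One of the feature-ranking criteria named above, the area under the ROC curve (AUC), has a compact rank-statistic form: it is the probability that a randomly chosen positive sample scores above a randomly chosen negative one. A minimal sketch (the per-class score lists are illustrative inputs, not FHR data):

```python
# AUC as the Mann-Whitney rank statistic: the fraction of
# (positive, negative) pairs where the positive sample's feature value is
# higher, with ties counted as half a win. Features with AUC far from 0.5
# separate the classes and are kept.

def auc(scores_pos, scores_neg):
    """P(positive score > negative score), ties counted as 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

This O(n·m) pairwise form is fine for ranking single features; sorting-based implementations are used for large samples.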


2014 ◽  
Vol 783-786 ◽  
pp. 2188-2193 ◽  
Author(s):  
I. Toda-Caraballo ◽  
Enrique I. Galindo-Nava ◽  
Pedro E.J. Rivera-Díaz-del-Castillo

Traditionally, the discovery of new materials has been the result of an extremely time-consuming and expensive trial-and-error process. Models for guiding the discovery of new materials have been developed within the European Accelerated Metallurgy project. The application of statistical techniques to large materials datasets has led to the discovery of unexpected regularities among their properties. This work focuses on mechanical properties, in particular the interplay between yield strength, ultimate tensile strength and elongation. A methodology based on principal component analysis and Kocks-Mecking modelling has led to a tool for finding optimal compositional and heat treatment scenarios. The model is first presented for wide ranges of alloys, and its application to the discovery of new magnesium and ferrous alloys is outlined.


2013 ◽  
Vol 80 (3) ◽  
pp. 335-343 ◽  
Author(s):  
Bettina Miekley ◽  
Imke Traulsen ◽  
Joachim Krieter

This investigation analysed the applicability of principal component analysis (PCA), a latent variable method, for the early detection of mastitis and lameness. Data used were recorded on the Karkendamm dairy research farm between August 2008 and December 2010. For mastitis and lameness detection, data of 338 and 315 cows in their first 200 d in milk were analysed, respectively. Mastitis as well as lameness were specified according to veterinary treatments. Diseases were defined as disease blocks. The different definitions used (two for mastitis, three for lameness) varied solely in the sequence length of the blocks. Only the days before the treatment were included in the blocks. Milk electrical conductivity, milk yield and feeding patterns (feed intake, number of feeding visits and time at the trough) were used for recognition of mastitis. Pedometer activity and feeding patterns were utilised for lameness detection. To develop and verify the PCA model, the mastitis and the lameness datasets were divided into training and test datasets. PCA extracted uncorrelated principal components (PCs) by linear transformations of the raw data so that the first few PCs captured most of the variation in the original dataset. For process monitoring and disease detection, these resulting PCs were applied to the Hotelling's T2 chart and to the residual control chart. The results show that block sensitivity of mastitis detection ranged from 77.4 to 83.3%, whilst specificity was around 76.7%. The error rates were around 98.9%. For lameness detection, the block sensitivity ranged from 73.8 to 87.8% while the obtained specificities were between 54.8 and 61.9%. The error rates varied from 87.8 to 89.2%. In conclusion, PCA does not yet seem transferable into practical usage. Results could probably be improved if different traits and more informative sensor data were included in the analysis.
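The Hotelling's T2 chart mentioned above reduces a sample's PC scores to a single monitoring statistic: each score is squared, scaled by its component's variance, and summed, then compared against a control limit. A minimal sketch with illustrative inputs (not the farm data):

```python
# Hotelling's T^2 statistic over the retained principal components:
# T^2 = sum_i (score_i^2 / variance_i). A sample whose T^2 exceeds the
# control limit is flagged as a potential disease/fault alarm.

def hotelling_t2(pc_scores, pc_variances):
    """T^2 for one sample's PC scores, given each PC's variance."""
    return sum(s * s / v for s, v in zip(pc_scores, pc_variances))

def is_alarm(pc_scores, pc_variances, limit):
    """Control-chart decision: does this sample exceed the T^2 limit?"""
    return hotelling_t2(pc_scores, pc_variances) > limit
```

The residual control chart used alongside it plays the complementary role, monitoring the variation the retained PCs do not capture.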


Energies ◽  
2019 ◽  
Vol 12 (2) ◽  
pp. 218 ◽  
Author(s):  
Nan Wei ◽  
Changjun Li ◽  
Jiehao Duan ◽  
Jinyuan Liu ◽  
Fanhua Zeng

Forecasting daily natural gas load accurately is difficult because it is affected by various factors. A large number of redundant factors in the original dataset will increase computational complexity and decrease the accuracy of forecasting models. This study aims to provide accurate forecasting of natural gas load using a deep learning (DL)-based hybrid model, which combines principal component correlation analysis (PCCA) and a long short-term memory (LSTM) network. PCCA is an improved principal component analysis (PCA) and is first proposed in this paper. Considering the correlation between components in the eigenspace, PCCA can not only extract the components that affect natural gas load but also remove the redundant components. LSTM is a well-known DL network, and it was used to predict daily natural gas load in our work. The proposed model was validated using recent natural gas load data from Xi’an (China) and Athens (Greece). Additionally, 14 weather factors were introduced into the input dataset of the forecasting model. The results showed that PCCA–LSTM demonstrated better performance compared with LSTM, PCA–LSTM, back propagation neural network (BPNN), and support vector regression (SVR). The lowest mean absolute percentage errors of PCCA–LSTM were 3.22% and 7.29% for Xi’an and Athens, respectively. On this basis, the proposed model can be regarded as an accurate and robust model for daily natural gas load forecasting.
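The mean absolute percentage error (MAPE) quoted above is the standard comparison metric for such forecasts; a one-line reference implementation (the sample values are illustrative):

```python
# MAPE: average of |actual - forecast| / |actual|, expressed in percent.
# Undefined when any actual value is zero, which daily gas load is not.

def mape(actual, forecast):
    """Mean absolute percentage error between two equal-length series."""
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)
```

A MAPE of 3.22% therefore means the model's daily predictions were off by about 3% of the true load on average.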


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2516
Author(s):  
Chunhua Ju ◽  
Qiuyang Gu ◽  
Gongxing Wu ◽  
Shuangzhu Zhang

Although the Crowd-Sensing perception system brings great data value to people through the release and analysis of high-dimensional perception data, it also poses a serious hidden danger to the privacy of participants. Various privacy protection methods based on differential privacy have been proposed, but most of them cannot simultaneously solve the complex attribute-association problem of high-dimensional perception data and the privacy threats from untrustworthy servers. To address this problem, we put forward a local privacy protection mechanism based on a Bayesian network for high-dimensional perceptual data in this paper. This mechanism realizes local data protection for the users at the very beginning, eliminates the possibility of other parties directly accessing the user’s original data, and fundamentally protects the user’s data privacy. During this process, after receiving the user’s locally protected data, the perception server recognizes the dimensional correlation of the high-dimensional data based on the Bayesian network, divides the high-dimensional attribute set into multiple relatively independent low-dimensional attribute sets, and then sequentially synthesizes the new dataset. It can effectively retain the attribute dimension correlation of the original perception data, and ensure that the synthetic dataset and the original dataset have statistical characteristics as similar as possible. To verify its effectiveness, we conduct a multitude of simulation experiments. Results show that the synthetic data of this mechanism, under effective local privacy protection, has relatively high data utility.
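The "protect locally, estimate globally" principle above is easiest to see in randomized response, the basic local-differential-privacy primitive. This is only an illustration of that principle; the paper's Bayesian-network mechanism for high-dimensional attributes is far more elaborate, and the probability parameter here is an arbitrary example value.

```python
import random

# Randomized response for one binary attribute: each user flips their bit
# with probability 1 - p_truth before it ever leaves the device, so the
# server never sees the true value with certainty.

def randomize(bit, p_truth=0.75):
    """Report the true bit with probability p_truth, else its complement."""
    return bit if random.random() < p_truth else 1 - bit

# The server can still recover the population mean, because the bias the
# perturbation introduces is known: E[report] = p*mean + (1-p)*(1-mean).

def estimate_mean(reports, p_truth=0.75):
    """Unbiased estimate of the true mean from the perturbed reports."""
    r = sum(reports) / len(reports)
    return (r - (1 - p_truth)) / (2 * p_truth - 1)
```

The same trade-off appears in the paper's setting: stronger perturbation means weaker dimensional correlations survive into the synthetic dataset, which is why the Bayesian-network factorization into low-dimensional attribute sets matters.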


2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽  
Author(s):  
Xiaoming Xu ◽  
Chenglin Wen

In traditional principal component analysis (PCA), because the influence of the differing dimensions (units) of the system's variables is neglected, the selected principal components (PCs) often fail to be representative. While relative transformation PCA is able to solve this problem, it is not easy to calculate the weight for each characteristic variable. To solve this, this paper proposes a fault diagnosis method based on information entropy and relative principal component analysis. Firstly, the algorithm calculates the information entropy of each characteristic variable in the original dataset based on the information gain algorithm. Secondly, it standardizes the dimension of every variable in the dataset. Then, according to the information entropy, it allocates a weight to each standardized characteristic variable. Finally, it utilizes the established relative-principal-components model for fault diagnosis. Furthermore, simulation experiments based on the Tennessee Eastman process and Wine datasets demonstrate the feasibility and effectiveness of the new method.
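The entropy-weighting step above can be sketched as follows: compute the Shannon entropy of each (discretized) characteristic variable, then normalize the entropies into weights. The discretization and the full relative-PCA model are omitted, and the toy columns are illustrative.

```python
from collections import Counter
from math import log2

# Shannon entropy of one discretized characteristic variable: variables
# whose values are spread over more states carry more information and
# will receive a larger weight in the relative PCA.

def entropy(values):
    """H(X) = -sum p(x) log2 p(x) over the observed value frequencies."""
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def entropy_weights(columns):
    """Normalize per-variable entropies so the weights sum to 1."""
    ents = [entropy(col) for col in columns]
    total = sum(ents)
    return [e / total for e in ents]
```

Scaling each standardized variable by its weight before running PCA is what makes the resulting components "relative" to the information content of the variables rather than to their raw variances.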


2018 ◽  
Vol 47 (1) ◽  
pp. 51-61
Author(s):  
Mostafa Bahrami ◽  
Hossein Javadikia ◽  
Ebrahim Ebrahimi

This study presents an approach to intelligent fault prediction based on time-domain and frequency-domain (FFT phase angle and PSD) statistical analysis, principal component analysis (PCA) and an adaptive neuro-fuzzy inference system (ANFIS). After vibration data acquisition, the approach proceeds in three stages. First, different features, including time-domain and frequency-domain statistical characteristics, are extracted to obtain more fault-detection information. Second, three components are obtained from the original feature set by principal component analysis. Finally, these three components are input into ANFIS to develop a model that identifies different abnormal cases. The proposed approach is applied to fault diagnosis of the number one gear of an MF285 tractor gearbox, and the testing results show that the proposed model can reliably predict different fault categories and severities.
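The first stage above, time-domain statistical feature extraction from a vibration signal, can be sketched with a few of the standard statistics; the specific feature list of the study is larger, and these names and the sample signal are illustrative.

```python
from math import sqrt

# A few standard time-domain vibration features that commonly feed a PCA
# stage: RMS (overall energy), peak amplitude, and crest factor
# (peak / RMS, sensitive to impulsive gear faults).

def time_features(signal):
    """Return a small dict of time-domain statistics for one record."""
    n = len(signal)
    rms = sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    return {"rms": rms, "peak": peak, "crest": peak / rms}
```

Stacking such per-record feature dicts as rows gives the feature matrix from which the three principal components are then extracted.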

