Damage Detection and Isolation from Limited Experimental Data Using Simple Simulations and Knowledge Transfer

Mathematics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 80
Author(s):  
Asif Khan ◽  
Jun-Sik Kim ◽  
Heung Soo Kim

A simulation model can provide insight into the characteristic behaviors of different health states of an actual system; however, such a simulation cannot account for all complexities in the system. This work proposes a transfer learning strategy that employs simple computer simulations for fault diagnosis in an actual system. A simple shaft-disk system was used to generate a substantial set of source data for three health states of a rotor system, and that data was used to train, validate, and test a customized deep neural network. The deep learning model, pretrained on simulation data, was used as a domain- and class-invariant generalized feature extractor, and the extracted features were processed with traditional machine learning algorithms. The experimental data sets of an RK4 rotor kit and a machinery fault simulator (MFS) were employed to assess the effectiveness of the proposed approach. The proposed method was also validated by comparing its performance with the pre-existing deep learning models GoogLeNet, VGG16, ResNet18, AlexNet, and SqueezeNet in terms of feature extraction, generalizability, computational cost, and network size and parameter count.
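The pipeline described (a network pretrained on simulation data used as a frozen feature extractor, with a traditional classifier on top) can be sketched as follows. Everything here is an illustrative stand-in, not the authors' actual model: a fixed random projection plays the role of the pretrained network, and a nearest-centroid classifier stands in for the traditional machine learning stage.

```python
import math
import random

random.seed(0)

# Stand-in for a network pretrained on simulation data: a fixed random
# projection followed by tanh acts as the frozen feature extractor.
DIM_IN, DIM_FEAT = 8, 4
W = [[random.gauss(0, 1) for _ in range(DIM_IN)] for _ in range(DIM_FEAT)]

def extract_features(x):
    """Map a raw signal window to a feature vector with the frozen network."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

def fit_centroids(samples, labels):
    """Traditional ML stage: nearest-centroid classifier on extracted features."""
    feats = {}
    for x, y in zip(samples, labels):
        feats.setdefault(y, []).append(extract_features(x))
    return {y: [sum(col) / len(fs) for col in zip(*fs)]
            for y, fs in feats.items()}

def predict(centroids, x):
    f = extract_features(x)
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(f, centroids[y])))

# Toy "health states": signals distinguished by their mean level.
train = [[s + random.gauss(0, 0.1) for _ in range(DIM_IN)]
         for s in (0, 1, 2) for _ in range(20)]
train_y = [s for s in (0, 1, 2) for _ in range(20)]
cents = fit_centroids(train, train_y)
print(predict(cents, [1.0] * DIM_IN))
```

The key point of the design is that only the lightweight centroid stage is fit to the target domain; the feature extractor itself is never retrained.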

2022 ◽  
pp. 27-50
Author(s):  
Rajalaxmi Prabhu B. ◽  
Seema S.

A lot of user-generated data is available these days from large platforms, blogs, websites, and other review sites. These data are usually unstructured, and automatically analyzing their sentiment is considered an important challenge. Several machine learning algorithms have been implemented to extract opinions from large data sets, and considerable research has gone into understanding machine learning approaches to sentiment analysis. Machine learning depends heavily on the data used for model building, so suitable feature extraction techniques also need to be applied. This chapter addresses several deep learning approaches, their challenges, and future issues. Deep learning techniques are considered important in predicting the sentiments of users. The chapter aims to analyze deep learning techniques for predicting sentiments and to establish the importance of several approaches for mining opinions and determining sentiment polarity.


Author(s):  
Ziyue Zhang ◽  
A. Adam Ding ◽  
Yunsi Fei

Guessing entropy (GE) is a widely adopted metric that measures the average computational cost needed for a successful side-channel analysis (SCA). However, with current estimation methods, where the evaluator has to average the correct key rank over many independent side-channel leakage measurement sets, full-key GE estimation is impractical due to its prohibitive computing requirement. A recent estimation method based on posterior probabilities, although scalable, is not accurate. We propose a new guessing entropy estimation algorithm (GEEA) based on theoretical distributions of the ranking score vectors. By discovering the relationship of GE with pairwise success rates and utilizing it, GEEA uses a sum of many univariate Gaussian probabilities instead of multivariate Gaussian probabilities, significantly improving the computation efficiency. We show that GEEA is more accurate and efficient than all current GE estimations. To the best of our knowledge, it is the only GE estimation practical for full-key evaluation on the experimental data sets the evaluator has access to. Moreover, it can accurately predict the GE for measurement sets larger than the experimental data sets, providing comprehensive security evaluation.
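The conventional estimation that GEEA improves on, averaging the correct key's rank over many independent measurement sets, can be illustrated with a toy Monte Carlo. The Gaussian score model below is a made-up stand-in for real side-channel leakage, not the paper's algorithm:

```python
import random

random.seed(1)

def guessing_entropy(n_keys=16, correct=3, n_sets=2000, snr=1.0):
    """Empirical GE: average rank of the correct key over many independent
    score vectors.  Each candidate gets a noisy score; the correct key's
    score is shifted up by `snr`, a crude stand-in for leakage strength."""
    total_rank = 0
    for _ in range(n_sets):
        scores = [random.gauss(snr if k == correct else 0.0, 1.0)
                  for k in range(n_keys)]
        # Rank = 1 + number of wrong candidates scoring above the correct key.
        total_rank += 1 + sum(1 for k in range(n_keys)
                              if k != correct and scores[k] > scores[correct])
    return total_rank / n_sets

print(round(guessing_entropy(snr=3.0), 2))  # strong leakage: rank near 1
print(round(guessing_entropy(snr=0.0), 2))  # no leakage: rank near (n+1)/2 = 8.5
```

Even for a single 16-candidate subkey this needs thousands of simulated sets to stabilize, which is why averaging ranks over measurement sets does not scale to full-key evaluation.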


2021 ◽  
Vol 1 (3) ◽  
pp. 138-165
Author(s):  
Thomas Krause ◽  
Jyotsna Talreja Wassan ◽  
Paul Mc Kevitt ◽  
Haiying Wang ◽  
Huiru Zheng ◽  
...  

Metagenomics promises to provide new valuable insights into the role of microbiomes in eukaryotic hosts such as humans. Due to the decreasing costs for sequencing, public and private repositories for human metagenomic datasets are growing fast. Metagenomic datasets can contain terabytes of raw data, which is a challenge for data processing but also an opportunity for advanced machine learning methods like deep learning that require large datasets. However, in contrast to classical machine learning algorithms, the use of deep learning in metagenomics is still an exception. Regardless of the algorithms used, they are usually not applied to raw data but require several preprocessing steps. Performing this preprocessing and the actual analysis in an automated, reproducible, and scalable way is another challenge. This and other challenges can be addressed by adjusting known big data methods and architectures to the needs of microbiome analysis and DNA sequence processing. A conceptual architecture for the use of machine learning and big data on metagenomic data sets was recently presented and initially validated to analyze the rumen microbiome. The same architecture can be used for clinical purposes as is discussed in this paper.


Author(s):  
Željko Ivezić ◽  
Andrew J. Connolly ◽  
Jacob T. VanderPlas ◽  
Alexander Gray ◽  
...  

This chapter describes basic concepts and tools for tractably performing the computations described in the rest of this book. The need for fast algorithms for such analysis subroutines is becoming increasingly important as modern data sets are approaching billions of objects. With such data sets, even analysis operations whose computational cost is linearly proportional to the size of the data set present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. For more sophisticated machine learning algorithms, the often worse-than-linear runtimes of straightforward implementations become quickly unbearable. The chapter looks at some techniques that can reduce such runtimes in a rigorous manner that does not sacrifice the accuracy of the analysis through unprincipled approximations. This is far more important than simply speeding up calculations: in practice, computational performance and statistical performance can be intimately linked. The ability of a researcher, within his or her effective time budget, to try more powerful models or to search parameter settings for each model in question, leads directly to better fits and predictions.
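The gap between a straightforward implementation and a rigorous fast equivalent can be seen in one dimension with nearest-neighbor search, where sorting plus binary search replaces the brute-force scan without changing the answer. This is a toy illustration of the principle, not one of the chapter's own algorithms:

```python
import bisect
import random

random.seed(2)

def nn_brute(points, q):
    """O(n) per query, O(n^2) for n queries: the 'straightforward' version."""
    return min(points, key=lambda p: abs(p - q))

def nn_sorted(sorted_points, q):
    """O(log n) per query after an O(n log n) sort: same answer, no
    unprincipled approximation, just a better data structure."""
    i = bisect.bisect_left(sorted_points, q)
    candidates = sorted_points[max(0, i - 1):i + 1]
    return min(candidates, key=lambda p: abs(p - q))

pts = [random.uniform(0, 1000) for _ in range(10000)]
spts = sorted(pts)
queries = [random.uniform(0, 1000) for _ in range(100)]
assert all(abs(nn_brute(pts, q) - q) == abs(nn_sorted(spts, q) - q)
           for q in queries)
print("fast and brute-force results agree")
```

In higher dimensions the same idea generalizes to tree-based structures (e.g. kd-trees), which is exactly the kind of rigorous speedup the chapter has in view.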


2020 ◽  
Author(s):  
Clayton Eduardo Rodrigues ◽  
Cairo Lúcio Nascimento Júnior ◽  
Domingos Alves Rade

A comparative analysis of machine learning techniques for diagnosing rotating machine faults from vibration spectra images is presented. Feature extraction for different types of faults, such as unbalance, misalignment, shaft crack, rotor-stator rub, and hydrodynamic instability, is performed by processing the spectral image of vibration orbits acquired during the rotating machine run-up. The classifiers are trained with simulation data and tested with both simulation and experimental data. The experimental data are obtained from measurements performed on a rotor-disk system test rig supported on hydrodynamic bearings. To generate the simulated data, a numerical model of the rotating system is developed using the Finite Element Method (FEM). Deep learning, ensemble, and traditional classification methods are evaluated. The ability of the methods to generalize the image classification is assessed by their performance in classifying experimental test patterns that were not used during training. The obtained results suggest that, despite its considerable computational cost, the method based on a Convolutional Neural Network (CNN) performs best for classification of faults based on spectral images.
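The spectral signatures the classifiers consume can be illustrated with a minimal synthetic example. Using the common rule of thumb that unbalance shows up at the shaft frequency (1X) while misalignment adds a strong second harmonic (2X), the sketch below synthesizes toy vibration signals and classifies them by their dominant harmonic; this is an assumed simplification, not the paper's FEM model or CNN:

```python
import cmath
import math

def dft_mag(signal):
    """Magnitude spectrum via a direct DFT (no external FFT needed here)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def synth_vibration(fault, n=128):
    """Toy vibration signal: unbalance -> dominant 1X component,
    misalignment -> strong 2X component (a common diagnostic rule of thumb)."""
    sig = []
    for t in range(n):
        w = 2 * math.pi * 4 * t / n  # 4 shaft revolutions in the window
        if fault == "unbalance":
            sig.append(math.sin(w))
        else:  # misalignment
            sig.append(0.3 * math.sin(w) + math.sin(2 * w))
    return sig

def classify(signal):
    """Classify by which harmonic of the shaft speed dominates the spectrum
    (shaft speed sits in bin 4, its second harmonic in bin 8)."""
    mag = dft_mag(signal)
    return "unbalance" if mag[4] > mag[8] else "misalignment"

print(classify(synth_vibration("unbalance")))     # -> unbalance
print(classify(synth_vibration("misalignment")))  # -> misalignment
```

The CNN in the paper effectively learns such harmonic patterns from two-dimensional orbit spectra instead of relying on a hand-picked rule.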


Energies ◽  
2021 ◽  
Vol 14 (22) ◽  
pp. 7760
Author(s):  
Yubiao Sun ◽  
Qiankun Sun ◽  
Kan Qin

It is the tradition in the fluid dynamics community to study flow problems via numerical simulations such as finite-element, finite-difference, and finite-volume methods. These approaches use various mesh techniques to discretize a complicated geometry and eventually convert the governing equations into finite-dimensional algebraic systems. To date, many attempts have been made to solve flow problems by exploiting machine learning. However, conventional data-driven machine learning algorithms require large amounts of labeled data, which are computationally expensive to generate for complex and multi-physics problems. In this paper, we propose a data-free, physics-driven deep learning approach to solve various low-speed flow problems and demonstrate its robustness in generating reliable solutions. Instead of feeding neural networks large labeled data sets, we exploit known physical laws and incorporate this physics into the neural network to relax the strict requirement of big data and improve prediction accuracy. The employed physics-informed neural networks (PINNs) provide a feasible and cheap alternative for approximating the solution of differential equations with specified initial and boundary conditions. Approximate solutions of the physical equations are obtained by minimizing a customized objective function, which consists of residuals of the differential operators, the initial/boundary conditions, and the mean-squared errors between predictions and target values. This new approach is data efficient and can greatly lower the computational cost for large and complex geometries. The capacity and generality of the proposed method are assessed by solving various flow and transport problems, including flow past a cylinder, the linear Poisson equation, heat conduction, and the Taylor–Green vortex problem.
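The customized objective (PDE residual plus boundary-condition terms, with no labeled data) can be demonstrated on a one-dimensional Poisson problem. A quadratic polynomial stands in for the neural network ansatz, so the whole loss is analytic; this is a minimal sketch of the PINN idea, not the paper's implementation:

```python
def pinn_loss(a, b, c):
    """Physics-driven objective for u'' = 2 on [0,1] with u(0)=0, u(1)=1.
    With the ansatz u = a + b*x + c*x**2, u'' = 2c everywhere, so the PDE
    residual needs only one collocation term; the exact solution is u = x**2."""
    residual = (2 * c - 2) ** 2          # PDE residual (no labeled data)
    bc = a ** 2 + (a + b + c - 1) ** 2   # boundary conditions u(0)=0, u(1)=1
    return residual + bc

def grad(a, b, c):
    """Analytic gradient of the loss above."""
    g_common = 2 * (a + b + c - 1)
    return (2 * a + g_common, g_common, (8 * c - 8) + g_common)

# Gradient descent on the physics-informed loss, starting from zero.
a = b = c = 0.0
for _ in range(2000):
    ga, gb, gc = grad(a, b, c)
    a, b, c = a - 0.1 * ga, b - 0.1 * gb, c - 0.1 * gc

print(round(a, 3), round(b, 3), round(c, 3))  # approaches (0, 0, 1), i.e. u = x**2
```

A real PINN replaces the polynomial with a neural network, evaluates the residual at many collocation points via automatic differentiation, and optionally adds mean-squared-error terms against any available target values, but the structure of the objective is the same.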


2021 ◽  
Vol 12 ◽  
Author(s):  
Jujuan Zhuang ◽  
Danyang Liu ◽  
Meng Lin ◽  
Wenjing Qiu ◽  
Jinyang Liu ◽  
...  

Background: Pseudouridine (Ψ) is a common ribonucleotide modification that plays a significant role in many biological processes. Identifying Ψ modification sites is of great significance for research on disease mechanisms and biological processes, and machine learning algorithms are desirable because exploratory lab techniques are expensive and time-consuming.
Results: In this work, we propose a deep learning framework, called PseUdeep, to identify Ψ sites in three species: H. sapiens, S. cerevisiae, and M. musculus. Three encoding methods are used to extract features from RNA sequences: one-hot encoding, K-tuple nucleotide frequency pattern, and position-specific nucleotide composition. The three feature matrices are convoluted twice and fed into a capsule neural network and a bidirectional gated recurrent unit network with a self-attention mechanism for classification.
Conclusion: Compared with other state-of-the-art methods, our model achieves the highest prediction accuracy on the independent testing data sets S-200 (improving accuracy by 12.38%) and H-200 (improving accuracy by 0.68%). Moreover, the dimensions of the features we derive from the RNA sequences are only 109, 109, and 119 in H. sapiens, M. musculus, and S. cerevisiae, respectively, much smaller than those used in traditional algorithms. On evaluation via tenfold cross-validation and two independent testing data sets, PseUdeep outperforms the best traditional machine learning model available. PseUdeep source code and data sets are available at https://github.com/dan111262/PseUdeep.
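Two of the three named encodings are simple enough to show directly. The sketch below implements one-hot encoding and the K-tuple nucleotide frequency pattern for an RNA fragment; the dimensions are illustrative, and the position-specific nucleotide composition and the network itself are omitted:

```python
from collections import Counter
from itertools import product

BASES = "ACGU"

def one_hot(seq):
    """One-hot encoding: a length-4 indicator vector per nucleotide."""
    return [[1 if base == b else 0 for b in BASES] for base in seq]

def ktuple_freq(seq, k=2):
    """K-tuple nucleotide frequency: normalized counts of every k-mer,
    including k-mers that never occur (frequency 0)."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    n = max(len(seq) - k + 1, 1)
    return {"".join(t): counts["".join(t)] / n for t in product(BASES, repeat=k)}

print(one_hot("ACGU"))
freqs = ktuple_freq("ACGUAC", 2)
print(freqs["AC"], freqs["CG"])  # 0.4 0.2
```

Stacking such per-sequence matrices and vectors yields the compact feature representations that the convolutional and recurrent stages then process.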


2020 ◽  
Vol 8 (5) ◽  
pp. 1160-1166

In this paper, the existing literature on computer-aided diagnosis (CAD) based identification of lesions relevant to the early detection of Diabetic Retinopathy (DR) is discussed. The detection of lesions such as Microaneurysms (MA), Hemorrhages (HEM), and Exudates (EX) is covered. A range of methodologies, from conventional morphology to deep learning techniques, is discussed. Different strategies are explored, from hand-crafted feature extraction to automated CNN-based feature extraction, and from single-lesion to multi-lesion detection. The stages of each method, from image preprocessing to classification, are investigated. The performance of the proposed strategies is summarized by various performance measurement parameters, and the data sets used are tabulated. Finally, future directions are examined.


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Edeh Michael Onyema ◽  
Piyush Kumar Shukla ◽  
Surjeet Dalal ◽  
Mayuri Neeraj Mathur ◽  
Mohammed Zakariah ◽  
...  

The use of machine learning algorithms for facial expression recognition and patient monitoring is a growing area of research interest. In this study, we present a technique for facial expression recognition based on a deep learning algorithm: the convolutional neural network (ConvNet). Data were collected for training from the FER2013 dataset, which contains samples of the seven universal facial expressions. The results show that the presented technique improves facial expression recognition accuracy without stacking the many CNN layers that lead to a computationally costly model. This study addresses the issue of high computational cost in facial expression recognition by providing a model close in accuracy to the state of the art. The study concludes that deep-learning-enabled facial expression recognition techniques enhance accuracy and improve the recognition and interpretation of facial expressions and features, promoting efficiency and prediction in the health sector.
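The basic building block whose depth the study trades off, a convolutional layer, can be written out directly. Below is a minimal valid 2-D convolution with ReLU in plain Python, shown on a tiny vertical-edge detection example; it illustrates the operation only and is not the study's FER2013 network:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most DL libraries):
    slide the kernel over the image and sum elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def relu(fmap):
    """Elementwise rectified linear activation."""
    return [[max(0.0, v) for v in row] for row in fmap]

# A vertical-edge kernel applied to a tiny image with a dark-to-bright split:
# the feature map responds only at the column where intensity jumps.
img = [[0, 0, 1, 1]] * 4
edge = [[-1, 1], [-1, 1], [-1, 1]]
fmap = relu(conv2d(img, edge))
print(fmap)
```

Each added layer repeats this sliding-window computation over the previous feature maps, which is exactly why deeply stacked CNNs become computationally costly.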


2018 ◽  
Vol 14 (2) ◽  
pp. 127-138
Author(s):  
Asif Banka ◽  
Roohie Mir

Advancements in modern computing and architectures focus on harnessing parallelism to achieve high-performance computing, resulting in the generation of massive amounts of data. The information produced needs to be represented and analyzed to address various challenges in technology and business domains. The radical expansion and integration of digital devices, networking, data storage, and computation systems are generating more data than ever. Because data sets are massive and complex, traditional learning methods fall short, which has in turn led to the adoption of machine learning techniques to mine the information hidden in unseen data. Interestingly, deep learning finds its place in big data applications; one of its major advantages is that its features are not human-engineered. In this paper, we look at various machine learning algorithms that have already been applied to big data problems with promising results. We also look at deep learning as a solution to big data issues that are not efficiently addressed using traditional methods. Deep learning is finding its place in most applications involving the critical and dominant 5Vs of big data and is expected to perform better.

