Distributed Training for Data Driven Models in Power Machinery Online Monitoring

Author(s):  
Hang Wu ◽  
Wei Wang ◽  
Dengji Zhou ◽  
Shixi Ma ◽  
Huisheng Zhang

Abstract Prognostics and Health Management (PHM) systems have been widely applied to the online monitoring of modern mechanical devices, and the models inside PHM systems are of vital importance. Traditional mechanism models can simulate parameters and evaluate the overall machine state accurately, but they demand a deep understanding of the monitored object. Data-driven models are therefore a suitable substitute: they take advantage of machine learning techniques and can be established purely from historical data. The models applied in this paper are based on Artificial Neural Networks (ANNs), but the complexity of the network structures consumes a large amount of computing resources and time during training. For online monitoring, spending too much time on training delays real-time prediction and slows the reaction of the PHM system under abnormal conditions. In this paper, distributed training on a computer cluster is proposed to reduce training time. By dispatching the computing task across several workers, each server in the cluster undertakes only a small fraction of the total computing load, which accelerates the overall training process. On the other hand, there is a risk of losing prediction accuracy and stability because of the nonlinear gradient deviation introduced by data parallelism. The aggregation period (AP) is an important factor in balancing these competing requirements. This paper analyzes the influence of the AP on training speed, accuracy and stability, and then proposes a novel distributed training algorithm based on the regularities observed. A distributed training and online monitoring process for a typical two-shaft gas turbine is then taken as an example in the results and discussion section. The experimental results fit the theoretical regularities well, and the revised distributed learning algorithm meets all the requirements of training speed, prediction accuracy and stability.
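The role of the aggregation period can be illustrated with a minimal, self-contained sketch (not the paper's code): several workers run local SGD on their own data shards and average their model parameters only once every AP steps. The toy regression problem, worker count and parameter names below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data, split into shards to emulate data parallelism.
N_WORKERS, AP, STEPS, LR = 4, 10, 200, 0.05
X = rng.normal(size=(4000, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=4000)
shards = np.array_split(np.arange(4000), N_WORKERS)

# Each worker keeps its own copy of the model parameters.
w = [np.zeros(3) for _ in range(N_WORKERS)]

for step in range(1, STEPS + 1):
    for k in range(N_WORKERS):
        idx = rng.choice(shards[k], size=32)          # local mini-batch
        grad = 2 * X[idx].T @ (X[idx] @ w[k] - y[idx]) / len(idx)
        w[k] -= LR * grad                             # local SGD update
    if step % AP == 0:                                # aggregation every AP steps
        w_avg = np.mean(w, axis=0)
        w = [w_avg.copy() for _ in range(N_WORKERS)]  # broadcast the averaged model

print("final averaged parameters:", np.mean(w, axis=0))
```

A small AP keeps the workers' models close together (better accuracy and stability, but more communication), while a large AP reduces communication at the cost of larger deviation between local models, which is exactly the trade-off the abstract describes.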

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Shawn Gu ◽  
Tijana Milenković

Abstract Background Network alignment (NA) can transfer functional knowledge between species' conserved biological network regions. Traditional NA assumes that it is topological similarity (isomorphic-like matching) between network regions that corresponds to the regions' functional relatedness. However, we recently found that functionally unrelated proteins are as topologically similar as functionally related proteins. So, we redefined NA as a data-driven method called TARA, which learns from network and protein functional data what kind of topological relatedness (rather than similarity) between proteins corresponds to their functional relatedness. TARA used topological information (within each network) but not sequence information (between proteins across networks). Yet, TARA yielded higher protein functional prediction accuracy than existing NA methods, even those that used both topological and sequence information. Results Here, we propose TARA++, which is also data-driven, like TARA and unlike other existing methods, but which uses across-network sequence information on top of within-network topological information, unlike TARA. To deal with the within-and-across-network analysis, we adapt social network embedding to the problem of biological NA. TARA++ outperforms existing methods in protein functional prediction accuracy. Conclusions As such, combining research knowledge from different domains is promising. Overall, improvements in protein functional prediction have biomedical implications, for example, allowing researchers to better understand how cancer progresses or how humans age.
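A rough sketch of the data-driven idea behind TARA-style methods (not the TARA++ implementation itself): describe each across-network protein pair by simple topological features of the two proteins and train a classifier on known functionally related/unrelated pairs. The feature choices, the random graphs and the labels below are placeholders.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def topo_features(G, node):
    """Very simple per-node topological descriptors (placeholders)."""
    return [G.degree(node), nx.clustering(G, node),
            np.mean([G.degree(n) for n in G.neighbors(node)] or [0])]

def pair_features(G1, n1, G2, n2):
    # Represent an across-network protein pair by the two nodes' features.
    return topo_features(G1, n1) + topo_features(G2, n2)

# Hypothetical PPI networks and labelled pairs (1 = functionally related).
G1 = nx.erdos_renyi_graph(50, 0.1, seed=1)
G2 = nx.erdos_renyi_graph(60, 0.1, seed=2)
labelled = [((0, 3), 1), ((1, 7), 0), ((2, 11), 1),
            ((4, 5), 0), ((6, 20), 1), ((8, 9), 0)]

X = np.array([pair_features(G1, a, G2, b) for (a, b), _ in labelled])
y = np.array([lab for _, lab in labelled])
clf = LogisticRegression().fit(X, y)

# Predicted probability of functional relatedness for an unseen pair.
print(clf.predict_proba(np.array([pair_features(G1, 10, G2, 30)]))[0, 1])
```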


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 460
Author(s):  
Samuel Yen-Chi Chen ◽  
Shinjae Yoo

Distributed training across several quantum computers could significantly improve training time, and sharing the learned model rather than the data could also improve data privacy, since training would happen where the data are located. One potential scheme to achieve this is federated learning (FL), in which several clients or local nodes learn on their own data and a central node aggregates the models collected from those local nodes. However, to the best of our knowledge, no work has yet been done on quantum machine learning (QML) in a federated setting. In this work, we present federated training of hybrid quantum-classical machine learning models, although our framework could be generalized to purely quantum machine learning models. Specifically, we consider a quantum neural network (QNN) coupled with a classical pre-trained convolutional model. Our distributed federated learning scheme achieves almost the same trained-model accuracy while training significantly faster. This demonstrates a promising research direction for both scaling and privacy.
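The aggregation step at the heart of such a scheme can be sketched as a FedAvg-style routine (an illustration, not the paper's code): each client trains locally and the server forms a sample-size-weighted average of the returned parameters.

```python
from typing import Dict, List
import numpy as np

def fedavg(client_params: List[Dict[str, np.ndarray]],
           client_sizes: List[int]) -> Dict[str, np.ndarray]:
    """Weighted average of client model parameters (FedAvg-style aggregation).

    client_params: one parameter dict per client (layer name -> weights).
    client_sizes:  number of local training samples per client.
    """
    total = float(sum(client_sizes))
    global_params = {}
    for name in client_params[0]:
        global_params[name] = sum(
            (n / total) * params[name]
            for params, n in zip(client_params, client_sizes)
        )
    return global_params

# Toy usage with two clients and a single "layer".
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
print(fedavg(clients, client_sizes=[100, 300]))   # {'w': array([2.5, 3.5])}
```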


Author(s):  
Zhimin Xi ◽  
Rong Jing ◽  
Pingfeng Wang ◽  
Chao Hu

This paper develops a Copula-based sampling method for data-driven prognostics and health management (PHM). The principal idea is to first build a statistical relationship between failure time and the time realizations at specified degradation levels on the basis of off-line training data sets, and then identify possible failure times for on-line testing units from the constructed statistical model and the available on-line testing data. Specifically, three technical components are proposed to implement the methodology. First, a generic health index system is proposed to represent the health degradation of engineering systems. Next, a Copula-based modeling approach is proposed to build the statistical relationship between failure time and the time realizations at specified degradation levels. Finally, a sampling approach is proposed to estimate the failure time and remaining useful life (RUL) of on-line testing units. Two case studies, a bearing system in electric cooling fans and the 2008 IEEE PHM challenge problem, are employed to demonstrate the effectiveness of the proposed methodology.
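The core idea of relating the time to reach a degradation level to the failure time can be sketched with a bivariate Gaussian copula (an illustrative choice, not necessarily the copula family used in the paper): fit marginals and a dependence parameter on off-line training units, then, for an on-line unit whose time-to-degradation-level has just been observed, sample failure times from the conditional distribution. The data below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Off-line training data (hypothetical): for each unit, the time at which a
# specified degradation level was reached, and the eventual failure time.
t_level = np.array([52., 60., 47., 58., 55., 63., 50., 57.])
t_fail  = np.array([81., 95., 74., 90., 86., 99., 78., 88.])

# Fit Gaussian marginals and transform to standard-normal scores.
mu1, s1 = t_level.mean(), t_level.std(ddof=1)
mu2, s2 = t_fail.mean(),  t_fail.std(ddof=1)
z1 = (t_level - mu1) / s1
z2 = (t_fail  - mu2) / s2
rho = np.corrcoef(z1, z2)[0, 1]          # Gaussian-copula dependence parameter

def sample_failure_times(t_obs, n=1000):
    """Sample failure times for an on-line unit that reached the degradation
    level at time t_obs, using the conditional Gaussian copula."""
    z = (t_obs - mu1) / s1
    z_fail = rng.normal(rho * z, np.sqrt(1 - rho**2), size=n)
    return mu2 + s2 * z_fail

samples = sample_failure_times(t_obs=59.0)
rul = samples - 59.0                      # remaining useful life measured from t_obs
print(f"median RUL: {np.median(rul):.1f}, 90% interval: "
      f"({np.percentile(rul, 5):.1f}, {np.percentile(rul, 95):.1f})")
```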


2021 ◽  
Vol 251 ◽  
pp. 02054
Author(s):  
Olga Sunneborn Gudnadottir ◽  
Daniel Gedon ◽  
Colin Desmarais ◽  
Karl Bengtsson Bernander ◽  
Raazesh Sainudiin ◽  
...  

In recent years, machine-learning methods have become increasingly important for the experiments at the Large Hadron Collider (LHC). They are utilised in everything from trigger systems to reconstruction and data analysis. The recent UCluster method is a general model providing unsupervised clustering of particle physics data that can easily be modified to provide solutions for a variety of different decision problems. In the current paper, we improve on the UCluster method by adding the option of training the model in a scalable and distributed fashion, thereby extending its utility to learn from arbitrarily large data sets. UCluster combines a graph-based neural network called ABCnet with a clustering step, using a combined loss function in the training phase. The original code is publicly available in TensorFlow v1.14 and has previously been trained on a single GPU. It shows a clustering accuracy of 81% when applied to the problem of multi-class classification of simulated jet events. Our implementation adds the distributed training functionality by utilising the Horovod distributed training framework, which necessitated a migration of the code to TensorFlow v2. Together with using Parquet files to split the data between different compute nodes, the distributed training makes the model scalable to any amount of input data, something that will be essential for use with real LHC data sets. We find that the model is well suited for distributed training, with the training time decreasing in direct relation to the number of GPUs used. However, a more exhaustive, and possibly distributed, hyper-parameter search is required in order to achieve the reported accuracy of the original UCluster method.
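The migration described above follows the standard Horovod-on-Keras pattern. A minimal sketch of distributed training with Horovod and TensorFlow v2 (not the UCluster code itself; the model and dataset below are placeholders) looks roughly like this:

```python
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each Horovod process to a single GPU.
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Placeholder model and data; UCluster would build ABCnet and read Parquet shards here.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([1024, 16]),
     tf.random.uniform([1024], maxval=10, dtype=tf.int32))
).shard(hvd.size(), hvd.rank()).batch(32)   # each worker trains on its own shard

# Scale the learning rate by the number of workers and wrap the optimizer.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy")

# Broadcast initial weights from rank 0 so all workers start identically.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
model.fit(dataset, epochs=3, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```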


2021 ◽  
pp. 739-746
Author(s):  
Xiaoming Xie ◽  
Tengfei Zhang ◽  
Qingyu Zhu ◽  
Guigang Zhang

Author(s):  
Hao Yu ◽  
Sen Yang ◽  
Shenghuo Zhu

In distributed training of deep neural networks, parallel mini-batch SGD is widely used to speed up the training process by using multiple workers. It uses multiple workers to sample local stochastic gradients in parallel, aggregates all gradients on a single server to obtain the average, and updates each worker's local model using an SGD update with the averaged gradient. Ideally, parallel mini-batch SGD can achieve a linear speed-up of the training time (with respect to the number of workers) compared with SGD over a single worker. In practice, however, such linear scalability is significantly limited by the growing demand for gradient communication as more workers are involved. Model averaging, which periodically averages individual models trained on parallel workers, is another common practice for distributed training of deep neural networks since (Zinkevich et al. 2010; McDonald, Hall, and Mann 2010). Compared with parallel mini-batch SGD, the communication overhead of model averaging is significantly reduced. Impressively, numerous experimental studies have verified that model averaging can still achieve a good speed-up of the training time as long as the averaging interval is carefully controlled. However, it remains a mystery in theory why such a simple heuristic works so well. This paper provides a thorough and rigorous theoretical study of why model averaging can work as well as parallel mini-batch SGD with significantly less communication overhead.
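The two update schemes being compared can be written down concisely. In a sketch built on torch.distributed (an illustration, not the paper's code), parallel mini-batch SGD all-reduces gradients at every step, while model averaging all-reduces the parameters only once per averaging interval:

```python
import torch
import torch.distributed as dist

def average_gradients(model: torch.nn.Module) -> None:
    """Parallel mini-batch SGD: average gradients across workers at every step,
    so every worker applies the same (averaged) gradient."""
    world = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world

def average_parameters(model: torch.nn.Module) -> None:
    """Model averaging: average the parameters themselves, called only once
    every I local steps, so communication drops by roughly a factor of I."""
    world = dist.get_world_size()
    with torch.no_grad():
        for p in model.parameters():
            dist.all_reduce(p.data, op=dist.ReduceOp.SUM)
            p.data /= world

# Typical use inside a training loop (assuming dist.init_process_group was called):
#   loss.backward(); average_gradients(model); optimizer.step()        # scheme 1
#   loss.backward(); optimizer.step()
#   if step % I == 0: average_parameters(model)                        # scheme 2
```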


2019 ◽  
Vol 29 (11n12) ◽  
pp. 1801-1818
Author(s):  
Yixiao Yang ◽  
Xiang Chen ◽  
Jiaguang Sun

In the last few years, applying language models to source code has become the state-of-the-art approach to code completion. However, compared with natural language, code exhibits much stronger repetition: a variable may be used many times in the code that follows, so variables in source code have a high chance of being repeated, and cloned code and templates also show token repetition. Capturing the token repetition of source code is therefore important. In different projects, variables or types are usually named differently, which means a model trained on a finite data set will encounter many unseen variables or types in another data set. How to model the semantics of unseen data and how to predict unseen tokens from patterns of token repetition are two challenges in code completion. Hence, in this paper, token repetition is modelled as a graph, and we propose a novel REP model based on a deep graph neural network to learn code token repetition. The REP model identifies the edge connections of this graph to recognize token repetition. To predict the repetition of token [Formula: see text], the information of all previous tokens needs to be considered. We use a memory neural network (MNN) to model the semantics of each distinct token, making the REP framework more targeted. The experiments indicate that the REP model performs better than an LSTM model. Compared with the Attention-Pointer network, we also discover that the attention mechanism does not work in all situations. The proposed REP model achieves similar or slightly better prediction accuracy than the Attention-Pointer network while consuming less training time. We also identify another attention mechanism that could further improve prediction accuracy.
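The notion of a token repetition graph can be illustrated with a small sketch (an interpretation of the idea, not the REP implementation): every token occurrence becomes a node, and an edge links each occurrence back to earlier occurrences of the same token, which is exactly the structure a model would need in order to predict whether the next token repeats an earlier one.

```python
import io
import tokenize
from collections import defaultdict

def token_repetition_graph(source: str):
    """Build (nodes, edges): nodes are (position, token_text) pairs and an edge
    (i, j) links occurrence j of a token back to an earlier occurrence i."""
    toks = [t.string for t in tokenize.generate_tokens(io.StringIO(source).readline)
            if t.type in (tokenize.NAME, tokenize.NUMBER, tokenize.STRING)]
    positions = defaultdict(list)
    edges = []
    for j, tok in enumerate(toks):
        for i in positions[tok]:          # earlier occurrences of the same token
            edges.append((i, j))
        positions[tok].append(j)
    return list(enumerate(toks)), edges

src = "total = 0\nfor x in data:\n    total = total + x\n"
nodes, edges = token_repetition_graph(src)
print(edges)   # repetition edges between the occurrences of 'total' and of 'x'
```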

