Deep learning for high-dimensional reliability analysis

2020 ◽  
Vol 139 ◽  
pp. 106399 ◽  
Author(s):  
Mingyang Li ◽  
Zequn Wang


2021 ◽ 
Vol 15 (8) ◽  
pp. 898-911
Author(s):  
Yongqing Zhang ◽  
Jianrong Yan ◽  
Siyu Chen ◽  
Meiqin Gong ◽  
Dongrui Gao ◽  
...  

Rapid advances in biological research over recent years have significantly enriched biological and medical data resources. Deep learning-based techniques have been successfully used to process data in this field and have exhibited state-of-the-art performance even on high-dimensional, unstructured, and black-box biological data. The aim of the current study is to provide an overview of the deep learning-based techniques used in biology and medicine and their state-of-the-art applications. In particular, we introduce the fundamentals of deep learning and then review the success of applying such methods to bioinformatics, biomedical imaging, biomedicine, and drug discovery. We also discuss the challenges and limitations of this field and outline possible directions for further research.


Author(s):  
Chong Chen ◽  
Ying Liu ◽  
Xianfang Sun ◽  
Shixuan Wang ◽  
Carla Di Cairano-Gilfedder ◽  
...  

Over the last few decades, reliability analysis has gained increasing attention because it can help lower maintenance costs. Time between failures (TBF) is an essential topic in reliability analysis. If the TBF can be accurately predicted, preventive maintenance can be scheduled in advance to avoid critical failures. The purpose of this paper is to study TBF prediction using deep learning techniques. Deep learning, capable of capturing highly complex and nonlinear patterns, can be a useful tool for TBF prediction. The general principles of designing a deep learning model are introduced. Using a sizeable automobile TBF dataset, we conduct an empirical study of TBF prediction with deep learning and several data mining approaches. The empirical results show the merits of deep learning in predictive performance, albeit at the cost of a high computational load.
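As a minimal illustration of the kind of model this abstract describes, the sketch below trains a small feedforward regressor on synthetic TBF data. The covariates, the two-hidden-layer architecture, and the log-TBF target are illustrative assumptions, not the paper's actual setup or dataset.

```python
# Minimal sketch of deep-learning-based TBF regression on tabular covariates
# (e.g. usage, age, repair history). All features here are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))                    # 12 hypothetical covariates
tbf = np.exp(1.5 + X[:, 0] - 0.5 * X[:, 1] ** 2    # synthetic nonlinear TBF
             + rng.normal(scale=0.2, size=2000))

X_tr, X_te, y_tr, y_te = train_test_split(X, tbf, random_state=0)
scaler = StandardScaler().fit(X_tr)

# Two hidden layers capture the nonlinear covariate/TBF relationship.
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                     random_state=0)
model.fit(scaler.transform(X_tr), np.log(y_tr))    # log-TBF stabilizes training

pred = np.exp(model.predict(scaler.transform(X_te)))
print("MAE on held-out TBF:", mean_absolute_error(y_te, pred))
```

Predicting log-TBF rather than raw TBF is a common choice for positive, right-skewed failure times; the paper's comparison against classical data mining approaches would substitute other regressors for the network above.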


2018 ◽  
Vol 24 (4) ◽  
pp. 225-247 ◽  
Author(s):  
Xavier Warin

Abstract A new method based on nesting Monte Carlo is developed to solve high-dimensional semi-linear PDEs. Depending on the type of non-linearity, different schemes are proposed and theoretically studied: variance errors are given, and it is shown that the bias of the schemes can be controlled. The limitation of the method is that the maturity or the Lipschitz constants of the non-linearity should not be too high, in order to avoid an explosion of the computational time. Many numerical results are given in high dimension for cases where analytical solutions are available or where some solutions can be computed by deep-learning methods.
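For intuition, here is a minimal sketch of one way such nesting can look, assuming the semi-linear heat equation du/dt + 0.5*Lap(u) + f(u) = 0 with terminal condition u(T,x) = g(x), written through its Feynman-Kac fixed point u(t,x) = E[g(x + W_{T-t})] + (T-t) E[f(u(S, x + W_{S-t}))] with S uniform on [t,T]. The non-linearity f, terminal condition g, depths, and sample sizes are all illustrative; this is not the paper's exact scheme or its bias/variance tuning.

```python
# Recursive nested Monte Carlo for a semi-linear heat equation (sketch).
import numpy as np

rng = np.random.default_rng(0)

def f(u):                       # hypothetical Lipschitz non-linearity
    return 0.1 * np.cos(u)

def g(x):                       # hypothetical terminal condition, d-dimensional
    return np.cos(x.sum(axis=-1))

def u_nested(t, x, T, depth, n_samples):
    """Nested MC estimate of u(t, x); at depth 0 the f-term is dropped."""
    d = x.shape[0]
    dW = rng.normal(size=(n_samples, d)) * np.sqrt(T - t)
    est = g(x + dW).mean()
    if depth == 0:
        return est
    s = rng.uniform(t, T, size=n_samples)      # random intermediate times
    inner = np.empty(n_samples)
    for i in range(n_samples):
        dWs = rng.normal(size=d) * np.sqrt(s[i] - t)
        # Inner estimates use fewer samples; this nesting drives the cost,
        # which explodes as depth (i.e. effective maturity) grows.
        inner[i] = f(u_nested(s[i], x + dWs, T, depth - 1,
                              max(2, n_samples // 4)))
    return est + (T - t) * inner.mean()

# u(0, x) in dimension d = 10.
print(u_nested(0.0, np.zeros(10), T=1.0, depth=2, n_samples=64))
```

The exponential growth of inner evaluations with depth is exactly why, as the abstract notes, the maturity or the Lipschitz constants of f must stay moderate.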


Author(s):  
Zequn Wang ◽  
Mingyang Li

Abstract Conventional uncertainty quantification methods usually lack the capability to deal with high-dimensional problems due to the curse of dimensionality. This paper presents a semi-supervised learning framework for dimension reduction and reliability analysis. An autoencoder is first adopted to map the high-dimensional space into a low-dimensional latent space that contains a distinguishable failure surface. A deep feedforward neural network (DFN) is then utilized to learn the mapping relationship and reconstruct the latent space, while Gaussian process (GP) modeling is used to build a surrogate model of the transformed limit state function. During training of the DFN, the discrepancy between the actual and reconstructed latent spaces is minimized through semi-supervised learning to ensure accuracy, with both labeled and unlabeled samples used to define the loss function. An evolutionary algorithm is adopted to train the DFN, and the Monte Carlo simulation method is then used for uncertainty quantification and reliability analysis within the proposed framework. The effectiveness of the framework is demonstrated through a mathematical example.
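A minimal sketch of the framework's dimension-reduction, surrogate, and Monte Carlo stages follows. PCA stands in for the paper's autoencoder, the DFN reconstruction and evolutionary training steps are omitted, and the limit-state function g() below is a hypothetical high-dimensional example, not the paper's.

```python
# Sketch: reduce dimension, fit a GP surrogate of the limit state in the
# latent space, then estimate failure probability by Monte Carlo simulation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
d = 40                                        # nominal input dimension

def g(x):                                     # hypothetical limit state: failure if g < 0
    return 3.0 - x[:, :5].sum(axis=1) / np.sqrt(5)

# Labeled design samples: a few expensive limit-state evaluations.
X_train = rng.normal(size=(100, d))
y_train = g(X_train)

# Dimension reduction (an autoencoder in the paper; PCA here for brevity).
latent = PCA(n_components=4).fit(X_train)
Z_train = latent.transform(X_train)

# GP surrogate of the transformed limit-state function.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              normalize_y=True).fit(Z_train, y_train)

# Monte Carlo simulation on the cheap surrogate.
X_mc = rng.normal(size=(100_000, d))
pf_hat = float((gp.predict(latent.transform(X_mc)) < 0).mean())
print("estimated failure probability:", pf_hat)
```

The key point the abstract makes is that the GP is fit in the low-dimensional latent space, where a failure surface is distinguishable, rather than in the original 40-plus dimensions where GP modeling degrades.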


2021 ◽  
Author(s):  
R. Priyadarshini ◽  
K. Anuratha ◽  
N. Rajendran ◽  
S. Sujeetha

An anomaly is an uncommon observation; it represents an outlier, i.e., a nonconforming case. The Oxford Dictionary of Mathematics defines an anomaly as an unusual and erroneous observation that does not follow the general pattern of the population it is drawn from. Anomaly detection is a data mining process that aims to find data points or patterns that do not conform with the overall pattern of the data. The behavior and impact of anomalies have been studied in areas such as network security, finance, healthcare, and earth sciences. Proper detection and prediction of anomalies is of great importance, as these rare observations may carry significant information. In today's financial world, enterprise data is digitized and stored in the cloud, so there is a significant need to detect anomalies in financial data to help enterprises deal with the huge amount of auditing. Corporations and enterprises conduct audits on large numbers of ledgers and journal entries, and the monitoring of those audits is mostly performed manually. Proper anomaly detection is therefore needed for the high-dimensional data published in ledger format for auditing purposes. This work aims at analyzing and predicting unusual fraudulent financial transactions by employing several machine learning and deep learning methods. When an anomaly such as manipulation or tampering of data is detected, such anomalies and errors can be identified and marked with proper proof with the help of the machine-learning-based algorithms. The accuracy of prediction is increased by 7% by implementing the proposed prediction models.
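As a minimal sketch of unsupervised anomaly detection on high-dimensional journal-entry data, the example below uses an Isolation Forest, one classical machine learning baseline for this task; the abstract does not specify the study's exact models, and the ledger features and contamination rate here are illustrative assumptions.

```python
# Flag suspicious journal entries with an Isolation Forest (sketch).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(size=(5000, 30))              # 30 hypothetical ledger features
tampered = rng.normal(loc=4.0, size=(50, 30))     # injected manipulated entries
X = np.vstack([normal, tampered])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)                       # -1 marks suspected anomalies
print("entries flagged for audit review:", int((flags == -1).sum()))
```

In an auditing pipeline, the flagged entries would be routed to a human reviewer together with the feature values that made them outliers, providing the "proper proof" the abstract mentions.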


2019 ◽  
Author(s):  
Derek M Mason ◽  
Simon Friedensohn ◽  
Cédric R Weber ◽  
Christian Jordi ◽  
Bastian Wagner ◽  
...  

Abstract Therapeutic antibody optimization is time and resource intensive, largely because it requires low-throughput screening (10^3 variants) of full-length IgG in mammalian cells, typically resulting in only a few optimized leads. Here, we use deep learning to interrogate and predict antigen specificity from a massively diverse sequence space to identify globally optimized antibody variants. Using a mammalian display platform and the therapeutic antibody trastuzumab, rationally designed site-directed mutagenesis libraries are introduced by CRISPR/Cas9-mediated homology-directed repair (HDR). Screening and deep sequencing of relatively small libraries (10^4) produced high-quality data capable of training deep neural networks that accurately predict antigen binding based on antibody sequence. Deep learning is then used to predict millions of antigen binders from an in silico library of ~10^8 variants, where experimental testing of 30 randomly selected variants showed all 30 retained antigen specificity. The full set of in silico predicted binders is then subjected to multiple developability filters, resulting in thousands of highly optimized lead candidates. With its scalability and capacity to interrogate high-dimensional protein sequence space, deep learning offers great potential for antibody engineering and optimization.
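To make the sequence-to-binding idea concrete, here is a minimal sketch: one-hot encode fixed-length sequence variants, train a small neural classifier on labeled screening data, then score a large in-silico library. The sequence length, the synthetic binding rule, and the architecture are toy stand-ins, not the paper's mammalian-display data or its networks.

```python
# Sketch: predict antigen binding from antibody sequence variants.
import numpy as np
from sklearn.neural_network import MLPClassifier

AA = "ACDEFGHIKLMNPQRSTVWY"                       # 20 amino acids
rng = np.random.default_rng(0)
L = 10                                            # hypothetical variable-region length

def one_hot(seqs):
    idx = np.array([[AA.index(a) for a in s] for s in seqs])
    return np.eye(len(AA))[idx].reshape(len(seqs), -1)

# Toy "screening" data: binding depends on two key positions.
seqs = ["".join(rng.choice(list(AA), L)) for _ in range(5000)]
y = np.array([int(s[2] in "DE" and s[6] in "FWY") for s in seqs])

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300,
                    random_state=0).fit(one_hot(seqs), y)

# Score a fresh in-silico library and keep confident predicted binders.
library = ["".join(rng.choice(list(AA), L)) for _ in range(20000)]
scores = clf.predict_proba(one_hot(library))[:, 1]
print("predicted binders:", int((scores > 0.9).sum()))
```

The paper's workflow adds what this sketch omits: experimentally validated labels from deep sequencing, a much larger ~10^8 candidate space, and downstream developability filtering of the predicted binders.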

