Deep Learning-Based Accuracy Upgrade of Reduced Order Models in Topology Optimization

2021
Vol 11 (24)
pp. 12005
Author(s):
Nikos Ath. Kallioras
Alexandros N. Nordas
Nikos D. Lagaros

Topology optimization problems place substantial demands on computing resources, which become prohibitive for large-scale design domains discretized with fine finite element meshes. A Deep Learning-assisted Topology OPtimization (DLTOP) methodology was previously developed by the authors; it employs deep learning techniques to predict the optimized system configuration, substantially reducing the computational effort of the optimization algorithm and overcoming potential bottlenecks. Building upon DLTOP, this study presents a novel Deep Learning-based Model Upgrading (DLMU) scheme. The scheme utilizes reduced order (surrogate) modeling techniques, which downscale complex models while preserving their original behavioral characteristics, thereby reducing the computational demand with limited impact on accuracy. The novelty of DLMU lies in the use of deep learning to extrapolate the results of optimized reduced order models to an optimized, fully refined model of the design domain, achieving a remarkable reduction of the computational demand in comparison with DLTOP and other existing techniques. The effectiveness, accuracy and versatility of the DLMU scheme are demonstrated through its application to a series of benchmark topology optimization problems from the literature.
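The core DLMU step, mapping an optimized reduced order (coarse mesh) solution to a fully refined design, can be pictured with a small learned upscaler. The sketch below is a minimal PyTorch illustration of that idea only; the network architecture, mesh sizes, and training targets are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of the DLMU idea: a learned model maps an optimized coarse-mesh
# density field to a fine-mesh density field. Architecture and sizes are assumed.
import torch
import torch.nn as nn

class DensityUpscaler(nn.Module):
    """Maps a 20x40 coarse optimized density map to an 80x160 fine map (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.Sigmoid(),  # densities in [0, 1]
        )

    def forward(self, coarse_density):
        return self.net(coarse_density)

model = DensityUpscaler()
coarse = torch.rand(1, 1, 20, 40)       # optimized coarse-mesh densities (placeholder values)
fine_prediction = model(coarse)         # predicted fine-mesh densities, shape (1, 1, 80, 160)

# Training would minimize a pixel-wise loss against fine-mesh optima; random targets here
# are placeholders only.
loss = nn.functional.binary_cross_entropy(fine_prediction, torch.rand_like(fine_prediction))
loss.backward()
```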

2013
Vol 11 (4)
pp. 2-16
Author(s):
K. Perev

Abstract: This paper considers the problem of orthogonal polynomial approximation based balanced truncation for a lowpass filter. The proposed method combines the system-theoretic properties of balanced truncation, the computational effectiveness of proper orthogonal decomposition and the approximation capability of orthogonal polynomial series. An orthogonal polynomial series expansion of the reachability and observability gramians is used in order to avoid solving large-scale Lyapunov equations, thus significantly reducing the computational effort for obtaining the balancing transformation. The proposed method is applied to model reduction of a lowpass analog filter. Different sets of orthonormal functions are obtained from Legendre, Laguerre and Chebyshev orthogonal polynomials, and the corresponding reduced order models are compared. The approximation precision is measured by the relative mean square error between the outputs of the full order model and the obtained reduced order models.
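For a concrete picture, the SciPy/NumPy sketch below follows the general idea of the method: the reachability and observability gramians are approximated from an orthonormal Laguerre series of the state responses (so no Lyapunov equations are solved), and square-root balanced truncation is applied to a lowpass Butterworth filter. The filter order, Laguerre parameter, time grid, number of series terms and truncation order are all illustrative assumptions, and only the Laguerre basis (one of the three families compared in the paper) is shown.

```python
import numpy as np
from scipy import signal
from scipy.integrate import trapezoid
from scipy.special import eval_laguerre

# Full order model: an analog lowpass Butterworth filter (illustrative stand-in).
A, B, C, D = signal.butter(8, 1.0, analog=True, output="ss")

def laguerre(k, t, p=1.0):
    """Orthonormal Laguerre functions on [0, inf): sqrt(2p) e^{-pt} L_k(2pt)."""
    return np.sqrt(2.0 * p) * np.exp(-p * t) * eval_laguerre(k, 2.0 * p * t)

t = np.linspace(0.0, 40.0, 4000)
zero_u = np.zeros_like(t)

# State responses x_c(t) = e^{At} B and x_o(t) = e^{A^T t} C^T (zero input, nonzero initial state).
_, _, x_c = signal.lsim((A, B, C, D), U=zero_u, T=t, X0=B.ravel())
_, _, x_o = signal.lsim((A.T, C.T, B.T, D.T), U=zero_u, T=t, X0=C.ravel())

# Laguerre series coefficients give low-rank gramian factors (Parseval identity):
# P ~= Zc Zc^T and Q ~= Zo Zo^T, so no Lyapunov equations are solved.
K = 30
Zc = np.column_stack([trapezoid(x_c * laguerre(k, t)[:, None], t, axis=0) for k in range(K)])
Zo = np.column_stack([trapezoid(x_o * laguerre(k, t)[:, None], t, axis=0) for k in range(K)])

# Square-root balanced truncation directly from the factors.
U_, s, Vt = np.linalg.svd(Zo.T @ Zc, full_matrices=False)  # s approximates the Hankel singular values
r = 4                                                      # reduced order (illustrative)
T_r = Zc @ Vt[:r].T / np.sqrt(s[:r])                       # right projection basis
W_r = Zo @ U_[:, :r] / np.sqrt(s[:r])                      # left projection basis
Ar, Br, Cr, Dr = W_r.T @ A @ T_r, W_r.T @ B, C @ T_r, D

print("approximate Hankel singular values:", s[:r])
```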


2021
Author(s):
Manyu Xiao
Jun Ma
Dongcheng Lu
Balaji Raghavan
Weihong Zhang

Abstract: Most of the methods used today for handling local stress constraints in topology optimization fail to directly address the non-self-adjointness of the stress-constrained problem, which in turn can drastically raise the computational cost of an already large-scale problem. These problems involve both the equilibrium equations resulting from the finite element analysis (FEA) in each iteration and the adjoint equations arising from the sensitivity analysis of the stress constraints. In this work, we present a paradigm for large-scale stress-constrained topology optimization in which we build a multi-grid approach using an on-the-fly Reduced Order Model (ROM) and the p-norm aggregation function, with the discrete reduced-order basis functions (modes) adaptively constructed for both the primal and dual problems. In addition to the computational savings due to the ROM, we also address the computational cost of the ROM learning and updating phases. Both reduced-order bases are enriched according to the residual threshold of the corresponding linear systems, and the grid resolution is adaptively selected based on the relative error in approximating the objective function and constraint values during the iterations. Tests on 2D and 3D benchmark problems demonstrate improved performance with acceptable objective and constraint violation errors. Finally, we thoroughly investigate the influence of relevant stress constraint parameters such as the coagulation factor, the stress penalty factor, and the allowable stress value.
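The p-norm aggregation function named above can be illustrated in a few lines: many local von Mises stress constraints are lumped into a single smooth constraint whose gradient feeds the adjoint sensitivity analysis. The NumPy sketch below is a generic illustration; the stress values, allowable stress and exponent are assumed for demonstration.

```python
import numpy as np

def p_norm_aggregate(sigma_vm, sigma_allow, p=8.0):
    """Aggregated constraint g <= 0 and its gradient w.r.t. the element stresses.

    g = (sum_e (sigma_e / sigma_allow)^p)^(1/p) - 1
    approaches max_e sigma_e / sigma_allow - 1 as p -> infinity.
    """
    ratios = sigma_vm / sigma_allow
    s = np.sum(ratios ** p)
    g = s ** (1.0 / p) - 1.0
    grad = s ** (1.0 / p - 1.0) * ratios ** (p - 1.0) / sigma_allow  # dg/dsigma_e for the adjoint
    return g, grad

sigma_vm = np.random.uniform(50.0, 240.0, size=1000)   # element von Mises stresses (placeholder, MPa)
g, grad = p_norm_aggregate(sigma_vm, sigma_allow=250.0)
print(f"aggregated constraint g = {g:.3f} (feasible if <= 0), max ratio = {sigma_vm.max()/250.0:.3f}")
```

Note that the p-norm overestimates the true maximum for finite p, which is why the stress penalty and aggregation parameters studied in the paper matter in practice.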


2021
Vol 2021
pp. 1-7
Author(s):
Juncai Li
Xiaofei Jiang

Molecular property prediction is an essential task in drug discovery. Most computational approaches based on deep learning either focus on designing novel molecular representations or on combining several advanced models. However, researchers have paid less attention to the potential benefits of massive unlabeled molecular data (e.g., ZINC). The task becomes increasingly challenging owing to the limited scale of labeled data. Motivated by recent advances in pretrained models for natural language processing, a drug molecule can, to some extent, be naturally viewed as a language. In this paper, we investigate how to adapt the pretrained BERT model to extract useful molecular substructure information for molecular property prediction. We present a novel end-to-end deep learning framework, named Mol-BERT, that combines an effective molecular representation with a pretrained BERT model tailored for molecular property prediction. Specifically, a large-scale BERT model is pretrained on four million unlabeled drug SMILES (i.e., ZINC 15 and ChEMBL 27) to generate embeddings of molecular substructures. The pretrained BERT model can then be fine-tuned on various molecular property prediction tasks. To examine the performance of the proposed Mol-BERT, we conduct experiments on four widely used molecular datasets. In comparison with traditional and state-of-the-art baselines, the results show that Mol-BERT outperforms current sequence-based methods, achieving at least a 2% improvement in ROC-AUC score on the Tox21, SIDER, and ClinTox datasets.
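As a rough sketch of this kind of setup (not the authors' released code), the snippet below wires a BERT encoder from the transformers library to a multi-label head for property prediction and runs one fine-tuning step. The vocabulary size, model dimensions, tokenization of substructures and task head are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel

class MolBERTClassifier(nn.Module):
    """BERT encoder over substructure tokens plus a multi-label property head (illustrative)."""
    def __init__(self, vocab_size=3000, num_tasks=12):
        super().__init__()
        config = BertConfig(vocab_size=vocab_size, hidden_size=256,
                            num_hidden_layers=4, num_attention_heads=4,
                            intermediate_size=512)
        self.encoder = BertModel(config)       # pretrained weights would be loaded here in practice
        self.head = nn.Linear(config.hidden_size, num_tasks)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.pooler_output)    # one logit per property task

model = MolBERTClassifier()
# Placeholder batch: 8 molecules, 64 substructure tokens each (random IDs here; in practice
# they would come from a fingerprint/substructure tokenizer over SMILES).
input_ids = torch.randint(0, 3000, (8, 64))
attention_mask = torch.ones_like(input_ids)
labels = torch.randint(0, 2, (8, 12)).float()

logits = model(input_ids, attention_mask)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)  # multi-label fine-tuning loss
loss.backward()
```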


2021
Author(s):
Hepzibah Elizabeth David
K. Ramalakshmi
R. Venkatesan
G. Hemalatha

Tomato crops are affected by various diseases that impair tomato production. Recognizing tomato leaf disease at an early stage protects the crop from further damage. Emerging deep learning techniques such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory networks (LSTMs) have achieved significant progress in image classification, image identification, and sequence prediction. Using these computer vision-based deep learning techniques, we developed a new method for automatic leaf disease detection. The proposed model is a robust technique for tomato leaf disease identification that gives more accurate results than traditional methods. Early tomato leaf disease detection is made possible by a hybrid CNN-RNN architecture that requires less computational effort. In this paper, the methods required to implement the disease recognition model are briefly explained, together with the results. The paper also discusses the scope for developing more reliable and effective means of classifying and detecting diseases across all plant species.
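A minimal PyTorch sketch of a hybrid CNN-RNN classifier of the kind described is given below: a small CNN extracts feature maps from a leaf image, the feature-map rows are fed to an LSTM as a sequence, and the final hidden state is classified into disease categories. Layer sizes and the number of classes are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CNNRNNLeafClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(input_size=64 * 56, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, images):                               # images: (batch, 3, 224, 224)
        feats = self.cnn(images)                             # (batch, 64, 56, 56)
        b, c, h, w = feats.shape
        seq = feats.permute(0, 2, 1, 3).reshape(b, h, c * w) # feature-map rows as a sequence
        _, (hidden, _) = self.rnn(seq)
        return self.fc(hidden[-1])                           # class logits

model = CNNRNNLeafClassifier()
logits = model(torch.randn(4, 3, 224, 224))                  # 4 placeholder leaf images
print(logits.shape)                                          # torch.Size([4, 10])
```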


Big data refers to large-scale data collected for knowledge discovery and is widely used in various applications. Big data from such applications often includes image data, which requires effective techniques for processing. In this paper, a survey of research on big image data is carried out to analyse the performance of the available methods. Deep learning techniques provide better performance than other methods, including wavelet-based methods. However, deep learning techniques require more computation time, a drawback that can be mitigated by lightweight methods.


2010
Vol 2010
pp. 1-16
Author(s):
Paulraj S.
Sumathi P.

In most real-world optimization problems, the objective function and the constraints can be formulated as linear functions of the independent variables. Linear Programming (LP) is the process of optimizing a linear function subject to a finite number of linear equality and inequality constraints. Solving linear programming problems efficiently has always been a fascinating pursuit for computer scientists and mathematicians. The computational complexity of a linear programming problem depends on the number of constraints and variables of the LP problem. Quite often, large-scale LP problems contain many constraints that are redundant or that cause infeasibility, on account of inefficient formulation or errors in data input. The presence of redundant constraints does not alter the optimal solution(s); nevertheless, they may consume extra computational effort. Many researchers have proposed different approaches for identifying redundant constraints in linear programming problems. This paper compares five such methods and discusses the efficiency of each by solving LP problems of various sizes as well as Netlib problems. The algorithms of each method are coded in the C programming language. The computational results are presented and analyzed in this paper.
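One classical redundancy test used in this line of work (not necessarily one of the five methods compared in the paper) checks whether maximizing a constraint's left-hand side over the remaining constraints can ever exceed its right-hand side. The short SciPy sketch below implements that test on an assumed toy system; the paper's own implementations are in C.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative system: the last constraint 2x1 + 2x2 <= 10 is implied by x1 + x2 <= 4.
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [2.0, 2.0]])
b = np.array([4.0, 3.0, 3.0, 10.0])

def is_redundant(A, b, i):
    """Constraint i is redundant if max a_i^T x over the other constraints stays <= b_i."""
    keep = np.arange(len(b)) != i
    # maximize a_i^T x  ==  minimize -a_i^T x subject to the remaining constraints
    res = linprog(-A[i], A_ub=A[keep], b_ub=b[keep],
                  bounds=[(None, None)] * A.shape[1], method="highs")
    return res.status == 0 and -res.fun <= b[i] + 1e-9

for i in range(len(b)):
    print(f"constraint {i}: redundant = {is_redundant(A, b, i)}")
```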


Author(s):  
Yilin Yan
Jonathan Chen
Mei-Ling Shyu

Stance detection is an important research direction that attempts to automatically determine the attitude (positive, negative, or neutral) of the author of a text (such as a tweet) towards a target. A number of frameworks using deep learning techniques have been proposed that show promising results in application domains such as automatic speech recognition and computer vision, as well as natural language processing (NLP). This article presents a novel deep learning-based fast stance detection framework for bipolar affinities on Twitter. Millions of tweets regarding Clinton and Trump were produced per day during the 2016 United States presidential election campaign, and this campaign is therefore used as a test case because of its significant and unique counter-factual properties. In addition, stance detection can be used to infer the political tendency of the general public. Experimental results show that the proposed framework achieves high accuracy compared to several existing stance detection methods.


Author(s):  
Nicolò Mazzi
Andreas Grothey
Ken McKinnon
Nagisa Sugishita

Abstract: This paper proposes an algorithm to efficiently solve large optimization problems which exhibit a column bounded block-diagonal structure, where subproblems differ in right-hand side and cost coefficients. Similar problems are often tackled using cutting-plane algorithms, which allow for an iterative and decomposed solution of the problem. When solving subproblems is computationally expensive and the set of subproblems is large, cutting-plane algorithms may slow down severely. In this context we propose two novel adaptive oracles that yield inexact information, potentially much faster than solving the subproblem. The first adaptive oracle is used to generate inexact but valid cutting planes, and the second adaptive oracle gives a valid upper bound of the true optimal objective. These two oracles progressively “adapt” towards the true exact oracle if provided with an increasing number of exact solutions, stored throughout the iterations. These adaptive oracles are embedded within a Benders-type algorithm able to handle inexact information. We compare the Benders algorithm with adaptive oracles against a standard Benders algorithm on a stochastic investment planning problem. The proposed algorithm shows the capability to substantially reduce the computational effort to obtain an ε-optimal solution: an illustrative case is 31.9 times faster for a 1.00% convergence tolerance and 15.4 times faster for a 0.01% tolerance.
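The "inexact but valid" oracle idea can be illustrated outside the paper's Benders setting: by convexity, cuts stored from earlier exact subproblem evaluations give a cheap valid lower bound at any new query point, so the expensive exact oracle is called only when the stored information is too loose. The NumPy sketch below is a generic illustration under that assumption, not the authors' adaptive oracles; the subproblem, trigger rule and query points are made up for demonstration.

```python
import numpy as np

def exact_oracle(x):
    """Expensive exact subproblem oracle: value and a subgradient of a convex function."""
    value = float(np.sum(x ** 2) + abs(x[0] - 1.0))
    grad = 2.0 * x + np.array([np.sign(x[0] - 1.0), 0.0])
    return value, grad

stored = []                               # (point, value, subgradient) from earlier exact calls

def adaptive_lower_bound(x):
    """Cheap, always-valid lower bound built from stored cuts (valid by convexity)."""
    if not stored:
        return -np.inf
    return max(v + g @ (x - xj) for xj, v, g in stored)

rng = np.random.default_rng(0)
exact_calls = 0
for _ in range(20):
    x = rng.uniform(-2.0, 2.0, size=2)    # candidate point proposed by a master problem
    lb = adaptive_lower_bound(x)          # inexact but valid information, essentially free
    if lb < 0.5:                          # placeholder trigger; a real algorithm compares the
        value, grad = exact_oracle(x)     # bound against the master problem's current estimate
        stored.append((x, value, grad))
        exact_calls += 1
    # a Benders/cutting-plane master would now add the cut based on lb or (value, grad)

print(f"exact oracle calls: {exact_calls} out of 20 queries")
```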


Author(s):  
Sangram Redkar
S. C. Sinha

In this work, the basic problem of order reduction of nonlinear systems subjected to an external periodic excitation is considered. This problem deserves attention because the modes that interact (linearly or nonlinearly) with the external excitation dominate the response. A linear approach like the Guyan reduction does not always guarantee accurate results, particularly when nonlinear interactions are strong. In order to overcome the limitations of the linear approach, a nonlinear order reduction methodology through a generalization of the invariant manifold technique is proposed. Traditionally, the invariant manifold techniques for unforced problems are extended to the forced problems by ‘augmenting’ the state space, i.e., forcing is treated as an additional degree of freedom and an invariant manifold is constructed. However, in the approach suggested here a nonlinear time-dependent relationship between the dominant and the non-dominant states is assumed and the dimension of the state space remains the same. This methodology not only yields accurate reduced order models but also explains the consequences of various ‘primary’ and ‘secondary resonances’ present in the system. Following this approach, various ‘reducibility conditions’ are obtained that show interactions among the eigenvalues, the nonlinearities and the external excitation. One can also recover all ‘resonance conditions’ commonly obtained via perturbation or averaging techniques. These methodologies are applied to some typical problems, and results for large-scale and reduced order models are compared. It is anticipated that these techniques will provide a useful tool in the analysis and control of large-scale externally excited nonlinear systems.
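To make the closing full-versus-reduced comparison concrete, the SciPy sketch below contrasts a 2-DOF forced oscillator with a cubic nonlinearity against a one-mode reduced model built from a linear master-slave relation (the kind of linear reduction the paper improves upon); the nonlinear time-dependent invariant manifold of the paper would replace that relation. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, c, alpha, F, Omega = 1.0, 1.0, 0.05, 0.5, 0.3, 1.2   # illustrative parameters

def full_rhs(t, y):
    """Full 2-DOF model with a cubic spring on the first mass and periodic forcing."""
    x1, v1, x2, v2 = y
    f1 = -c * v1 - 2 * k * x1 + k * x2 - alpha * x1 ** 3 + F * np.cos(Omega * t)
    f2 = -c * v2 + k * x1 - 2 * k * x2
    return [v1, f1 / m, v2, f2 / m]

# Linear modes of the undamped system: eigenvectors of K = [[2k, -k], [-k, 2k]].
K = np.array([[2 * k, -k], [-k, 2 * k]])
w2, Phi = np.linalg.eigh(K / m)
phi = Phi[:, 0]                              # dominant (lowest-frequency) mode

def reduced_rhs(t, y):
    """One-mode reduced model: all DOFs follow the master coordinate linearly."""
    q, qd = y
    x1 = phi[0] * q
    f = -c * qd - w2[0] * q - alpha * phi[0] * x1 ** 3 + phi[0] * F * np.cos(Omega * t)
    return [qd, f]

t_eval = np.linspace(0.0, 200.0, 4000)
full = solve_ivp(full_rhs, (0, 200), [0, 0, 0, 0], t_eval=t_eval, rtol=1e-8)
red = solve_ivp(reduced_rhs, (0, 200), [0, 0], t_eval=t_eval, rtol=1e-8)

x1_full = full.y[0]
x1_red = phi[0] * red.y[0]
print("relative L2 error of the reduced model:",
      np.linalg.norm(x1_full - x1_red) / np.linalg.norm(x1_full))
```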

