Towards Reusable Surrogate Models: Graph-Based Transfer Learning on Trusses

2021 ◽  
pp. 1-12
Author(s):  
Eamon Whalen ◽  
Caitlin Mueller

Abstract: Surrogate models are often employed to speed up engineering design optimization; however, they typically require that all training data conform to the same parametrization (e.g. design variables), limiting design freedom and prohibiting the reuse of historical data. In response, this paper proposes Graph-based Surrogate Models (GSMs) for space frame structures. The GSM can accurately predict displacement fields from static loads given the structure's geometry as input, enabling training across multiple parametrizations. GSMs build upon recent advancements in geometric deep learning which have led to the ability to learn on undirected graphs: a natural representation for space frames. To further promote flexible surrogate models, the paper explores transfer learning within the context of engineering design, and demonstrates positive knowledge transfer across data sets of different topologies, complexities, loads and applications, resulting in more flexible and data-efficient surrogate models for space frame structures.
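As a rough illustration of how a graph-based surrogate of this kind could be wired up, the sketch below maps a truss graph (node coordinates and nodal loads on the nodes, member connectivity on the edges) to a per-node displacement field with plain PyTorch message passing. The network, feature layout, and layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a minimal message-passing
# surrogate that maps a truss graph (node coordinates + nodal loads) to a
# per-node displacement field, so structures with different parametrizations
# can share one model.
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h, edge_index):
        src, dst = edge_index                            # (2, E); undirected edges listed both ways
        m = self.msg(torch.cat([h[src], h[dst]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum incoming messages per node
        return self.upd(torch.cat([h, agg], dim=-1))

class GraphSurrogate(nn.Module):
    """Predicts nodal displacements (ux, uy, uz) from geometry and loads."""
    def __init__(self, in_dim=6, dim=64, layers=4):
        super().__init__()
        self.encode = nn.Linear(in_dim, dim)             # input: xyz coords + xyz load per node
        self.mp = nn.ModuleList(MessagePassingLayer(dim) for _ in range(layers))
        self.decode = nn.Linear(dim, 3)

    def forward(self, x, edge_index):
        h = self.encode(x)
        for layer in self.mp:
            h = layer(h, edge_index)
        return self.decode(h)

# Example: a 4-node space frame with random inputs (shapes only, not real physics).
x = torch.randn(4, 6)                                    # [coords | loads] per node
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]])
pred_disp = GraphSurrogate()(x, edge_index)              # (4, 3) displacement field
```

Because the model only sees a graph, the same trained weights can be applied to trusses with different node counts and parametrizations, which is what enables transfer across data sets.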

Author(s):  
Xiangxue Zhao ◽  
Zhimin Xi ◽  
Hongyi Xu ◽  
Ren-Jye Yang

Model bias can normally be modeled as a regression model that predicts potential model errors in the design space, given sufficient training data sets. Typically, only continuous design variables are considered, since the regression model is mainly designed for response approximation in a continuous space. In reality, many engineering problems have discrete design variables mixed with continuous design variables. Although a regression model of the model bias can still approximate the model errors under various design/operation conditions, the accuracy of the bias model degrades quickly as the number of discrete design variables increases. This paper proposes an effective model bias modeling strategy to better approximate the potential model errors in the design/operation space. The essential idea is to first determine an optimal base model from all combination models derived from the discrete design variables, then allocate the majority of the bias training samples to this base model, and finally build relationships between the base model and the other combination models. Two engineering examples demonstrate that the proposed approach achieves better bias-modeling accuracy than the traditional regression modeling approach. Furthermore, it is shown that bias modeling combined with the baseline simulation model can achieve higher model accuracy than the direct meta-modeling approach using the same amount of training data.
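A simplified way to realize the base-model idea is sketched below: fit a bias regressor on the chosen base combination of discrete variables, then model every other combination as an offset from the base prediction. The Gaussian-process regressor, function names, and data layout are illustrative assumptions, not the paper's exact algorithm.

```python
# Simplified illustration (assumptions, not the paper's exact method):
# fit a bias model e(x) = y_test - y_sim on one "base" discrete combination,
# then model each other combination as the base bias plus a learned offset.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_bias_models(X_cont, combo_id, bias, base_combo):
    """X_cont: continuous design variables; combo_id: discrete-combination label
    per sample; bias: observed error (test - simulation); base_combo: chosen base."""
    base_mask = combo_id == base_combo
    base_model = GaussianProcessRegressor().fit(X_cont[base_mask], bias[base_mask])

    offset_models = {}
    for c in np.unique(combo_id):
        if c == base_combo:
            continue
        mask = combo_id == c
        # Offset = how this combination's bias deviates from the base prediction.
        residual = bias[mask] - base_model.predict(X_cont[mask])
        offset_models[c] = GaussianProcessRegressor().fit(X_cont[mask], residual)
    return base_model, offset_models

def predict_bias(base_model, offset_models, X_cont, combo):
    pred = base_model.predict(X_cont)
    if combo in offset_models:
        pred = pred + offset_models[combo].predict(X_cont)
    return pred
```

The design choice mirrors the abstract: the base model absorbs most of the training samples, while the remaining combinations only need enough data to learn their (smaller) offsets.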


2021 ◽  
Author(s):  
Kun Wang ◽  
Christopher Johnson ◽  
Kane Bennett ◽  
Paul Johnson

Abstract: Data-driven machine learning for predicting instantaneous and future fault slip in laboratory experiments has recently progressed markedly thanks to large training data sets. In Earth, however, earthquake interevent times range from tens to hundreds of years, and geophysical data typically exist for only a portion of an earthquake cycle. Sparse data present a serious challenge to training machine learning models. Here we describe a transfer learning approach that uses numerical simulations to train a convolutional encoder-decoder to predict fault-slip behavior in laboratory experiments. The model learns a mapping between acoustic emission histories and fault slip from numerical simulations and generalizes to produce accurate results on laboratory data. Notably, slip predictions improve markedly when the simulation-trained model is fine-tuned, re-training the latent space on a portion of a single laboratory earthquake cycle. The transfer learning results highlight the potential of models trained on numerical simulations and fine-tuned with small geophysical data sets for application to faults in Earth.
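The simulate-then-fine-tune recipe could look roughly like the sketch below: pretrain a 1-D convolutional encoder-decoder on simulated acoustic-emission/slip pairs, then freeze the encoder and fine-tune the remaining layers on a small laboratory slice. The architecture, shapes, and hyperparameters are illustrative assumptions, not the authors' model.

```python
# Illustrative sketch of the simulate-then-fine-tune recipe (not the authors' model):
# pretrain on simulated acoustic-emission -> slip pairs, then fine-tune the
# decoder/latent layers on a small portion of laboratory data.
import torch
import torch.nn as nn

class SlipEncoderDecoder(nn.Module):
    def __init__(self, channels=1, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=5, padding=2),
        )

    def forward(self, ae):                       # ae: (batch, 1, time)
        return self.decoder(self.encoder(ae))    # predicted slip: (batch, 1, time)

model = SlipEncoderDecoder()
# Step 1 (omitted): pretrain on simulation data with an MSE loss.
# Step 2: transfer -- freeze the encoder, fine-tune on part of one lab earthquake cycle.
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.MSELoss()

ae_lab, slip_lab = torch.randn(8, 1, 256), torch.randn(8, 1, 256)  # placeholder lab tensors
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(ae_lab), slip_lab)
    loss.backward()
    optimizer.step()
```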


Images generated today from a variety of sources and foundations can be difficult to compare for similarity or to analyze for further use because of their differing segmentation policies. This unconventionality can generate many errors, which makes previously used traditional methodologies such as supervised learning less resourceful, since they require huge quantities of labelled training data that mirror the desired target data. This paper therefore puts forward an alternative technique, transfer learning, for use in image diagnosis so that efficiency and accuracy can be achieved. This mechanism deals with variation between the desired and actual data used for training, as well as with outlier sensitivity, and ultimately enhances predictions by giving better results in various areas, leaving the traditional methodologies behind. The analysis discusses three types of transfer classifiers that can be applied using only a small volume of training data, and contrasts them with the traditional method, which requires huge quantities of training data whose attributes differ only slightly. The three separators were compared with each other and with the traditional methodology on a very common application used in daily life. Commonly occurring problems such as outlier sensitivity were also taken into consideration, and measures were taken to recognise and mitigate them. It was further observed that the performance of transfer learning exceeds that of conventional supervised learning approaches when only a small amount of characteristic training data is available, reducing stratification errors to a great extent.
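One common way to build a transfer classifier from only a small labelled set, in the spirit described above, is to reuse a network pretrained on a large source domain as a fixed feature extractor and train only a light classifier on top. The backbone, classifier, and data shapes below are illustrative choices (assuming a recent torchvision), not the specific separators compared in the paper.

```python
# Illustrative transfer classifier (not the paper's specific separators):
# reuse an ImageNet-pretrained backbone as a frozen feature extractor and
# train a light classifier on a small labelled target set.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # drop the ImageNet head, keep 512-d features
backbone.eval()

@torch.no_grad()
def extract_features(images):        # images: (N, 3, 224, 224), already normalized
    return backbone(images).numpy()

# Small labelled target set (placeholders): a few dozen images per class can suffice.
train_images, train_labels = torch.randn(40, 3, 224, 224), torch.randint(0, 2, (40,))
clf = LogisticRegression(max_iter=1000).fit(extract_features(train_images),
                                            train_labels.numpy())

test_images = torch.randn(5, 3, 224, 224)
predictions = clf.predict(extract_features(test_images))
```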


1991 ◽  
Vol 6 (4) ◽  
pp. 257-265
Author(s):  
Yona Friedman

Space frame structures can be used both in industrial countries and in countries where labor is inexpensive. Such frameworks can be used as "containing structures" wherein the void between bars is converted into usable space. Frameworks containing usable spaces can span large areas of ground that itself remains usable. "Spatial urbanism" thus consists of a rigid and airy framework forming patterns of easily transformable volumes. The framework is raised high above the ground, and the ground itself is used for commercial, cultural and business purposes, circulation, and green areas, for which sunlight is provided by the gaps in the framework. The final townscape is arrived at through multiple decisions involving the population. This townscape is called "mobile architecture" and is the most democratic and convivial form of architecture possible.


This research aims to achieve high-precision accuracy for a face recognition system. The Convolutional Neural Network (CNN) is one of the Deep Learning approaches and has demonstrated excellent performance in many fields, including image recognition on large amounts of training data (such as ImageNet). In practice, hardware limitations and insufficient training data sets are the main obstacles to high performance. Therefore, in this work a Deep Transfer Learning method using a pre-trained AlexNet CNN is proposed to improve the performance of the face recognition system even with a small number of images. The transfer learning method is used to fine-tune the last layer of the AlexNet CNN model for the new classification task. A data augmentation (DA) technique is also proposed to reduce over-fitting during deep transfer learning training and to improve accuracy. The results show reduced over-fitting and improved performance after applying the data augmentation technique. All experiments were run on the UTeMFD, GTFD, and CASIA-Face V5 small data sets. The proposed system achieved high accuracy: 100% on UTeMFD, 96.67% on GTFD, and 95.60% on CASIA-Face V5, with a recognition time of less than 0.05 seconds.
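The described recipe could be sketched as follows: load a pretrained AlexNet, replace only the final classification layer for the new face classes, and train that layer on augmented images. The dataset path, class count, augmentation choices, and hyperparameters are placeholders, not the paper's exact settings.

```python
# Sketch of the described recipe (paths and hyperparameters are placeholders):
# fine-tune only the last AlexNet layer for the new face classes, with
# data augmentation to limit over-fitting on small data sets.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

num_classes = 10                       # e.g., number of identities in the small data set
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomRotation(10),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("faces/train", transform=augment)   # placeholder path
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
for p in model.parameters():           # freeze all pretrained layers
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, num_classes)   # new trainable last layer

optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:          # one epoch shown; repeat as needed
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```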


Author(s):  
Valentina Franzoni ◽  
Giulio Biondi ◽  
Damiano Perri ◽  
Osvaldo Gervasi

The paper concludes the first research on mouth-based Emotion Recognition (ER) adopting a Transfer Learning (TL) approach. Transfer Learning proves paramount for mouth-based ER because only a few data sets are available, and most of them contain emotional expressions simulated by actors rather than a real-world categorization. Using TL, we can train with fewer data than a whole network trained from scratch requires, more efficiently fine-tuning the network with emotional data and improving the convolutional neural network's accuracy in the desired domain. The proposed approach aims at improving Emotion Recognition dynamically, taking into account not only new scenarios but also situations that have changed since the initial training phase, because an image of the mouth can be available even when the whole face is visible only from an unfavourable perspective. Typical applications include automated supervision of bedridden critical patients in a healthcare management environment, or portable applications supporting disabled users who have difficulty seeing or recognizing facial emotions. This work builds on previous preliminary works on mouth-based emotion recognition using CNN deep learning, and has the further benefit of testing and comparing a set of networks on large face-based emotion recognition data sets well known in the literature. The final result is not directly comparable with work on full-face ER, but it highlights the significance of the mouth in emotion recognition, obtaining consistent performance in the visual emotion recognition domain.
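A comparison of several pretrained networks, as mentioned above, could be organized along the lines of the harness below: replace each backbone's ImageNet head with an emotion head, fine-tune on mouth crops, and compare validation accuracy. The specific backbones, class count, and metric are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative comparison harness (backbones and metric are assumptions):
# fine-tune several pretrained CNNs on mouth crops and compare validation
# accuracy to pick the best transfer backbone.
import torch
import torch.nn as nn
from torchvision import models

def with_new_head(model, num_emotions):
    """Replace the ImageNet classification head with an emotion head."""
    if hasattr(model, "fc"):                       # ResNet-style models
        model.fc = nn.Linear(model.fc.in_features, num_emotions)
    else:                                          # VGG/AlexNet-style models
        model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_emotions)
    return model

@torch.no_grad()
def accuracy(model, images, labels):
    model.eval()
    return (model(images).argmax(dim=1) == labels).float().mean().item()

candidates = {
    "resnet18": models.resnet18(weights="DEFAULT"),
    "vgg16": models.vgg16(weights="DEFAULT"),
}

val_images, val_labels = torch.randn(16, 3, 224, 224), torch.randint(0, 6, (16,))  # placeholders
for name, net in candidates.items():
    net = with_new_head(net, num_emotions=6)
    # ... fine-tune `net` on mouth-crop training data here (loop omitted) ...
    print(name, accuracy(net, val_images, val_labels))
```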


2020 ◽  
Vol 10 (2) ◽  
pp. 484-488
Author(s):  
Lifang Peng ◽  
Bin Huang ◽  
Kefu Chen ◽  
Leyuan Zhou

To recognize epileptic EEG signals, traditional clustering algorithms often need to satisfy three conditions to obtain good clustering results. The first condition is that the algorithm must not be sensitive to noise. The second is that the data set must be sufficient. The third is that the training data set and the testing data set must follow the same distribution. In actual applications, however, few data sets are free of noise and have sufficient data volume. To address the effects of insufficient data and noise on clustering, this paper introduces fuzzy membership and transfer learning mechanisms into K-plane clustering (KPC) and proposes a fuzzy KPC algorithm based on transfer learning (TFKPC). To improve the clustering effect, the TFKPC algorithm uses knowledge summarized from the historical (source) domain to guide the clustering process in the current (target) domain when information is insufficient. In addition, the influence of noise on the clustering result is reduced by introducing fuzzy membership. Experiments show that the proposed TFKPC algorithm achieves a better clustering effect on the Epileptic Seizure Recognition Data Set than the other methods compared.
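A simplified sketch of the idea is given below: fuzzy K-plane clustering in which the target-domain cluster planes are blended toward planes learned on a source (historical) domain. The membership formula, plane update, and blending rule are my own reconstruction under stated assumptions, not the paper's exact TFKPC update rules.

```python
# Simplified numpy sketch of the TFKPC idea (assumptions, not the paper's exact
# algorithm): fuzzy K-plane clustering whose target-domain planes are pulled
# toward planes learned on a source (historical) domain.
import numpy as np

def fuzzy_kplane_transfer(X, k, source_planes=None, lam=0.3, m=2.0, iters=50):
    """X: (n, d) EEG feature vectors; k: clusters; source_planes: list of (w, b)
    from the source domain; lam: transfer strength; m: fuzzifier."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    W = rng.normal(size=(k, d)); W /= np.linalg.norm(W, axis=1, keepdims=True)
    b = np.zeros(k)

    for _ in range(iters):
        # Fuzzy memberships from point-to-plane distances (fuzzy c-means style).
        dist = np.abs(X @ W.T + b) + 1e-8                 # (n, k)
        inv = dist ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)          # rows sum to 1

        for j in range(k):
            u = U[:, j] ** m
            mean = (u[:, None] * X).sum(0) / u.sum()
            Xc = X - mean
            S = (u[:, None] * Xc).T @ Xc                  # weighted scatter matrix
            # Best-fit plane normal: eigenvector of S with the smallest eigenvalue.
            _, eigvec = np.linalg.eigh(S)
            w = eigvec[:, 0]
            if source_planes is not None:
                w_src = source_planes[j][0]
                if w @ w_src < 0:                         # resolve sign ambiguity
                    w_src = -w_src
                # Transfer: blend the target plane toward the source-domain plane.
                w = (1 - lam) * w + lam * w_src
                w /= np.linalg.norm(w)
            W[j], b[j] = w, -w @ mean
    return U.argmax(axis=1), list(zip(W, b))
```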


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Xinhai Chen ◽  
Chunye Gong ◽  
Qian Wan ◽  
Liang Deng ◽  
Yunbo Wan ◽  
...  

Abstract: Deep neural networks (DNNs) have recently shown great potential in solving partial differential equations (PDEs). The success of neural network-based surrogate models is attributed to their ability to learn a rich set of solution-related features. However, training DNNs usually involves tedious iterations to converge and requires a very large amount of training data, which hinders the application of these models to complex physical contexts. To address this problem, we propose to apply the transfer learning approach to DNN-based PDE solving tasks. In our work, we create pairs of transfer experiments on the Helmholtz and Navier-Stokes equations by constructing subtasks with different source terms and Reynolds numbers. We also conduct a series of experiments to investigate the degree of generality of the features learned across different equations. Our results demonstrate that, despite differences in the underlying PDE systems, the transfer methodology can significantly improve the accuracy of the predicted solutions, achieving a maximum performance boost of 97.3% on widely used surrogate models.
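The transfer recipe can be illustrated on a toy 1-D Poisson problem (the paper itself uses Helmholtz and Navier-Stokes subtasks): train a physics-informed MLP for one source term, then warm-start training for a different source term from the learned weights. The equation, network, and training settings below are illustrative assumptions, not the authors' experiments.

```python
# Toy sketch of the transfer recipe (illustrative only): train a physics-informed
# MLP for u''(x) = f_source(x) with u(0)=u(1)=0, then reuse its weights to
# warm-start the solver for a different source term.
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def pde_loss(model, f, n_points=128):
    x = torch.rand(n_points, 1, requires_grad=True)
    u = model(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = d2u - f(x)                                  # enforce u'' = f
    boundary = model(torch.zeros(1, 1)) ** 2 + model(torch.ones(1, 1)) ** 2
    return (residual ** 2).mean() + boundary.sum()

def train(model, f, steps, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = pde_loss(model, f)
        loss.backward()
        opt.step()

# Source subtask: f(x) = sin(pi x); train from scratch.
source_net = make_net()
train(source_net, lambda x: torch.sin(torch.pi * x), steps=2000)

# Target subtask: different source term; warm-start from the source weights and
# fine-tune for far fewer steps than training from scratch would need.
target_net = make_net()
target_net.load_state_dict(source_net.state_dict())
train(target_net, lambda x: torch.sin(2 * torch.pi * x), steps=500)
```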

