Component-based machine learning paradigm for discovering rate-dependent and pressure-sensitive level-set plasticity models

2021 ◽  
pp. 1-13
Author(s):  
Nikolaos Napoleon Vlassis ◽  
Waiching Sun

Abstract Conventionally, neural network constitutive laws for path-dependent elasto-plastic solids are trained via supervised learning performed on recurrent neural networks, with the time history of strain as input and the stress as output. However, training a neural network to replicate path-dependent constitutive responses requires significantly more data due to the path dependence. This demand for diverse and abundant accurate data, as well as the lack of interpretability to guide the data generation process, could become a major roadblock for engineering applications. In this work, we attempt to simplify these training processes and improve the interpretability of the trained models by breaking down the training of material models into multiple supervised machine learning programs for elasticity, initial yielding, and hardening laws that can be conducted sequentially. To predict the pressure sensitivity and rate dependence of the plastic responses, we reformulate the Hamilton-Jacobi equation such that the yield function is parametrized in the product space spanned by the principal stress, the accumulated plastic strain, and time. To test the versatility of the neural network meta-modeling framework, we conduct multiple numerical experiments where neural networks are trained and validated against (1) data generated from known benchmark models, (2) data obtained from physical experiments and (3) data inferred from homogenizing sub-scale direct numerical simulations of microstructures. The neural network model is also incorporated into an offline FFT-FEM model to improve the efficiency of the multiscale calculations.
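To make the pressure-sensitive, rate-dependent yield function concrete, the sketch below evaluates an illustrative Drucker-Prager-type yield function parametrized by mean stress, accumulated plastic strain, and strain rate. This is a hand-written analytical stand-in for intuition only, not the paper's learned level-set function; all parameter values (`c0`, `alpha`, `h`, `eta`) are made up for illustration.

```python
def yield_function(p, q, eps_p, eps_p_rate,
                   c0=10.0, alpha=0.3, h=50.0, eta=5.0):
    """Pressure-sensitive, rate-dependent yield function: f <= 0 means elastic.

    p: mean (hydrostatic) stress, q: von Mises equivalent stress,
    eps_p: accumulated plastic strain, eps_p_rate: its rate.
    All material parameters are illustrative, not taken from the paper.
    """
    # Drucker-Prager-type cohesion with linear hardening and a viscous (rate) term
    cohesion = c0 + h * eps_p + eta * eps_p_rate
    return q + alpha * p - cohesion

# Elastic state (f < 0) vs. yielding state (f > 0)
f_elastic = yield_function(p=-5.0, q=3.0, eps_p=0.0, eps_p_rate=0.0)
f_plastic = yield_function(p=0.0, q=20.0, eps_p=0.0, eps_p_rate=0.0)
```

In the paper's framework the role of such a closed-form surface is played by a neural network trained on the yield-surface data, with hardening entering through the same extra arguments.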

2021 ◽  
Author(s):  
WaiChing Sun ◽  
Nikolas Vlassis

This talk will present a machine learning framework that builds interpretable macroscopic surrogate elasto-plasticity models inferred from sub-scale direct numerical simulations (DNS) or experiments with limited data. To circumvent the lack of interpretability of classical black-box neural networks, we introduce a higher-order supervised machine learning technique that generates components of elasto-plastic models such as the elasticity functional, yield function, hardening mechanisms, and plastic flow. The geometrical interpretation in the principal stress space allows us to use convexity and smoothness to ensure thermodynamic consistency. The speed function from the Hamilton-Jacobi equation is deduced from the DNS data to formulate hardening and non-associative plastic flow rules governed by the evolution of low-dimensional descriptors. By incorporating a non-cooperative game that determines the data necessary to calibrate material models, the machine-learning-generated model is continuously tested, calibrated, and improved as new data guided by the adversarial agents are generated. A graph convolutional neural network is used to deduce low-dimensional descriptors that encode the evolution of particle topology under path-dependent deformation and are used to replace internal variables. The resultant constitutive laws can be used in a finite element solver or incorporated as a loss function for a physics-informed neural network to run physical simulations.


2020 ◽  
Vol 10 (17) ◽  
pp. 6048 ◽  
Author(s):  
Nedeljko Dučić ◽  
Aleksandar Jovičić ◽  
Srećko Manasijević ◽  
Radomir Radiša ◽  
Žarko Ćojbašić ◽  
...  

This paper presents the application of machine learning in the control of the metal melting process. Metal melting is a dynamic production process characterized by nonlinear relations between process parameters. In this particular case, the subject of research is the production of white cast iron. Two supervised machine learning algorithms have been applied: the neural network and the support vector regression. The goal of their application is the prediction of the amount of alloying additives needed to obtain the desired chemical composition of white cast iron. The neural network model provided better results than the support vector regression model in the training and testing phases, which qualifies it for use in the control of white cast iron production.
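As a minimal illustration of the supervised-regression task described above, the sketch below fits a one-feature least-squares line predicting additive amount from a composition deficit. The paper's models (a neural network and support vector regression) are nonlinear and multivariate; this is only a toy baseline, and the data values are hypothetical.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (single feature)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical data: chromium deficit (wt%) vs. ferrochrome additive needed (kg/t)
deficit  = [0.1, 0.2, 0.4, 0.8]
additive = [0.55, 1.05, 2.0, 4.1]
a, b = fit_linear(deficit, additive)
predicted = a * 0.5 + b  # predicted additive for a 0.5 wt% deficit
```

A neural network or SVR replaces the linear map `a*x + b` with a learned nonlinear function of several melt-chemistry features, but the supervised training setup (features in, additive amount out) is the same.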


Research ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Hang Guo ◽  
Ji Wan ◽  
Haobin Wang ◽  
Hanxiang Wu ◽  
Chen Xu ◽  
...  

Handwritten signatures are ubiquitous in daily life. The main challenge of recognizing handwritten signals lies in developing approaches that capture information effectively. External mechanical signals can be easily detected by triboelectric nanogenerators, which provide immediate opportunities for building new types of active sensors capable of recording handwritten signals. In this work, we report an intelligent human-machine interaction interface based on a triboelectric nanogenerator. Using the horizontal-vertical symmetrical electrode array, the handwritten triboelectric signal can be recorded without an external energy supply. Combined with supervised machine learning methods, it can successfully recognize handwritten English letters, Chinese characters, and Arabic numerals. The principal component analysis algorithm preprocesses the triboelectric signal data to reduce the complexity of the neural network in the machine learning process. Further, it can realize anticounterfeiting recognition of writing habits by controlling the samples input to the neural network. The results show that the intelligent human-computer interaction interface has broad application prospects in signature security and human-computer interaction.
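The PCA preprocessing step mentioned above can be sketched in a few lines: center the data, form the covariance matrix, and extract the dominant eigenvector (here via power iteration). This is a generic illustration of dimensionality reduction on a toy 2-D dataset, not the authors' pipeline or signal data.

```python
def top_principal_component(data, iters=100):
    """Power iteration on the covariance matrix: returns the first PC (unit vector)."""
    n = len(data)
    d = len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Covariance matrix of the centered data
    cov = [[sum(centered[k][i] * centered[k][j] for k in range(n)) / n
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy points spread mainly along the x = y diagonal
pts = [[1, 1.1], [2, 1.9], [3, 3.2], [4, 3.9]]
pc = top_principal_component(pts)  # roughly [0.71, 0.70]
```

Projecting each triboelectric signal onto the first few such components shrinks the input dimension, which is what reduces the downstream neural network's complexity.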


Author(s):  
Takuma Oda ◽  
Shih-Wei Chiu ◽  
Takuhiro Yamaguchi

Abstract Objective This study aimed to develop a semi-automated process to convert legacy data into Clinical Data Interchange Standards Consortium (CDISC) Study Data Tabulation Model (SDTM) format by combining human verification and three methods: data normalization; feature extraction by distributed representation of dataset names, variable names, and variable labels; and supervised machine learning. Materials and Methods Variable labels, dataset names, variable names, and values of legacy data were used as machine learning features. Because most of these data are string data, they were converted to a distributed representation to make them usable as machine learning features. For this purpose, we utilized the following methods for distributed representation: Gestalt pattern matching, cosine similarity after vectorization by Doc2vec, and vectorization by Doc2vec. In this study, we examined five algorithms—namely decision tree, random forest, gradient boosting, neural network, and an ensemble that combines the four algorithms—to identify the one that could generate the best prediction model. Results The accuracy rate was highest for the neural network, and the distribution of prediction probabilities also showed a split between the correct and incorrect distributions. By combining human verification and the three methods, we were able to semi-automatically convert legacy data into the CDISC SDTM format. Conclusion By combining human verification and the three methods, we have successfully developed a semi-automated process to convert legacy data into the CDISC SDTM format; this process is more efficient than the conventional fully manual process.
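Two of the similarity features named above map directly onto standard tools: Gestalt pattern matching is the Ratcliff/Obershelp algorithm implemented by Python's `difflib.SequenceMatcher`, and cosine similarity is a dot product of normalized vectors. The sketch below shows both on hypothetical inputs (the variable labels and embedding vectors are invented, and the Doc2vec vectors are stand-ins).

```python
import difflib
import math

def gestalt_ratio(a, b):
    """Gestalt pattern matching (Ratcliff/Obershelp), as in Python's difflib."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

# String feature: legacy variable label vs. a candidate SDTM variable label
score = gestalt_ratio("subject age (years)", "age in years")

# Vector feature: stand-ins for Doc2vec embeddings (hypothetical values)
emb_legacy = [0.2, 0.7, 0.1]
emb_sdtm = [0.25, 0.65, 0.05]
sim = cosine_similarity(emb_legacy, emb_sdtm)
```

Each legacy variable then contributes a vector of such similarity scores against candidate SDTM targets, which is what the five classifiers are trained on.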


2021 ◽  
Vol 10 (3) ◽  
Author(s):  
Shreya Nag ◽  
Nimitha Jammula

The diagnosis of a disease to determine a specific condition is crucial in caring for patients and furthering medical research. A timely and accurate diagnosis can have important implications for both patients and healthcare providers. An earlier diagnosis allows doctors to consider more methods of treatment, giving them greater flexibility in tailoring their decisions and ultimately improving the patient’s health. Additionally, timely detection gives patients greater control over their health and their decisions, allowing them to plan ahead. As advancements in computer science and technology continue, these two factors can play a major role in aiding healthcare providers with medical issues. The emergence of artificial intelligence and machine learning can aid in addressing the challenge of completing timely and accurate diagnoses. The goal of this research work is to design a system that utilizes machine learning and neural network techniques to diagnose chronic kidney disease with more than 90% accuracy based on a clinical data set, and to do a comparative study of the performance of the neural network versus supervised machine learning approaches. Based on the results, all the algorithms performed well in prediction of chronic kidney disease (CKD) with more than 90% accuracy. The neural network system provided the best performance (accuracy = 100%) in prediction of chronic kidney disease in comparison with the supervised Random Forest algorithm (accuracy = 99%) and the supervised Decision Tree algorithm (accuracy = 97%).
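The accuracy comparison reported above reduces to counting correct predictions over a labeled test set. As a toy illustration of the simplest possible classifier in that family, the sketch below uses a one-feature decision stump on hypothetical, perfectly separable data; the feature, threshold, and values are invented, and real decision trees and neural networks operate on many features of the clinical data set.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def stump_predict(values, threshold):
    """One-feature decision stump: predict CKD (1) if the feature exceeds threshold."""
    return [1 if v > threshold else 0 for v in values]

# Hypothetical feature: serum creatinine (mg/dL); label 1 = CKD, 0 = healthy
creatinine = [0.8, 1.0, 1.1, 2.3, 3.0, 4.2]
labels     = [0,   0,   0,   1,   1,   1]
preds = stump_predict(creatinine, threshold=1.5)
acc = accuracy(labels, preds)  # 1.0 on this perfectly separable toy data
```

On real clinical data the classes overlap, which is why the paper's full models land at 97-100% rather than at a trivial 100%.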


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Idris Kharroubi ◽  
Thomas Lim ◽  
Xavier Warin

Abstract We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh grid converges to zero. We then focus on the approximation of the discretely constrained BSDE. For that we adopt a machine learning approach. We show that the facelift can be approximated by an optimization problem over a class of neural networks under constraints on the neural network and its derivative. We then derive an algorithm converging to the discretely constrained BSDE as the number of neurons goes to infinity. We end by numerical experiments.
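The discretize-then-facelift idea can be sketched in generic notation (a hedged reconstruction for orientation only, not necessarily the authors' exact scheme; here $g$ is the terminal condition, $f$ the driver, and $F$ the facelift operator enforcing the constraint):

```latex
% Backward scheme on a grid 0 = t_0 < \dots < t_N = T, with \Delta t_i = t_{i+1} - t_i:
Y_{t_N} = g(X_{t_N}), \qquad
\tilde Y_{t_i} = \mathbb{E}\!\left[\, Y_{t_{i+1}}
    + f\bigl(t_{i+1}, X_{t_{i+1}}, Y_{t_{i+1}}, Z_{t_{i+1}}\bigr)\,\Delta t_i
    \;\middle|\; \mathcal{F}_{t_i} \right], \qquad
Y_{t_i} = F\bigl[\tilde Y_{t_i}\bigr](X_{t_i}).
```

At each grid time the unconstrained backward step $\tilde Y_{t_i}$ is pushed back onto the constraint set by the facelift $F$, and it is this operator that the paper approximates with constrained neural networks.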


2021 ◽  
Vol 15 (6) ◽  
pp. 1-22
Author(s):  
Yashen Wang ◽  
Huanhuan Zhang ◽  
Zhirun Liu ◽  
Qiang Zhou

For guiding natural language generation, many semantic-driven methods have been proposed. While clearly improving the performance of the end-to-end training task, these existing semantic-driven methods still have clear limitations: for example, (i) they only utilize shallow semantic signals (e.g., from topic models) with only a single stochastic hidden layer in their data generation process, which suffer easily from noise (especially for short texts) and lack interpretability; (ii) they ignore the sentence order and document context, as they treat each document as a bag of sentences, and fail to capture the long-distance dependencies and global semantic meaning of a document. To overcome these problems, we propose a novel semantic-driven language modeling framework, which learns a Hierarchical Language Model and a Recurrent Conceptualization-enhanced Gamma Belief Network simultaneously. For scalable inference, we develop the auto-encoding Variational Recurrent Inference, allowing efficient end-to-end training and simultaneously capturing global semantics from a text corpus. In particular, this article introduces concept information derived from the high-quality lexical knowledge graph Probase, which provides strong interpretability and anti-noise capability for the proposed model. Moreover, the proposed model captures not only intra-sentence word dependencies, but also temporal transitions between sentences and inter-sentence concept dependence. Experiments conducted on several NLP tasks validate the superiority of the proposed approach, which can effectively infer a meaningful hierarchical concept structure of a document and hierarchical multi-scale structures of sequences, even compared with the latest state-of-the-art Transformer-based models.


Data & Policy ◽  
2021 ◽  
Vol 3 ◽  
Author(s):  
Munisamy Gopinath ◽  
Feras A. Batarseh ◽  
Jayson Beckman ◽  
Ajay Kulkarni ◽  
Sei Jeong

Abstract Focusing on seven major agricultural commodities with a long history of trade, this study employs data-driven analytics to decipher patterns of trade, namely using supervised machine learning (ML) as well as neural networks. The supervised ML and neural network techniques are trained on data until 2010 and 2014, respectively. Results show the high relevance of ML models to forecasting trade patterns in the near and long term relative to traditional approaches, which are often subjective assessments or time-series projections. While supervised ML techniques quantified key economic factors underlying agricultural trade flows, neural network approaches provide better fits over the long term.


2021 ◽  
Author(s):  
Yuxiang Chen ◽  
Chuanlei Liu ◽  
Yang An ◽  
Yue Lou ◽  
Yang Zhao ◽  
...  

Machine learning and computer-aided approaches significantly accelerate molecular design and discovery in scientific and industrial fields, which increasingly rely on data science for efficiency. The typical method is supervised learning, which needs huge datasets. Semi-supervised machine learning approaches are effective for training on unlabeled data with improved modeling performance, but they are limited by the accumulation of prediction errors. Here, to screen solvents for removal of methyl mercaptan, a type of organosulfur impurity in natural gas, we constructed a computational framework by integrating molecular similarity search and active learning methods, namely, molecular active selection machine learning (MASML). This new model framework identifies the optimal molecule set by molecular similarity search and iterative addition to the training dataset. Among all 126,068 compounds in the initial dataset, three molecules were identified as promising for methyl mercaptan (MeSH) capture: benzylamine (BZA), p-methoxybenzylamine (PZM), and N,N-diethyltrimethylenediamine (DEAPA). Further experiments confirmed the effectiveness of our modeling framework in efficient molecular design and identification for capturing methyl mercaptan, in which DEAPA presents a Henry's law constant 89.4% lower than that of methyl diethanolamine (MDEA).
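The similarity-search step of such an active-learning loop can be sketched with set-based Tanimoto similarity, a common choice for molecular fingerprints. Everything below is a schematic stand-in: the molecule names, the fingerprints (sets of substructure keys), and the selection size are invented, and the paper's MASML framework adds an iterative retraining loop on top of this selection step.

```python
def tanimoto(a, b):
    """Tanimoto similarity between two feature sets (e.g., molecular fingerprints)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def select_similar(pool, query, k=2):
    """Similarity search: pick the k pool molecules most similar to the query."""
    return sorted(pool, key=lambda m: tanimoto(pool[m], query), reverse=True)[:k]

# Hypothetical fingerprints: sets of substructure keys per candidate molecule
pool = {
    "mol_A": {1, 2, 3, 4},
    "mol_B": {1, 2, 7, 8},
    "mol_C": {5, 6, 7, 8},
}
query = {1, 2, 3, 9}  # fingerprint of a known good solvent
picked = select_similar(pool, query, k=2)  # ["mol_A", "mol_B"]
```

In each active-learning iteration the selected molecules are labeled (by simulation or experiment), added to the training set, and the property model is retrained before the next search.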


Terminology ◽  
2022 ◽  
Author(s):  
Ayla Rigouts Terryn ◽  
Véronique Hoste ◽  
Els Lefever

Abstract As with many tasks in natural language processing, automatic term extraction (ATE) is increasingly approached as a machine learning problem. So far, most machine learning approaches to ATE broadly follow the traditional hybrid methodology, by first extracting a list of unique candidate terms, and classifying these candidates based on the predicted probability that they are valid terms. However, with the rise of neural networks and word embeddings, the next development in ATE might be towards sequential approaches, i.e., classifying each occurrence of each token within its original context. To test the validity of such approaches for ATE, two sequential methodologies were developed, evaluated, and compared: one feature-based conditional random fields classifier and one embedding-based recurrent neural network. An additional comparison was added with a machine learning interpretation of the traditional approach. All systems were trained and evaluated on identical data in multiple languages and domains to identify their respective strengths and weaknesses. The sequential methodologies were proven to be valid approaches to ATE, and the neural network even outperformed the more traditional approach. Interestingly, a combination of multiple approaches can outperform all of them separately, showing new ways to push the state-of-the-art in ATE.
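The sequential reformulation described above amounts to labeling every token occurrence in context, typically with a B/I/O scheme (Begin/Inside/Outside a term). The sketch below converts a gold term list into token-level labels, the training-data format a CRF or recurrent tagger consumes; the matching logic is a simplified illustration (exact lowercase matching, no tokenizer), not the authors' preprocessing.

```python
def bio_labels(tokens, terms):
    """Convert a list of known terms into token-level B/I/O labels (sequential ATE)."""
    labels = ["O"] * len(tokens)
    for term in terms:
        t = term.split()
        # Mark every occurrence of the term's token sequence
        for i in range(len(tokens) - len(t) + 1):
            if [w.lower() for w in tokens[i:i + len(t)]] == [w.lower() for w in t]:
                labels[i] = "B"
                for j in range(i + 1, i + len(t)):
                    labels[j] = "I"
    return labels

tokens = ["Automatic", "term", "extraction", "uses", "neural", "networks", "."]
labels = bio_labels(tokens, ["term extraction", "neural networks"])
# → ["O", "B", "I", "O", "B", "I", "O"]
```

Unlike the traditional candidate-list approach, the same surface form can receive different labels in different sentences, which is exactly what lets sequential models use context.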

