Stably Accelerating Stiff Quantitative Systems Pharmacology Models: Continuous-Time Echo State Networks as Implicit Machine Learning

2021 ◽  
Author(s):  
Ranjan Anantharaman ◽  
Anas Abdelrehim ◽  
Anand Jain ◽  
Avik Pal ◽  
Danny Sharp ◽  
...  

Abstract: Quantitative systems pharmacology (QsP) may need to change in order to accommodate machine learning (ML), but ML may need to change to work for QsP. Here we investigate the use of neural network surrogates of stiff QsP models. This technique reduces and accelerates QsP models by training ML approximations on simulations. We describe how common neural network methodologies, such as residual neural networks, recurrent neural networks, and physics/biologically-informed neural networks, are fundamentally related to explicit solvers of ordinary differential equations (ODEs). Just as explicit ODE solvers are unstable on stiff QsP models, we demonstrate that these ML architectures exhibit similar training instabilities. To address this issue, we showcase methods from scientific machine learning (SciML) which combine techniques from mechanistic modeling with traditional deep learning. We describe the continuous-time echo state network (CTESN) as the implicit analogue of these ML architectures and showcase its ability to accurately train and predict on stiff models where other methods fail. We demonstrate the CTESN's ability to surrogatize a production QsP model, a >1,000-ODE chemical reaction system from the SBML Biomodels repository, and a reaction-diffusion partial differential equation. We showcase the ability to accelerate QsP simulations by up to 56x against the optimized DifferentialEquations.jl solvers while achieving <5% relative error in all of the examples. This shows how incorporating the numerical properties of QsP methods into ML can improve the intersection, and thus presents a potential method for accelerating repeated calculations such as global sensitivity analysis and virtual populations.
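
To make the surrogate idea concrete, here is a minimal sketch of a CTESN-style surrogate in Python with SciPy (the paper's own implementation uses Julia and DifferentialEquations.jl): a fixed random reservoir ODE is driven by the reference solution of a stiff system, and only a linear readout is fit by least squares. The Robertson problem, the reservoir size, and the leaky-tanh reservoir dynamics are illustrative assumptions, not the authors' setup.

```python
# Minimal CTESN-style surrogate sketch (illustrative, not the authors' code):
# a fixed random reservoir ODE is driven by the reference solution of a stiff
# system, and a linear readout is fit by least squares to reproduce it.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def robertson(t, y):
    # classic stiff test problem standing in for a stiff QsP model
    y1, y2, y3 = y
    return [-0.04 * y1 + 1e4 * y2 * y3,
            0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
            3e7 * y2 ** 2]

t_eval = np.logspace(-5, 2, 200)
ref = solve_ivp(robertson, (0.0, 1e2), [1.0, 0.0, 0.0], method="Radau",
                t_eval=t_eval, rtol=1e-8, atol=1e-10)

N = 200                                    # reservoir size (illustrative)
A = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
W_in = rng.normal(size=(N, 3))

def reservoir(t, r):
    # drive the reservoir with the interpolated reference trajectory x(t)
    x = np.array([np.interp(t, ref.t, ref.y[i]) for i in range(3)])
    return np.tanh(A @ r + W_in @ x) - r   # leaky-tanh reservoir dynamics

res = solve_ivp(reservoir, (0.0, 1e2), np.zeros(N), t_eval=t_eval)  # non-stiff
W_out, *_ = np.linalg.lstsq(res.y.T, ref.y.T, rcond=None)           # linear readout

pred = res.y.T @ W_out                     # surrogate prediction on the grid
print("relative error:", np.linalg.norm(pred - ref.y.T) / np.linalg.norm(ref.y.T))
```

Because the readout is obtained by a direct linear solve rather than iterative gradient descent through the stiff dynamics, this style of surrogate sidesteps the training instabilities described above.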

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Idris Kharroubi ◽  
Thomas Lim ◽  
Xavier Warin

Abstract: We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at the dates of a time grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh of the grid goes to zero. We then focus on the approximation of the discretely constrained BSDE. For that we adopt a machine learning approach: we show that the facelift can be approximated by an optimization problem over a class of neural networks under constraints on the neural network and its derivative. We then derive an algorithm converging to the discretely constrained BSDE as the number of neurons goes to infinity. We conclude with numerical experiments.
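
As a rough illustration of the constrained optimization over neural networks mentioned above, the following hedged sketch (PyTorch) fits a network to a toy target while penalizing violations of a bound on its derivative. The target function, the derivative bound, and the penalty weight are hypothetical, and the paper's exact facelift operator and BSDE discretization are not reproduced here.

```python
# Hedged sketch: fit a neural network while penalizing violations of a bound on
# its derivative, in the spirit of an optimization over networks "under
# constraints on the network and its derivative". Toy target, hypothetical bound.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)
target = torch.relu(x)                      # toy function to approximate
deriv_bound = 0.5                           # hypothetical constraint |u'(x)| <= 0.5

for step in range(1000):
    x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]   # u'(x)
    fit = ((u - target) ** 2).mean()
    penalty = torch.clamp(du.abs() - deriv_bound, min=0.0).pow(2).mean()
    loss = fit + 10.0 * penalty             # penalty weight chosen arbitrarily
    opt.zero_grad()
    loss.backward()
    opt.step()
```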


Author(s):  
E. Yu. Shchetinin

The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technologies, and recognition of emotions in speech (RER) is the most demanded part of it. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. Computer experiments are carried out on the RAVDESS database, which contains emotional human speech. RAVDESS is a data set of 7356 files; the recordings cover the following emotions: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database contains 16 classes (8 emotions divided into male and female) for a total of 1440 speech-only samples. To train machine learning algorithms and deep neural networks to recognize emotions, the existing audio recordings must be pre-processed so as to extract the main characteristic features of particular emotions. This was done using Mel-frequency cepstral coefficients, chroma coefficients, and characteristics of the frequency spectrum of the audio recordings. Various neural network models for emotion recognition are studied on this data, and machine learning algorithms are used for comparative analysis. The following models were trained during the experiments: logistic regression (LR), a classifier based on the support vector machine (SVM), decision tree (DT), random forest (RF), gradient boosting over trees (XGBoost), a convolutional neural network (CNN), a recurrent neural network RNN (ResNet18), and an ensemble of convolutional and recurrent networks (Stacked CNN-RNN). The results show that the neural networks achieved much higher accuracy in recognizing and classifying emotions than the machine learning algorithms. Of the three neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
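
A hedged sketch of the feature-extraction and model pairing described above: MFCC and chroma features computed with librosa feed a small 1-D CNN followed by a bidirectional LSTM. The layer sizes, the 16-class output, and the example file name are illustrative assumptions, not the author's exact configuration.

```python
# Hedged sketch: librosa MFCC + chroma features into a small CNN + BLSTM stack.
# Layer sizes and the file path below are illustrative, not the paper's setup.
import librosa
import numpy as np
import torch
import torch.nn as nn

def extract_features(path, n_mfcc=40):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    return np.vstack([mfcc, chroma])           # (n_features, n_frames)

class CnnBlstm(nn.Module):
    def __init__(self, n_features=52, n_classes=16):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(n_features, 64, 5, padding=2), nn.ReLU())
        self.rnn = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                      # x: (batch, n_features, n_frames)
        h = self.conv(x).transpose(1, 2)       # (batch, n_frames, 64)
        out, _ = self.rnn(h)
        return self.fc(out[:, -1])             # logits over the 16 classes

# feats = extract_features("Actor_01/03-01-05-01-01-01-01.wav")  # hypothetical RAVDESS file
# logits = CnnBlstm()(torch.tensor(feats[None], dtype=torch.float32))
```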


2014 ◽  
Vol 10 (S306) ◽  
pp. 279-287 ◽  
Author(s):  
Michael Hobson ◽  
Philip Graff ◽  
Farhan Feroz ◽  
Anthony Lasenby

Abstract: Machine-learning methods may be used to perform many tasks required in the analysis of astronomical data, including: data description and interpretation, pattern recognition, prediction, classification, compression, inference and many more. An intuitive and well-established approach to machine learning is the use of artificial neural networks (NNs), which consist of a group of interconnected nodes, each of which processes information that it receives and then passes this product on to other nodes via weighted connections. In particular, I discuss the first public release of the generic neural network training algorithm, called SkyNet, and demonstrate its application to astronomical problems, focusing on its use in the BAMBI package for accelerated Bayesian inference in cosmology and the identification of gamma-ray bursters. The SkyNet and BAMBI packages, which are fully parallelised using MPI, are available at http://www.mrao.cam.ac.uk/software/.


2021 ◽  
Author(s):  
Wael Alnahari

Abstract: In this paper, I propose an iris recognition system using deep learning via convolutional neural networks (CNN). Although CNNs are normally trained for machine learning tasks, recognition here is achieved by building a non-trained CNN with multiple layers. The main objective of the code is to predict each test picture's category (i.e. person name) with a high accuracy rate after extracting enough features from training pictures of the same category, which are obtained from a dataset that I added to the code. I used the IITD iris dataset, which includes 10 iris pictures for each of 223 people.
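
A minimal sketch of recognition with a non-trained CNN, assuming the network is used purely as a fixed feature extractor and identities are assigned by nearest-neighbour matching of features; the architecture and matching rule are illustrative, not the paper's exact code.

```python
# Hedged sketch: an untrained (randomly initialized) CNN as a fixed feature
# extractor, with test iris images matched to training images by nearest
# neighbour. Layer sizes and data layout are illustrative only.
import torch
import torch.nn as nn

feature_net = nn.Sequential(                     # fixed random weights, never trained
    nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())

@torch.no_grad()
def embed(images):                               # images: (N, 1, H, W) grayscale iris crops
    return feature_net(images)

def predict(train_feats, train_labels, test_images):
    test_feats = embed(test_images)
    dists = torch.cdist(test_feats, train_feats)     # pairwise Euclidean distances
    return train_labels[dists.argmin(dim=1)]         # label of the closest training image
```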


2021 ◽  
Author(s):  
Ruslan Chernyshev ◽  
Mikhail Krinitskiy ◽  
Viktor Stepanenko

This work is devoted to the development of neural networks for identification of partial differential equations (PDE) solved in the land surface scheme of the INM RAS Earth System Model (ESM). Atmospheric and climate models are among the research applications most demanding of supercomputing resources. The spatial resolution and the multitude of physical parameterizations used in ESMs continuously increase. Most parameters are still poorly constrained, and many of them cannot be measured directly. To reduce model calibration time, using neural networks looks like a promising approach. Neural networks are already in wide use for satellite imagery (Su Jeong Lee et al., 2015; Krinitskiy et al., 2018) and for calibrating parameters of land surface models (Yohei Sawada et al., 2019). Neural networks have also demonstrated high efficiency in solving conventional problems of mathematical physics (Lucie P. Aarts et al., 2001; Raissi M. et al., 2020).

We develop neural networks for optimizing the parameters of a nonlinear soil heat and moisture transport equation set. For development we used Python 3 based programming tools implemented on GPUs and the Ascend platform provided by Huawei. Because we use a hybrid approach combining a neural network with classical thermodynamic equations, the major challenge was finding a way to correctly calculate the backpropagation gradient of the error function: the model is trained and validated on the same temperature data, while the model output is a heat equation parameter, which is typically not known. The neural network model is trained at runtime using reference thermodynamic model calculations with prescribed parameters; every next thermodynamic model step is used for fitting the neural network until it reaches the loss function tolerance.

Literature:
1. Aarts, L.P., van der Veer, P. "Neural Network Method for Solving Partial Differential Equations". Neural Processing Letters 14, 261–271 (2001). https://doi.org/10.1023/A:1012784129883
2. Raissi, M., Perdikaris, P., Karniadakis, G. "Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations". arXiv:1711.10561 (2017).
3. Lee, S.J., Ahn, M.H., Lee, Y. "Application of an artificial neural network for a direct estimation of atmospheric instability from a next-generation imager". Adv. Atmos. Sci. 33, 221–232 (2016). https://doi.org/10.1007/s00376-015-5084-9
4. Krinitskiy, M., Verezemskaya, P., Grashchenkov, K., Tilinina, N., Gulev, S., Lazzara, M. "Deep Convolutional Neural Networks Capabilities for Binary Classification of Polar Mesocyclones in Satellite Mosaics". Atmosphere 9(11), 426 (2018).
5. Sawada, Y. "Machine learning accelerates parameter optimization and uncertainty assessment of a land surface model". arXiv:1909.04196 (2019).
6. Pan, S., et al. "Evaluation of global terrestrial evapotranspiration using state-of-the-art approaches in remote sensing, machine learning and land surface modeling". Hydrol. Earth Syst. Sci. 24, 1485–1509 (2020).
7. Chaney, N., Herman, J., Ek, M., Wood, E. "Deriving Global Parameter Estimates for the Noah Land Surface Model Using FLUXNET and Machine Learning". Journal of Geophysical Research: Atmospheres 121 (2016). https://doi.org/10.1002/2016JD024821
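
A hedged sketch of the hybrid training loop described above: a small network proposes a heat-equation parameter (here, a thermal diffusivity), a 1-D soil heat equation is stepped explicitly in PyTorch, and the mismatch with reference temperatures is backpropagated through the discretization. The grid, the synthetic reference data, and the network are stand-ins, not the INM RAS land surface scheme.

```python
# Hedged sketch of the hybrid idea (synthetic data, illustrative grid): a network
# proposes the thermal diffusivity of a 1-D soil heat equation, the equation is
# stepped explicitly, and the temperature mismatch is backpropagated through it.
import torch
import torch.nn as nn

nz, dz, dt, n_steps = 50, 0.1, 1.0, 200     # grid points, spacing, time step, steps
true_alpha = 1e-3                           # diffusivity used to fabricate "reference" data

def heat_steps(T0, alpha):
    T = T0.clone()
    for _ in range(n_steps):                # explicit finite-difference update
        lap = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dz ** 2
        T = torch.cat([T[:1], T[1:-1] + dt * alpha * lap, T[-1:]])  # fixed boundaries
    return T

T0 = torch.linspace(280.0, 290.0, nz)       # initial soil temperature profile (K)
T_ref = heat_steps(T0, torch.tensor(true_alpha))

x_in = ((T0 - T0.mean()) / T0.std()).unsqueeze(0)
net = nn.Sequential(nn.Linear(nz, 32), nn.Tanh(), nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(200):
    alpha = 2e-3 * net(x_in).squeeze()      # network-predicted diffusivity in (0, 2e-3)
    loss = ((heat_steps(T0, alpha) - T_ref) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```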


2011 ◽  
pp. 941-955
Author(s):  
Masanori Goka ◽  
Kazuhiro Ohkura

Artificial evolution has been considered a promising approach for designing the controller of an autonomous mobile robot. However, it is not yet established whether artificial evolution is also effective in generating collective behaviour in a multi-robot system (MRS). In this study, two types of evolving artificial neural networks are utilized in an MRS: the first is an evolving continuous-time recurrent neural network, as used in the most conventional method, and the second is a topology-and-weight-evolving artificial neural network, as used in the novel method. Several computer simulations are conducted in order to examine how artificial evolution can be used to coordinate collective behaviour in an MRS.
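
For concreteness, a minimal sketch of the first approach: evolving the weights of a small continuous-time recurrent neural network controller with a simple truncation-selection loop. The CTRNN update, the placeholder fitness function, and all sizes are illustrative rather than the study's actual robot task.

```python
# Minimal sketch (illustrative sizes, placeholder fitness): evolve the weight
# matrix of a small CTRNN controller with truncation selection and mutation.
import numpy as np

rng = np.random.default_rng(0)
N = 5                                        # number of CTRNN neurons

def ctrnn_step(y, weights, inputs, tau=1.0, dt=0.1):
    # Euler step of tau * dy/dt = -y + tanh(W y + I)
    return y + dt / tau * (-y + np.tanh(weights @ y + inputs))

def fitness(weights):
    # placeholder task: drive the first neuron's state toward 0.5
    y, inputs = np.zeros(N), 0.1 * np.ones(N)
    for _ in range(100):
        y = ctrnn_step(y, weights, inputs)
    return -abs(y[0] - 0.5)

population = [rng.normal(size=(N, N)) for _ in range(20)]
for generation in range(50):
    parents = sorted(population, key=fitness, reverse=True)[:5]   # keep the best
    children = [p + 0.1 * rng.normal(size=(N, N)) for p in parents for _ in range(3)]
    population = parents + children
print("best fitness:", fitness(max(population, key=fitness)))
```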


2022 ◽  
pp. 1-30
Author(s):  
Arunaben Prahladbhai Gurjar ◽  
Shitalben Bhagubhai Patel

The new era of the world uses artificial intelligence (AI) and machine learning, and a central model within them is the artificial neural network (ANN). Artificial neural networks can be used as hardware- or software-based components, and different topologies and learning algorithms are used in them. An artificial neural network works similarly to the human nervous system: it acts as a nonlinear computing model that, drawing only on previous experience, carries out activities performed by the human brain such as classification, prediction, decision making, and visualization. ANNs are used to solve complex, hard-to-manage problems by accruing knowledge about the environment. There are different types of artificial neural networks available in machine learning; all of them work on the basis of mathematical operations and require a set of parameters to produce results. This chapter gives an overview of the various types of neural networks, such as feed-forward, recurrent, feedback, and classification-prediction networks.


2022 ◽  
pp. 1559-1575
Author(s):  
Mário Pereira Véstias

Machine learning is the study of algorithms and models that allow computing systems to do tasks based on pattern identification and inference. When it is difficult or infeasible to develop an algorithm to do a particular task, machine learning algorithms can provide an output based on previous training data. A well-known machine learning model is deep learning, and the most recent deep learning models are based on artificial neural networks (ANN). There exist several types of artificial neural networks, including the feedforward neural network, the Kohonen self-organizing neural network, the recurrent neural network, the convolutional neural network, and the modular neural network, among others. This article focuses on convolutional neural networks, with a description of the model, the training and inference processes, and their applicability. It also gives an overview of the most used CNN models and what to expect from the next generation of CNN models.
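
A minimal PyTorch sketch of the convolutional model structure described above, with stacked convolution-and-pooling stages followed by a fully connected classifier head; the input shape and class count are arbitrary illustrations.

```python
# Minimal CNN illustrating the structure described above: convolution + pooling
# feature stages followed by a fully connected classifier. Shapes are arbitrary.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):                       # x: (batch, 3, 32, 32)
        h = self.features(x)
        return self.classifier(h.flatten(1))    # class logits

logits = SmallCNN()(torch.randn(1, 3, 32, 32))  # inference on one random image
```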


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6491
Author(s):  
Le Zhang ◽  
Jeyan Thiyagalingam ◽  
Anke Xue ◽  
Shuwen Xu

Classification of clutter, especially in the context of shore-based radars, plays a crucial role in several applications. However, the task of distinguishing and classifying sea clutter from land clutter has historically been performed using clutter models and/or coastal maps. In this paper, we propose two machine learning (specifically, neural network) based approaches for sea-land clutter separation, namely the regularized randomized neural network (RRNN) and the kernel ridge regression neural network (KRR). We use a number of features, such as energy variation, discrete signal amplitude change frequency, autocorrelation performance, and other statistical characteristics of the respective clutter distributions, to improve the performance of the classification. Our evaluation, based on a unique mixed dataset comprised of partially synthetic clutter data for land and real clutter data from sea, offers improved classification accuracy. More specifically, the RRNN and KRR methods offer 98.50% and 98.75% accuracy, outperforming the conventional support vector machine and extreme learning based solutions.
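
A hedged sketch of the kernel-ridge-regression route with scikit-learn: a few simple statistics stand in for the clutter features listed above (the paper's feature set is richer), synthetic Weibull samples stand in for the real and simulated clutter data, and classification is done by regressing encoded labels and taking the sign.

```python
# Hedged sketch: simple clutter statistics (stand-ins for the paper's features)
# regressed against encoded sea/land labels with RBF kernel ridge regression.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def clutter_features(x):
    ac = np.correlate(x - x.mean(), x - x.mean(), mode="full")
    ac = ac[ac.size // 2:] / ac[ac.size // 2]                 # normalized autocorrelation
    return np.array([x.var(),                                 # energy variation proxy
                     np.mean(np.abs(np.diff(np.sign(np.diff(x)))) > 0),  # amplitude change rate
                     ac[1]])                                  # lag-1 autocorrelation

rng = np.random.default_rng(1)
sea = rng.weibull(1.2, size=(200, 512))                       # synthetic stand-in for sea clutter
land = rng.weibull(2.5, size=(200, 512)) * 2.0                # synthetic stand-in for land clutter
X = np.array([clutter_features(x) for x in np.vstack([sea, land])])
y = np.array([-1] * 200 + [1] * 200)                          # -1 = sea, +1 = land

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(X, y)
pred = np.sign(model.predict(X))                              # classify by the sign of the regression
print("training accuracy:", (pred == y).mean())
```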

