Estimating the solubility of different solutes in supercritical CO2 covering a wide range of operating conditions by using neural network models

2017 ◽  
Vol 125 ◽  
pp. 79-87 ◽  
Author(s):  
Ali Aminian


2017 ◽
Author(s):  
Charlie W. Zhao ◽  
Mark J. Daley ◽  
J. Andrew Pruszynski

Abstract First-order tactile neurons have spatially complex receptive fields. Here we use machine learning tools to show that such complexity arises for a wide range of training sets and network architectures, and benefits network performance, especially on more difficult tasks and in the presence of noise. Our work suggests that spatially complex receptive fields are normatively good given the biological constraints of the tactile periphery.


2000 ◽  
Author(s):  
Arturo Pacheco-Vega ◽  
Mihir Sen ◽  
Rodney L. McClain

Abstract In the current study we consider the problem of accuracy in heat rate estimations from artificial neural network models of heat exchangers used for refrigeration applications. The network configuration is of the feedforward type with a sigmoid activation function and a backpropagation algorithm. Limited experimental measurements from a manufacturer are used to show the capability of the neural network technique in modeling the heat transfer in these systems. Results from this exercise show that a well-trained network correlates the data with errors of the same order as the uncertainty of the measurements. It is also shown that the number and distribution of the training data are linked to the performance of the network when estimating the heat rates under different operating conditions, and that networks trained from few tests may give large errors. A methodology based on the cross-validation technique is presented to find regions where not enough data are available to construct a reliable neural network. The results from three tests show that the proposed methodology gives an upper bound of the estimated error in the heat rates.
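As an illustration of the kind of network described above, here is a minimal feedforward model with one sigmoid hidden layer trained by plain backpropagation. The data are synthetic stand-ins (the manufacturer's measurements are not reproduced here), and the layer sizes and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: three normalized operating-condition inputs
# versus a heat-rate-like output. Values are synthetic, not the paper's data.
X = rng.uniform(0.0, 1.0, size=(40, 3))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] * X[:, 2]).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One sigmoid hidden layer, linear output, trained by gradient-descent
# backpropagation on mean squared error.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5
for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)            # hidden activations
    yhat = h @ W2 + b2                  # network output
    err = yhat - y
    # Backpropagate the squared-error gradient through both layers.
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = err @ W2.T * h * (1 - h)       # sigmoid derivative
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"training MSE: {mse:.5f}")
```

A well-trained network of this kind fits the training data to within a small residual; the paper's point is that this residual is only trustworthy where training data are dense, which its cross-validation methodology is designed to detect.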


Author(s):  
Girija Parthasarathy ◽  
Sunil Menon ◽  
Kurt Richardson ◽  
Ahsan Jameel ◽  
Dawn McNamee ◽  
...  

In engine structural life computations, it is common practice to assign a life of a certain number of start-stop cycles based on a standard flight or mission. This is done during design through detailed calculations of stresses and temperatures for a standard flight, together with material property and failure models. The limitation of the design-phase stress and temperature calculations is that they cannot take into account actual operating temperatures and stresses. This limitation results either in very conservative life estimates and subsequent wastage of good components, or in catastrophic damage caused by highly aggressive operational conditions that were not accounted for in design. To significantly improve the accuracy of the life prediction, the component temperatures and stresses need to be computed for actual operating conditions. However, thermal and stress models are very detailed and complex, and it can take on the order of a few hours to complete a stress and temperature simulation of critical components for a flight. The objective of this work is to develop dynamic neural network models that enable us to compute the stresses and temperatures at critical locations in orders of magnitude less computation time than required by more detailed thermal and stress models. This work expands on previous work [1] in which a linear system identification approach was developed. The current paper describes the development of a neural network model and the temperature results achieved in comparison with the original models for Honeywell turbine and compressor components. Given certain inputs, such as engine speed and gas temperatures for the flight, the models compute the component critical-location temperatures for the same flight in a very small fraction of the time the original thermal model would take.
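The surrogate idea in this abstract can be sketched as a simple dynamic (NARX-style) model: regress the critical-location temperature on its own lag plus the current flight inputs, then run the fitted recursion forward at negligible cost. All signals and coefficients below are synthetic assumptions, not Honeywell data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic flight profile: normalized engine speed and gas temperature.
n = 500
speed = 0.5 + 0.4 * np.sin(np.linspace(0, 6, n)) + 0.02 * rng.normal(size=n)
gas_t = 0.4 + 0.5 * np.cos(np.linspace(0, 4, n)) + 0.02 * rng.normal(size=n)

# Stand-in for the slow detailed thermal model: metal temperature follows
# the gas path with a first-order lag.
T = np.zeros(n)
for t in range(1, n):
    T[t] = 0.9 * T[t - 1] + 0.1 * (0.3 * speed[t] + 0.7 * gas_t[t])

# Dynamic surrogate: least-squares fit of T[t] on its own lag and the
# current inputs, then forward simulation of the fitted recursion.
A = np.column_stack([T[:-1], speed[1:], gas_t[1:], np.ones(n - 1)])
coef, *_ = np.linalg.lstsq(A, T[1:], rcond=None)

pred = np.zeros(n)
for t in range(1, n):
    pred[t] = coef @ np.array([pred[t - 1], speed[t], gas_t[t], 1.0])

rmse = float(np.sqrt(np.mean((pred - T) ** 2)))
print(f"surrogate RMSE: {rmse:.6f}")
```

The forward pass is a handful of multiply-adds per time step, which is the source of the orders-of-magnitude speedup over a full thermal simulation; the paper's neural network plays the role of the linear regression here.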


2020 ◽  
Author(s):  
Yinghao Li ◽  
Robert Kim ◽  
Terrence J. Sejnowski

Summary Recurrent neural network (RNN) models trained to perform cognitive tasks are useful computational tools for understanding how cortical circuits execute complex computations. However, these models are often composed of units that interact with one another using continuous signals, and they overlook parameters intrinsic to spiking neurons. Here, we developed a method to directly train not only synaptic-related variables but also membrane-related parameters of a spiking RNN model. Training our model on a wide range of cognitive tasks resulted in diverse yet task-specific synaptic and membrane parameters. We also show that fast membrane time constants and slow synaptic decay dynamics naturally emerge from our model when it is trained on tasks associated with working memory (WM). Further dissecting the optimized parameters revealed that fast membrane properties and slow synaptic dynamics are important for encoding stimuli and for WM maintenance, respectively. This approach offers a unique window into how connectivity patterns and intrinsic neuronal properties contribute to complex dynamics in neural populations.
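A rough sketch of the membrane-related parameters in question: the discrete-time leaky integrate-and-fire update below has an explicit membrane time constant tau_m and synaptic decay tau_s, the kinds of quantities the paper makes trainable. All values are illustrative assumptions; with a fast membrane and a slow synapse, spiking outlasts a brief stimulus, echoing the working-memory result:

```python
# Single LIF unit with a fast membrane (tau_m) and a slow synaptic trace
# (tau_s). Parameters are illustrative, not fitted values from the paper.
dt, T = 1.0, 200
tau_m, tau_s, v_th = 10.0, 100.0, 1.0   # ms; fast membrane, slow synapse

v, s = 0.0, 0.0
spikes = []
for t in range(T):
    inp = 1.0 if t < 20 else 0.0          # brief stimulus pulse
    s += dt * (-s / tau_s) + inp * 0.15   # synaptic trace decays slowly
    v += dt * (-v + s) / tau_m            # membrane integrates quickly
    if v >= v_th:
        spikes.append(t)
        v = 0.0                           # reset after each spike

print(f"{len(spikes)} spikes; last spike at t={spikes[-1] if spikes else None}")
```

Because the synaptic trace decays with a time constant ten times that of the membrane, spiking persists long after the 20-step stimulus ends; making tau_m and tau_s trainable lets the optimizer discover this separation of timescales for WM tasks.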


1995 ◽  
Vol 38 (4) ◽  
pp. 483-495 ◽  
Author(s):  
William Sims Bainbridge

This paper applies neural network technology, a standard approach in computer science that has been unaccountably ignored by sociologists, to the problem of developing rigorous sociological theories. A simulation program employing a “varimax” model of human learning and decision-making captures central elements of the Stark-Bainbridge theory of religion. Individuals in a micro-society of 24 simulated people learn which categories of potential exchange partners to seek for each of four material rewards that can in fact be provided by other actors in the society. However, when they seek eternal life, they are unable to find suitable human exchange partners who can provide it, so they postulate the existence of supernatural exchange partners as substitutes. The explanation of how the particular neural net works, including reference to modulo arithmetic, introduces some aspects of this new technology to sociology, and this paper invites readers to explore the wide range of other neural net techniques that may be of value to social scientists.


Symmetry ◽  
2020 ◽  
Vol 12 (11) ◽  
pp. 1756 ◽
Author(s):  
Zhe Li ◽  
Mieradilijiang Maimaiti ◽  
Jiabao Sheng ◽  
Zunwang Ke ◽  
Wushour Silamu ◽  
...  

The task of dialogue generation has attracted increasing attention due to its diverse downstream applications, such as question-answering systems and chatbots. Recently, deep neural network (DNN)-based dialogue generation models have achieved superior performance over conventional models utilizing statistical machine learning methods. However, although an enormous number of state-of-the-art DNN-based models have been proposed, there is a lack of detailed empirical comparative analysis of them on open Chinese corpora. As a result, relevant researchers and engineers might find it hard to get an intuitive understanding of the current research progress. To address this challenge, we conducted an empirical study of state-of-the-art DNN-based dialogue generation models on various Chinese corpora. Specifically, extensive experiments were performed on several well-known single-turn and multi-turn dialogue corpora, including KdConv, Weibo, and Douban, to evaluate a wide range of dialogue generation models based on the symmetrical architecture of Seq2Seq, RNNSearch, Transformer, generative adversarial nets, and reinforcement learning, respectively. Moreover, we paid special attention to the prevalent pre-trained model with respect to the quality of dialogue generation. Their performances were evaluated by four widely used metrics in this area: BLEU, pseudo, distinct, and ROUGE. Finally, we report a case study to show example responses generated by these models.
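One of the cited metrics, distinct-n, is straightforward to compute: the ratio of unique n-grams to total n-grams across generated responses, with higher values indicating more diverse generations. A minimal sketch, using toy English responses as stand-ins for the Chinese corpora:

```python
def distinct_n(responses, n=1):
    """Distinct-n: unique n-grams divided by total n-grams over all
    generated responses (a standard diversity metric for dialogue)."""
    ngrams = []
    for resp in responses:
        toks = resp.split()
        ngrams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

# Toy stand-ins for generated replies; repeated generic answers lower the score.
outs = ["i do not know", "i do not know", "that sounds great to me"]
print(round(distinct_n(outs, 1), 3), round(distinct_n(outs, 2), 3))
```

Note that for Chinese text the tokenization step would use characters or a word segmenter rather than whitespace splitting.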



2019 ◽  
Author(s):  
Yue Liu ◽  
Marc W. Howard

Abstract Sequential neural activity has been observed in many parts of the brain and has been proposed as a neural mechanism for memory. The natural world expresses temporal relationships at a wide range of scales. Because we cannot know the relevant scales a priori, it is desirable that memory, and thus the generated sequences, be scale-invariant. Although recurrent neural network models have been proposed as a mechanism for generating sequences, the requirements for scale-invariant sequences are not known. This paper reports the constraints that enable a linear recurrent neural network model to generate scale-invariant sequential activity. A straightforward eigendecomposition analysis results in two independent conditions that are required for scale invariance for connectivity matrices with real, distinct eigenvalues. First, the eigenvalues of the network must be geometrically spaced. Second, the eigenvectors must be related to one another via translation. These constraints generalize easily to matrices with complex, distinct eigenvalues. Analogous, albeit less compact, constraints hold for matrices with degenerate eigenvalues. These constraints, along with considerations on initial conditions, provide a general recipe for building linear recurrent neural networks that support scale-invariant sequential activity.
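The two conditions can be checked numerically. The construction below (an illustrative sketch, not the paper's code) builds a connectivity matrix for dx/dt = Wx from geometrically spaced eigenvalues and translated Gaussian-bump eigenvectors, then verifies that unit n+1 replays unit n's activity with time rescaled by the geometric ratio:

```python
import numpy as np

N = 40
r = 1.08                                  # geometric ratio between eigenvalues
s = -0.05 * r ** np.arange(N)             # condition 1: s_k = -s0 * r^k
ks = np.arange(N)
# Condition 2: column k of V is a Gaussian bump centered at unit k,
# so the eigenvectors are translated copies of one another.
V = np.exp(-0.5 * (ks[:, None] - ks[None, :]) ** 2)
W = V @ np.diag(s) @ np.linalg.inv(V)     # recurrent connectivity matrix

# Check that W's spectrum is the geometric series we built in.
ev = np.sort(np.linalg.eigvals(W).real)
assert np.allclose(ev, np.sort(s))

def x(t):
    # Network state with every eigenmode weighted equally: x(t) = V exp(s t) 1.
    return V @ np.exp(s * t)

# Scale invariance: away from the boundaries, x_{n+1}(t) = x_n(r * t),
# i.e. each unit replays its neighbor's activity on a stretched time axis.
ts = np.linspace(0.1, 30.0, 200)
a = np.array([x(t)[20] for t in ts * r])  # unit 20, sped-up time axis
b = np.array([x(t)[21] for t in ts])      # unit 21, original time axis
err = float(np.max(np.abs(a - b)))
print(f"max deviation from scale invariance: {err:.2e}")
```

The shift-by-one identity follows directly from reindexing the mode sum, which is the substance of the eigendecomposition argument in the abstract.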


2020 ◽  
Author(s):  
Pablo Martínez-Cañada ◽  
Torbjørn V. Ness ◽  
Gaute T. Einevoll ◽  
Tommaso Fellin ◽  
Stefano Panzeri

Abstract The electroencephalogram (EEG) is one of the main tools for non-invasively studying brain function and dysfunction. To better interpret EEGs in terms of neural mechanisms, it is important to compare experimentally recorded EEGs with the output of neural network models. Most current neural network models use networks of simple point neurons. They capture important properties of cortical dynamics, and are numerically or analytically tractable. However, point neuron networks cannot directly generate an EEG, since EEGs are generated by spatially separated transmembrane currents. Here, we explored how to compute an accurate approximation of the EEG with a combination of quantities defined in point-neuron network models. We constructed several different candidate approximations (or proxies) of the EEG that can be computed from networks of leaky integrate-and-fire (LIF) point neurons, such as firing rates, membrane potentials, and specific combinations of synaptic currents. We then evaluated how well each proxy reconstructed a realistic ground-truth EEG obtained when the synaptic input currents of the LIF network were fed into a three-dimensional (3D) network model of multi-compartmental neurons with realistic cell morphologies. We found that a new class of proxies, based on an optimized linear combination of time-shifted AMPA and GABA currents, provided the most accurate estimate of the EEG over a wide range of network states of the LIF point-neuron network. The new linear proxies explained most of the variance (85-95%) of the ground-truth EEG for a wide range of cell morphologies, distributions of presynaptic inputs, and position of the recording electrode. Non-linear proxies, obtained using a convolutional neural network (CNN) to predict the EEG from synaptic currents, increased proxy performance by a further 2-8%.
Our proxies can be used to easily calculate a biologically realistic EEG signal directly from point-neuron simulations and thereby allow a quantitative comparison between computational models and experimental EEG recordings.
Author summary Networks of point neurons are widely used to model neural dynamics. Their output, however, cannot be directly compared to the electroencephalogram (EEG), which is one of the most widely used tools to non-invasively measure brain activity. To allow a direct integration between neural network theory and empirical EEG data, here we derived a new mathematical expression, termed the EEG proxy, which estimates the EEG with high accuracy based simply on the variables available from simulations of point-neuron network models. To compare and validate these EEG proxies, we computed a realistic ground-truth EEG produced by a network of simulated neurons with realistic 3D morphologies that receive the same spikes as the simpler network of point neurons. The newly obtained EEG proxies outperformed previous approaches and worked well under a wide range of simulated configurations of cell morphologies, distributions of presynaptic inputs, and positions of the recording electrode. The new proxies approximated well both EEG spectra and EEG evoked potentials. Our work provides important mathematical tools that allow a better interpretation of experimentally measured EEGs in terms of neural models of brain function.
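The linear-proxy idea can be sketched as a least-squares fit of coefficients on time-shifted excitatory and inhibitory currents. Everything below (signals, shifts, coefficients) is a synthetic stand-in for the paper's simulations, and the time shifts are assumed known here rather than optimized:

```python
import numpy as np

rng = np.random.default_rng(2)

# Schematic of the linear proxy: EEG(t) ≈ alpha * AMPA(t - dA) + beta * GABA(t - dB).
# Synthetic currents (filtered noise) and a synthetic "ground-truth" EEG
# stand in for the LIF network and the multi-compartmental simulation.
n = 2000
ampa = np.convolve(rng.normal(size=n), np.exp(-np.arange(50) / 10.0))[:n]
gaba = np.convolve(rng.normal(size=n), np.exp(-np.arange(50) / 20.0))[:n]
dA, dB = 5, 8                        # time shifts in samples (assumed known)
eeg = (1.5 * np.roll(ampa, dA) - 2.0 * np.roll(gaba, dB)
       + 0.05 * rng.normal(size=n))  # ground truth plus measurement noise

# Least-squares fit of the two proxy coefficients on the shifted currents.
X = np.column_stack([np.roll(ampa, dA), np.roll(gaba, dB)])
coef, *_ = np.linalg.lstsq(X, eeg, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((eeg - pred) ** 2) / np.sum((eeg - eeg.mean()) ** 2)
print(f"alpha={coef[0]:.2f}, beta={coef[1]:.2f}, variance explained={r2:.3f}")
```

In the paper, the shifts and coefficients are jointly optimized against the morphologically detailed ground truth; this sketch only shows the final fitting step once the shifts are fixed.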


The ICL Distributed Array Processor and the Meiko Computing Surface have been successfully applied to a wide range of scientific problems. I give an overview of selected applications from experimental data analysis, molecular dynamics and Monte Carlo simulation, cellular automata for fluid flow, neural network models, protein sequencing and NMR imaging. I expose the problems and advantages of implementations on the two architectures, and discuss the general conclusions which one can draw from experience so far.

