Optimization of Reduced Kinetic Models for Reactive Flow Simulations

Author(s):  
P. Gokulakrishnan ◽  
R. Joklik ◽  
D. Viehe ◽  
A. Trettel ◽  
E. Gonzalez-Juez ◽  
...  

A robust optimization scheme for reaction rate parameter estimation, known as rkmGen, has been developed for the generation of reduced kinetic models of practical interest for reactive flow simulations. It employs a stochastic optimization algorithm known as simulated annealing, is implemented in C++, and is coupled with Cantera, a chemical kinetics software package, to automate the reduced kinetic mechanism generation process. Reaction rate parameters in reduced-order models can be estimated by optimizing against target data generated from a detailed model or by experiment. Target data may be of several different kinds: ignition delay time, blow-out time, laminar flame speed, species time-history profiles, and species reactivity profiles. The software allows for simultaneous optimization against multiple target data sets over a wide range of temperatures, pressures, and equivalence ratios. In this paper, a detailed description of the optimization strategy used for reaction parameter estimation is provided. To illustrate the performance of the software for reduced kinetic mechanism development, a number of test cases for various fuels were used: one-step, three-step, and four-step global reduced kinetic models for ethylene, Jet-A, and methane, respectively, and a fifty-step semi-global reduced kinetic model for methane. The fifty-step semi-global reduced kinetic model was implemented in the STAR-CCM+ commercial CFD code to simulate Sandia Flame D using laminar flamelet libraries, and the results were compared with experimental data. Simulations were also performed with the GRI 3.0 mechanism for comparison.
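The core idea of annealing-based rate-parameter fitting can be sketched in a few lines. The following toy is not the rkmGen implementation: it assumes an invented single-step Arrhenius-like ignition-delay law tau = A·exp(Ta/T), invented "target" delays, and a simple geometric cooling schedule, purely to illustrate how simulated annealing searches for parameters that match target data.

```python
import math
import random

random.seed(0)

# Hypothetical target data: ignition delay times (s) that a "detailed model"
# would produce, following tau = A * exp(Ta / T). Values are illustrative only.
A_true, Ta_true = 1e-9, 15000.0
temps = [1200.0, 1300.0, 1400.0, 1500.0]
target = [A_true * math.exp(Ta_true / T) for T in temps]

def objective(params):
    """Sum of squared log-errors between model and target ignition delays."""
    A, Ta = params
    return sum((math.log(A * math.exp(Ta / T)) - math.log(tau_t)) ** 2
               for T, tau_t in zip(temps, target))

# Simulated annealing: accept a worse candidate with probability
# exp(-delta / anneal_T), and cool the annealing temperature geometrically.
params = [1e-8, 12000.0]            # deliberately poor initial guess
best = list(params)
f_cur = f_best = objective(params)
anneal_T = 1.0
for step in range(20000):
    cand = [params[0] * math.exp(random.gauss(0, 0.05)),   # perturb A (log scale)
            params[1] * (1 + random.gauss(0, 0.02))]       # perturb Ta
    f_cand = objective(cand)
    if f_cand < f_cur or random.random() < math.exp(-(f_cand - f_cur) / anneal_T):
        params, f_cur = cand, f_cand
        if f_cur < f_best:
            best, f_best = list(params), f_cur
    anneal_T *= 0.9995              # geometric cooling schedule

print(best, f_best)
```

Because the acceptance rule occasionally takes uphill moves while the temperature is high, the search can escape local minima of the error surface, which is the reason stochastic methods like this are favored for strongly nonlinear rate-parameter fits.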


Author(s):  
Santosh Upadhya ◽  
David F. Ollis

Trichloroethylene conversion in gas-solid photocatalysis ranks as one of the fastest reactions for air purification and treatment. The development of a complete kinetic model for this conversion is important in that it represents, under some conditions, a likely upper limit to conversion photoefficiency. Both reactant concentration and light intensity are known to influence reaction rate, but only two literature reports have explored a sufficiently wide range of concentrations to find that the intensity dependence is coupled to, and not independent of, the reactant concentration. We develop here a simple model which rationalizes this experimental behavior based upon the familiar competition between species for photo-produced holes (h


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 387
Author(s):  
Yiting Liang ◽  
Yuanhua Zhang ◽  
Yonggang Li

A mechanistic kinetic model of cobalt–hydrogen electrochemical competition in the cobalt removal process of zinc hydrometallurgy was proposed. In addition, to overcome the parameter estimation difficulties arising from the model nonlinearities and the lack of information on the possible value ranges of the parameters to be estimated, a constrained guided parameter estimation scheme was derived based on the model equations and experimental data. The proposed model and parameter estimation scheme have three advantages: (i) the model reflects, for the first time, the mechanism of the electrochemical competition between cobalt and hydrogen ions during cobalt removal in zinc hydrometallurgy; (ii) the proposed constrained parameter estimation scheme does not depend on information about the possible value ranges of the parameters to be estimated; (iii) the constraint conditions provided in that scheme directly link experimental phenomenon metrics to the model parameters, thereby providing model users with deeper insight into the parameters. Numerical experiments showed that the proposed constrained parameter estimation algorithm significantly improved estimation efficiency. Meanwhile, the proposed cobalt–hydrogen electrochemical competition model allowed for accurate simulation of the impact of hydrogen ions on the cobalt removal rate, as well as of the trend of hydrogen ion concentration, which should be helpful for the actual cobalt removal process in zinc hydrometallurgy.
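The idea of deriving parameter constraints from experimental phenomenon metrics, rather than from assumed prior ranges, can be illustrated with a toy example. This is not the paper's electrochemical model: it assumes invented first-order cobalt-removal kinetics c(t) = c0·exp(-k·t) and an invented observed metric (the time window in which half the cobalt is removed), which by itself bounds the rate constant k.

```python
import math

# Hypothetical data: cobalt concentration (mg/L) over time (min), roughly
# following first-order removal c(t) = c0 * exp(-k * t). Illustrative only.
c0 = 40.0
times = [0, 10, 20, 30, 60]
conc  = [40.0, 24.3, 14.8, 9.0, 2.0]

# Phenomenon metric: half the cobalt is observed to be removed somewhere
# between t = 10 and t = 20 min. For first-order kinetics the half-life is
# ln(2)/k, so this single observation bounds k without any prior range:
k_lo = math.log(2) / 20.0   # half-life <= 20 min  ->  k >= 0.0347 /min
k_hi = math.log(2) / 10.0   # half-life >= 10 min  ->  k <= 0.0693 /min

def sse(k):
    """Sum of squared residuals between model and measured concentrations."""
    return sum((c0 * math.exp(-k * t) - c) ** 2 for t, c in zip(times, conc))

# Constrained estimation: search only the metric-derived interval [k_lo, k_hi].
ks = [k_lo + (k_hi - k_lo) * i / 1000 for i in range(1001)]
k_best = min(ks, key=sse)
print(k_best, sse(k_best))
```

Restricting the search to the metric-derived interval is what makes the estimate "guided": the bounds come directly from an observable, so a user can interpret them physically instead of guessing a range.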


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2144
Author(s):  
Stefan Reitmann ◽  
Lorenzo Neumann ◽  
Bernhard Jung

Common Machine-Learning (ML) approaches for scene classification require a large amount of training data. However, for classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.


2018 ◽  
Vol 18 (3-4) ◽  
pp. 470-483 ◽  
Author(s):  
GREGORY J. DUCK ◽  
JOXAN JAFFAR ◽  
ROLAND H. C. YAP

Malformed data-structures can lead to runtime errors such as arbitrary memory access or corruption. Despite this, reasoning over data-structure properties for low-level heap manipulating programs remains challenging. In this paper we present a constraint-based program analysis that checks data-structure integrity, w.r.t. given target data-structure properties, as the heap is manipulated by the program. Our approach is to automatically generate a solver for properties using the type definitions from the target program. The generated solver is implemented using a Constraint Handling Rules (CHR) extension of built-in heap, integer and equality solvers. A key property of our program analysis is that the target data-structure properties are shape neutral, i.e., the analysis does not check for properties relating to a given data-structure graph shape, such as doubly-linked-lists versus trees. Nevertheless, the analysis can detect errors in a wide range of data-structure manipulating programs, including those that use lists, trees, DAGs, graphs, etc. We present an implementation that uses the Satisfiability Modulo Constraint Handling Rules (SMCHR) system. Experimental results show that our approach works well for real-world C programs.


1989 ◽  
Vol 67 (5) ◽  
pp. 857-861 ◽  
Author(s):  
Shin-Ichi Miyamoto ◽  
Tetsuo Sakka ◽  
Matae Iwasaki

The rate of hydrogen isotope exchange between D2 and H2O catalyzed by a platinum plate is studied. The exchange reaction is described with a kinetic model that is a modification of the one for the exchange reaction catalyzed by an alumina-supported platinum catalyst. To compare the experimental results with this model, the relative number of sites for hydrogen adsorption was estimated from the initial rate of hydrogen isotope exchange between H2 and D2 on the same surface. The results show that the kinetic model is applicable to the plate catalyst if the number of sites for hydrogen adsorption, which is very sensitive to the surface state of the catalyst, is estimated not from the macroscopic surface area but from our scheme. Keywords: hydrogen isotope exchange reaction, platinum plate as catalyst.


The Analyst ◽  
2015 ◽  
Vol 140 (9) ◽  
pp. 3121-3135
Author(s):  
Fereshteh Emami ◽  
Marcel Maeder ◽  
Hamid Abdollahi

Schematic of the intertwined equilibrium–kinetic model at time = 0, 1, 2, …, T, when both the equilibrium and kinetic models are solved explicitly.


Author(s):  
Lionel Roques ◽  
Mickaël D. Chekroun ◽  
Michel Cristofol ◽  
Samuel Soubeyrand ◽  
Michael Ghil

We study parameter estimation for one-dimensional energy balance models with memory (EBMMs) given localized and noisy temperature measurements. Our results apply to a wide range of nonlinear, parabolic partial differential equations with integral memory terms. First, we show that a space-dependent parameter can be determined uniquely everywhere in the PDE's domain of definition D, using only temperature information in a small subdomain E ⊂ D. This result is valid only when the data correspond to exact measurements of the temperature. We propose a method for estimating a model parameter of the EBMM using more realistic, error-contaminated temperature data derived, for example, from ice cores or marine-sediment cores. Our approach is based on a so-called mechanistic-statistical model that combines a deterministic EBMM with a statistical model of the observation process. Estimating a parameter in this setting is especially challenging, because the observation process induces a strong loss of information. Aside from the noise contained in past temperature measurements, an additional error is induced by the age-dating method, whose accuracy tends to decrease with a sample's remoteness in time. Using a Bayesian approach, we show that obtaining an accurate parameter estimate is still possible in certain cases.
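The mechanistic-statistical idea — a deterministic model wrapped in an observation model whose error grows with sample age — can be sketched with a toy scalar example. This is not the paper's EBMM: it assumes an invented "temperature" record T(t) = exp(-a·t), an invented age-dependent noise law, and a flat prior evaluated on a grid, purely to show how a Bayesian estimate can still recover the parameter despite the age-degraded data.

```python
import math
import random

random.seed(1)

# Toy deterministic model: T(t) = exp(-a * t) with true a = 0.5.
# Observation model: Gaussian noise whose standard deviation grows with the
# sample's age, mimicking the loss of dating accuracy for older samples.
a_true = 0.5
ages = [0.5 * i for i in range(1, 11)]
obs = [math.exp(-a_true * t) + random.gauss(0, 0.02 * (1 + t)) for t in ages]

def log_likelihood(a):
    """Gaussian log-likelihood with age-dependent observation error."""
    ll = 0.0
    for t, y in zip(ages, obs):
        sigma = 0.02 * (1 + t)          # older samples are noisier
        ll += -0.5 * ((y - math.exp(-a * t)) / sigma) ** 2 - math.log(sigma)
    return ll

# Bayesian estimate with a flat prior on a grid: the posterior mode.
grid = [0.01 * i for i in range(1, 101)]
post_mode = max(grid, key=log_likelihood)
print(post_mode)
```

Because each residual is weighted by its own sigma, the accurate young samples dominate the posterior while the poorly dated old samples contribute little, which is precisely how the statistical layer compensates for the information loss in the observation process.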

