Prediction of Methanol Production in a Carbon Dioxide Hydrogenation Plant Using Neural Networks

Energies ◽  
2021 ◽  
Vol 14 (13) ◽  
pp. 3965
Author(s):  
Daniel Chuquin-Vasco ◽  
Francis Parra ◽  
Nelson Chuquin-Vasco ◽  
Juan Chuquin-Vasco ◽  
Vanesa Lo-Iacono-Ferreira

The objective of this research was to design an artificial neural network (ANN) to predict the methanol flow at the outlet of a carbon dioxide hydrogenation plant. For the development of the ANN, a database was generated in the open-source simulation software "DWSIM" from the validation of a process described in the literature. The sample consists of 133 data pairs with four inputs (reactor pressure and temperature, and the mass flows of carbon dioxide and hydrogen) and one output (flow of methanol). The ANN was designed with 12 neurons in the hidden layer and was trained with the Levenberg–Marquardt algorithm. In the training, validation and testing phases, a global root mean square error (RMSE) of 0.0085 and a global regression coefficient R of 0.9442 were obtained. The network was validated through an analysis of variance (ANOVA), where the p-value in all cases was greater than 0.05, indicating no significant differences between the observations and the values predicted by the ANN. The designed ANN can therefore be used to predict the methanol flow at the exit of a hydrogenation plant and subsequently for optimization of the system.
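The 4-12-1 architecture and Levenberg–Marquardt training described in the abstract can be sketched in a few lines of Python. The data below are synthetic stand-ins (the paper's 133 DWSIM samples are not reproduced), and `scipy.optimize.least_squares` with `method="lm"` stands in for the Levenberg–Marquardt trainer; all variable names and the synthetic target are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic stand-in for the 133 (pressure, temperature, CO2 flow, H2 flow)
# -> methanol-flow pairs used in the paper (all values normalized to [0, 1]).
X = rng.uniform(0.0, 1.0, size=(133, 4))
y = np.tanh(X @ np.array([0.5, -0.3, 0.8, 0.2])) + 0.01 * rng.normal(size=133)

N_IN, N_HID = 4, 12  # 4 inputs, 12 hidden neurons, 1 output

def unpack(theta):
    """Slice the flat parameter vector into the 4-12-1 network's weights."""
    i = 0
    W1 = theta[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_HID]; i += N_HID
    b2 = theta[i]
    return W1, b1, W2, b2

def forward(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    return np.tanh(X @ W1 + b1) @ W2 + b2  # tanh hidden layer, linear output

def residuals(theta):
    return forward(theta, X) - y

n_params = N_IN * N_HID + N_HID + N_HID + 1  # 73 trainable parameters
theta0 = 0.1 * rng.normal(size=n_params)

# method="lm" is SciPy's Levenberg–Marquardt; it needs at least as many
# residuals (133 samples) as parameters (73), which holds here.
sol = least_squares(residuals, theta0, method="lm")

pred = forward(sol.x, X)
rmse = np.sqrt(np.mean((pred - y) ** 2))
r = np.corrcoef(pred, y)[0, 1]
```

The paper reports RMSE and R on separate training/validation/test splits; this sketch fits and evaluates on one synthetic set purely to show the mechanics.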

2020 ◽  
Vol 60 (11) ◽  
pp. 1244-1250
Author(s):  
M. V. Magomedova ◽  
A. V. Starozhitskaya ◽  
M. I. Afokin ◽  
I. V. Perov ◽  
M. A. Kipnis ◽  
...  

ACS Catalysis ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 2121-2133
Author(s):  
Chao Zhang ◽  
Chenxi Cao ◽  
Yulong Zhang ◽  
Xianglin Liu ◽  
Jing Xu ◽  
...  

2021 ◽  
Author(s):  
Matthew Quesne ◽  
C. Richard A. Catlow ◽  
Nora Henriette De Leeuw

We present several in silico insights into the MAX-phase of early transition metal silicon carbides and explore how these affect carbon dioxide hydrogenation. Periodic density functional methodology is applied to...


2011 ◽  
Vol 14 (4) ◽  
pp. E9 ◽  
Author(s):  
Paul K. Addo ◽  
Robert L. Arechederra ◽  
Abdul Waheed ◽  
James D. Shoemaker ◽  
William S. Sly ◽  
...  

2016 ◽  
Vol 18 (9) ◽  
pp. 6763-6772 ◽  
Author(s):  
Manuel Corva ◽  
Zhijing Feng ◽  
Carlo Dri ◽  
Federico Salvador ◽  
Paolo Bertoch ◽  
...  

Stable hydrocarbon surface species in the carbon dioxide hydrogenation reaction were identified on Ir(111) under near-ambient pressure conditions.


2017 ◽  
Vol 21 ◽  
pp. 132-138 ◽  
Author(s):  
D. Bellotti ◽  
M. Rivarolo ◽  
L. Magistri ◽  
A.F. Massardo

2019 ◽  
Vol 8 (4) ◽  
pp. 2349-2353

Backpropagation, as a learning method in artificial neural networks, is widely used to solve problems in various fields, including education, where it is applied to predict the validity of test questions, student achievement, and outcomes of new-student admission systems. The performance of a training algorithm can be judged by the mean squared error (MSE) produced by the network: the smaller the error, the more optimal the algorithm. Previous studies indicated that the most optimal training algorithm, based on the smallest error, was Levenberg–Marquardt, with an average MSE = 0.001 in the 5-10-1 model at a significance level of α = 5%. In this study, we test the Levenberg–Marquardt algorithm with 8, 12, 14, 16, and 19 neurons in the hidden layer, at learning rates (LR) of 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1. The study uses a mixed method: development combined with quantitative and qualitative testing using ANOVA and correlation analysis. The experiments use random data with ten neurons in the input layer and one neuron in the output layer. Based on the ANOVA analysis of the five variations in the number of hidden-layer neurons, the results show that, with α = 5% as in previous research, the Levenberg–Marquardt algorithm produced the smallest MSE of 0.00019584038 ± 0.000239300998, reached with 16 hidden neurons at LR = 0.8.
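The sweep over hidden-layer sizes and learning rates described above can be illustrated with a plain backpropagation sketch in NumPy. Note the study itself used Levenberg–Marquardt; vanilla gradient descent is substituted here only to keep the example self-contained, and the random data, target function, epoch count, and reduced learning-rate grid are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random data as in the study's setup: 10 input neurons, 1 output neuron.
# The target (mean of the centered inputs) is an arbitrary smooth function.
X = rng.uniform(-0.5, 0.5, size=(100, 10))
y = X.mean(axis=1, keepdims=True)

def train_mse(n_hidden, lr, epochs=3000):
    """Train a 10-n_hidden-1 tanh network by full-batch backprop; return MSE."""
    W1 = 0.1 * rng.normal(size=(10, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = 0.1 * rng.normal(size=(n_hidden, 1));  b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden activations
        err = (H @ W2 + b2) - y             # output error
        # Backpropagate the MSE gradient through both layers
        gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1 - H ** 2)    # tanh' = 1 - tanh^2
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    H = np.tanh(X @ W1 + b1)
    return float(np.mean((H @ W2 + b2 - y) ** 2))

# Sweep the study's five hidden-layer sizes over two of its learning rates
results = {(h, lr): train_mse(h, lr)
           for h in (8, 12, 14, 16, 19)
           for lr in (0.1, 0.8)}
```

Comparing the resulting MSE values across `(hidden size, LR)` pairs mirrors the study's grid search, which the authors then subjected to ANOVA and correlation analysis.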


ChemCatChem ◽  
2018 ◽  
Vol 10 (11) ◽  
pp. 2324-2324
Author(s):  
Francisco J. Caparrós ◽  
Lluís Soler ◽  
Marta D. Rossell ◽  
Inmaculada Angurell ◽  
Laurent Piccolo ◽  
...  

2020 ◽  
Vol 132 (1) ◽  
Author(s):  
Wensheng Ning ◽  
Bei Li ◽  
Hui Dai ◽  
Shiye Hu ◽  
Xiazhen Yang ◽  
...  
