Prediction Model of Hot Metal Silicon Content Based on Improved GA-BPNN

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Zeqian Cui ◽  
Yang Han ◽  
Chaomeng Lu ◽  
Yafeng Wu ◽  
Mansheng Chu

The inconsistent detection periods of blast furnace data and the large time delays of key parameters make prediction of the hot metal silicon content highly challenging. To address the mismatch between the detection period of the hot metal silicon content and the time series of multiple control parameters, a cubic spline interpolation fitting model was used to integrate data across multiple detection periods. The large time delay of the blast furnace ironmaking process was analyzed, and Spearman correlation analysis was combined with the weighted moving average method to optimize the data set for silicon content prediction. To address the low prediction accuracy of an ordinary neural network model, a genetic algorithm was used to optimize the parameters of the BP neural network, improving the model's convergence speed and enabling global optimization. Combined with autocorrelation analysis of the hot metal silicon content, a modified prediction model based on error analysis was proposed to further improve accuracy. The model comprehensively accounts for inconsistent data detection, large time delays, and inaccurate prediction results. Its mean absolute error is 0.05009, making it suitable for use in actual production.
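The weighted moving average step mentioned above can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the window weights are hypothetical:

```python
def weighted_moving_average(series, weights):
    """Smooth a detection series with a weighted moving average.

    `weights` apply oldest-to-newest across a sliding window of
    len(weights) samples; output starts once a full window is available.
    """
    n = len(weights)
    total = sum(weights)
    out = []
    for i in range(n - 1, len(series)):
        window = series[i - n + 1 : i + 1]
        out.append(sum(w * x for w, x in zip(weights, window)) / total)
    return out

# Example: weight the newest sample twice as heavily as the oldest.
smoothed = weighted_moving_average([1.0, 2.0, 3.0, 4.0], [1.0, 2.0])
```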

Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1514
Author(s):  
Seung-Ho Lim ◽  
WoonSik William Suh ◽  
Jin-Young Kim ◽  
Sang-Young Cho

The optimization of hardware processors and systems for performing deep learning operations such as convolutional neural networks (CNNs) on resource-limited embedded devices is an active research area. To run an optimized deep neural network model with the limited computational units and memory of an embedded device, it is necessary to quickly apply various configurations of hardware modules to various deep neural network models and find the optimal combination. An Electronic System Level (ESL) simulator based on SystemC is very useful for rapid hardware modeling and verification. In this paper, we designed and implemented a Deep Learning Accelerator (DLA) that performs deep neural network (DNN) operations, based on a RISC-V Virtual Platform implemented in SystemC, to enable rapid and diverse analysis of deep learning operations on embedded devices built around the recently emerging RISC-V processor. The developed RISC-V-based DLA prototype can analyze hardware requirements for a given CNN data set through configuration of the CNN DLA architecture; it can run RISC-V-compiled software on the platform and execute a real neural network model such as Darknet. We ran the Darknet CNN model on the developed DLA prototype and confirmed that computational overhead and inference errors can be analyzed by examining the DLA architecture across various data sets.
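The core workload such a DLA tile executes is the sliding-window multiply-accumulate of a convolution layer. A minimal reference sketch (not the accelerator's actual microarchitecture):

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1): the
    multiply-accumulate loop nest a CNN accelerator maps to hardware."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for r in range(oh):
        for c in range(ow):
            acc = 0.0  # one MAC accumulator per output element
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            out[r][c] = acc
    return out
```

Counting the iterations of the inner loop nest (oh x ow x kh x kw MACs) is one way to estimate the computational overhead the abstract refers to.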


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Bo Liu ◽  
Qilin Wu ◽  
Yiwen Zhang ◽  
Qian Cao

Pruning is a method of compressing the size of a neural network model, which affects the accuracy and computing time of the model's predictions. In this paper, we put forward the hypothesis that the pruning proportion is positively correlated with the compression scale of the model but not with the prediction accuracy or calculation time. To test the hypothesis, a group of experiments was designed, using MNIST as the data set to train a neural network model based on TensorFlow. Pruning experiments were then carried out on this model to investigate the relationship between pruning proportion and compression effect. For comparison, six different pruning proportions were set, and the experimental results confirm the hypothesis.


Author(s):  
A. Saravanan ◽  
J. Jerald ◽  
A. Delphin Carolina Rani

Abstract: The objective of this paper is to develop a new method to model the manufacturing cost–tolerance relationship and to optimize tolerance values together with their manufacturing cost. A cost–tolerance relation exhibits a complex nonlinear correlation. The properties of a neural network make it possible to model this complex correlation, and a genetic algorithm (GA) is integrated with the best neural network model to optimize the tolerance values. The proposed method used three types of neural network models (multilayer perceptron, backpropagation network, and radial basis function), developed separately for prismatic and rotational parts. For the construction of the network models, part size and tolerance values were used as input neurons, and the reference manufacturing cost was assigned as the output neuron. A qualitative production data set was gathered in a workshop and partitioned into three files for training, testing, and validation. The architecture of the network model was identified based on the best regression coefficient and root-mean-square error value. The best network model was integrated into the GA, and the role of the genetic operators was also studied. Finally, two case studies from the literature were used to validate the proposed method. The new neural-network-based methodology enables design and process planning engineers to make intelligent decisions irrespective of their experience.
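Integrating a GA with a trained cost model amounts to minimizing a black-box cost function over bounded tolerance values. A minimal real-coded GA sketch (the operators, rates, and population size here are hypothetical choices, not the paper's settings; in the paper, `cost` would be the trained network model):

```python
import random

def genetic_minimize(cost, bounds, pop_size=30, generations=60, mut_rate=0.2, seed=0):
    """Minimal real-coded GA: keep the better half as elites, create
    children by blend (average) crossover, then apply bounded Gaussian
    mutation. `cost` maps a tolerance vector to manufacturing cost."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=cost)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # blend crossover
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mut_rate:              # Gaussian mutation
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

# Toy cost surface with a known minimum at tolerance = 1.0.
best = genetic_minimize(lambda t: (t[0] - 1.0) ** 2, [(-5.0, 5.0)])
```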


2021 ◽  
Vol 252 ◽  
pp. 02025
Author(s):  
Wang Gao-peng ◽  
Yan Zhen-yu ◽  
Zhai Hai-peng ◽  
Zheng Rui-ji

The stability of blast furnace temperature is an important condition for ensuring the efficient production of hot metal. Accurate prediction of the silicon content in hot metal is of great significance for controlling blast furnace temperature in iron and steel plants. At present, the accuracy of most silicon prediction models can be guaranteed only when the furnace condition is stable. However, because many factors affect the silicon content in blast furnace hot metal, and the data contain large noise, long delays, and large fluctuations, previous prediction results are of limited guiding significance for actual production. In this paper, guided by actual plant conditions, a convolutional neural network is used to extract furnace-condition features, which are then combined with an attention mechanism and an IndRNN model to produce the prediction, so that the model can better adapt to a fluctuating data set. Experimental results show that the prediction error of this model is lower than that of other models, providing a new solution for research on the silicon content in blast furnace hot metal.
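The IndRNN component mentioned above differs from a standard RNN in that each hidden unit carries an independent scalar recurrent weight rather than a full recurrent matrix. A single-step sketch (weights here are illustrative, not learned values):

```python
def indrnn_step(x, h_prev, W, u, b):
    """One IndRNN step: h_t[i] = relu(W[i] . x + u[i] * h_prev[i] + b[i]).

    Each unit i recurs only on its own previous state via the scalar
    u[i], which is what makes the recurrence 'independent'.
    """
    out = []
    for i in range(len(u)):
        pre = sum(W[i][j] * x[j] for j in range(len(x))) + u[i] * h_prev[i] + b[i]
        out.append(max(0.0, pre))  # ReLU activation, as used in IndRNN
    return out

# One unit, one input feature: 0.5*1.0 + 0.25*2.0 = 1.0
h = indrnn_step([1.0], [2.0], [[0.5]], [0.25], [0.0])
```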


2009 ◽  
Vol 2009 ◽  
pp. 1-7
Author(s):  
S. N. Naikwad ◽  
S. V. Dudul

A focused time-lagged recurrent neural network (FTLR NN) with a gamma memory filter is designed to learn the subtle, complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear behavior in which the reaction is exothermic. The literature review shows that process control of a CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. Because the CSTR process includes temporal relationships in its input–output mappings, a time-lagged recurrent neural network is particularly suited for the identification task. The standard backpropagation algorithm with a momentum term is used in this model. Parameters such as the number of processing elements, number of hidden layers, training and testing percentages, learning rule, and transfer functions in the hidden and output layers are investigated on the basis of performance measures such as MSE, NMSE, and the correlation coefficient on the testing data set. Finally, the effects of different norms are tested along with variation in the gamma memory filter. It is demonstrated that the dynamic NN model has remarkable system identification capability for the problems considered in this paper. Thus, an FTLR NN with a gamma memory filter can be used to learn the underlying highly nonlinear dynamics of the system, which is the major contribution of this paper.
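The gamma memory filter referenced above is a cascade of leaky integrators whose depth/resolution trade-off is set by a single parameter mu. A minimal sketch of the standard recursion g_k(t) = (1 - mu) g_k(t-1) + mu g_{k-1}(t-1), with tap 0 being the raw input:

```python
def gamma_memory(x_seq, order, mu):
    """Run a gamma memory filter bank over an input sequence.

    Returns the tap values [g_0..g_order] at each time step; higher taps
    hold progressively older, more smoothed versions of the input.
    """
    taps = [0.0] * (order + 1)
    history = []
    for x in x_seq:
        new = [x]  # tap 0 is the current input sample
        for k in range(1, order + 1):
            new.append((1 - mu) * taps[k] + mu * taps[k - 1])
        taps = new
        history.append(list(taps))
    return history

# An impulse propagates into tap 1 on the next step, scaled by mu.
trace = gamma_memory([1.0, 0.0], order=1, mu=0.5)
```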


2020 ◽  
Vol 7 (1) ◽  
pp. 29-36
Author(s):  
Ngô Quốc Dũng ◽  
Lê Văn Hoàng ◽  
Nguyễn Huy Trung

Abstract: In this paper, the authors propose a method for detecting IoT botnet malware based on PSI (Printable String Information) graphs using a Convolutional Neural Network (CNN). By analyzing the characteristics of botnets on IoT devices, the proposed method constructs a graph representing the relations between PSIs, which serves as input to the CNN classification model. Experimental results on a data set of 10,033 ELF files, comprising 4,002 IoT botnet malware samples and 6,031 benign files, show accuracy and F1-score of up to 98.1%.
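A graph over printable strings can be represented as an adjacency matrix for downstream classification. The sketch below is a hypothetical simplification that links strings appearing in the same group (e.g. the same binary region); the paper's actual PSI-graph construction may differ:

```python
def build_psi_graph(psi_groups):
    """Build an undirected co-occurrence graph over printable strings.

    Strings that appear together in a group are connected; returns the
    sorted node list and a 0/1 adjacency matrix.
    """
    nodes = sorted({s for group in psi_groups for s in group})
    index = {s: i for i, s in enumerate(nodes)}
    n = len(nodes)
    adj = [[0] * n for _ in range(n)]
    for group in psi_groups:
        for i, a in enumerate(group):
            for b in group[i + 1:]:
                adj[index[a]][index[b]] = adj[index[b]][index[a]] = 1
    return nodes, adj

# "b" co-occurs with both "a" and "c", but "a" and "c" never meet.
nodes, adj = build_psi_graph([["a", "b"], ["b", "c"]])
```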

