Research on optimization method of deep neural network

Author(s): Pengfei Liu, Huaici Zhao, Feidao Cao

Geofluids, 2021, Vol. 2021, pp. 1-15
Author(s): Lihui Tang, Junjian Li, Wenming Lu, Peiqing Lian, Hao Wang, ...

Well control optimization is a key technology for adjusting the flow direction of waterflooding and improving oilfield development. Existing well control optimization methods rely mainly on optimization algorithms coupled with numerical simulators. For larger models, longer optimization periods, or reservoirs with many wells to optimize, the number of optimization variables grows large, causing convergence difficulties and high optimization cost. The practical effect is further limited by long computation times, few candidate schemes for comparison, and a fixed control frequency. This paper proposes a well control optimization method based on a multi-input deep neural network. The method takes the production history of the reservoir as the main input and the saturation field as an auxiliary input, and trains a multi-input deep neural network to form a production dynamic prediction model that replaces the conventional numerical simulator. Based on this prediction model, a cycle of scheme generation, production prediction, comparison, and optimization is carried out to find the best production plan for the reservoir. The calculation results of the examples show that (1) compared with a single-input production dynamic prediction model, the multi-input model achieves better prediction accuracy, with results close to those of a conventional numerical simulator; and (2) the well control optimization method based on the multi-input deep neural network optimizes quickly, compares many candidate schemes, and delivers a good optimization effect.
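To make the architecture concrete, the sketch below outlines a hypothetical multi-input surrogate of the kind described above, built with the Keras functional API: a recurrent branch for the production history (main input), a convolutional branch for the saturation field (auxiliary input), and a fused head that predicts the production response. All input shapes, layer sizes, and names are illustrative assumptions, not values from the paper.

```python
# Hypothetical multi-input surrogate model sketch (Keras functional API).
# Input shapes, layer sizes, and names are assumptions for illustration.
from tensorflow.keras import layers, Model

# Main input: production history as a time series (timesteps x features).
history_in = layers.Input(shape=(120, 8), name="production_history")
h = layers.LSTM(64)(history_in)

# Auxiliary input: a saturation field snapshot on a 2D grid.
saturation_in = layers.Input(shape=(64, 64, 1), name="saturation_field")
s = layers.Conv2D(16, 3, activation="relu")(saturation_in)
s = layers.MaxPooling2D()(s)
s = layers.Conv2D(32, 3, activation="relu")(s)
s = layers.GlobalAveragePooling2D()(s)

# Fuse the two branches and predict the production response.
x = layers.Concatenate()([h, s])
x = layers.Dense(64, activation="relu")(x)
output = layers.Dense(8, name="predicted_rates")(x)

model = Model(inputs=[history_in, saturation_in], outputs=output)
model.compile(optimizer="adam", loss="mse")
```

Once trained on simulator runs, a surrogate of this kind can score candidate well control schemes in the generation-prediction-comparison loop far faster than re-running the numerical simulator for each scheme.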


2021, Vol. 2021, pp. 1-10
Author(s): Jiahui Zhang, Xinhao Yang, Ke Zhang, Chenrui Wen

An adaptive clamping method (SGD-MS) based on the radius of curvature is designed to alleviate the oscillation problem around local optima in deep neural networks; it combines the radius of curvature of the objective function with the gradient descent step of the optimizer. The radius of curvature serves as a threshold that adaptively selects between the momentum term and the moving average of future gradients. On this basis, an accelerated version (SGD-MA) is also proposed, which further improves convergence speed through aggregated momentum. Experimental results on several datasets show that the proposed methods effectively alleviate the oscillation problem around local optima and greatly improve convergence speed and accuracy. The paper thus provides a novel parameter updating algorithm for deep neural networks.
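As a rough illustration of the switching idea described above, the sketch below shows one hypothetical SGD-MS-style update step in NumPy. The curvature proxy (radius_est), the threshold value, and the parameter names are assumptions made for illustration, not the formulas from the paper.

```python
# Hypothetical single update step in the spirit of SGD-MS. The curvature
# proxy, threshold, and switching rule below are illustrative assumptions.
import numpy as np

def sgd_ms_step(w, grad, prev_grad, velocity, grad_avg,
                lr=0.01, beta=0.9, radius_threshold=1.0):
    """Choose between the momentum term and a gradient moving average
    based on a rough estimate of the objective's radius of curvature."""
    # Crude curvature proxy: a rapidly changing gradient suggests a sharp
    # bend (small radius of curvature) and a risk of oscillation.
    radius_est = np.linalg.norm(grad) / (np.linalg.norm(grad - prev_grad) + 1e-12)

    velocity = beta * velocity + grad                 # classical momentum term
    grad_avg = beta * grad_avg + (1.0 - beta) * grad  # moving average of gradients

    # Adaptive clamping: near sharp curvature, damp oscillation with the
    # averaged gradient; otherwise keep the momentum step.
    step = grad_avg if radius_est < radius_threshold else velocity
    return w - lr * step, velocity, grad_avg
```

The accelerated variant (SGD-MA) would, per the abstract, add aggregated momentum on top of this switching rule, which in the aggregated-momentum literature typically means maintaining several velocity buffers with different damping coefficients and averaging their contributions.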


Author(s): David T. Wang, Brady Williamson, Thomas Eluvathingal, Bruce Mahoney, Jennifer Scheler
