Self-Learning Controllers in the Oil and Gas Industry

2021 ◽  
Vol 11 (1) ◽  
pp. 18-35
Author(s):  
Dr. Seaar Al-Dabooni ◽  
Hussen Ali Mohammad Alshehab

Recently, solving optimization-control problems with artificial intelligence has appeared widely in the petroleum field, in both exploration and production. This paper presents a state-of-the-art reinforcement-learning algorithm applied to petroleum optimization-control problems, called direct heuristic dynamic programming (DHDP). DHDP has two interacting artificial neural networks: the critic network (which provides a critique/evaluation signal) and the actor network (which provides a control signal). This paper focuses on a generic on-line learning control system based on Markov decision process principles. Furthermore, DHDP is a model-free learning design that does not require prior knowledge of a dynamic model; therefore, DHDP can be applied directly to any piece of petroleum equipment or device without needing to derive a mathematical model. Moreover, DHDP learns by itself (self-learning), without human intervention, by repeatedly interacting with the equipment and its environment/process. The equipment receives the states of the environment/process via sensors, and the algorithm maximizes the reward by selecting the correct optimal action (control signal). A quadruple-tank system (QTS), whose nonlinear model responds closely to the real system, is taken as a benchmark test problem for three reasons. First, the QTS is widely used, in whole or in part, in most petroleum exploration/production fields; it consists of four tanks and two electrical pumps with two pressure control valves. Second, the QTS is a difficult model to control, with only a limited zone of operating parameters in which it is stable; therefore, if DHDP can control the QTS by itself, it can control other equipment in a fast and optimal manner. Third, the QTS is designed as a multi-input multi-output (MIMO) model for analysis of a real-time nonlinear dynamic system; therefore, the QTS model is similar to most MIMO devices in oil and gas fields. The overall learning control system performance is tested and compared with a proportional-integral-derivative (PID) controller via MATLAB programming. DHDP provides enhanced performance compared with the PID approach, with a 99.2466% improvement.
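As an illustration of the actor-critic structure the abstract describes, here is a minimal DHDP-style sketch in Python/NumPy. It is a sketch under our own assumptions, not the authors' implementation: the linear/tanh approximators stand in for the two neural networks, ToyEnv is a toy plant rather than the real quadruple-tank model, and all sizes, gains, and learning rates are hypothetical.

```python
import numpy as np

# Minimal DHDP-style actor-critic sketch (illustrative only). Linear/tanh
# approximators stand in for the paper's two neural networks; ToyEnv is a
# toy plant, NOT the real nonlinear QTS model.

n_state, n_action = 4, 2             # four tank levels, two pump signals
alpha, lr_c, lr_a = 0.95, 0.01, 0.005

Wc = np.zeros(n_state + n_action)    # critic: J(x, u) ~ Wc @ [x, u]
Wa = np.zeros((n_action, n_state))   # actor:  u(x) = tanh(Wa @ x)

class ToyEnv:
    """Toy stand-in for the QTS simulator (hypothetical dynamics)."""
    def reset(self):
        self.x = np.random.uniform(0.2, 0.8, n_state)
        return self.x.copy()
    def step(self, u):
        pump = np.array([u[0], u[1], u[0], u[1]])
        self.x = np.clip(self.x + 0.05 * pump, 0.0, 1.0)
        reward = -np.sum((self.x - 0.5) ** 2)   # keep levels near setpoint
        return self.x.copy(), reward

env = ToyEnv()
x = env.reset()
J_prev = r_prev = None
for t in range(5000):
    u = np.tanh(Wa @ x)                          # actor: control signal
    x_next, r = env.step(u)
    J = Wc @ np.concatenate([x, u])              # critic: evaluation signal

    # Critic update from the DHDP temporal-difference error
    #   e_c(t) = alpha * J(t) - (J(t-1) - r(t-1)).
    if J_prev is not None:
        e_c = alpha * J - (J_prev - r_prev)
        Wc -= lr_c * e_c * alpha * np.concatenate([x, u])

    # Actor update: push J toward zero through the critic's action gradient.
    dJ_du = Wc[n_state:]
    Wa -= lr_a * J * np.outer(dJ_du * (1.0 - u ** 2), x)

    x, J_prev, r_prev = x_next, J, r
```

The feature of DHDP that the sketch preserves is that both updates use only sensed states and rewards; no dynamic model of the plant appears anywhere in the learning loop.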

2014 ◽  
Vol 910 ◽  
pp. 433-436
Author(s):  
Jiao Yue Liu ◽  
Ji Chang Wang ◽  
Ya Bei Shi ◽  
Ju Qing Yang

In view of the current production technology of asphalt concrete mixing plants, a distributed computer control system based on a PLC and a field bus is developed. A compensation-iteration self-learning algorithm is adopted, the compensation amount of the system's learning algorithm is designed, and the course program control algorithm is presented. As a result, the precision of the system's measurement and control is improved and the transmission error is reduced.
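The abstract names the compensation-iteration self-learning algorithm without detail; the following Python fragment is a minimal sketch, under our own assumptions, of how such batch-weighing compensation loops are commonly structured: after each batch, the drop (free-fall) compensation is corrected by a fraction of the observed weighing error. The gain and the example values are hypothetical.

```python
# Sketch of an iterative self-learning drop compensation for batch
# weighing (an assumption about the algorithm's structure, not the
# paper's exact design).

def update_compensation(comp, target, measured, gain=0.5):
    """Return the new drop compensation after one weighing cycle."""
    error = measured - target          # positive -> overshoot
    return comp + gain * error         # cut off earlier on the next batch

comp = 0.0                             # kg of material still in flight at cutoff
for target, measured in [(100.0, 103.0), (100.0, 101.4), (100.0, 100.6)]:
    comp = update_compensation(comp, target, measured)
    print(f"next cutoff point = target - comp = {target - comp:.2f} kg")
```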


2017 ◽  
pp. 62-67
Author(s):  
V. G. Kuznetsov ◽  
O. A. Makarov

When cementing the casing of oil and gas wells, during injection of the cement slurry into the casing column the slurry can move faster than its linear injection speed. A break in the continuity of the fluid flow occurs, which can lead to poor-quality isolation of the producing formations and shorten the effective life of the well. A technical solution is needed to stabilize the linear velocity of the cement slurry in the column; this task can be solved with an automated control system.
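The paper does not specify a control law; purely as an illustration of the kind of automated loop that could stabilize the slurry velocity, here is a toy PI feedback sketch in Python. The controller gains and the one-line plant model are hypothetical.

```python
# Toy PI loop trimming the pump rate so the measured slurry velocity
# tracks the commanded linear injection speed (illustrative assumption,
# not the paper's design).

def pi_step(setpoint, measured, integral, kp=0.8, ki=0.2, dt=0.1):
    """One PI update; `integral` carries the error integral between calls."""
    error = setpoint - measured
    integral += error * dt
    return kp * error + ki * integral, integral

integral = 0.0
velocity = 1.6                        # m/s, measured slurry velocity
for _ in range(200):
    u, integral = pi_step(1.0, velocity, integral)
    velocity += 0.1 * u - 0.02 * (velocity - 1.0)   # toy plant response
print(f"settled velocity ~ {velocity:.2f} m/s")
```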


2011 ◽  
Vol 38 (7) ◽  
pp. 642-651
Author(s):  
Wen-Qi Wu ◽  
Xiao-Bin ZHENG ◽  
Yong-Chu LIU ◽  
Kai TANG ◽  
Huai-Qiu ZHU

2010 ◽  
Vol 139-141 ◽  
pp. 1889-1893
Author(s):  
Peng Fei Wang ◽  
Dian Hua Zhang ◽  
Xu Li ◽  
Jia Wei Liu

In order to improve the flatness of cold-rolled strips, strategies for closed-loop feedback flatness control and rolling-force feed-forward control were established, based on actuator efficiency factors. As the basis of the flatness control system, the efficiencies of the flatness actuators provide a quantitative description of the law of flatness control. To obtain accurate actuator efficiency factor matrices, a self-learning model of actuator efficiency factors was established. The precision of the actuator efficiency factors can be improved continuously as correlated flatness measurement data are input. Meanwhile, the self-learning model of actuator efficiency factors permits the application of this flatness control to all possible types of actuators and every stand type. The developed flatness control system has been applied to a 1250 mm single-stand six-high reversible UCM cold mill. Applications show that the flatness control system based on actuator efficiency factors is capable of achieving good flatness.
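The efficiency factors can be read as a linear map from actuator adjustments to flatness change. The Python sketch below, an illustration under our own assumptions rather than the paper's exact model, shows the two ingredients the abstract describes: a least-squares control step based on the efficiency factor matrix, and a self-learning correction of that matrix from measured flatness data.

```python
import numpy as np

# Sketch: actuator efficiency factors as a linear model
#   delta_flatness ~ E @ delta_actuators.
# Control solves least squares for the actuator moves that best cancel
# the flatness error; self-learning nudges E toward what measurements
# show. Shapes and the learning gain are illustrative assumptions.

n_zones, n_act = 20, 3                  # measurement zones, actuators
rng = np.random.default_rng(0)
E = rng.normal(size=(n_zones, n_act))   # current efficiency factor matrix

def control_step(E, flatness_error):
    """Least-squares actuator adjustment that best cancels the error."""
    du, *_ = np.linalg.lstsq(E, -flatness_error, rcond=None)
    return du

def self_learn(E, du, measured_change, gain=0.1):
    """Rank-one correction of E from one adjustment/response pair."""
    residual = measured_change - E @ du
    return E + gain * np.outer(residual, du) / (du @ du + 1e-9)

flatness_error = rng.normal(size=n_zones)
du = control_step(E, flatness_error)
measured_change = E @ du + 0.01 * rng.normal(size=n_zones)  # toy plant
E = self_learn(E, du, measured_change)
```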


2012 ◽  
Vol 619 ◽  
pp. 302-305
Author(s):  
Hong Yan Wang ◽  
Wen Sheng Xiao ◽  
Xiu Juan Lin ◽  
Xian Feng Wang

Considering the environmental pollution caused by dynamite sources in oil and gas exploration, and the harm and damage they cause to people and buildings, a vehicle-mounted hammer source that can replace the dynamite source is presented. This paper briefly describes the basic structure and working principles of the vehicle-mounted hammer source. A typical pneumatic circuit is studied and designed, and the circuit is controlled using the powerful functions of a PLC; the hardware and software designs are introduced. The system has the advantages of strong striking force, high velocity, low gas consumption, simple structure, and convenient control.
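The abstract gives no software details; purely as an illustration of how PLC control of such a pneumatic strike cycle is often organized, here is a toy state-machine sketch in Python. All states, timings, and valve names are hypothetical, and real PLC code would be ladder logic or structured text driving solenoid valves.

```python
import time

# Toy strike-cycle sequence for a pneumatic hammer source. The sleep
# durations are placeholders, not real charge/exhaust times.

def set_valve(name, on):               # stand-in for a PLC digital output
    print(f"valve {name} -> {'ON' if on else 'OFF'}")

def strike_cycle(charge_s=0.5, exhaust_s=0.2):
    set_valve("charge", True)          # pressurize the cylinder
    time.sleep(charge_s)
    set_valve("charge", False)
    set_valve("release", True)         # release: hammer strikes the plate
    time.sleep(0.1)
    set_valve("release", False)
    set_valve("exhaust", True)         # vent and reset for the next shot
    time.sleep(exhaust_s)
    set_valve("exhaust", False)

for shot in range(3):
    strike_cycle()
```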


2021 ◽  
Vol 54 (3-4) ◽  
pp. 417-428
Author(s):  
Yanyan Dai ◽  
KiDong Lee ◽  
SukGyu Lee

For real applications, rotary inverted pendulum systems have been known as a basic model in nonlinear control. Without a deep understanding of control, it is difficult to control a rotary inverted pendulum platform using classic control engineering models, as shown in section 2.1. Therefore, this paper controls the platform without classic control theory, by training and testing a reinforcement learning algorithm. Reinforcement learning (RL) has achieved many recent successes, but there is little research on quickly testing high-frequency RL algorithms in a real hardware environment. In this paper, we propose a real-time hardware-in-the-loop (HIL) control system to train and test a deep reinforcement learning algorithm from simulation through to real hardware implementation. The agent is implemented with the Double Deep Q-Network (DDQN) with prioritized experience replay, which requires no deep understanding of classical control engineering. For the real experiment, 21 actions are defined to swing up the rotary inverted pendulum and balance it with smooth movement. Compared with the Deep Q-Network (DQN), DDQN with prioritized experience replay removes the overestimation of the Q value and decreases the training time. Finally, this paper presents the experiment results, with comparisons against classic control theory and different reinforcement learning algorithms.
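The two ingredients named in the abstract, the double-Q target and prioritized experience replay, can be stated compactly. The Python/NumPy sketch below illustrates the standard algorithm, not the authors' code; the stand-in networks, batch shapes, and hyperparameters are assumptions.

```python
import numpy as np

# Minimal sketch of the double-DQN target with prioritized-replay
# importance weights. q_online and q_target stand in for the two
# networks, mapping a batch of states to Q-values over the 21 actions.

n_actions, gamma, beta = 21, 0.99, 0.4

def ddqn_targets(q_online, q_target, s_next, r, done):
    a_star = np.argmax(q_online(s_next), axis=1)           # online net picks
    q_next = q_target(s_next)[np.arange(len(r)), a_star]   # target net scores
    return r + gamma * (1.0 - done) * q_next

def is_weights(priorities, buffer_size):
    """Importance-sampling weights for prioritized replay."""
    p = priorities / priorities.sum()
    w = (buffer_size * p) ** (-beta)
    return w / w.max()                                     # normalize to [0, 1]

# Toy usage with random stand-in "networks":
rng = np.random.default_rng(1)
q_net = lambda s: rng.normal(size=(len(s), n_actions))
s_next = np.zeros((8, 4))
y = ddqn_targets(q_net, q_net, s_next, np.ones(8), np.zeros(8))
w = is_weights(rng.uniform(0.1, 1.0, size=8), buffer_size=10000)
print(y.shape, w.shape)
```

The overestimation fix is visible in ddqn_targets: the online network chooses the greedy action while the separate target network evaluates it, decoupling selection from evaluation.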


2020 ◽  
pp. 110708
Author(s):  
Dasheng Lee ◽  
Chien-Jung Lin ◽  
Chih-Wei Lai ◽  
Tsai Huang

2012 ◽  
Vol 580 ◽  
pp. 155-159
Author(s):  
Xiang Ming Wang ◽  
Jin Chao Wang ◽  
Dong Hua Sun

In this paper, real-time EtherCAT technology is introduced in detail, including its operating principle, its communication protocol, and its performance advantages, i.e., synchronicity, simultaneity, and high speed. To show how to design a slave system that takes the characteristics of the application into account, a method for developing systems based on EtherCAT technology is proposed. Finally, a data acquisition system based on EtherCAT technology is designed. The application of EtherCAT technology can improve the real-time characteristics of data communication in wind power systems.
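EtherCAT masters exchange process data on a fixed cycle, with one frame pass serving all slaves. The Python sketch below shows only the shape of such a cyclic acquisition loop; it is not a real EtherCAT stack, and read_process_image() is a hypothetical stand-in for the master's logical read.

```python
import time

# Illustrative shape of a cyclic EtherCAT-style acquisition loop.
# read_process_image() stands in for the master's logical read, which
# collects all slaves' inputs in one frame pass; that single-frame
# design is what gives EtherCAT its speed.

CYCLE_S = 0.001                       # 1 ms cycle, a typical EtherCAT rate

def read_process_image():
    """Hypothetical stand-in for one EtherCAT frame round trip."""
    return {"wind_speed": 12.3, "rotor_rpm": 14.8, "power_kw": 1520.0}

def acquisition_loop(cycles):
    next_deadline = time.monotonic()
    for _ in range(cycles):
        inputs = read_process_image()          # one frame serves all slaves
        # ... hand `inputs` to logging / control at a fixed phase ...
        next_deadline += CYCLE_S
        time.sleep(max(0.0, next_deadline - time.monotonic()))

acquisition_loop(10)
```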

