Operation Scheduling Optimization of Refinery Logistics Pipeline Network

2013 ◽  
Vol 411-414 ◽  
pp. 2746-2751
Author(s):  
Ming Li

The logistics pipeline scheduling problem of the oil refining process was researched in this paper. The problem has received significant attention in actual production processes. A MILP scheduling optimization model was built by formulating the complex logistics pipeline network of the oil refining process. Compared with other research, the model presented in this paper has fewer variables and more concise constraints, and can therefore be solved more efficiently. The formulation approach also provides a basis for further research on energy network optimization. Finally, the presented model was applied to the scheduling of a refinery. The case study shows that the obtained optimal schedule satisfies real requirements, which illustrates the model's efficiency.
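The core of such a MILP formulation is a set of binary assignment decisions (which batch is pumped in which time slot) under exclusivity constraints. The abstract gives no model details, so the following is only a minimal toy sketch of that decision structure, with made-up batch names and costs; it enumerates the feasible assignments directly rather than calling a MILP solver, which a real model of this scale would require.

```python
from itertools import permutations

# Hypothetical data (not from the paper): cost of pumping batch b in slot t.
batches = ["crude_A", "crude_B"]
slots = [0, 1, 2]
cost = {
    ("crude_A", 0): 5, ("crude_A", 1): 3, ("crude_A", 2): 4,
    ("crude_B", 0): 2, ("crude_B", 1): 6, ("crude_B", 2): 3,
}

def best_schedule():
    # Each batch gets exactly one slot, and each slot at most one batch:
    # this is the binary-variable structure a MILP solver would branch on.
    best = None
    for assignment in permutations(slots, len(batches)):
        total = sum(cost[b, t] for b, t in zip(batches, assignment))
        if best is None or total < best[0]:
            best = (total, dict(zip(batches, assignment)))
    return best

total, schedule = best_schedule()
print(total, schedule)  # minimal total pumping cost and the chosen slots
```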

2021 ◽  
Vol 40 (1) ◽  
pp. 15-22
Author(s):  
Mohammed Hussein ◽  
Elzahid N.M ◽  
Ebrahim Esmail ◽  
Mamdouh Gadalla ◽  
Ibrahim Ashour

2021 ◽  
Vol 7 (4) ◽  
pp. 64
Author(s):  
Tanguy Ophoff ◽  
Cédric Gullentops ◽  
Kristof Van Beeck ◽  
Toon Goedemé

Object detection models are usually trained and evaluated on highly complicated, challenging academic datasets, which results in deep networks requiring lots of computations. However, a lot of operational use-cases consist of more constrained situations: they have a limited number of classes to be detected, less intra-class variance, less lighting and background variance, constrained or even fixed camera viewpoints, etc. In these cases, we hypothesize that smaller networks could be used without deteriorating the accuracy. However, there are multiple reasons why this does not happen in practice. Firstly, overparameterized networks tend to learn better, and secondly, transfer learning is usually used to reduce the necessary amount of training data. In this paper, we investigate how much we can reduce the computational complexity of a standard object detection network in such constrained object detection problems. As a case study, we focus on a well-known single-shot object detector, YoloV2, and combine three different techniques to reduce the computational complexity of the model without reducing its accuracy on our target dataset. To investigate the influence of the problem complexity, we compare two datasets: a prototypical academic dataset (Pascal VOC) and a real-life operational one (LWIR person detection). The three optimization steps we exploited are: swapping all convolutions for depth-wise separable convolutions, pruning, and weight quantization. The results of our case study indeed substantiate our hypothesis that the more constrained a problem is, the more the network can be optimized. On the constrained operational dataset, combining these optimization techniques allowed us to reduce the computational complexity by a factor of 349, compared to only a factor of 9.8 on the academic dataset.
When running a benchmark on an Nvidia Jetson AGX Xavier, our fastest model runs more than 15 times faster than the original YoloV2 model, whilst increasing the accuracy by 5% Average Precision (AP).
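The first of the three optimization steps, replacing standard convolutions with depth-wise separable ones, reduces cost because an in_channels x out_channels x k x k kernel is factored into a per-channel k x k depth-wise stage plus a 1 x 1 point-wise stage. The short sketch below computes the parameter counts for one layer; the channel sizes are illustrative, not taken from the paper's YoloV2 configuration.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depth-wise k x k convolution followed by a point-wise 1 x 1
    convolution (bias ignored)."""
    return c_in * k * k + c_in * c_out

# Illustrative layer shape (not from the paper's architecture).
c_in, c_out, k = 256, 512, 3
std = conv_params(c_in, c_out, k)          # 1,179,648 parameters
sep = dw_separable_params(c_in, c_out, k)  # 133,376 parameters
print(std, sep)
```

For this layer shape the factorization gives roughly a 9x parameter reduction, which is why the technique compounds so well with pruning and quantization.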


2012 ◽  
Vol 23 (9) ◽  
pp. 1583-1592 ◽  
Author(s):  
Yoshihiro Sugaya ◽  
Shinichiro Omachi ◽  
Akira Takeuchi ◽  
Yousuke Nozaki

2021 ◽  
Vol 336 ◽  
pp. 05020
Author(s):  
Piotr Hadaj ◽  
Marek Nowak ◽  
Dominik Strzałka

A case study based on real data obtained from the Polish PSE System Operator of the highest-voltage electrical energy network is shown. The data about the interconnection exchange and some complex network (graph) parameters were examined after the removal of selected nodes. This allowed us to test selected network parameters and to show that the breakdown of only three nodes in this network can cause a significant drop in its average efficiency.
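The network parameter in question, average (global) efficiency, is the mean of 1/d(i, j) over all node pairs, so disconnecting pairs (d = infinity) contributes zero and removing a hub can drop the value sharply. A minimal sketch on a made-up toy graph (not the PSE data):

```python
from collections import deque

def global_efficiency(adj):
    """Average of 1/d(i, j) over all ordered node pairs; unreachable
    pairs contribute 0 (Latora-Marchiori global efficiency)."""
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for src in nodes:
        # BFS shortest-path lengths from src.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    return total / (n * (n - 1))

def remove_node(adj, x):
    """Return a copy of the graph with node x (and its edges) deleted."""
    return {u: [v for v in nbrs if v != x] for u, nbrs in adj.items() if u != x}

# Toy graph with a hub "H" bridging two pairs of nodes (illustrative only).
adj = {
    "A": ["B", "H"], "B": ["A", "H"],
    "C": ["D", "H"], "D": ["C", "H"],
    "H": ["A", "B", "C", "D"],
}
before = global_efficiency(adj)               # 0.8
after = global_efficiency(remove_node(adj, "H"))  # 1/3: graph splits in two
print(before, after)
```

Removing the single hub node disconnects the toy graph into two components, cutting the average efficiency by more than half, which is the same mechanism the case study demonstrates on the real grid.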


Author(s):  
M. A. Ancona ◽  
M. Bianchi ◽  
L. Branchini ◽  
A. De Pascale ◽  
F. Melino ◽  
...  

Abstract In order to increase the exploitation of renewable energy sources, the diffusion of distributed generation systems has grown, leading to an increase in the complexity of the electrical, thermal, cooling and fuel energy distribution networks. With the main purpose of improving the overall energy conversion efficiency and reducing the greenhouse gas emissions associated with fossil-fuel-based production systems, the design and the management of these complex energy grids play a key role. In this context, an in-house developed software, called COMBO, presented and validated in Part I of this study, has been applied to a case study in order to define the optimal scheduling of each generation system connected to a complex energy network. The software is based on a non-heuristic technique which considers all possible combinations of solutions, elaborating the optimal scheduling for each energy system by minimizing an objective function based on the evaluation of the total energy production cost and the energy systems' environmental impact. In particular, the software COMBO is applied to a case study represented by an existing small-scale complex energy network, with the main objective of optimizing the energy production mix and the complex energy network's yearly operation depending on the energy demand of the users. The electrical, thermal and cooling needs of the users are satisfied with centralized energy production, by means of internal combustion engines, natural gas boilers, heat pumps, and compression and absorption chillers. The optimal energy systems operation evaluated by the software COMBO is compared to a Reference Case, representative of the current energy systems set-up, in order to highlight the environmental and economic benefits achievable with the proposed strategy.
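The non-heuristic approach described above evaluates every combination of generation-unit states against a cost-plus-emissions objective. The abstract does not disclose COMBO's formulation, so the following is only a toy single-hour sketch with invented unit capacities, costs and emission factors, illustrating the exhaustive-enumeration idea.

```python
from itertools import product

# Hypothetical units (name, capacity_kW, cost_per_h, co2_per_h) --
# the numbers are made up, not COMBO's data.
units = [
    ("ICE", 100, 18.0, 40.0),      # internal combustion engine
    ("boiler", 80, 10.0, 30.0),    # natural gas boiler
    ("heat_pump", 60, 12.0, 8.0),  # electric heat pump
]
demand = 140      # kW required in this hour
co2_weight = 0.1  # weight of emissions in the combined objective

def optimal_mix():
    best = None
    # Non-heuristic: evaluate every on/off combination of the units.
    for state in product([0, 1], repeat=len(units)):
        capacity = sum(u[1] for u, on in zip(units, state) if on)
        if capacity < demand:
            continue  # infeasible: user demand not covered
        objective = sum(u[2] + co2_weight * u[3]
                        for u, on in zip(units, state) if on)
        if best is None or objective < best[0]:
            best = (objective, state)
    return best

objective, state = optimal_mix()
print(objective, state)  # cheapest feasible on/off combination
```

Exhaustive enumeration guarantees the optimum but scales as 2^n per time step, which is why it suits small-scale networks like the one in this case study.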

