Embedded Digital Twins in future energy management systems: paving the way for automated grid control

2020 ◽  
Vol 68 (9) ◽  
pp. 750-764
Author(s):  
Christoph Brosinsky ◽  
Rainer Krebs ◽  
Dirk Westermann

Abstract Emerging real-time applications in information technology and operational technology enable innovative new concepts for designing and operating cyber-physical systems. A promising approach, which several industries have recently identified as a key technology, is the Digital Twin (DT) concept. A DT connects the virtual representation of a physical object, system or process with available information and sensor data streams, which makes it possible to gather new information about the mirrored system by applying analytic functions. The DT technology can thereby help to fill sensor data gaps, e.g. to support anomaly detection, and to predict future operating conditions and system states. This paper discusses a dynamic power system DT as a cornerstone of a new generation of Energy Management Systems (EMS), and a prospective new EMS architecture, to support the increasingly complex operation of electric power systems. Unlike in traditional offline power system models, the parameters are updated dynamically using measurement information from the supervisory control and data acquisition (SCADA) system and a wide area monitoring system (WAMS) to tune the model. This yields a highly accurate virtual representation of the mirrored physical objects. A simulation engine, the Digital Dynamic Mirror (DDM), is introduced to reproduce the state of a reference network in real-time. The approach is validated in a case study. In a closed loop with EMS applications, the DDM can help to assess contingency mitigation strategies and thus support the decision-making process under variable system conditions. The next generation of control centre EMS can benefit from this development through augmented dynamic observability and improved operator situation awareness.
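The core idea of tuning an offline model with streaming measurements can be illustrated with a minimal sketch. The update rule below (exponential smoothing toward measurement-implied values) is a generic assumption for illustration, not the authors' actual tuning algorithm; the parameter name and values are invented.

```python
# Hypothetical sketch: pull one model parameter (e.g. a line impedance in
# per-unit) toward values implied by a stream of SCADA/WAMS measurements,
# the way a Digital Twin keeps its model synchronized with the field.

def tune_parameter(model_value, implied_values, gain=0.2):
    """Recursively blend a model parameter with measurement-implied values."""
    for implied in implied_values:
        model_value += gain * (implied - model_value)  # exponential smoothing
    return model_value

# Offline model says X = 0.50 p.u.; field measurements imply roughly 0.62 p.u.
tuned = tune_parameter(0.50, [0.60, 0.63, 0.61, 0.64])
```

With more measurements the parameter converges toward the measurement-implied value, while the gain damps sensor noise.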

Author(s):  
Roberto Netto ◽  
Guilherme Ramalho ◽  
Benedito Bonatto ◽  
Otavio Carpinteiro ◽  
A. C. Zambroni de Souza ◽  
...  

This paper deals with the problem of real-time management of Smart Grids. To this end, energy management is integrated with the power system through a telecommunication system. The use of Multiagent Systems enables the proposed algorithm to find the best integrated solution, taking into consideration the operating scenario and the system characteristics. The proposed technique is tested on an academic microgrid, so the results may be replicated.
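One common multiagent pattern for this kind of integrated energy management is for source agents to bid capacity and cost, with a coordinator clearing demand in merit order. This is a generic illustration of the pattern, not the paper's algorithm; all agent names and numbers are invented.

```python
# Sketch of merit-order clearing among microgrid agents: each agent bids
# (capacity in kW, cost per kWh); the coordinator serves demand with the
# cheapest bids first.

def clear_demand(bids, demand_kw):
    """bids: list of (agent, capacity_kW, cost_per_kWh); returns dispatch dict."""
    dispatch = {}
    for agent, capacity, cost in sorted(bids, key=lambda b: b[2]):
        take = min(capacity, demand_kw)
        if take > 0:
            dispatch[agent] = take
            demand_kw -= take
    return dispatch

bids = [("diesel", 50, 0.30), ("pv", 20, 0.05), ("battery", 15, 0.12)]
dispatch = clear_demand(bids, demand_kw=30)
```

For a 30 kW demand, the cheap PV is used fully and the battery covers the remainder, leaving the expensive diesel idle.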


2022 ◽  
Vol 1216 (1) ◽  
pp. 012009
Author(s):  
P Baran ◽  
Y Varetsky ◽  
V Kidyba ◽  
Y Pryshliak

Abstract A mathematical model is developed for a virtual training system (simulator) for operators of the electrical part of a thermal (nuclear) power plant unit. The model is used to simulate the main operating conditions of the power unit electrical part: generator idling, generator synchronization with the power system, excitation shifting from the main unit to the backup one and vice versa, switching in the power unit auxiliary system, and others. Furthermore, modelling of some probable emergency conditions within a power plant has been implemented: incomplete phase switching, damage to standard power unit equipment, synchronous oscillations, asynchronous mode, etc. The model of the power unit electrical part consists of two interacting software units: models of power equipment (turbine, generator with excitation systems, auxiliary system) and models of its control systems, automation, relay protection and signalling. The models are represented by the corresponding algebraic-differential equations that provide real-time mapping of power unit processes at the operator's request. The developed model uses efficient algorithms for solving the algebraic-differential equations to ensure the virtual process behaves in real-time. In particular, the implicit Euler method is used to solve the differential equations; it remains stable when simulating processes under significant disturbances, such as accidental disconnection of the unit from the power system, tripping and energizing loads, generator excitation loss, etc.
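The stability property that motivates the implicit (backward) Euler choice can be seen on the linear test equation dx/dt = -lam*x, where the implicit update x_{n+1} = x_n / (1 + lam*h) stays bounded for any step size h > 0, even in the stiff regime where explicit Euler would diverge. This is a textbook illustration of the method named in the abstract, not the simulator's actual equations.

```python
# Backward Euler on dx/dt = -lam*x.  The implicit step
#   x_{n+1} = x_n - h*lam*x_{n+1}   =>   x_{n+1} = x_n / (1 + lam*h)
# decays monotonically for any h > 0, which is why implicit methods are
# preferred for stiff disturbance simulation.

def implicit_euler_decay(x0, lam, h, steps):
    x = x0
    for _ in range(steps):
        x = x / (1.0 + lam * h)
    return x

# Stiff case: lam = 100 with step h = 0.1.  Explicit Euler's factor would be
# (1 - lam*h) = -9, i.e. explosive oscillation; the implicit factor is 1/11.
x_end = implicit_euler_decay(1.0, lam=100.0, h=0.1, steps=10)
```

The solution decays toward zero as the true solution does, despite the step being far larger than the stiff time constant 1/lam.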


2020 ◽  
Vol 64 (02) ◽  
pp. 171-184
Author(s):  
Nengqi Xiao ◽  
Xiang Xu ◽  
Baojia Chen

This article introduces the composition and 12 operating conditions of a four-engine two-propeller hybrid power system. Through combinations of gearbox clutch engagement and disconnection, the propulsion system has four single-engine operation modes, two double-engine parallel operation modes, and six PTI operation modes. Because the propulsion system has a variety of operating conditions, each operating condition has its own form of energy transfer, and its energy management and control are correspondingly more complicated. To study the energy management and control strategy of a diesel-electric hybrid propulsion system, this work develops the simulation model and sub-models of a diesel-electric hybrid propulsion system. MATLAB/SIMULINK software is used to build the diesel engine model, motor model, and ship engine system mathematical model. Tests and analysis were carried out on the test bench of the diesel-electric hybrid power system. By comparing the theoretical values of the SIMULINK simulation model with the test values from the test bench system, the correctness of each sub-model modeling method is verified. This work lays a theoretical foundation for the subsequent implementation of a state-identification-based energy management and control strategy for the unified management and power distribution of the diesel-electric hybrid power system, and provides theoretical guidance for optimization of its energy management.
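The mode structure described above (single-engine, parallel, and PTI modes arising from clutch/motor combinations) can be sketched as a simple classifier. The flag names and the classification rules below are invented for illustration; the actual plant topology determines the real 12 conditions.

```python
# Hypothetical mode classifier for a diesel-electric hybrid plant:
# engines_engaged = number of diesel engines clutched to the shaft,
# motor_mode = "off", or "pti" (electric motor assists the shaft, Power Take In).

def classify_mode(engines_engaged, motor_mode):
    if motor_mode == "pti":
        return "PTI"                       # motor drives/assists the propeller
    if engines_engaged == 1:
        return "single-engine"
    if engines_engaged >= 2:
        return "parallel"                  # engines share the load via gearbox
    return "stopped"

mode = classify_mode(engines_engaged=2, motor_mode="off")
```

An energy management strategy would branch on the returned mode to pick the energy-transfer path and load split for that condition.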


Author(s):  
Dana DuToit ◽  
Kent Ryan ◽  
John Rice ◽  
James Bay ◽  
Fabien Ravet

Long range, distributed fiber optic sensing systems have been an available tool for more than a decade to monitor pipeline subsidence integrity challenges. The choice of effective deployment scenarios is an important decision to be factored into the selection of this monitoring equipment and its topology relative to specific project needs. In an effort to analyze the effectiveness of various fiber optic deployment conditions, a controlled field experiment was conducted. Within this field experiment, a variety of distributed fiber optic sensors and point sensors were deployed in predefined positions. These positions relative to the pipeline were selected to support a range of deployment needs, including new construction or retrofitting of existing pipelines. A 16-inch diameter by 60-meter long epoxy coated pipeline that could be pressurized to mimic operating conditions was utilized. This test pipe was installed in a typical trench setting. Conventional point gauges were installed at key locations on the pipeline. Fiber optic sensor cables were installed at key locations, providing 14 alternative scenarios in terms of sensitivity, accuracy, and cost. After construction of the test pipeline, real-time continuous monitoring via the array of conventional and fiber optic sensors commenced. A deep trench was excavated adjacent and parallel to the central portion of the pipeline, which began to induce subsidence in the test pipeline. Continued monitoring of the various sensors produced real-time visualization of the evolving subsidence. A comparison of the reaction of the sensors is compiled to provide intelligent selection criteria for integrity managers in terms of accuracy, deployment, and costs for pipeline subsidence monitoring projects. In addition, further analysis of this sensor data should provide more insight into pipeline/soil interaction models and behaviors.
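The basic alarm logic behind distributed strain monitoring can be sketched in a few lines: compare the current strain profile along the fiber against a baseline and flag positions whose change exceeds a threshold. The threshold value and readings below are invented; the authors' processing chain is not described in the abstract.

```python
# Illustrative subsidence alarm for a distributed fiber optic strain profile:
# each list index is one sensing position along the pipeline, values are in
# microstrain.  Positions whose change from baseline exceeds the threshold
# are flagged for inspection.

def flag_subsidence(baseline, current, threshold_ue=50.0):
    """Return indices whose strain change exceeds threshold (microstrain)."""
    return [i for i, (b, c) in enumerate(zip(baseline, current))
            if abs(c - b) > threshold_ue]

baseline = [10.0, 12.0, 11.0, 10.5, 11.5]
current  = [11.0, 80.0, 95.0, 12.0, 11.0]   # soil movement near positions 1-2
alarms = flag_subsidence(baseline, current)
```

Because the fiber measures strain continuously along its length, the flagged indices localize the subsidence rather than merely detecting it, which is the key advantage over sparse point gauges.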


Author(s):  
Dana DuToit ◽  
Kent Ryan ◽  
John Rice ◽  
James Bay ◽  
Jorge Peralta

Long range, distributed fiber optic sensing systems have been an available tool for more than a decade to monitor pipeline subsidence integrity challenges. The choice of effective deployment scenarios is an important decision to be factored into the selection of this monitoring equipment and its topology relative to specific project needs. In an effort to analyze the effectiveness of various fiber optic deployment conditions, a controlled field experiment was conducted. Within this field experiment, a variety of distributed fiber optic sensors and point sensors were deployed in predefined positions. These positions relative to the pipeline were selected to support a range of deployment needs, including new construction or retrofitting of existing pipelines. A 16-inch diameter by 60-meter long epoxy coated pipeline that could be pressurized to mimic operating conditions was utilized. This test pipe was installed in a typical trench setting. Conventional point gauges were installed at key locations on the pipeline. Fiber optic sensor cables were installed at key locations, providing 14 alternative scenarios in terms of sensitivity, accuracy, and cost. After construction of the test pipeline, real-time continuous monitoring via the array of conventional and fiber optic sensors commenced. A deep trench was excavated adjacent and parallel to the central portion of the pipeline, which began to induce subsidence in the test pipeline. Continued monitoring of the various sensors produced real-time visualization of the evolving subsidence. A comparison of the reaction of the sensors is compiled to provide intelligent selection criteria for integrity managers in terms of accuracy, deployment, and costs for pipeline subsidence monitoring projects. In addition, further analysis of this sensor data should provide more insight into pipeline/soil interaction models and behaviors.


2019 ◽  
Vol 25 (3) ◽  
pp. 499-524
Author(s):  
Kurt Azevedo ◽  
Daniel B. Olsen

Purpose The purpose of this paper is to determine whether the altitude at which construction equipment operates affects or contributes to increased engine wear.

Design/methodology/approach The study includes the evaluation of two John Deere PowerTech Plus 6068 Tier 3 diesel engines, the utilization of OSA3 oil analysis laboratory equipment to analyze oil samples, the employment of standard sampling scope and methods, and the analysis of key Engine Control Unit (ECU) data points (machine utilization, Diagnostic Trouble Codes (DTCs) and engine sensor data).

Findings At 250 h of engine oil use, the engine operating at 3,657 meters above sea level (MASL) had considerably more wear than the engine operating at 416 MASL. The leading and earliest indicator of engine wear was a high level of iron particles in the engine oil, reaching abnormal levels at 218 h. The following engine oil contaminants were more prevalent in the engine operating at the higher altitude: potassium, glycol, water and soot. Furthermore, the engine operating at higher altitude also presented abnormal and critical levels of oil viscosity, Total Base Number and oxidation. When comparing the oil sample analysis with the engine ECU data, it was determined that engine idling is a contributor to soot accumulation in the engine operating at the higher altitude. The most prevalent DTCs were water in fuel, extremely low coolant levels and extremely high exhaust manifold temperature. The ECU operating data demonstrated that the higher-altitude environment caused the engine to misfire and rail pressure to be irregular.

Practical implications Many mining operations and construction projects are carried out at mid to high altitudes. This research provides a comparison of how construction equipment engines are affected by this type of environment (i.e. higher altitudes, cooler temperatures and lower atmospheric pressure). Consequently, service engineers can implement maintenance strategies to minimize internal engine wear for equipment operating at higher altitudes.

Originality/value The main contribution of this paper will help construction equipment end-users, maintenance engineers and manufacturers to implement mitigation strategies to improve engine durability for countries with operating conditions similar to those described in this research.
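The trend analysis behind findings like "iron reached abnormal levels at 218 h" amounts to checking each oil sample against condemning limits. The sketch below illustrates that check; the limit values and readings are invented, not the OSA3 laboratory's actual thresholds.

```python
# Hypothetical oil-analysis trend check: scan chronological samples and
# report the first service-hour reading at which an element exceeds its
# condemning limit (ppm limits here are illustrative only).

LIMITS_PPM = {"iron": 100, "potassium": 20}

def first_exceedance(samples, element, limits=LIMITS_PPM):
    """samples: list of (hours, {element: ppm}); first exceedance hours or None."""
    for hours, reading in samples:
        if reading.get(element, 0) > limits[element]:
            return hours
    return None

samples = [(100, {"iron": 40}), (218, {"iron": 130}), (250, {"iron": 160})]
alarm_hours = first_exceedance(samples, "iron")
```

Flagging the first exceedance, rather than only the latest sample, is what lets maintenance engineers shorten drain intervals before wear becomes critical.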


Author(s):  
Elisa Negri ◽  
Vibhor Pandhare ◽  
Laura Cattaneo ◽  
Jaskaran Singh ◽  
Marco Macchi ◽  
...  

Abstract Research on scheduling problems is an evergreen challenge for industrial engineers. The growth of digital technologies opens the possibility to collect and analyze great amounts of field data in real-time, representing a precious opportunity for improved scheduling. Thus, scheduling under uncertain scenarios may benefit from the possibility to grasp the current operating conditions of the industrial equipment in real-time and take them into account when elaborating the best production schedules. To this end, the article proposes a proof-of-concept of a simheuristics framework for robust scheduling applied to a Flow Shop Scheduling Problem. The framework is composed of genetic algorithms for schedule optimization and discrete event simulation, and is synchronized with the field through a Digital Twin (DT) that employs an Equipment Prognostics and Health Management (EPHM) module. The contribution of the EPHM module inside the DT-based framework is the real-time computation of the failure probability of the equipment, with data-driven statistical models that take sensor data from the field as input. The viability of the framework is demonstrated in a flow shop application in a laboratory environment.
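How an EPHM failure probability can enter a schedule's fitness is easy to illustrate: penalize the nominal makespan by the expected delay from a failure. The additive penalty form below is an assumption for illustration, not the authors' exact objective function.

```python
# Conceptual robust-scheduling fitness: a schedule that stresses a machine
# raises its failure probability (as estimated by an EPHM module), which
# inflates the expected makespan by the expected repair delay.

def robust_makespan(nominal_makespan, failure_prob, repair_time):
    """Expected makespan = nominal makespan + expected repair delay."""
    return nominal_makespan + failure_prob * repair_time

# Schedule A is nominally shorter but runs the machine in a risky regime;
# schedule B is nominally longer but far less likely to trigger a failure.
a = robust_makespan(100.0, failure_prob=0.30, repair_time=40.0)
b = robust_makespan(105.0, failure_prob=0.05, repair_time=40.0)
```

Under this fitness a genetic algorithm would prefer schedule B, which is the essence of robustness: trading a little nominal performance for much lower expected disruption.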


Author(s):  
Huijing Jiang ◽  
Xinwei Deng ◽  
Vanessa Lopez ◽  
Hendrik Hamann

Energy consumption of data centers has increased dramatically due to the massive computing demands driven by every sector of the economy. Hence, data center energy management has become very important for operating data centers within environmental standards while achieving low energy cost. In order to advance the understanding of thermal management in data centers, relevant environmental information such as temperature, humidity and air quality is gathered through a network of real-time sensors or simulated via sophisticated physical models (e.g. computational fluid dynamics models). However, sensor readings of environmental parameters are collected only at sparse locations and thus cannot provide a detailed map of temperature distribution for the entire data center. While the physical models yield high-resolution temperature maps, it is often not feasible, due to the computational complexity of these models, to run them in real-time, which is ideally required for optimal data center operation and management. In this work, we propose a novel statistical modeling approach to updating physical model outputs in real-time and providing automatic scheduling for re-computing physical model outputs. The proposed method dynamically corrects the discrepancy between a steady-state output of the physical model and real-time thermal sensor data. We show that the proposed method can provide valuable information for data center energy management, such as real-time high-resolution thermal maps. Moreover, it can efficiently detect systematic changes in a data center thermal environment, and automatically schedule physical models to be re-executed whenever significant changes are detected.
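The core mechanism, correcting a static physics-model output with a smoothed sensor/model discrepancy and re-running the model when the discrepancy grows, can be sketched as follows. The smoothing rule, the threshold, and the temperatures are invented for illustration; the paper's statistical model is more sophisticated than this.

```python
# Hypothetical sketch: maintain an exponentially smoothed bias between
# real-time sensor readings and a steady-state CFD prediction, use it to
# correct the thermal map, and schedule a CFD re-run if the bias drifts
# beyond a tolerance.

def update_bias(bias, model_temp, sensor_temp, alpha=0.3):
    """Exponentially smoothed sensor-minus-model discrepancy (deg C)."""
    return (1 - alpha) * bias + alpha * (sensor_temp - model_temp)

MODEL_TEMP = 23.0            # CFD prediction at one rack inlet, deg C
bias = 0.0
for sensor_temp in [24.1, 24.3, 24.2]:
    bias = update_bias(bias, MODEL_TEMP, sensor_temp)

corrected_temp = MODEL_TEMP + bias   # real-time corrected map value
needs_rerun = abs(bias) > 2.0        # re-execute CFD only on large drift
```

The cheap bias update runs at sensor rate, while the expensive CFD model is re-executed only when the environment has systematically changed, which is the scheduling idea the abstract describes.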

