Energy Saving Design Principle Analysis of Power Electronic Transformation System

2013 ◽  
Vol 760-762 ◽  
pp. 1343-1347
Author(s):  
Tao Wan

Power electronic transformation systems are widely applied in industrial control, and their operating environments are complex. Power consumption keeps rising across large, medium, and small systems, so reducing system energy consumption has become an urgent problem. This paper proposes a genetic-algorithm-based approach to reducing the energy consumption of power electronic transformation systems. Working-frequency regulation and working-voltage measurement techniques are applied in the industrial control system, the power consumed at each voltage and frequency setting is calculated, and a genetic algorithm then searches for the optimal operating point, thereby reducing energy consumption. Experimental results show that the proposed control algorithm effectively reduces the power consumption of power electronic transformation systems in industrial control.
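
As a rough illustration of the approach described above, the sketch below runs a minimal genetic algorithm over candidate (working frequency, working voltage) operating points and keeps the one with the lowest modeled power draw. The power model, parameter ranges, and GA settings are assumptions for demonstration only, not the system or algorithm evaluated in the abstract.

```python
# Illustrative sketch only: a minimal genetic algorithm searching for a
# (working frequency, working voltage) operating point that minimizes a
# power-consumption model. The power model and parameter ranges below are
# hypothetical placeholders, not the system described in the abstract.
import random

F_RANGE = (20e3, 100e3)   # candidate switching-frequency range, Hz (assumed)
V_RANGE = (300.0, 600.0)  # candidate working-voltage range, V (assumed)

def power_consumption(f, v):
    """Hypothetical stand-in for the measured/calculated system power (W)."""
    switching_loss = 1e-6 * f * v      # grows with frequency and voltage
    conduction_loss = 0.002 * v ** 2   # grows with voltage
    control_overhead = 50.0            # fixed electronics overhead
    return switching_loss + conduction_loss + control_overhead

def mutate(ind, rate=0.1):
    f, v = ind
    if random.random() < rate:
        f = min(max(f + random.gauss(0, 5e3), F_RANGE[0]), F_RANGE[1])
    if random.random() < rate:
        v = min(max(v + random.gauss(0, 20.0), V_RANGE[0]), V_RANGE[1])
    return (f, v)

def crossover(a, b):
    return (a[0], b[1]) if random.random() < 0.5 else (b[0], a[1])

def genetic_search(pop_size=40, generations=100):
    population = [(random.uniform(*F_RANGE), random.uniform(*V_RANGE))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda ind: power_consumption(*ind))
        survivors = population[: pop_size // 2]          # elitist selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    best = min(population, key=lambda ind: power_consumption(*ind))
    return best, power_consumption(*best)

if __name__ == "__main__":
    (f_opt, v_opt), p_min = genetic_search()
    print(f"frequency={f_opt:.0f} Hz, voltage={v_opt:.1f} V, power={p_min:.1f} W")
```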

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jayati Athavale ◽  
Minami Yoda ◽  
Yogendra Joshi

Purpose: This study presents the development of a genetic algorithm (GA)-based framework aimed at minimizing data center cooling energy consumption by optimizing the cooling set-points while ensuring that thermal management criteria are satisfied.
Design/methodology/approach: The three key components of the developed framework are an artificial neural network-based model for rapid temperature prediction (Athavale et al., 2018a, 2019), a thermodynamic model for cooling energy estimation, and a GA-based optimization process. The static optimization framework informs the IT load distribution and cooling set-points in the data center room to simultaneously minimize cooling power consumption while maximizing IT load. The dynamic framework aims to minimize cooling power consumption during operation by determining the most energy-efficient set-points for the cooling infrastructure while preventing temperature overshoots.
Findings: Results from the static optimization framework indicate that, among the three levels of IT load distribution granularity (room, rack and row), rack-level distribution consumes the least cooling power. A 7.5-h test case of the dynamic optimization demonstrated a reduction in cooling energy consumption of 21%–50%, depending on the current operation of the data center.
Research limitations/implications: Because the temperature prediction model is data-driven, it is specific to the lab configuration considered in this study and cannot be directly applied to other scenarios; the overall framework, however, can be generalized.
Practical implications: The developed framework can be implemented in data centers to optimize the operation of the cooling infrastructure and reduce energy consumption.
Originality/value: This paper presents a holistic framework for improving the energy efficiency of data centers, which is of critical value given the high (and increasing) energy consumption of these facilities.
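
The sketch below illustrates, under assumed placeholder models, how a GA can pick a cooling supply-air set-point that minimizes cooling power while respecting a rack-inlet temperature limit. The functions predicted_inlet_temp and cooling_power_kw stand in for the paper's ANN predictor and thermodynamic model and are not the authors' implementations; all constants are assumptions.

```python
# Minimal sketch (not the authors' implementation): GA search over cooling
# set-points where a surrogate temperature model stands in for the paper's
# ANN predictor and a simple COP-style relation stands in for the
# thermodynamic cooling-power model. All constants are assumed.
import random

T_SUPPLY_RANGE = (16.0, 27.0)   # CRAC supply-air set-point range, deg C (assumed)
T_REDLINE = 32.0                # max allowed rack-inlet temperature, deg C (assumed)
IT_LOAD_KW = 100.0

def predicted_inlet_temp(t_supply):
    """Placeholder for the ANN-based rapid temperature prediction model."""
    return t_supply + 6.5        # assumed constant temperature rise

def cooling_power_kw(t_supply):
    """Placeholder thermodynamic model: COP improves with warmer supply air."""
    cop = 3.0 + 0.25 * (t_supply - 16.0)
    return IT_LOAD_KW / cop

def fitness(t_supply):
    penalty = 1e3 if predicted_inlet_temp(t_supply) > T_REDLINE else 0.0
    return cooling_power_kw(t_supply) + penalty

def ga_setpoint(pop=30, gens=60):
    population = [random.uniform(*T_SUPPLY_RANGE) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]
        population = parents + [
            min(max(random.choice(parents) + random.gauss(0, 0.5),
                    T_SUPPLY_RANGE[0]), T_SUPPLY_RANGE[1])
            for _ in range(pop - len(parents))
        ]
    return min(population, key=fitness)

print(f"recommended supply-air set-point: {ga_setpoint():.1f} deg C")
```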


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Madhusudhan H S ◽  
Satish Kumar T ◽  
S.M.F D Syed Mustapha ◽  
Punit Gupta ◽  
Rajan Prasad Tripathi

In cloud computing, virtualization is a key technique for optimizing the power consumption of cloud data centers. As most services move to the cloud, the load on data centers increases; data centers therefore grow in size and consume more energy. Resolving this issue requires an efficient optimization algorithm for resource allocation. This work proposes a hybrid approach to virtual machine allocation that combines a genetic algorithm (GA) with a random forest (RF), a supervised machine learning technique. The aim is to minimize power consumption while maintaining good load balance among the available resources and maximizing resource utilization. The proposed model uses the genetic algorithm to generate a training dataset for the random forest and then uses the trained model for allocation. Real workload traces from PlanetLab are used to evaluate the approach. The results show that the proposed GA-RF model improves the energy consumption, execution time, and resource utilization of the data center and its hosts compared with existing models. Power consumption, execution time, resource utilization, average start time, and average finish time are used as performance metrics.
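
A minimal sketch of the GA-to-random-forest pipeline described above follows, with invented features and a placeholder fitness function. Random sampling stands in for the GA exploration stage, and scikit-learn's RandomForestRegressor is used purely for illustration; none of this is the authors' model.

```python
# Minimal sketch of a GA-to-random-forest pipeline (hypothetical features and
# data; not the authors' model). The idea: placements explored by a GA and
# their fitness values become training data, and the trained RF then scores
# candidate hosts at allocation time. Random sampling stands in for the GA here.
import random
from sklearn.ensemble import RandomForestRegressor

def placement_fitness(host_cpu, host_mem, vm_cpu, vm_mem):
    """Placeholder fitness: predicted host power after placing the VM (W)."""
    util = min(host_cpu + vm_cpu, 1.0)
    return 120.0 + 130.0 * util + 20.0 * min(host_mem + vm_mem, 1.0)

# Step 1: exploration produces (features, fitness) training samples.
X, y = [], []
for _ in range(2000):
    sample = [random.random() for _ in range(4)]   # host_cpu, host_mem, vm_cpu, vm_mem
    X.append(sample)
    y.append(placement_fitness(*sample))

# Step 2: train the random forest on the generated dataset.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Step 3: at allocation time, score candidate hosts and pick the cheapest one.
vm = (0.30, 0.20)                                   # (cpu, mem) demand of incoming VM
hosts = [(0.10, 0.15), (0.55, 0.40), (0.80, 0.70)]  # current (cpu, mem) utilization
scores = rf.predict([[hc, hm, vm[0], vm[1]] for hc, hm in hosts])
best_host = min(range(len(hosts)), key=lambda i: scores[i])
print(f"allocate VM to host {best_host} (predicted power {scores[best_host]:.1f} W)")
```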


Energies ◽  
2021 ◽  
Vol 14 (14) ◽  
pp. 4089
Author(s):  
Kaiqiang Zhang ◽  
Dongyang Ou ◽  
Congfeng Jiang ◽  
Yeliang Qiu ◽  
Longchuan Yan

In a modern server system, DRAM plays as important a role in power and energy consumption as the processors. Power-aware scheduling typically assumes energy is apportioned between DRAM and the other components, but when running memory-intensive applications the energy non-proportionality of DRAM significantly affects whole-server energy consumption. Furthermore, modern servers usually adopt the NUMA architecture instead of the original SMP architecture to increase memory bandwidth, so studying the energy efficiency of these two memory architectures is of great significance. To explore the power consumption characteristics of servers under memory-intensive workloads, this paper evaluates the power consumption and performance of memory-intensive applications on different generations of real rack servers. Our analysis yields the following findings. (1) Workload intensity and the number of concurrently executing threads affect server power consumption, but a fully utilized memory system does not necessarily yield good energy efficiency. (2) Even when the memory system is not fully utilized, the memory capacity per processor core has a significant impact on application performance and server power consumption. (3) When running memory-intensive applications, memory utilization is not always a good indicator of server power consumption. (4) Reasonable use of the NUMA architecture significantly improves memory energy efficiency. The experimental results show that reasonable use of the NUMA architecture can improve memory energy efficiency by 16% compared with the SMP architecture, whereas unreasonable use reduces it by 13%. These findings provide useful insights and guidance for system designers and data center operators in energy-efficiency-aware job scheduling and energy conservation.
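
As a hedged illustration of the comparison reported above, the snippet below computes memory energy efficiency as throughput per watt for an SMP baseline and for two NUMA placements. The bandwidth and power figures are invented placeholders chosen only so the printed deltas resemble the reported 16% gain and 13% loss; they are not measurements from the paper.

```python
# Back-of-the-envelope sketch of a memory energy-efficiency metric
# (throughput per watt). All bandwidth and power values are invented
# placeholders used only to show how gains/losses versus an SMP baseline
# would be computed; they are not data from the paper.
def memory_efficiency(bandwidth_gbs, dram_power_w):
    return bandwidth_gbs / dram_power_w     # GB/s per watt

smp_baseline = memory_efficiency(bandwidth_gbs=80.0, dram_power_w=40.0)
numa_local   = memory_efficiency(bandwidth_gbs=100.0, dram_power_w=43.0)  # threads use local node
numa_remote  = memory_efficiency(bandwidth_gbs=69.6, dram_power_w=40.0)   # heavy cross-node traffic

print(f"local-node NUMA vs SMP:  {100 * (numa_local / smp_baseline - 1):+.0f}%")
print(f"remote-node NUMA vs SMP: {100 * (numa_remote / smp_baseline - 1):+.0f}%")
```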


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1800
Author(s):  
Linfei Hou ◽  
Fengyu Zhou ◽  
Kiwan Kim ◽  
Liang Zhang

The four-wheeled Mecanum robot is widely used in various industries owing to its maneuverability and strong load capacity, which make it suitable for performing precise transportation tasks in narrow environments. While the Mecanum wheel robot is highly mobile, it also consumes more energy than ordinary wheeled robots, and the power it consumes varies enormously with its operating regime and environment. Therefore, the robot's power consumption can be accurately predicted only when its working environment is known and an accurate power consumption model is available. To broaden the scenarios in which energy consumption modeling of Mecanum wheel robots is applicable and to improve modeling accuracy, this paper considers the various factors that affect the robot's energy consumption, such as motor temperature, terrain, and the position of the center of gravity. The model is derived from the kinematic and dynamic models combined with electrical engineering and energy-flow principles. It has been simulated in MATLAB and experimentally validated on the four-wheeled Mecanum robot platform in our lab. Experimental results show that the accuracy of the model reaches 95%. The resulting energy consumption model can help robots save energy through more rational path planning and task planning.
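
The sketch below shows one way such a model can be structured: the standard Mecanum inverse kinematics (for one common roller arrangement) converts body velocities into wheel speeds, and an assumed rolling-resistance torque, drive-train efficiency, and electronics overhead convert those into electrical power. Every numeric parameter is a placeholder, not the model identified in the paper.

```python
# Illustrative sketch only: a simplified Mecanum-platform power model combining
# standard inverse kinematics (one common roller arrangement) with an assumed
# motor/driver efficiency and a constant electronics overhead. All numeric
# parameters are placeholders, not the model identified in the paper.
LX, LY = 0.20, 0.15      # half wheelbase / half track width, m (assumed)
R_WHEEL = 0.05           # wheel radius, m (assumed)
MASS = 40.0              # robot mass, kg (assumed)
C_RR = 0.03              # rolling-resistance coefficient (terrain dependent)
ETA = 0.65               # combined motor + driver efficiency (assumed)
P_IDLE = 8.0             # controller/sensor overhead, W (assumed)
G = 9.81

def wheel_speeds(vx, vy, wz):
    """Inverse kinematics: body velocities -> wheel angular velocities (rad/s)."""
    k = LX + LY
    return [(vx - vy - k * wz) / R_WHEEL,   # front-left
            (vx + vy + k * wz) / R_WHEEL,   # front-right
            (vx + vy - k * wz) / R_WHEEL,   # rear-left
            (vx - vy + k * wz) / R_WHEEL]   # rear-right

def electrical_power(vx, vy, wz):
    """Rolling-resistance torque shared across four wheels, scaled by efficiency."""
    tau = C_RR * MASS * G * R_WHEEL / 4.0          # per-wheel resistive torque, N*m
    p_mech = sum(abs(tau * w) for w in wheel_speeds(vx, vy, wz))
    return p_mech / ETA + P_IDLE

print(f"straight-line 0.5 m/s: {electrical_power(0.5, 0.0, 0.0):.1f} W")
```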


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 344
Author(s):  
Alejandro Humberto García Ruiz ◽  
Salvador Ibarra Martínez ◽  
José Antonio Castán Rocha ◽  
Jesús David Terán Villanueva ◽  
Julio Laria Menchaca ◽  
...  

Electricity is one of the most important resources for the growth and sustainability of the population. This paper assesses the energy consumption and user satisfaction of a simulated air conditioning system controlled by two different optimization algorithms: a genetic algorithm (GA) implemented from the state of the art and a non-dominated sorting genetic algorithm II (NSGA-II) proposed in this paper; both algorithms control the air conditioning system taking user preferences into account. It is worth noting that we made several modifications to the definition of the objective function to make it more robust. Energy-saving optimization is essential to reduce CO2 emissions and economic costs; on the other hand, it is desirable for users to feel comfortable, yet comfort entails higher energy consumption. Thus, we integrate user preferences and energy saving into a single weighted function and into a Pareto bi-objective problem to increase user satisfaction and decrease electrical energy consumption. To support the experimentation, we constructed a simulator by training a backpropagation neural network with real data from a laboratory air conditioning system. According to the results, we conclude that NSGA-II provides better results than the state-of-the-art GA with respect to both user preferences and energy saving.
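
The snippet below sketches, with assumed numbers, the two formulations mentioned above: a single weighted objective over energy use and comfort deviation, and a bi-objective Pareto comparison based on the kind of dominance test used inside NSGA-II. The candidate set-points and their values are hypothetical, not outputs of the paper's simulator.

```python
# Minimal sketch (assumed values, not the paper's simulator): the two ways the
# abstract combines energy use and user comfort, namely a single weighted
# objective and a bi-objective Pareto comparison via a dominance test.
def weighted_objective(energy_kwh, comfort_dev_c, w_energy=0.6, w_comfort=0.4):
    """Lower is better; comfort deviation is |room temperature - preferred| in deg C."""
    return w_energy * energy_kwh + w_comfort * comfort_dev_c

def dominates(a, b):
    """True if solution a = (energy, discomfort) is no worse in both objectives
    and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = {"setpoint 22C": (3.1, 0.5),   # (energy kWh, comfort deviation deg C)
              "setpoint 24C": (2.4, 1.8),
              "setpoint 26C": (1.9, 3.4)}

best_weighted = min(candidates, key=lambda k: weighted_objective(*candidates[k]))
pareto_front = [k for k in candidates
                if not any(dominates(candidates[j], candidates[k])
                           for j in candidates if j != k)]
print("weighted-sum choice:", best_weighted)
print("Pareto-optimal set-points:", pareto_front)
```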


Author(s):  
Zhuofan Liao ◽  
Jingsheng Peng ◽  
Bing Xiong ◽  
Jiawei Huang

With the combination of Mobile Edge Computing (MEC) and next-generation cellular networks, computation requests from end devices can be offloaded promptly and accurately to edge servers deployed at Base Stations (BSs). However, due to the densified heterogeneous deployment of BSs, an end device may be covered by more than one BS, which brings new challenges for the offloading decision, namely whether and where to offload computing tasks for low latency and low energy cost. This paper formulates a multi-user-to-multi-server (MUMS) edge computing problem in ultra-dense cellular networks. The MUMS problem is divided and conquered in two phases: server selection and offloading decision. In the server selection phase, mobile users are grouped to one BS considering both physical distance and workload. After the grouping, the original problem reduces to parallel multi-user-to-one-server offloading decision subproblems. To obtain fast and near-optimal solutions to these subproblems, a distributed offloading strategy based on a binary-coded genetic algorithm is designed to produce adaptive offloading decisions. A convergence analysis of the genetic algorithm is given, and extensive simulations show that the proposed strategy significantly reduces the average latency and energy consumption of mobile devices. Compared with state-of-the-art offloading approaches, our strategy reduces the average delay by 56% and the total energy consumption by 14% in ultra-dense cellular networks.
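
The sketch below shows a binary-coded GA of the general kind described above, applied to one per-BS offloading subproblem: each bit decides whether a user's task runs locally or on the edge server, and fitness is a weighted sum of delay and device energy. All task sizes, rates, and energy constants are assumptions, and this is not the paper's algorithm.

```python
# Illustrative sketch, not the paper's algorithm: a binary-coded GA for one
# per-BS offloading subproblem after users have been grouped to a server.
# A bit of 1 offloads that user's task to the edge server, 0 runs it locally.
# All task sizes, rates and energy constants are assumed placeholders.
import random

N_USERS = 8
CYCLES = [random.uniform(0.5e9, 2e9) for _ in range(N_USERS)]   # task CPU cycles
DATA_BITS = [random.uniform(1e6, 5e6) for _ in range(N_USERS)]  # upload sizes, bits

F_LOCAL, F_EDGE = 1e9, 10e9        # CPU speeds, cycles/s (assumed)
RATE = 20e6                        # uplink rate, bit/s (assumed)
K_LOCAL, P_TX = 1e-27, 0.5         # local energy coefficient, transmit power (assumed)
W_DELAY, W_ENERGY = 0.7, 0.3       # fitness weights (assumed)

def cost(bits):
    delay = energy = 0.0
    edge_load = sum(CYCLES[i] for i, b in enumerate(bits) if b)
    for i, b in enumerate(bits):
        if b:   # offload: upload delay + share of edge execution
            delay += DATA_BITS[i] / RATE + edge_load / F_EDGE
            energy += P_TX * DATA_BITS[i] / RATE
        else:   # local execution
            delay += CYCLES[i] / F_LOCAL
            energy += K_LOCAL * F_LOCAL ** 2 * CYCLES[i]
    return W_DELAY * delay + W_ENERGY * energy

def binary_ga(pop=30, gens=80, p_mut=0.05):
    population = [[random.randint(0, 1) for _ in range(N_USERS)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        parents = population[: pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randint(1, N_USERS - 1)             # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([bit ^ (random.random() < p_mut) for bit in child])
        population = parents + children
    return min(population, key=cost)

print("offloading decision bits:", binary_ga())
```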

