Energy Saving Schemes for Scalable Mobile Computing Networks

2021 ◽  
Author(s):  
Ali Alnoman

With the growing popularity of smart applications that contain computing-intensive tasks, providing radio and computing resources with high quality is becoming increasingly challenging. Moreover, supporting network scalability is crucial to accommodate the massive numbers of connected devices. In this thesis, we present effective energy saving strategies that consider the utilization of network elements such as base stations and virtual machines, and implement on/off mechanisms taking into account the quality of service (QoS) required by mobile users. We also investigate the performance of a NOMA-based resource allocation scheme in the context of the Internet of Things, aiming to improve network scalability and reduce the energy consumption of mobile users. The system model is mainly built upon the M/M/k queueing system that has been widely used in the relevant literature. First, the energy saving mechanism is formulated as a 0-1 knapsack problem where the weight and value of each small base station are determined by the utilization and the proportion of computing tasks at that base station, respectively. The problem is then solved using a dynamic programming approach, which shows significant energy savings while maintaining the cloud response time at desired levels. Afterwards, the energy saving mechanism is applied to edge computing to reduce the number of under-utilized virtual machines in edge devices. Herein, the square-root staffing rule and the Halfin-Whitt function are used to determine the minimum number of virtual machines required to keep the queueing probability below a threshold value. At the user level, energy consumption can be reduced by maximizing the provided data rate, which shortens task completion time and hence lowers transmission energy. Herein, a NOMA-based scheme is introduced, particularly the sparse code multiple access (SCMA) technique, which allows subcarriers to be shared by multiple users. Not only does SCMA help provide higher data rates, but it also increases the number of accommodated users. In this context, power optimization and codebook allocation problems are formulated and solved using water-filling and heuristic approaches, respectively. Results show that SCMA can significantly improve data rate provision and accommodate more mobile users with improved user satisfaction.
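The 0-1 knapsack formulation described above can be sketched in a few lines of Python. This is a minimal illustration rather than the thesis code: the weights (per-station utilizations), values (proportions of computing tasks), and the capacity budget below are hypothetical, and the standard dynamic-programming recurrence is used to decide which small base stations stay active.

```python
# Minimal sketch of the 0-1 knapsack formulation for base-station selection.
# Weights = per-station utilization, values = proportion of computing tasks.
# All numbers and the capacity budget are hypothetical illustrations.

def knapsack_select(utilizations, task_shares, capacity):
    """Return (best_value, chosen_indices) via standard 0-1 knapsack DP.

    Utilizations and the capacity budget are scaled to integers (e.g., percent)."""
    n = len(utilizations)
    dp = [0.0] * (capacity + 1)           # dp[c] = best task share within budget c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        w, v = utilizations[i], task_shares[i]
        for c in range(capacity, w - 1, -1):
            if dp[c - w] + v > dp[c]:
                dp[c] = dp[c - w] + v
                keep[i][c] = True
    # Backtrack to recover which small base stations stay active
    chosen, c = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= utilizations[i]
    return dp[capacity], sorted(chosen)

# Hypothetical example: five small base stations, utilization in percent,
# task share as a fraction of offloaded computing tasks, budget of 100%.
value, active = knapsack_select([35, 20, 45, 30, 25],
                                [0.30, 0.10, 0.35, 0.15, 0.10],
                                capacity=100)
print(active, value)   # stations kept on and the task share they cover
```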


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Hongli Zhang ◽  
Panpan Li ◽  
Zhigang Zhou

The serious issue of energy consumption in high performance computing systems has attracted much attention, and performance and energy saving have become important measures of a computing system. In a cloud computing environment, the system typically allocates various resources (such as CPU, memory, and storage) to multiple virtual machines (VMs) for executing tasks. The allocation of resources to running VMs therefore has a significant influence on both system performance and energy consumption. For different processor utilizations assigned to a VM, there is a tradeoff between energy consumption and task completion time when a given task is executed. Moreover, hardware failure, software failure, and restoration characteristics also have a clear influence on overall performance and energy. In this paper, a correlated model is built to analyze both performance and energy in the VM execution environment under a reliability restriction, and an optimization model is presented to derive the most effective processor utilization for the VM. The tradeoff between energy saving and task completion time is then studied and balanced when the VMs execute given tasks. Numerical examples illustrate the performance-energy correlated model and evaluate the expected task completion time and consumed energy.
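As a rough illustration of the tradeoff described above, the sketch below evaluates a hypothetical completion-time and energy model over candidate processor utilizations for one VM and picks the utilization minimizing a weighted cost. The model forms (speed linear in utilization, idle-plus-dynamic power) and all parameter values are assumptions for illustration, not the paper's correlated model.

```python
# Hypothetical performance-energy tradeoff for one VM: completion time falls
# as the assigned processor utilization u grows, while power (and often
# total energy) rises.  Grid search over u for a weighted objective.

def completion_time(u, workload=1.0e9, speed_per_util=1.0e8):
    """Seconds to finish `workload` cycles when the VM gets utilization u."""
    return workload / (speed_per_util * u)

def energy(u, t, p_idle=70.0, p_dyn=130.0, alpha=2.0):
    """Energy in joules: idle power plus a convex dynamic term, times time."""
    return (p_idle + p_dyn * u ** alpha) * t

def best_utilization(weight_time=0.5, weight_energy=0.5, steps=99):
    best = None
    for k in range(1, steps + 1):
        u = k / (steps + 1)                 # candidate utilization in (0, 1)
        t = completion_time(u)
        e = energy(u, t)
        cost = weight_time * t + weight_energy * e
        if best is None or cost < best[0]:
            best = (cost, u, t, e)
    return best

cost, u, t, e = best_utilization()
print(f"u*={u:.2f}, time={t:.1f}s, energy={e:.0f}J")
```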


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 344
Author(s):  
Alejandro Humberto García Ruiz ◽  
Salvador Ibarra Martínez ◽  
José Antonio Castán Rocha ◽  
Jesús David Terán Villanueva ◽  
Julio Laria Menchaca ◽  
...  

Electricity is one of the most important resources for the growth and sustainability of the population. This paper assesses the energy consumption and user satisfaction of a simulated air conditioning system controlled with two different optimization algorithms: a genetic algorithm (GA), implemented from the state of the art, and a non-dominated sorting genetic algorithm II (NSGA II) proposed in this paper; both algorithms control an air conditioning system considering user preferences. It is worth noting that we made several modifications to the objective function’s definition to make it more robust. Energy-saving optimization is essential to reduce CO2 emissions and economic costs; on the other hand, it is desirable for the user to feel comfortable, yet comfort entails higher energy consumption. Thus, we integrate user preferences and energy saving into a single weighted function and into a Pareto bi-objective problem to increase user satisfaction and decrease electrical energy consumption. To assess the experimentation, we constructed a simulator by training a backpropagation neural network with real data from a laboratory’s air conditioning system. According to the results, we conclude that NSGA II provides better results than the state-of-the-art GA regarding user preferences and energy saving.
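The single weighted function and the Pareto bi-objective formulation mentioned above can be contrasted with a short sketch. The comfort and energy models below are toy placeholders (deviation from a preferred temperature, and a simple energy proxy for the setpoint), not the paper's trained neural-network simulator.

```python
# Sketch: comparing a weighted single-objective score with Pareto dominance
# for the comfort-versus-energy problem.  Both objective models are toy
# placeholders standing in for the paper's trained simulator.

def comfort_penalty(setpoint, preferred=24.0):
    return abs(setpoint - preferred)          # degrees away from preference

def energy_proxy(setpoint, outdoor=35.0):
    return max(0.0, outdoor - setpoint)       # more cooling -> more energy

def weighted_score(setpoint, w_comfort=0.5, w_energy=0.5):
    return w_comfort * comfort_penalty(setpoint) + w_energy * energy_proxy(setpoint)

def dominates(a, b):
    """True if solution a is no worse than b in both objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = [22.0, 23.0, 24.0, 25.0, 26.0, 27.0]
objs = {s: (comfort_penalty(s), energy_proxy(s)) for s in candidates}

best_weighted = min(candidates, key=weighted_score)
pareto_front = [s for s in candidates
                if not any(dominates(objs[o], objs[s]) for o in candidates if o != s)]

print("weighted choice:", best_weighted)
print("Pareto-optimal setpoints:", pareto_front)
```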


2011 ◽  
Vol 3 (2) ◽  
pp. 29-43 ◽  
Author(s):  
Ming-Jeng Yang ◽  
Chin-Lin Kuo ◽  
Yao-Ming Yeh

Virtualization and partitioning are the means by which multiple application instances can share and run on multiple virtual machines supported by a platform. In a Green Cloud environment, the goal is to consolidate multiple applications onto virtual machines hosted on fewer servers, thereby reducing cost and complexity, increasing agility, and lowering power and cooling costs. To make a Cloud center greener, it is beneficial to limit the number of active servers to minimize energy consumption. This paper presents a precise model that formulates the right-sizing and energy-saving mechanism, which not only minimizes server energy consumption but also maintains service quality through the Mt/M/Vt strategy of queuing theory. The authors map the complicated formula of the energy-saving mechanism to an approximation equation and design fast decidable algorithms that calculate the right number of virtual machines in constant time for power management systems.
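A minimal sketch of the right-sizing idea, using the textbook M/M/c (Erlang C) waiting probability as a stand-in for the paper's exact Mt/M/Vt formulation: increase the number of active virtual machines until the probability that a request has to queue falls below a threshold. The arrival rate, service rate, and threshold are hypothetical values.

```python
import math

def erlang_c(c, offered_load):
    """P(wait) in an M/M/c queue with offered load a = lambda/mu (requires c > a)."""
    a = offered_load
    if c <= a:
        return 1.0                                  # unstable regime: everyone waits
    top = (a ** c / math.factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / math.factorial(k) for k in range(c)) + top
    return top / bottom

def min_vms(arrival_rate, service_rate, wait_threshold=0.1):
    """Smallest number of VMs keeping the queueing probability below the threshold."""
    a = arrival_rate / service_rate
    c = max(1, math.ceil(a))
    while erlang_c(c, a) > wait_threshold:
        c += 1
    return c

# Hypothetical traffic: 120 requests/s, each VM serves 10 requests/s on average.
print(min_vms(arrival_rate=120.0, service_rate=10.0, wait_threshold=0.05))
```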


2021 ◽  
Vol 2083 (3) ◽  
pp. 032026
Author(s):  
Yuxuan Wang

Abstract As part of China’s new infrastructure, 5G has received national and social attention and is driving rapid economic growth. However, the high energy consumption caused by the massive deployment of 5G base stations cannot be ignored: total annual power consumption is expected to reach 243 billion kWh once 5G base stations are fully built out. In tidal-traffic scenarios, some 5G base stations remain fully powered while idle, which causes great power waste. In this work, historical base-station traffic data are used to train an LSTM model that predicts future traffic; when the predicted traffic falls below a threshold, the base station is switched off to avoid unnecessary power waste. The LSTM predictions fit the original data well. By implementing the power saving strategy, the energy consumption of the base station is reduced by 18.97%, and a single station can save 1174 kWh of electricity per year, showing a remarkable energy-saving effect.
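A compact sketch of the predict-then-shut-down idea, assuming PyTorch; the network size, the normalized traffic threshold, and the sliding-window length are illustrative choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    """Predict the next (normalized) traffic value from a window of history."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # prediction from the last time step

def should_sleep(model, history, threshold=0.2):
    """Switch the base station off when predicted traffic falls below the threshold."""
    with torch.no_grad():
        window = torch.tensor(history, dtype=torch.float32).view(1, -1, 1)
        predicted = model(window).item()
    return predicted < threshold

# Illustrative training loop on a sliding-window dataset (inputs X, targets y).
model = TrafficLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
X = torch.rand(256, 24, 1)                # 256 windows of 24 hourly samples (dummy data)
y = torch.rand(256, 1)
for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(should_sleep(model, history=[0.05] * 24))
```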


2021 ◽  
pp. 1-12
Author(s):  
Lv Feng

In recent years, researchers have begun to introduce dynamic elements into wireless sensor networks. With the introduction of mobile sink nodes, the phenomena of “hot nodes” and “energy holes” can be effectively avoided, improving network connectivity and flexibility. It is therefore imperative to design energy-saving algorithms for networks with mobile sink nodes. In this paper, a multi-hop data forwarding algorithm is proposed for solar-powered wireless sensor networks. The algorithm partitions the monitoring area of the network and the communication area of each node; a sensor node then selects its next-hop node from the appropriate region, forming a path from the data source to the base station. At the same time, to reduce energy consumption and delay in the network, a multi-objective programming model for selecting the next-hop forwarding node is established, and the reasonable sizes of the static and dynamic regions are derived by mathematical analysis. Finally, the paper evaluates the network’s lifetime, energy consumption, and transmission time, and compares a network with a static sink against one using only a mobile sink.
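As a rough sketch of the next-hop selection step in the multi-objective setting described above (not the paper's exact programming model), the snippet below scores candidate neighbours inside a forwarding region by a weighted sum of a simple transmission-energy term and a remaining-distance delay proxy; the radio model, weights, and region test are assumptions.

```python
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tx_energy(d, e_elec=50e-9, e_amp=100e-12, bits=4000):
    """First-order radio model (common in WSN papers): energy to send one packet."""
    return bits * (e_elec + e_amp * d ** 2)

def choose_next_hop(node, sink, neighbours, comm_range=80.0,
                    w_energy=0.5, w_delay=0.5):
    """Pick the neighbour inside range that best trades energy against progress."""
    best, best_cost = None, float("inf")
    for nb in neighbours:
        d = distance(node, nb)
        if d > comm_range:
            continue                          # outside the communication region
        progress = distance(node, sink) - distance(nb, sink)
        if progress <= 0:
            continue                          # only forward towards the sink
        delay_proxy = distance(nb, sink)      # remaining distance as a delay proxy
        cost = w_energy * tx_energy(d) + w_delay * delay_proxy * 1e-6
        if cost < best_cost:
            best, best_cost = nb, cost
    return best

# Hypothetical layout: sensor at (0, 0), sink at (200, 0), three candidate neighbours.
print(choose_next_hop((0, 0), (200, 0), [(60, 10), (75, -30), (40, 5)]))
```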


2013 ◽  
Vol 411-414 ◽  
pp. 634-637
Author(s):  
Pei Pei Jiang ◽  
Cun Qian Yu ◽  
Yu Huai Peng

In recent years, with the rapid expansion of network scale and application types, cloud computing and virtualization technology have been widely used in data centers, providing fast, flexible and convenient services. However, energy consumption has increased dramatically and has become a widespread concern around the world. In this paper, we study energy saving in optical data center networks. First, we summarize the traditional energy-saving methods and show that the predominant energy-consuming resources are the servers installed in the data centers. Then we present server virtualization technologies based on Virtual Machines (VMs), which have been widely used to reduce server energy consumption. Results show that server consolidation based on VM migration can efficiently reduce overall energy consumption compared with traditional energy-saving approaches, by reducing the energy consumption of the entire network infrastructure in data center networks. For future work, we will study server consolidation based on VM migration in a real environment and address QoS requirements and access latency.
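A minimal sketch of server consolidation by VM placement, using a first-fit-decreasing bin-packing heuristic as a stand-in for the migration strategies surveyed above; the per-VM CPU demands, server capacity, and power figures are hypothetical.

```python
# First-fit-decreasing consolidation sketch: pack VM CPU demands onto as few
# servers as possible, then estimate the energy saved by powering off the rest.
# All demands, capacities, and power figures are illustrative assumptions.

def consolidate(vm_demands, server_capacity=1.0):
    """Return a list of servers, each a list of the VM demands placed on it."""
    servers = []
    for demand in sorted(vm_demands, reverse=True):
        for srv in servers:
            if sum(srv) + demand <= server_capacity:
                srv.append(demand)
                break
        else:
            servers.append([demand])          # open a new (powered-on) server
    return servers

vms = [0.6, 0.3, 0.5, 0.2, 0.4, 0.7, 0.1, 0.2]
placement = consolidate(vms)
servers_before = len(vms)                     # worst case: one VM per server
servers_after = len(placement)
idle_power_w = 150.0                          # assumed idle draw per server
print(f"{servers_after} servers instead of {servers_before}; "
      f"roughly {(servers_before - servers_after) * idle_power_w:.0f} W of idle power avoided")
```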


2015 ◽  
Vol 8 (1) ◽  
pp. 206-210 ◽  
Author(s):  
Yu Junyang ◽  
Hu Zhigang ◽  
Han Yuanyuan

The energy consumption of cloud computing has attracted increasing attention from scholars, and research on Hadoop as a cloud platform and its energy consumption has also received considerable attention. This paper presents a method to measure the energy consumption of jobs that run on Hadoop, and the method is used to measure the effectiveness of executing periodic tasks on the Hadoop platform. Combining this with the current mainstream energy estimation formula for further analysis, the paper concludes how the energy consumption of Hadoop can be reduced by adjusting the split size or using an appropriate number of workers (servers). Finally, experiments show the effectiveness of these methods as energy-saving strategies and verify the feasibility of the measurement method for periodic tasks.
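A back-of-the-envelope sketch of the kind of estimate described above, assuming a simple "waves of map tasks times per-worker power" model; the split size, per-task time, slot count, and power figures are invented for illustration and are not the paper's measured values.

```python
import math

def estimate_job_energy(input_gb, split_mb, workers, slots_per_worker=2,
                        seconds_per_split=30.0, power_per_worker_w=200.0):
    """Rough Hadoop map-phase energy model: tasks run in waves across workers."""
    num_tasks = math.ceil(input_gb * 1024 / split_mb)       # one map task per split
    parallel_slots = workers * slots_per_worker
    waves = math.ceil(num_tasks / parallel_slots)
    job_time_s = waves * seconds_per_split
    energy_j = workers * power_per_worker_w * job_time_s
    return num_tasks, job_time_s, energy_j

# Compare two split sizes for a 100 GB periodic job on 10 workers.
for split in (64, 256):
    tasks, t, e = estimate_job_energy(input_gb=100, split_mb=split, workers=10)
    print(f"split={split} MB -> {tasks} tasks, {t:.0f} s, {e/3.6e6:.2f} kWh")
```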

