Optimization of mobile device energy consumption in a fog-based mobile computing offloading mechanism

Author(s):  
Anastasia V. Daraseliya ◽  
Eduard S. Sopin

Offloading computing tasks to a fog computing system is a promising approach to reducing the response time of resource-hungry real-time mobile applications. Besides decreasing the response time, offloading mechanisms may also reduce the energy consumption of mobile devices. In this paper, we focus on analyzing the energy consumption of mobile devices that use fog computing infrastructure to increase overall system performance and improve battery life. We consider a three-layer computing architecture consisting of the mobile device itself, a fog node, and a remote cloud. Tasks are processed locally or offloaded according to a threshold-based offloading criterion. We formulate an optimization problem that minimizes energy consumption under constraints on the average response time and on the probability that the response time stays below a certain threshold. We also provide a numerical solution to the optimization problem and discuss the numerical results.
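As a worked illustration of such a formulation (the notation below is ours, not the authors': k is the offloading threshold, E(k) the mobile-device energy, and T(k) the response time), the problem can be written as a constrained minimization:

```latex
\begin{aligned}
\min_{k}\quad & E(k) \\
\text{s.t.}\quad & \mathbb{E}[T(k)] \le \bar{T}_{\max}, \\
& \Pr\{T(k) \le t_{0}\} \ge p_{\min},
\end{aligned}
```

where \bar{T}_{\max}, t_{0}, and p_{\min} are assumed service-level targets; a numerical solver then searches over the admissible thresholds k.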

Author(s):  
S. Anitha ◽  
T. Padma

With the rapid uptake of mobile devices and mobile apps in people's day-to-day activities, hardware and software tools for mobile devices are also improving rapidly to cater to the requirements of mobile users. However, the progress of resource-intensive mobile applications is still inhibited by the limited battery power, restricted memory, and scarce resources of mobile devices. By employing mobile cloud computing, mobile edge computing, and fog computing, many researchers have proposed frameworks and offloading algorithms to augment the resources of mobile devices. Existing solutions adopt offloading of resource-intensive tasks only for specific scenarios and do not support the flexible use of IoT-based smart mobile applications. Therefore, a novel neuro-fuzzy modeling framework is proposed to augment the inadequate resources of a mobile device by offloading resource-intensive tasks to external entities, and a Bat optimization algorithm is used to schedule as many tasks as possible onto the augmentation entities, thereby improving the total execution time of all tasks and minimizing the resource consumption of the mobile device. In this work, external augmentation entities such as a distant cloud, an edge cloud, and microcontroller devices provide Resource augmentation as a Service (RaaS) to mobile devices. An IoT-based smart transport mobile app implemented on the proposed framework shows a significant reduction in execution time, energy consumption, bandwidth utilization, and average delay. Performance analysis shows that the neuro-fuzzy hybrid model with Bat optimization provides a significant improvement over proximate computing and web service frameworks on the Quality of Service (QoS) parameters, namely energy consumption, execution time, bandwidth utilization, and latency. Thus, the proposed framework offers a feasible RaaS solution for resource-constrained mobile devices by exploiting edge computing.
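The abstract does not give the scheduler's details, so the following is only a minimal sketch of how a Bat-style search could assign tasks to augmentation entities; the task sizes, entity speeds, and makespan objective are all assumptions.

```python
import random

# Hypothetical inputs: task workloads (MI) and the speed (MIPS) of each
# augmentation entity (e.g., distant cloud, edge cloud, microcontroller).
tasks = [400, 250, 900, 120, 600]
entity_speed = [1000, 500, 100]

def makespan(assignment):
    """Completion time of the most loaded entity under an assignment."""
    load = [0.0] * len(entity_speed)
    for task, entity in zip(tasks, assignment):
        load[entity] += task / entity_speed[entity]
    return max(load)

def bat_schedule(n_bats=20, n_iter=200, loudness=0.9, pulse_rate=0.5):
    """Very reduced discrete Bat search: each 'bat' is one candidate assignment."""
    bats = [[random.randrange(len(entity_speed)) for _ in tasks] for _ in range(n_bats)]
    best = min(bats, key=makespan)
    for _ in range(n_iter):
        for i, bat in enumerate(bats):
            # Move toward the current best assignment (discrete "frequency" step).
            candidate = [b if random.random() < 0.5 else g for b, g in zip(bat, best)]
            # Local random walk with probability (1 - pulse_rate).
            if random.random() > pulse_rate:
                j = random.randrange(len(tasks))
                candidate[j] = random.randrange(len(entity_speed))
            # Accept improving moves with probability given by the loudness.
            if makespan(candidate) < makespan(bat) and random.random() < loudness:
                bats[i] = candidate
                if makespan(candidate) < makespan(best):
                    best = candidate
    return best, makespan(best)

if __name__ == "__main__":
    assignment, cost = bat_schedule()
    print("assignment:", assignment, "makespan:", round(cost, 3))
```

A real scheduler would also fold energy and bandwidth terms into the objective, as the framework above targets several QoS parameters at once.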


2019 ◽  
Vol 2019 ◽  
pp. 1-15
Author(s):  
Hoang-Nam Pham-Nguyen ◽  
Quang Tran-Minh

A huge number of smart devices with the capacity for computing, storage, and communication with each other has brought forth the fog computing paradigm. Fog computing is a model in which the system tries to push data processing from cloud servers to devices "near" the IoT in order to reduce latency. The execution ordering and the deployment locations of services have a significant effect on the overall response time of an application. Besides new research directions in fog computing (e.g., fog-cloud collaboration, service scalability, fog scalability, mobile fog computing, fog federation, the trade-off between energy consumption and communication efficiency, the duration of storing data locally, storage and communication security, and semantic-aware fog computing), the service deployment problem is one of the most attractive research fields in fog computing. Service deployment is a multiobjective optimization problem, and many solutions have been proposed for various targets such as response time, communication cost, and energy consumption. In this paper, we focus on the optimization problem that minimizes the overall response time of an application with awareness of network usage and server usage. We then conduct experiments on two service deployment strategies, called the cloudy and foggy strategies, and numerically analyze the overall response time, network usage, and server usage of the two strategies to demonstrate the effectiveness of our proposed foggy service deployment strategy.
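To make the cloudy-versus-foggy comparison concrete, here is a toy response-time estimate for a short service chain; the chain, the tiers, and all latency and processing figures are invented for illustration and are not the paper's measurements.

```python
# Illustrative comparison of a "cloudy" vs. a "foggy" placement for a simple
# service chain; all latency and processing figures are assumed.

SERVICES = ["ingest", "filter", "analyze"]                 # hypothetical chain
PROC_MS = {"device": 40.0, "fog": 15.0, "cloud": 5.0}      # per-service processing time
HOP_MS = {("device", "fog"): 5.0, ("fog", "cloud"): 60.0,
          ("device", "cloud"): 65.0}                       # one-way network latency

def response_time(placement):
    """Sum processing time per service plus latency of every tier change."""
    total = 0.0
    prev = "device"                                        # requests originate on the device
    for service in SERVICES:
        tier = placement[service]
        if tier != prev:
            key = (prev, tier) if (prev, tier) in HOP_MS else (tier, prev)
            total += HOP_MS[key]
        total += PROC_MS[tier]
        prev = tier
    return total

cloudy = {s: "cloud" for s in SERVICES}
foggy = {s: "fog" for s in SERVICES}

print("cloudy response time (ms):", response_time(cloudy))
print("foggy  response time (ms):", response_time(foggy))
```

With these assumed numbers the foggy placement wins because it avoids the long device-to-cloud hop; a full model would also track the network and server usage that the paper weighs against response time.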


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 323
Author(s):  
Marwa A. Abdelaal ◽  
Gamal A. Ebrahim ◽  
Wagdy R. Anis

The widespread adoption of network function virtualization (NFV) has led to providing network services through a chain of virtual network functions (VNFs). This architecture is called a service function chain (SFC), and it can be hosted on commodity servers and switches located in the cloud. Meanwhile, software-defined networking (SDN) can be utilized to manage VNFs and to steer traffic flows through the SFC. One of the most critical issues that needs to be addressed in NFV is VNF placement that optimizes physical link bandwidth consumption. Moreover, deploying SFCs enables service providers to pursue different goals, such as minimizing the overall cost and the service response time. In this paper, a novel approach to the VNF placement problem for SFCs, called virtual network functions and their replica placement (VNFRP), is introduced. It aims to achieve load balancing over the core links while considering multiple resource constraints. Hence, the VNF placement problem is first formulated as an integer linear programming (ILP) optimization problem, aiming to minimize link bandwidth consumption, energy consumption, and SFC placement cost. Then, a heuristic algorithm is proposed to find a near-optimal solution to this optimization problem. Simulation studies are conducted to evaluate the performance of the proposed approach. The simulation results show that VNFRP can significantly improve load balancing, by 80% when the number of replicas is increased. Additionally, VNFRP provides more than a 54% reduction in network energy consumption. Furthermore, it can reduce the SFC placement cost by more than 67%. Moreover, with its fast response time and rapid convergence, VNFRP can be considered a scalable solution for large networking environments.
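As a hedged illustration of what such an ILP can look like (using the open-source PuLP modeler as an assumed tool; the topology, coefficients, and constraints below are toy values, not the paper's formulation):

```python
# A toy ILP in the spirit of the placement formulation described above
# (minimize a weighted sum of bandwidth, energy, and placement cost);
# servers, VNFs, and all coefficients are invented for illustration.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

vnfs = ["fw", "nat", "ids"]                 # hypothetical SFC: firewall -> NAT -> IDS
servers = ["s1", "s2"]
bw_cost = {"s1": 3, "s2": 5}                # link bandwidth consumed if placed here
energy = {"s1": 10, "s2": 6}                # energy cost of hosting one VNF
place_cost = {"s1": 2, "s2": 4}             # deployment cost
capacity = {"s1": 2, "s2": 2}               # max VNFs per server

prob = LpProblem("vnf_placement", LpMinimize)
x = {(v, s): LpVariable(f"x_{v}_{s}", cat=LpBinary) for v in vnfs for s in servers}

# Weighted objective: bandwidth + energy + placement cost.
prob += lpSum(x[v, s] * (bw_cost[s] + energy[s] + place_cost[s])
              for v in vnfs for s in servers)

# Each VNF is placed exactly once; server capacity is respected.
for v in vnfs:
    prob += lpSum(x[v, s] for s in servers) == 1
for s in servers:
    prob += lpSum(x[v, s] for v in vnfs) <= capacity[s]

prob.solve(PULP_CBC_CMD(msg=False))
placement = {v: s for (v, s) in x if x[v, s].value() == 1}
print(placement)
```

The paper's heuristic replaces the exact solver once replicas and core-link load balancing make the exact ILP too large to solve directly.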


2019 ◽  
Vol 16 (2) ◽  
pp. 30
Author(s):  
Fakhrur Razi ◽  
Ipan Suandi ◽  
Fahmi Fahmi

The energy efficiency of mobile devices has become very important, considering that mobile device technology is developing towards smaller dimensions and higher processor speeds. Various studies have been conducted to build energy awareness into hardware, middleware, and application software. Optimizing energy consumption can be done at various layers of the mobile communication network architecture. This study focuses on examining the energy consumption of mobile devices at the transport layer, where the processor speed of the mobile devices used in this experiment is higher than the processor speed used in similar studies. The mobile device processor in this study has a speed of 1.5 GHz with 1 GB of RAM, whereas in similar studies the mobile device processors had a speed of 369 MHz with less than 0.5 GB of RAM. This study conducted an experiment on transmitting mobile data using the TCP and UDP protocols. Because video requires intensive delivery, video is the traffic under review. Energy consumption is measured as the energy per transmission and the energy per packet. To complete the analysis, the strengths and weaknesses of each transport layer protocol, in this case TCP and UDP, are examined, and network performance parameters such as delay and packet loss are also evaluated. The results show that the UDP protocol consumes less energy and has lower transmission delay than the TCP protocol; however, only about 22% of data packets are transmitted successfully. Therefore, the UDP protocol is only effective if the bit rate of the transmitted data is close to the network speed. Conversely, despite consuming more energy and incurring more delay, the TCP protocol is able to transmit nearly 96% of data packets. On the other hand, compared with mobile devices that have lower processor speeds, the mobile devices in this study consume more energy to transmit video data, but transmission delay and packet loss are reduced. Thus, mobile devices with higher processor speeds are able to use the energy consumed to improve transmission quality.
Keywords: energy consumption, processor, delay, packet loss, transport layer protocol
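For concreteness, the two energy metrics used above reduce to simple arithmetic over the measurements; the numbers in this sketch are placeholders (only the 22% and 96% delivery ratios echo the abstract).

```python
# Post-processing of the two metrics used above: energy per transmission
# and energy per (successfully delivered) packet. Values are placeholders.

def energy_metrics(total_energy_j, packets_sent, delivery_ratio):
    delivered = packets_sent * delivery_ratio
    return {
        "energy_per_transmission_J": total_energy_j,
        "energy_per_delivered_packet_mJ": 1000.0 * total_energy_j / delivered,
        "packet_loss_percent": 100.0 * (1.0 - delivery_ratio),
    }

# Hypothetical runs: UDP delivers ~22% of packets, TCP ~96% (as reported above).
print("UDP:", energy_metrics(total_energy_j=8.0, packets_sent=10000, delivery_ratio=0.22))
print("TCP:", energy_metrics(total_energy_j=11.5, packets_sent=10000, delivery_ratio=0.96))
```

Normalizing by delivered packets rather than sent packets is what makes UDP's low per-transmission energy look much less favorable once its losses are taken into account.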


Author(s):  
Robin Deegan

Humans are approaching a new and intriguing time with regard to mobile human-computer interaction. For years we have observed the processing power, memory capabilities, and battery life of mobile devices increase exponentially, while at the same time mobile devices have been converging with additional technologies such as increased connectivity, external peripherals, GPS, and location-based services. But what are the cognitive costs associated with these advancements? The software used on mobile devices is also becoming more sophisticated, demanding more from our limited mental resources. Furthermore, this complex software is being used in distracting environments such as cars, buses, trains, and noisy communal areas. These environments have themselves steadily become more complex and cognitively demanding. Increasingly complex software, installed on increasingly complex mobile devices and used in increasingly complex environments, presents mobile HCI with serious challenges. This paper presents a brief overview of five experiments before presenting a final experiment in detail. These experiments attempt to understand the relationship between cognition, distraction, usability, and performance. The research determines that some distractions affect usability but not performance, while others affect performance but not usability. The paper concludes with a reinforced argument for the development of a cognitive-load-aware system.


2021 ◽  
Author(s):  
Marzieh Ranjbar Pirbasti

Offloading heavy computations from a mobile device to cloud servers can reduce the power consumption of the mobile device and improve the response time of mobile applications. However, the gains of offloading can be significantly affected by failures of cloud servers and network links. In this thesis, we propose a fault-aware, multi-site computation offloading model capable of finding efficient allocations of tasks to resources. Our model reduces both response time and energy consumption by incorporating the effect of failures and recovery mechanisms for various offloading allocations. In addition, we create a fault-injection framework to evaluate an allocation under various failure rates and recovery mechanisms. The experiments carried out with our fault-injection framework demonstrate that our fault-aware model can determine an allocation, based on the type of failures, the failure rates, and the employed recovery mechanisms, that improves both response time and energy consumption compared to a model that ignores failures.
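One simple way failures can enter such a model (our own illustrative assumption of a retry-until-success recovery, not necessarily the thesis's formulation): if a single attempt on a site takes time t, consumes energy e, and fails independently with probability p, then

```latex
\mathbb{E}[T] \;=\; \sum_{k=1}^{\infty} (1-p)\,p^{\,k-1}\,k\,t \;=\; \frac{t}{1-p},
\qquad
\mathbb{E}[E] \;=\; \frac{e}{1-p},
```

so an allocation that looks best when p = 0 can lose to a slower but more reliable site once realistic failure rates and recovery costs are plugged in.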


Author(s):  
Osvaldo Adilson De Carvalho Junior ◽  
Sarita Mazzini Bruschi ◽  
Regina Helena Carlucci Santana ◽  
Marcos José Santana

The aim of this paper is to propose and evaluate GreenMACC (Green Metascheduler Architecture to Provide QoS in Cloud Computing), an extension of the MACC architecture (Metascheduler Architecture to provide QoS in Cloud Computing) that uses green IT techniques to provide quality of service. The paper evaluates the performance of the policies in the four stages of scheduling, focusing on energy consumption and average response time. The results presented confirm the consistency of the proposal, as it controls both energy consumption and the quality of the services requested by different users of a large-scale private cloud.


2021 ◽  
Vol 5 (2) ◽  
pp. 105
Author(s):  
Wasswa Shafik ◽  
S. Mojtaba Matinkhah ◽  
Mamman Nur Sanda ◽  
Fawad Shokoor

In recent years, the Internet of Things (IoT), which allows devices to connect to the Internet, has become a promising research area, mainly due to constantly emerging improvements in technologies and their associated challenges. As an approach to solving these challenges, fog computing has come into play, since it closely manages IoT connectivity. Fog-enabled smart cities (IoT-ESC) portray equitable energy consumption, with a 7% reduction from an 18.2% renewable energy contribution, and extended resource computation as a great advantage. Initializing IoT-enabled smart grids with FESC components such as fog nodes reduces the workload of terminal node (TN) services, that is, the sensors and actuators of the IoT setup. This paper proposes an integrated energy-efficiency computation model concerned with response time and service delay minimization in the FESC. The FESC appears to be a promising computing model for location-, time-, and delay-sensitive applications, supporting vertically isolated, delay-sensitive service requests by providing abundant, scalable, and distributed computing, storage, and network connectivity. We first review the persisting challenges in state-of-the-art models and, based on them, introduce a new model that mainly addresses energy efficiency with respect to response time and service delays in the IoT-ESC. Simulation results in iFogSim demonstrate that the proposed model minimizes service delay and reduces energy consumption during computation. The IoT-ESC decides autonomously or semi-autonomously whether computation is performed on fog nodes or transferred to the cloud.


2021 ◽  
Author(s):  
◽  
Jiaqi Wen

In recent years, the mobile gaming industry has made rapid progress. Developers are now producing numerous mobile games with increasingly immersive graphics. However, these resource-hungry applications inevitably keep pushing well beyond the hardware limits of mobile devices. These limitations cause two main challenges for mobile game players. First, the limited computational capabilities of smart devices prevent rich multimedia applications from running smoothly. Second, the minuscule touchscreens impede players from interacting with devices as smoothly as they can with PCs. This thesis aims to address these two issues. Specifically, we implement two systems, one for application acceleration via offloading and the other for an alternative interaction approach for mobile gaming. We identify the challenging issues encountered when developing the systems and describe our corresponding solutions. Regarding the first system, it is well recognized that the performance of GPUs on mobile devices is the bottleneck of rich multimedia mobile applications such as 3D games and virtual reality. Previous attempts to tackle the issue mainly migrate GPU computation to servers residing in remote datacenters. However, the costly network delay is especially undesirable for highly interactive multimedia applications, since a crisp response time is critical for user experience. In this thesis, we propose GBooster, a system that accelerates GPU-intensive mobile applications by transparently offloading GPU tasks onto neighboring multimedia devices such as smart TVs and gaming consoles. Specifically, GBooster intercepts and redirects system graphics calls by utilizing the dynamic linker hooking technique, which requires no modification of the apps or the mobile system. Besides, GBooster intelligently switches between the low-power Bluetooth and the high-bandwidth WiFi interface to reduce the energy consumption of network transmissions. We implemented GBooster on the Android system and evaluated its performance. The results demonstrate that GBooster can boost applications' frame rates by up to 85%. In terms of power consumption, GBooster achieves 70% energy savings compared with local execution. Second, we investigate the potential of built-in mobile device sensors to provide an alternative interaction approach for mobile gaming. We propose UbiTouch, a novel system that extends smartphones with virtual touchpads on desktops using built-in smartphone sensors. It senses a user's finger movement with a proximity and ambient light sensor whose raw sensory data from the underlying hardware strongly depend on the finger's location. UbiTouch maps the raw data into finger positions by utilizing Curvilinear Component Analysis and improves tracking accuracy via a particle filter. We have evaluated our system in three scenarios with different lighting conditions with five users. The results show that UbiTouch achieves centimetre-level localization accuracy and has no significant impact on battery life. We envisage that UbiTouch could support applications such as text writing and drawing.
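The interface-switching idea lends itself to a small sketch; the policy and every figure below are our own assumptions for illustration, not GBooster's actual heuristic.

```python
# Illustrative (not GBooster's actual policy): pick the radio whose estimated
# energy for shipping a frame of GPU commands is lower, falling back to Wi-Fi
# when Bluetooth cannot meet the frame deadline. All figures are assumptions.

RADIOS = {
    #            bandwidth (Mbit/s), active power (mW), wake-up/tail energy (uJ)
    "bluetooth": (2.0,   30.0,  100.0),
    "wifi":      (100.0, 300.0, 5000.0),
}

def pick_radio(payload_kbit, deadline_ms):
    best = None
    for name, (bw_mbps, power_mw, tail_uj) in RADIOS.items():
        tx_ms = payload_kbit / bw_mbps            # kbit / (Mbit/s) = ms
        if tx_ms > deadline_ms:
            continue                              # would miss the frame deadline
        energy_uj = power_mw * tx_ms + tail_uj    # mW * ms = microjoules, plus tail
        if best is None or energy_uj < best[1]:
            best = (name, energy_uj, tx_ms)
    return best

# Small command buffer: Bluetooth wins; a larger payload needs Wi-Fi to meet 16 ms.
print(pick_radio(payload_kbit=20, deadline_ms=16))
print(pick_radio(payload_kbit=1000, deadline_ms=16))
```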


2016 ◽  
Vol 9 (1) ◽  
pp. 90
Author(s):  
Sanjay P. Ahuja ◽  
Jesus Zambrano

The current proliferation of mobile systems, such as smartphones and tablets, has led to their adoption as the primary computing platforms for many users. This trend suggests that designers will continue to aim towards the convergence of functionality on a single mobile device (such as phone + mp3 player + camera + web browser + GPS + mobile apps + sensors). However, this convergence penalizes the mobile system both in computational resources such as processor speed, memory, and disk capacity, and in weight, size, ergonomics, and the component most important to users: battery life. Therefore, energy consumption and response time are major concerns when executing complex algorithms on mobile devices, because such algorithms require significant resources to solve intricate problems.
Offloading mobile processing is an excellent way to augment mobile capabilities by migrating computation to powerful infrastructures. Current cloud computing environments for performing complex and data-intensive computation remotely are likely to be an excellent solution for offloading computation and data processing from mobile devices restricted by reduced resources. This research uses cloud computing as the processing platform for computation-intensive workloads while measuring energy consumption and response times on a Samsung Galaxy S5 mobile phone running Android 4.1.
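A back-of-the-envelope version of the offloading trade-off studied here (a common textbook-style model; every device and network figure below is assumed rather than measured on the Galaxy S5):

```python
# Rough energy trade-off for offloading a workload of C instructions
# (a standard back-of-the-envelope model; all numbers below are assumed).

def offload_saves_energy(C, M, S, D, B, P_compute, P_idle, P_tx):
    """Return (saving_J, worth_it) for offloading vs. local execution.

    C: workload (instructions)      M: device speed (instr/s)
    S: cloud speed (instr/s)        D: data to transfer (bits)
    B: network bandwidth (bit/s)
    P_compute/P_idle/P_tx: device power (W) while computing / waiting / transmitting
    """
    local = (C / M) * P_compute                    # run everything on the phone
    remote = (C / S) * P_idle + (D / B) * P_tx     # wait for the cloud + ship the data
    return local - remote, local > remote

saving, worth_it = offload_saves_energy(
    C=2e9, M=1.5e9, S=15e9, D=8e6, B=10e6,
    P_compute=2.0, P_idle=0.3, P_tx=1.5)
print(f"energy saved: {saving:.2f} J, offload: {worth_it}")
```

The same structure explains why offloading pays off mainly for workloads that are computation-heavy relative to the amount of data they must move over the network.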

