An Adaptive Offloading Method for an IoT-Cloud Converged Virtual Machine System Using a Hybrid Deep Neural Network

2018 ◽  
Vol 10 (11) ◽  
pp. 3955 ◽  
Author(s):  
Yunsik Son ◽  
Junho Jeong ◽  
YangSun Lee

A virtual machine with a conventional offloading scheme transmits and receives all context information to maintain program consistency between the local environment and the cloud server environment. Most of the overhead incurred during offloading is proportional to the size of the context information transmitted over the network. The existing context synchronization structure therefore transmits context information that is not required for job execution, which increases the transmission overhead on low-performance Internet-of-Things (IoT) devices. In addition, the optimal offloading point should be determined by checking the server's CPU usage and the network quality. In this study, we propose, for a cloud-based offloading service, a context management method that uses static profiling and estimation to extract only the contexts that require synchronization, together with a CPU-load estimation method based on a hybrid deep neural network. The proposed adaptive offloading method reduces network communication overhead and determines the optimal offloading time for low-computing-power IoT devices under variable server performance. Through experiments, we verify that the proposed learning-based prediction method effectively estimates the CPU load of IoT devices and can apply offloading adaptively according to the load of the server.
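The decision logic the abstract describes can be illustrated with a minimal sketch: offload only when the (pruned) context is cheap to transmit and the server is predicted to have spare CPU capacity. All names, thresholds, and the cost model below are hypothetical, not the authors' code.

```python
# Illustrative sketch (hypothetical): an offloading decision that weighs the
# transfer cost of the extracted context against the predicted server load.

def transfer_cost(context_bytes, bandwidth_bps):
    """Seconds needed to ship the extracted context over the network."""
    return context_bytes * 8 / bandwidth_bps

def should_offload(context_bytes, bandwidth_bps, predicted_cpu_load,
                   local_exec_time, cpu_load_limit=0.8):
    """Offload only if the server is not saturated and shipping the pruned
    context is cheaper than executing the job locally."""
    if predicted_cpu_load >= cpu_load_limit:
        return False  # server predicted too busy; run locally
    return transfer_cost(context_bytes, bandwidth_bps) < local_exec_time

# Example: a 50 KB pruned context on a 1 Mbps link vs. a 2 s local job.
decision = should_offload(50_000, 1_000_000, predicted_cpu_load=0.3,
                          local_exec_time=2.0)
```

In the paper, the `predicted_cpu_load` input would come from the hybrid deep neural network rather than being supplied directly.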

Author(s):  
Mostafa H. Tawfeek ◽  
Karim El-Basyouny

Safety Performance Functions (SPFs) are regression models used to predict the expected number of collisions as a function of various traffic and geometric characteristics. One of the integral components in developing SPFs is the availability of accurate exposure factors, that is, annual average daily traffic (AADT). However, AADTs are not often available for minor roads at rural intersections. This study aims to develop a robust AADT estimation model using a deep neural network. A total of 1,350 rural four-legged, stop-controlled intersections from the Province of Alberta, Canada, were used to train the neural network. The results of the deep neural network model were compared with the traditional estimation method, which uses linear regression. The results indicated that the deep neural network model improved the estimation of minor roads’ AADT by 35% when compared with the traditional method. Furthermore, SPFs developed using linear regression resulted in models with statistically insignificant AADTs on minor roads. Conversely, the SPF developed using the neural network provided a better fit to the data with both AADTs on minor and major roads being statistically significant variables. The findings indicated that the proposed model could enhance the predictive power of the SPF and therefore improve the decision-making process since SPFs are used in all parts of the safety management process.
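The traditional baseline the study compares against, linear regression of minor-road AADT on available predictors, can be sketched on synthetic data. The 10% volume relationship, noise level, and single predictor below are invented for illustration; the paper's contribution replaces this baseline with a deep neural network.

```python
import random

# Hypothetical sketch: ordinary least-squares fit of minor-road AADT from
# major-road AADT, the kind of linear baseline the paper benchmarks against.
random.seed(7)
n = 200
major = [random.uniform(1_000, 10_000) for _ in range(n)]
# assume minor-road volume is roughly 10% of major-road volume, plus noise
minor = [0.1 * x + random.gauss(0, 50) for x in major]

mx = sum(major) / n
my = sum(minor) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(major, minor))
         / sum((x - mx) ** 2 for x in major))
intercept = my - slope * mx

pred = [slope * x + intercept for x in major]
rmse = (sum((p - y) ** 2 for p, y in zip(pred, minor)) / n) ** 0.5
```

A neural-network estimator would replace the closed-form `slope`/`intercept` fit with a learned nonlinear mapping over many traffic and geometric features, which is where the reported 35% improvement comes from.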


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Jun Guo ◽  
Shu Liu ◽  
Bin Zhang ◽  
Yongming Yan

Cloud applications provide access to a large pool of virtual machines for building high-quality applications that satisfy customers' requirements. A difficult issue is how to predict virtual machine response time, because it determines when dynamically scalable virtual machines should be adjusted. To address this issue, this paper proposes a virtual machine response time prediction method based on a genetic algorithm-back propagation (GA-BP) neural network. First, we predict component response time from historical virtual machine component usage data: the number of concurrent requests and the corresponding response times. From these predictions, we then predict the virtual machine service response time. The results of large-scale experiments show the effectiveness and feasibility of our method.
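The GA-BP combination can be sketched in miniature: a genetic algorithm searches for good initial weights, then gradient descent (the "back propagation" phase) fine-tunes them. The toy model below is a single linear neuron mapping concurrent requests to response time; the data, population sizes, and rates are all invented for illustration.

```python
import random

# Hypothetical GA-BP sketch: GA finds initial weights, gradient descent refines.
random.seed(42)
data = [(n, 2.0 * n + 5.0) for n in range(1, 21)]  # requests -> response (ms)

def mse(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# --- GA phase: evolve (w, b) pairs toward low prediction error ---
pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(30)]
for _ in range(60):
    pop.sort(key=lambda g: mse(*g))
    parents = pop[:10]                         # elitist selection
    children = []
    for _ in range(20):
        (w1, b1), (w2, b2) = random.sample(parents, 2)
        w = (w1 + w2) / 2 + random.gauss(0, 0.3)   # crossover + mutation
        b = (b1 + b2) / 2 + random.gauss(0, 0.3)
        children.append((w, b))
    pop = parents + children
w, b = min(pop, key=lambda g: mse(*g))

# --- BP phase: gradient descent from the GA's starting point ---
lr = 0.001
for _ in range(2000):
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * gw, b - lr * gb

final_error = mse(w, b)
```

The GA warm start is the point of the hybrid: plain back propagation from random weights can stall in poor regions, while the GA supplies a good basin for gradient descent to finish in.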


2021 ◽  
Vol 30 (04) ◽  
pp. 2150020
Author(s):  
Luke Holbrook ◽  
Miltiadis Alamaniotis

Cyber-attacks on millions of Internet of Things (IoT) devices are increasing, and the poor network security measures on those devices are a main source of the problem. This article studies several machine learning algorithms for their effectiveness in detecting malware on consumer IoT devices. In particular, the Support Vector Machine (SVM), Random Forest, and Deep Neural Network (DNN) algorithms are benchmarked on a set of test data and compared as tools for safeguarding IoT deployments. Test results on a set of four IoT devices showed that all three tested algorithms detect the network anomalies with high accuracy. However, the deep neural network provides the highest coefficient of determination R², and it is therefore identified as the most precise of the tested algorithms for IoT device security on the data sets examined.


2018 ◽  
Vol 2018 (13) ◽  
pp. 194-1-194-6
Author(s):  
Koichi Taguchi ◽  
Manabu Hashimoto ◽  
Kensuke Tobitani ◽  
Noriko Nagata

Author(s):  
Mohammad Khalid Pandit ◽  
Roohie Naaz Mir ◽  
Mohammad Ahsan Chishti

Background: Deep neural networks have become the state-of-the-art technology for real-world classification tasks due to their ability to learn better feature representations at each layer. However, the added accuracy associated with deeper layers comes at a large cost in computation, energy, and latency. Objective: Implementing such architectures on resource-constrained IoT devices is computationally prohibitive due to their computational and memory requirements; these constraints are particularly severe in the IoT domain. In this paper, we propose the Adaptive Deep Neural Network (ADNN), which is split across the hierarchical compute layers, i.e., edge, fog, and cloud, with each split having one or more exit locations. Methods: At every exit location, a data sample adaptively chooses to exit the network (based on a confidence criterion) or be fed into deeper layers housed across the subsequent compute layers. We design ADNN for fast and energy-efficient decision making (inference), and jointly optimize all exit points in ADNN so that the overall loss is minimized. Results: Experiments on the MNIST dataset show that 41.9% of samples exit at the edge location (correctly classified) and 49.7% of samples exit at the fog layer. Similar results are obtained on the Fashion-MNIST dataset, with only 19.4% of the samples requiring the entire set of neural network layers. With this architecture, most data samples are processed and classified locally while maintaining classification accuracy and keeping in check the communication, energy, and latency requirements of time-sensitive IoT applications. Conclusion: We investigated the approach of distributing the layers of a deep neural network across edge, fog, and cloud computing devices, wherein data samples adaptively choose exit points to classify themselves based on a confidence criterion (threshold).
The results show that the majority of the data samples are classified within the private network of the user (edge, fog) while only a few samples require the entire layers of ADNN for classification.
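The early-exit mechanism the abstract describes can be illustrated with a toy cascade: a sample tries the edge exit first, then fog, and finally cloud, stopping at the first exit whose softmax confidence clears the threshold. The two-class logits and the 0.8 threshold below are invented; this is a sketch of the idea, not the authors' model.

```python
import math

# Hypothetical early-exit inference across edge -> fog -> cloud tiers.

def softmax(logits):
    exps = [math.exp(v - max(logits)) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def classify(exit_logits, threshold=0.8):
    """exit_logits: one logit vector per exit, ordered edge -> fog -> cloud.
    Returns (predicted class, name of the tier that produced it)."""
    tiers = ["edge", "fog", "cloud"]
    for tier, logits in zip(tiers, exit_logits):
        probs = softmax(logits)
        if max(probs) >= threshold or tier == "cloud":
            return probs.index(max(probs)), tier
    # unreachable: the cloud exit always answers

# An "easy" sample: the edge exit is already confident, so inference stops
# inside the user's private network and nothing is sent to the cloud.
pred, tier = classify([[4.0, 0.0], [5.0, 0.0], [6.0, 0.0]])
```

Raising the threshold pushes more samples toward the cloud (higher accuracy, more communication); lowering it keeps more samples at the edge, which is the trade-off the reported 41.9%/49.7% exit fractions reflect.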

