Inference Acceleration Model of Branched Neural Network Based on Distributed Deployment in Fog Computing

Author(s):  
Weijin Jiang ◽  
Sijian Lv


2020 ◽
Vol 13 (3) ◽  
pp. 261-282
Author(s):  
Mohammad Khalid Pandit ◽  
Roohie Naaz Mir ◽  
Mohammad Ahsan Chishti

Purpose: The intelligence in the Internet of Things (IoT) can be embedded by analyzing the huge volumes of data it generates in an ultralow-latency environment. The computational latency incurred by a cloud-only solution can be brought down significantly by the fog computing layer, which offers a computing infrastructure that minimizes latency in service delivery and execution. For this purpose, a task scheduling policy based on reinforcement learning (RL) is developed that achieves optimal resource utilization and minimum task execution time while significantly reducing communication costs during distributed execution.

Design/methodology/approach: To realize this, the authors propose a two-level neural network (NN)-based task scheduling system, where the first-level NN (a feed-forward or convolutional neural network, FFNN/CNN) determines whether a data stream can be analyzed (executed) in the resource-constrained environment (edge/fog) or should be forwarded directly to the cloud. The second-level NN (an RL module) schedules the tasks sent to the fog layer by the level-1 NN among the available fog devices. This real-time task assignment policy minimizes the total computational latency (makespan) as well as communication costs.

Findings: Experimental results indicate that the RL technique works better than the computationally infeasible greedy approach for task scheduling, and that combining RL with a task clustering algorithm reduces communication costs significantly.

Originality/value: The proposed algorithm fundamentally solves the problem of task scheduling in real-time fog-based IoT with the best resource utilization, minimum makespan and minimum communication cost between tasks.
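The RL scheduling idea above can be sketched with a minimal tabular Q-learning loop that assigns tasks to fog devices while penalising a growing makespan. All task times, device counts and hyperparameters below are illustrative assumptions, not values from the paper:

```python
import random

# Hypothetical task execution times (seconds) and fog-device count;
# every number here is illustrative, not taken from the paper.
TASKS = [4.0, 2.0, 7.0, 1.0, 3.0, 5.0]
N_DEV = 3
ALPHA, GAMMA, EPS, EPISODES = 0.1, 0.9, 0.2, 3000

# Q[i][d]: estimated value of running task i on device d
Q = [[0.0] * N_DEV for _ in TASKS]

random.seed(0)
for _ in range(EPISODES):
    loads = [0.0] * N_DEV
    for i, t in enumerate(TASKS):
        if random.random() < EPS:           # epsilon-greedy exploration
            a = random.randrange(N_DEV)
        else:
            a = max(range(N_DEV), key=lambda d: Q[i][d])
        loads[a] += t
        reward = -loads[a]                  # penalise growing the device's finish time
        nxt = max(Q[i + 1]) if i + 1 < len(TASKS) else 0.0
        Q[i][a] += ALPHA * (reward + GAMMA * nxt - Q[i][a])

# Greedy rollout with the learned policy; makespan = latest device finish time
loads = [0.0] * N_DEV
for i, t in enumerate(TASKS):
    loads[max(range(N_DEV), key=lambda d: Q[i][d])] += t
makespan = max(loads)
```

The paper's full system additionally learns from real-time state and accounts for communication costs; this sketch only shows the reward-shaping idea of trading assignments against makespan.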


Electronics ◽  
2019 ◽  
Vol 8 (4) ◽  
pp. 455 ◽  
Author(s):  
Samuel Kofi Erskine ◽  
Khaled M. Elleithy

The main objective of a VANET (vehicular ad hoc network) is to improve driver safety and traffic efficiency. The intermittent delivery of real-time safety messages in VANETs has become an urgent concern due to DoS (denial of service) and smart and normal intrusion (SNI) attacks. The intermittent communication of a VANET generates a huge amount of data, which requires substantial storage and intelligence infrastructure. Fog computing (FC) plays an important role in meeting these storage, computation, and communication needs. In this research, fog computing is integrated with hybrid optimization algorithms (OAs), including the cuckoo search algorithm (CSA), the firefly algorithm (FA), the firefly neural network, and key distribution establishment (KDE), to authenticate both the network level and the node level against all attacks for trustworthiness in VANET. The proposed scheme is termed "Secure Intelligent Vehicular Network using fog computing" (SIVNFC). A feedforward back-propagation neural network (FFBP-NN), also termed the firefly neural network, is used as a classifier to distinguish attacking vehicles from genuine vehicles. The SIVNFC scheme is compared with the CSA, the FA, and the firefly neural network to evaluate quality of service (QoS) parameters such as jitter and throughput.
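The FFBP-NN classifier role described above can be sketched as a tiny feedforward network trained by plain backpropagation on synthetic per-vehicle features. The features, labels and network size are illustrative assumptions, not the SIVNFC data or architecture:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
# Hypothetical per-vehicle features (message_rate, signal_deviation); label 1 = attacker.
DATA = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.85, 0.7), 1),
        ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.15, 0.25), 0)]

H, LR = 3, 0.5                     # hidden units, learning rate (illustrative)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x1, x2):
    h = [sigmoid(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(H)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, o

for _ in range(3000):              # backpropagation epochs
    for (x1, x2), y in DATA:
        h, o = forward(x1, x2)
        d_o = (o - y) * o * (1 - o)                   # output-layer delta
        for j in range(H):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])     # hidden delta, pre-update w2
            w2[j] -= LR * d_o * h[j]
            w1[j][0] -= LR * d_h * x1
            w1[j][1] -= LR * d_h * x2
            b1[j] -= LR * d_h
        b2 -= LR * d_o

attack_score = forward(0.9, 0.85)[1]    # unseen attacker-like vehicle
genuine_score = forward(0.12, 0.18)[1]  # unseen genuine-like vehicle
```

In the paper the firefly algorithm tunes this kind of network and KDE handles key material; the sketch only shows the classifier half of the pipeline.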


Author(s):  
Shivi Sharma ◽  
Hemraj Saini

With the fast development of cloud computing methods, the number of users has grown exponentially. It is difficult for traditional data centres to perform large numbers of jobs in real time because of inadequate resource bandwidth. Therefore, fog computing is recommended to support and provide fast cloud services. It is not a substitute for, but a powerful complement to, cloud computing. Reducing energy consumption through the notion of fog computing has been a challenge for researchers, industry and the community. Various industries, including finance and health care, require a resource-rich platform for processing large amounts of data with cloud computing across a fog architecture. The energy consumption of fog servers depends on the techniques used to allocate services (user requests). Fog computing facilitates processing at the edge with the ability to interact with the cloud. This article proposes energy-aware scheduling using artificial neural network (ANN) and modified multi-objective job scheduling (MMJS) techniques. The emphasis of the work is on reducing the energy consumption rate with fewer service level agreement (SLA) violations in fog computing data centres. The results show a 3.9% reduction in SLA violations when the multi-objective function with ANN is applied.
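The multi-objective trade-off between energy and SLA risk can be sketched as a greedy placement that scores each server by a weighted sum of the two objectives. Server names, power figures, weights and the scoring function are all illustrative assumptions, not the MMJS formulation from the article:

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    watts_per_unit: float   # marginal energy per unit of load (illustrative)
    capacity: float
    load: float = field(default=0.0)

def placement_score(s, job, w_energy=0.6, w_sla=0.4):
    # Weighted sum of two objectives: energy cost of running the job here,
    # and an SLA-risk proxy that grows with post-placement utilisation.
    energy = s.watts_per_unit * job
    util = (s.load + job) / s.capacity
    return w_energy * energy + w_sla * util * 10.0

servers = [Server("fog-1", 1.2, 10.0), Server("fog-2", 0.8, 6.0),
           Server("cloud", 2.0, 50.0)]
for job in [3.0, 2.0, 4.0, 1.0]:           # incoming user requests (load units)
    feasible = [s for s in servers if s.load + job <= s.capacity]
    best = min(feasible, key=lambda s: placement_score(s, job))
    best.load += job
```

In the article an ANN predicts these cost terms from workload features rather than using fixed coefficients; the sketch only shows the weighted-objective placement step.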


Energies ◽  
2019 ◽  
Vol 12 (7) ◽  
pp. 1217 ◽  
Author(s):  
İsmail ÇAVDAR ◽  
Vahid FARYAD

Demand-side energy management technology is a key process of the smart grid that helps achieve more efficient use of generation assets by reducing users' energy demand during peak loads. In the context of a smart grid and smart metering, this paper proposes a hybrid model of energy disaggregation through deep feature learning for non-intrusive load monitoring (NILM) to classify home appliances based on the information from main meters. In addition, a supervised deep neural model of energy disaggregation with high accuracy is introduced, giving awareness to end users and generating detailed feedback from the demand side without the need for expensive smart outlet sensors. A new functional API model of deep learning (DL) for energy disaggregation was designed by combining a one-dimensional convolutional neural network and a recurrent neural network (1D CNN-RNN). The proposed model was trained on Google Colab's Tesla graphics processing unit (GPU) using Keras with the TensorFlow backend, on a residential energy disaggregation dataset of real households. Three disaggregation methods were compared: the convolutional neural network, the 1D CNN-RNN, and long short-term memory. The results showed that energy can be disaggregated from the meter readings very accurately using the proposed 1D CNN-RNN model. Finally, as a work in progress, DL on the edge for fog computing NILM was introduced on a low-cost embedded board, using a state-of-the-art inference library called uTensor that can support any Mbed-enabled board without needing a DL web-service API or internet connectivity.
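The 1D CNN-RNN dataflow can be illustrated with a dependency-free forward pass: a valid 1-D convolution extracts step features from the aggregate mains signal, and a single-unit recurrent pass carries state across time. The signal values, kernel and weights are illustrative assumptions, not the paper's Keras model:

```python
import math

def conv1d(signal, kernel):
    # Valid 1-D convolution (cross-correlation, as in DL libraries)
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def simple_rnn(seq, w_in=0.5, w_rec=0.3):
    # Single-unit tanh RNN over the convolutional feature sequence
    h, out = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
        out.append(h)
    return out

# Hypothetical aggregate mains readings (kW); kernel weights are illustrative.
mains = [0.2, 0.2, 1.5, 1.6, 1.5, 0.3, 0.2, 2.1, 2.0, 0.2]
kernel = [-1.0, 0.0, 1.0]   # edge detector: responds to appliance on/off steps
features = relu(conv1d(mains, kernel))
states = simple_rnn(features)
```

The real model stacks many learned filters and recurrent units in the Keras functional API; this only shows why the CNN front end shortens the sequence and the RNN keeps temporal context.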


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Chao Wang ◽  
Bailing Wang ◽  
Hongri Liu ◽  
Haikuo Qu

As the Industrial Internet of Things (IIoT) develops rapidly, cloud computing and fog computing have become effective measures to solve problems such as limited computing resources and increased network latency. Industrial Control Systems (ICS) play a key role in the development of the IIoT, and their security affects the whole IIoT. ICS span many domains, such as water supply systems and electric utilities, which are closely related to people's lives. In recent years, ICS have been connected to the Internet and exposed to cyberspace instead of being isolated from the outside, and the risk of being attacked has increased as a result. To protect these assets, intrusion detection systems (IDS) have drawn much attention. Compared with signature-based techniques, anomaly detection, another kind of IDS, provides the ability to detect unknown attacks. In this paper, an anomaly detection method based on a composite autoencoder model that learns the normal pattern is proposed. Unlike common autoencoder neural networks that predict or reconstruct data separately, our model performs prediction and reconstruction on the input data at the same time, which overcomes the shortcomings of using either one alone. From the error produced by the model, a change ratio is derived to locate the most suspicious devices that may be under attack. Finally, we verify the performance of our method through experiments on the SWaT dataset. The results show that the proposed method achieves improved performance, with 88.5% recall and an 87.0% F1-score.
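The change-ratio localization step can be sketched as comparing each device's model error in a suspect window against its error under normal operation. Device names and error values are illustrative assumptions, not figures from the SWaT experiments:

```python
# Per-device mean model error (reconstruction + prediction combined) under
# normal operation vs. a suspected-attack window; all values hypothetical.
EPS = 1e-6   # guards against division by zero for near-perfect baselines
baseline_err = {"pump": 0.020, "valve": 0.050, "level_sensor": 0.010}
window_err = {"pump": 0.030, "valve": 0.400, "level_sensor": 0.012}

# Change ratio: how much each device's error grew relative to its normal level.
change_ratio = {d: window_err[d] / (baseline_err[d] + EPS) for d in baseline_err}

# The device whose error grew the most is flagged as the most suspicious.
suspect = max(change_ratio, key=change_ratio.get)
```

Normalising by the per-device baseline matters: the valve's raw error (0.4) and the pump's (0.03) are not comparable directly, but their ratios to normal behaviour are.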

