Exploring compression and parallelization techniques for distribution of deep neural networks over Edge–Fog continuum – a review

2020 ◽  
Vol 13 (3) ◽  
pp. 331-364
Author(s):  
Azra Nazir ◽  
Roohie Naaz Mir ◽  
Shaima Qureshi

Purpose The trend of "Deep Learning for the Internet of Things (IoT)" has gained fresh momentum, with numerous upcoming applications employing these models as their processing engine and the Cloud as their resource giant. But this picture leads to underutilization of the ever-growing pool of IoT devices, which had already passed the 15 billion mark in 2015. It is therefore high time to explore a different approach to this issue, keeping in view the characteristics and needs of the two fields. Processing at the Edge can boost applications with real-time deadlines while also strengthening security. Design/methodology/approach This review paper contributes to three cardinal directions of research in the field of DL for IoT. The first covers the categories of IoT devices and how the Fog can help overcome the underutilization of the millions of devices that form the "things" of the IoT. The second addresses the immense computational requirements of DL models by surveying specific compression techniques; an appropriate combination of these techniques, including regularization, quantization and pruning, can build an effective compression pipeline for deploying DL models in IoT use cases. The third direction incorporates both views and introduces a novel parallelization approach for a distributed-systems view of DL for IoT. Findings DL models are growing deeper with every passing year, and well-coordinated distributed execution of such models over the Fog promises much for the IoT application realm. A vertically partitioned, compressed deep model can balance the trade-off between size, accuracy, communication overhead, bandwidth utilization and latency, but at the expense of a considerable additional memory footprint. To reduce this memory budget, we propose to exploit Hashed Nets as potentially favorable candidates for distributed frameworks; however, the critical point between accuracy and size for such models needs further investigation. Originality/value To the best of our knowledge, no study has explored the inherent parallelism in deep neural network architectures for their efficient distribution over the Edge-Fog continuum. Besides covering techniques and frameworks that have tried to bring inference to the Edge, the review identifies significant open issues and possible future directions for adopting deep models as processing engines for real-time IoT. The study is directed at both researchers and industrialists seeking to move applications to the Edge for a better user experience.
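
To make the Hashed Nets idea concrete, the following is a minimal sketch, assuming PyTorch, of a HashedNets-style layer in which a fixed hash maps every virtual weight position into a small shared bucket array; the class name HashedLinear and the bucket count are illustrative, not taken from the paper.

```python
# Minimal HashedNets-style layer sketch (illustrative, not the paper's code):
# every virtual weight is mapped by a fixed random hash to one of n_buckets shared
# values, so the stored parameter count is n_buckets regardless of layer size.
import torch
import torch.nn as nn

class HashedLinear(nn.Module):
    def __init__(self, in_features, out_features, n_buckets, seed=0):
        super().__init__()
        self.buckets = nn.Parameter(torch.randn(n_buckets) * 0.01)
        g = torch.Generator().manual_seed(seed)
        # Fixed hash: each (out, in) weight position points into the bucket array.
        self.register_buffer(
            "idx", torch.randint(0, n_buckets, (out_features, in_features), generator=g)
        )
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        w = self.buckets[self.idx]        # virtual full weight matrix, shared storage
        return x @ w.t() + self.bias

layer = HashedLinear(in_features=512, out_features=256, n_buckets=4096)
print(sum(p.numel() for p in layer.parameters()))   # ~4k parameters instead of ~131k
```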

2017 ◽  
Vol 117 (9) ◽  
pp. 1890-1905 ◽  
Author(s):  
Yingfeng Zhang ◽  
Lin Zhao ◽  
Cheng Qian

Purpose The huge demand for fresh goods has stimulated much research on the perishable food supply chain. The characteristics of perishable food and cross-regional transportation bring many challenges to the operation models of the perishable food supply chain. The purpose of this paper is to address these challenges based on real-time data acquired by Internet of Things (IoT) devices. Design/methodology/approach IoT and the modeling of the Supply Hub in Industrial Parks were adopted in the perishable food supply chain. Findings A conceptual model was established for an IoT-enabled perishable food supply chain with two-echelon supply hubs. A case study demonstrates that supply chain performance improves when the proposed model is implemented. Originality/value In the proposed model, the supply hubs, which act as the dominators of the supply chain, can respond to real-time information captured from the operation processes of an IoT-enabled supply chain and thereby provide public warehousing and logistics services.
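
As a purely hypothetical illustration of how a supply hub might act on real-time IoT readings from perishable shipments, the Python sketch below encodes two simple hub-level rules; the thresholds, field names and actions are assumptions, not drawn from the paper's model.

```python
# Hypothetical hub-level reaction to one real-time IoT reading from a perishable shipment.
from dataclasses import dataclass

@dataclass
class Reading:
    shipment_id: str
    temperature_c: float            # reported by an in-transit IoT sensor
    remaining_shelf_life_h: float

def hub_action(reading: Reading) -> str:
    """Decide an illustrative supply-hub action from one real-time reading."""
    if reading.temperature_c > 8.0:             # cold chain appears broken
        return "reroute to nearest hub for inspection"
    if reading.remaining_shelf_life_h < 24:     # product close to expiry
        return "prioritise delivery and trigger replenishment order"
    return "continue planned route"

print(hub_action(Reading("SH-042", temperature_c=9.3, remaining_shelf_life_h=70.0)))
```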


2016 ◽  
Vol 24 (4) ◽  
pp. 298-325 ◽  
Author(s):  
Abdelaziz Amara Korba ◽  
Mehdi Nafaa ◽  
Salim Ghanemi

Purpose Wireless multi-hop ad hoc networks are becoming very attractive and are widely deployed in many kinds of communication and networking applications. However, distributed and collaborative routing in such networks makes them vulnerable to various security attacks. This paper aims to design and implement a new efficient intrusion detection and prevention framework, called EIDPF, a host-based framework suited to the characteristics of mobile ad hoc networks such as high node mobility, resource constraints and rapid topology change. EIDPF aims to protect an AODV-based network against routing attacks that could target such a network. Design/methodology/approach The detection and prevention framework is composed of three complementary modules: a specification-based intrusion detection system to detect attacks violating the protocol specification, a load balancer to prevent fast-forwarding attacks such as wormhole and rushing attacks, and an adaptive response mechanism to isolate malicious nodes from the network. Findings A key advantage of the proposed framework is its capacity to efficiently avoid fast-forwarding attacks and its real-time detection of both known and unknown attacks that violate the specification. Simulation results show that EIDPF exhibits a high detection rate, a low false-positive rate and no extra communication overhead compared to other protection mechanisms. Originality/value It is a new intrusion detection and prevention framework to protect ad hoc networks against routing attacks. A key strength of the proposed framework is its ability to guarantee real-time detection of known and unknown attacks that violate the protocol specification, while avoiding wormhole and rushing attacks through load-balancing route discovery.
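
The specification-based module can be pictured with the illustrative Python sketch below, which flags AODV route replies whose sequence number or hop count violates simple protocol rules; the rules, class and function names are assumptions for illustration and are not the EIDPF implementation.

```python
# Illustrative specification-based check on AODV route replies (not EIDPF code):
# a reply carrying a destination sequence number lower than the one requested,
# or a hop count that shrinks implausibly, violates the protocol specification
# and would be handed to the response module for node isolation.
from dataclasses import dataclass

@dataclass
class RouteReply:
    origin: str
    dest_seq_no: int
    hop_count: int

def violates_spec(reply: RouteReply, requested_seq_no: int, prev_hop_count: int | None) -> bool:
    if reply.dest_seq_no < requested_seq_no:            # stale or forged sequence number
        return True
    if prev_hop_count is not None and reply.hop_count < prev_hop_count - 1:
        return True                                     # hop count shrank by more than one hop
    return False

suspect = RouteReply(origin="node_17", dest_seq_no=3, hop_count=1)
print(violates_spec(suspect, requested_seq_no=10, prev_hop_count=6))   # True -> isolate node_17
```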


1997 ◽  
Vol 36 (8-9) ◽  
pp. 19-24 ◽  
Author(s):  
Richard Norreys ◽  
Ian Cluckie

Conventional UDS models are mechanistic; though appropriate for design purposes, they are less well suited to real-time control because they are slow running, difficult to calibrate, difficult to re-calibrate in real time and have trouble handling noisy data. At Salford University a novel hybrid of dynamic and empirical modelling has been developed to combine the speed of the empirical model with the ability of mechanistic/dynamic models to simulate complex and non-linear systems. This paper details the 'knowledge acquisition module' software and how it has been applied to construct a model of a large urban drainage system. The paper goes on to describe how the model has been linked to real-time radar data inputs from the MARS C-band radar.
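
The hybrid idea, a fast empirical surrogate fitted to the output of the slow mechanistic model, can be sketched as follows; this is a minimal Python illustration with synthetic data, and the model form and variable names are assumptions rather than the Salford software.

```python
# Sketch of a fast empirical surrogate fitted to mechanistic-model output:
# a simple lagged least-squares (ARX-style) map from rainfall to flow that can
# run in milliseconds inside a real-time control loop.
import numpy as np

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 1.0, size=500)                            # stand-in radar rainfall input
flow = np.convolve(rain, [0.5, 0.3, 0.2], mode="full")[:500]    # stand-in mechanistic output

# Lagged regressors: flow(t) ~ rain(t), rain(t-1), rain(t-2)
X = np.column_stack([rain[2:], rain[1:-1], rain[:-2]])
y = flow[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def fast_surrogate(r_t, r_t1, r_t2):
    """Millisecond-scale surrogate prediction of flow from recent rainfall."""
    return coef @ np.array([r_t, r_t1, r_t2])

print(fast_surrogate(3.0, 2.5, 1.0))
```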


2017 ◽  
Vol 27 (6) ◽  
pp. 1249-1265 ◽  
Author(s):  
Yijun Liu ◽  
Guiyong Zhang ◽  
Huan Lu ◽  
Zhi Zong

Purpose Due to its strong reliance on element quality, the traditional finite element method (FEM) has some inherent shortcomings. The FEM model behaves overly stiffly, and solutions obtained with automatically generated linear elements are generally of poor accuracy, especially for gradient results. The proposed cell-based smoothed point interpolation method (CS-PIM) aims to improve the accuracy of results for thermoelastic problems by properly softening the overly stiff stiffness. Design/methodology/approach This novel approach is based on the newly developed G space and the weakened weak (W2) formulation, in which shape functions are created using the point interpolation method and the cell-based gradient smoothing operation is conducted over linear triangular background cells. Findings Owing to its softened stiffness, the present method generally achieves better accuracy and higher convergence rates (especially for temperature gradient and thermal stress solutions) than the FEM using the same simplest linear triangular background cells, as examined by extensive numerical studies. Practical implications The CS-PIM is capable of producing more accurate temperature gradients and thermal stresses with automatically generated, unstructured background cells, which makes it a better candidate for solving practical thermoelastic problems. Originality/value It is the first time that the novel CS-PIM has been further developed for solving thermoelastic problems, which shows its tremendous potential for practical applications.
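
The cell-based gradient smoothing operation can be illustrated on a single linear triangular cell: the smoothed gradient is the boundary integral of the field times the outward normal, divided by the cell area. The Python sketch below is a minimal illustration of that operation only, not the paper's CS-PIM code.

```python
# Cell-based gradient smoothing on one linear triangular background cell:
# grad_smoothed = (1/A) * sum over edges of u_mid * (outward normal * edge length),
# which reproduces the constant gradient of a linear field over the cell.
import numpy as np

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # triangle vertices (CCW)
u = np.array([0.0, 2.0, 3.0])                            # nodal temperatures (illustrative)

v1, v2 = nodes[1] - nodes[0], nodes[2] - nodes[0]
area = 0.5 * abs(v1[0] * v2[1] - v1[1] * v2[0])

grad = np.zeros(2)
for i in range(3):
    a, b = nodes[i], nodes[(i + 1) % 3]
    edge = b - a
    normal = np.array([edge[1], -edge[0]])                # outward normal scaled by edge length
    u_mid = 0.5 * (u[i] + u[(i + 1) % 3])                 # midpoint value of the linear field
    grad += u_mid * normal
grad /= area
print(grad)   # smoothed temperature gradient over the cell, here [2., 3.]
```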


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
E. Bertino ◽  
M. R. Jahanshahi ◽  
A. Singla ◽  
R.-T. Wu

Abstract This paper addresses the problem of efficient and effective data collection and analytics for applications such as civil infrastructure monitoring and emergency management. Such a problem requires techniques by which data acquisition devices, such as IoT devices, can (a) perform local analysis of collected data and (b) based on the results of that analysis, autonomously decide on further data acquisition. The ability to perform local analysis is critical to reducing transmission costs and latency, as the results of an analysis are usually much smaller than the original data. For example, under strict real-time requirements, the analysis results can be transmitted in real time, whereas the actual collected data can be uploaded later. The ability to autonomously decide about further data acquisition enhances scalability and reduces the need for real-time human involvement in data acquisition processes, especially in contexts with critical real-time requirements. The paper focuses on deep neural networks and discusses techniques for transfer learning and pruning, so as to reduce both the time needed to train the networks and the size of the networks deployed on IoT devices. We also discuss approaches based on reinforcement learning techniques for enhancing the autonomy of IoT devices.
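
The two model-reduction techniques discussed, transfer learning and pruning, can be sketched as follows, assuming PyTorch; the stand-in backbone, layer sizes and 50% pruning ratio are illustrative choices, not the paper's configuration.

```python
# Sketch of transfer learning (freeze a pre-trained feature extractor, retrain a small
# task head) followed by magnitude pruning to shrink the model for IoT deployment.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False            # transfer learning: keep pre-trained features fixed

head = nn.Linear(64, 10)               # only this small head is trained on the new task
model = nn.Sequential(backbone, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# Pruning: zero the 50% smallest-magnitude weights of each linear layer, then make
# the sparsity permanent so the deployed model can be stored in compressed form.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")

zeros = sum((m.weight == 0).sum().item() for m in model.modules() if isinstance(m, nn.Linear))
print(f"zeroed weights after pruning: {zeros}")
```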


Author(s):  
Jaber Almutairi ◽  
Mohammad Aldossary

Abstract Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the data produced by these devices. This calls for offloading IoT tasks, releasing heavy computation and storage to resource-rich nodes such as Edge Computing and Cloud Computing. Although Edge Computing is a promising enabler for latency-sensitive applications, its deployment introduces new challenges. Moreover, different service architectures and offloading strategies have different impacts on the service-time performance of IoT applications. Therefore, this paper presents a novel approach for task offloading in an Edge-Cloud system in order to minimize the overall service time for latency-sensitive applications. The approach adopts fuzzy logic algorithms, considering application characteristics (e.g., CPU demand, network demand and delay sensitivity) as well as resource utilization and resource heterogeneity. A number of simulation experiments are conducted to compare the proposed approach with related approaches; it is found to improve the overall service time for latency-sensitive applications and to utilize the Edge-Cloud resources effectively. The results also show that different offloading decisions within the Edge-Cloud system can lead to varying service times owing to the computational resources and communication types involved.
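
As a rough illustration of fuzzy-logic offloading, the Python sketch below fuzzifies the three application characteristics with triangular membership functions and scores the Edge and Cloud options with two simple rules; the breakpoints and rules are assumptions and do not reproduce the paper's algorithm.

```python
# Illustrative fuzzy-logic offloading decision: fuzzify normalised application
# characteristics, evaluate two simple rules, and pick the option with the larger score.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def offload_decision(cpu_demand, net_demand, delay_sensitivity):
    high_cpu = tri(cpu_demand, 0.4, 1.0, 1.6)
    high_net = tri(net_demand, 0.4, 1.0, 1.6)
    delay_sensitive = tri(delay_sensitivity, 0.4, 1.0, 1.6)

    # Rule 1: delay-sensitive task with modest CPU demand -> prefer the Edge (lower latency).
    edge_score = min(delay_sensitive, 1.0 - high_cpu)
    # Rule 2: heavy CPU demand and heavy data transfer -> prefer the Cloud (more resources).
    cloud_score = min(high_cpu, high_net)
    return "edge" if edge_score >= cloud_score else "cloud"

print(offload_decision(cpu_demand=0.3, net_demand=0.5, delay_sensitivity=0.9))  # -> edge
```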


Author(s):  
Brij B. Gupta ◽  
Krishna Yadav ◽  
Imran Razzak ◽  
Konstantinos Psannis ◽  
Arcangelo Castiglione ◽  
...  

Author(s):  
Rakesh Kumar ◽  
Gaurav Dhiman ◽  
Neeraj Kumar ◽  
Rajesh Kumar Chandrawat ◽  
Varun Joshi ◽  
...  

Abstract This article offers a comparative study of modelling and optimizing production costs by means of composite triangular fuzzy and trapezoidal fuzzy linear programming problems (FLPP). It also outlines five different scenarios of instability and develops realistic models to minimize production costs. Herein, a first attempt is made to examine the credibility of the optimized cost via two different composite FLP models, and the results are compared with their extension, the trapezoidal FLP model. To validate the models against real-time phenomena, production cost data from the Rail Coach Factory (RCF), Kapurthala, have been used. The lower, static and upper bounds have been computed for each situation, and systems of optimized FLP are then constructed. The credibility of each composite-triangular and trapezoidal FLP model has been obtained for every situation, and using this membership grade, the minimum and the greatest minimum costs are illustrated. The performance of each composite-triangular FLP model is compared with the trapezoidal FLP models, and the strong effects of the trapezoidal form on composite fuzzy LPP models are investigated.
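
For reference, the two model families rest on the standard triangular and trapezoidal membership functions shown below; these are textbook definitions, and the paper's exact parameterisation may differ.

```latex
% Standard membership functions: triangular fuzzy number \tilde{A}=(a,b,c)
% and trapezoidal fuzzy number \tilde{B}=(a,b,c,d). Requires amsmath.
\[
\mu_{\tilde{A}}(x)=
\begin{cases}
\dfrac{x-a}{b-a}, & a \le x \le b,\\[4pt]
\dfrac{c-x}{c-b}, & b \le x \le c,\\[4pt]
0, & \text{otherwise,}
\end{cases}
\qquad
\mu_{\tilde{B}}(x)=
\begin{cases}
\dfrac{x-a}{b-a}, & a \le x \le b,\\[4pt]
1, & b \le x \le c,\\[4pt]
\dfrac{d-x}{d-c}, & c \le x \le d,\\[4pt]
0, & \text{otherwise.}
\end{cases}
\]
```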


Author(s):  
Negin Yousefpour ◽  
Steve Downie ◽  
Steve Walker ◽  
Nathan Perkins ◽  
Hristo Dikanski

Bridge scour is a challenge throughout the U.S.A. and other countries. Despite the scale of the issue, there is still a substantial lack of robust methods for scour prediction to support reliable, risk-based management and decision making. Over the past decade, real-time scour monitoring systems have gained increasing interest among state departments of transportation across the U.S.A. This paper introduces three distinct methodologies for scour prediction using advanced artificial intelligence (AI)/machine learning (ML) techniques based on real-time scour monitoring data. The scour monitoring data comprised riverbed and river stage elevation time series at bridge piers gathered from various sources. Deep learning algorithms showed promise in predicting bed elevation and water level variations as early as a week in advance. Ensemble neural networks proved successful in predicting the maximum upcoming scour depth, using the observed sensor data at the onset of a scour episode and based on bridge pier, flow and riverbed characteristics. In addition, two common empirical scour models were calibrated against the observed sensor data using Bayesian inference, showing significant improvement in prediction accuracy. Overall, this paper introduces a novel approach for scour risk management by integrating emerging AI/ML algorithms with real-time monitoring systems for early scour forecasting.
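
A minimal sketch of the deep-learning component, assuming PyTorch, is given below: an LSTM maps a window of past riverbed-elevation and river-stage readings to a multi-day bed-elevation forecast. The window length, horizon and layer sizes are illustrative assumptions, not the authors' architecture.

```python
# Sketch of an LSTM time-series forecaster for bed elevation from monitoring data.
import torch
import torch.nn as nn

class ScourForecaster(nn.Module):
    def __init__(self, n_features=2, hidden=32, horizon=7):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)     # predicts `horizon` future bed elevations

    def forward(self, x):                          # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])            # forecast from the last hidden state

model = ScourForecaster()
window = torch.randn(1, 56, 2)                     # 8 weeks of daily (bed elevation, stage) pairs
forecast = model(window)                           # 7-day-ahead bed elevation forecast
print(forecast.shape)                              # torch.Size([1, 7])
```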

