Deep Learning With Analytics on Edge

Author(s): Kavita Srivastava

The steep rise in autonomous systems and the Internet of Things (IoT) in recent years has changed the way computation is performed. With artificial intelligence (AI) built into IoT and cyber-physical systems, a need for high-performance computing has emerged. Cloud computing is no longer sufficient for sensor-driven systems that continuously collect data from the environment. Sensor-based systems such as autonomous vehicles require data analysis and prediction in real time, which a centralized cloud alone cannot provide. This scenario has given rise to a new computing paradigm called edge computing. In edge computing, data storage, analysis, and prediction are performed at the network edge rather than on a cloud server, enabling quick responses and reducing storage overhead. The intelligence at the edge can be obtained through deep learning. This chapter covers various deep learning frameworks, hardware, and systems for edge computing, together with examples of deep neural network training using the Caffe2 framework.
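
Since the chapter's own Caffe2 examples are not reproduced here, the following is a minimal sketch of Caffe2-style training on synthetic data, in the spirit of the framework's standard tutorials. The blob names, layer sizes, and toy dataset are illustrative assumptions, not the chapter's code.

```python
# A minimal Caffe2 training sketch: a toy two-layer classifier on
# synthetic data. Blob names and layer sizes are illustrative.
import numpy as np
from caffe2.python import brew, model_helper, optimizer, workspace

# Synthetic batch: 16 samples with 100 features each, 10 classes.
data = np.random.rand(16, 100).astype(np.float32)
label = np.random.randint(0, 10, size=16).astype(np.int32)
workspace.FeedBlob("data", data)
workspace.FeedBlob("label", label)

model = model_helper.ModelHelper(name="edge_net")
hidden = brew.fc(model, "data", "hidden", dim_in=100, dim_out=64)
hidden = brew.relu(model, hidden, "hidden_relu")
logits = brew.fc(model, hidden, "logits", dim_in=64, dim_out=10)
softmax, loss = model.net.SoftmaxWithLoss([logits, "label"],
                                          ["softmax", "loss"])
model.AddGradientOperators([loss])            # backward pass
optimizer.build_sgd(model, base_learning_rate=0.1)

workspace.RunNetOnce(model.param_init_net)    # initialize the weights
workspace.CreateNet(model.net)
for _ in range(100):                          # a short training loop
    workspace.RunNet(model.net)
print("final loss:", workspace.FetchBlob("loss"))
```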

Sensors, 2019, Vol 19(6), pp. 1446
Author(s): Liang Huang, Xu Feng, Luxin Zhang, Liping Qian, Yuan Wu

This paper studies mobile edge computing (MEC) networks in which multiple wireless devices (WDs) offload their computation tasks to multiple edge servers and one cloud server. Considering the different real-time computation tasks at different WDs, each task is either processed locally at its WD or offloaded to and processed at one of the edge servers or the cloud server. In this paper, we investigate low-complexity computation offloading policies that guarantee the quality of service of the MEC network and minimize the WDs' energy consumption. Specifically, both a linear programming relaxation-based (LR-based) algorithm and a distributed deep learning-based offloading (DDLO) algorithm are independently studied for MEC networks. We further propose a heterogeneous DDLO that achieves better convergence performance than DDLO. Extensive numerical results show that the DDLO algorithms achieve better performance than the LR-based algorithm. Furthermore, the DDLO algorithm generates an offloading decision in less than 1 millisecond, several orders of magnitude faster than the LR-based algorithm.
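
The core of a distributed deep learning-based offloading scheme of this kind is a set of DNNs trained in parallel from a shared replay memory: each DNN proposes a binary offloading decision, the lowest-cost proposal is kept as a training label, and all DNNs learn from the accumulated pairs. The sketch below illustrates that loop under a toy placeholder energy model; the paper's actual cost model (energy under latency constraints, with edge/cloud server assignment) and network sizes are not reproduced here.

```python
import random
import torch
import torch.nn as nn

# Hypothetical sizes; the paper's notation and dimensions differ.
N_TASKS, K, BATCH = 10, 4, 128

def make_dnn():
    # Each DNN maps task sizes to a relaxed offloading decision in [0, 1].
    return nn.Sequential(nn.Linear(N_TASKS, 64), nn.ReLU(),
                         nn.Linear(64, N_TASKS), nn.Sigmoid())

dnns = [make_dnn() for _ in range(K)]
opts = [torch.optim.Adam(d.parameters(), lr=1e-3) for d in dnns]
memory, loss_fn = [], nn.BCELoss()

def energy(task_sizes, decision):
    # Placeholder cost model: local processing arbitrarily costs 3x more
    # than offloading. The real model is the paper's, not this.
    return float((task_sizes * (3.0 - 2.0 * decision)).sum())

def step(task_sizes):
    # Each DNN proposes a decision; quantize and keep the cheapest one.
    candidates = [(d(task_sizes) > 0.5).float() for d in dnns]
    best = min(candidates, key=lambda c: energy(task_sizes, c))
    memory.append((task_sizes, best))         # shared replay memory
    if len(memory) >= BATCH:
        for d, opt in zip(dnns, opts):
            xs, ys = map(torch.stack, zip(*random.sample(memory, BATCH)))
            opt.zero_grad()
            loss_fn(d(xs), ys).backward()     # fit the best decisions
            opt.step()
    return best

decision = step(torch.rand(N_TASKS))          # one offloading decision
```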


Author(s): Yoichiro Maeda, Kotaro Sano, Eric W. Cooper, Katsuari Kamei, ...

In recent years, much research has been conducted on the unmanned control of moving vehicles, and various autonomously moving robots and motor vehicles are in use. However, the more complicated the environment, the more difficult it is for autonomous vehicles to move automatically. Even in such a challenging environment, an expert with the necessary operation skill can sometimes control the moving vehicle appropriately. In this research, a method is proposed for learning a human's operation skill using a convolutional neural network (CNN) that takes visual information as input, allowing more complicated environmental information to be learned. A CNN is a kind of deep-learning network that exhibits high performance in the field of image recognition. In this experiment, the operation knowledge was also visualized using a fuzzy neural network, with the obtained input-output maps used to create fuzzy rules. To verify the effectiveness of this method, an experiment on operation skill acquisition was conducted with several subjects using a drone control simulator.
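
As a sketch of this learning setup, the behavioral-cloning loop below regresses an expert's recorded control commands from camera frames with a small CNN. The architecture, frame size, and four-axis command vector are assumptions for illustration, not the paper's exact network.

```python
import torch
import torch.nn as nn

class SkillCNN(nn.Module):
    """Maps a camera frame to continuous control commands
    (e.g., four drone stick axes). Sizes are illustrative."""
    def __init__(self, n_commands=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(64, n_commands)

    def forward(self, frame):
        return self.head(self.features(frame).flatten(1))

# Behavioral cloning: regress the expert's commands from frames.
model = SkillCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(8, 3, 120, 160)   # synthetic batch of camera frames
expert_cmds = torch.rand(8, 4)        # synthetic expert stick inputs
loss = nn.functional.mse_loss(model(frames), expert_cmds)
opt.zero_grad(); loss.backward(); opt.step()
```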


2020, Vol 67, pp. 285-325
Author(s): William Cohen, Fan Yang, Kathryn Rivard Mazaitis

We present an implementation of a probabilistic first-order logic called TensorLog, in which classes of logical queries are compiled into differentiable functions in a neural-network infrastructure such as TensorFlow or Theano. This leads to a close integration of probabilistic logical reasoning with deep-learning infrastructure: in particular, it enables high-performance deep learning frameworks to be used for tuning the parameters of a probabilistic logic. The integration with these frameworks enables the use of GPU-based parallel processors for inference and learning, making TensorLog the first highly parallelizable probabilistic logic. Experimental results show that TensorLog scales to problems involving hundreds of thousands of knowledge-base triples and tens of thousands of examples.
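
The compilation idea can be illustrated with a toy dense-matrix version of TensorLog's core operation: a chain rule becomes a sequence of matrix products applied to a one-hot entity vector. The knowledge base, entity names, and weights below are made up, and the real system uses sparse matrices inside TensorFlow or Theano with learnable triple weights.

```python
import numpy as np

# Toy knowledge base over 4 entities. TensorLog-style compilation turns
#   grandparent(X, Y) :- parent(X, Z), parent(Z, Y)
# into the differentiable function  v_X -> v_X @ M_parent @ M_parent,
# where M_parent[i, j] is the weight of the triple parent(e_i, e_j).
n = 4
M_parent = np.zeros((n, n))
M_parent[0, 1] = 1.0   # parent(e0, e1)
M_parent[1, 2] = 1.0   # parent(e1, e2)
M_parent[1, 3] = 0.5   # parent(e1, e3), a soft/uncertain triple

def query_grandparent(x_index):
    # One-hot encode the query entity, then apply the compiled function.
    v = np.zeros(n)
    v[x_index] = 1.0
    return v @ M_parent @ M_parent   # scores over candidate answers Y

print(query_grandparent(0))   # -> [0. 0. 1. 0.5]: e2 certain, e3 at 0.5
```

Because the query is now just matrix algebra, gradients with respect to the triple weights come for free from the framework's autodiff, which is what lets a deep-learning stack tune the parameters of the logic.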


2020, Vol 10(16), pp. 5426
Author(s): Qiang Liu, Haidong Zhang, Yiming Xu, Li Wang

Recently, deep learning frameworks have been deployed in visual odometry systems and have achieved results comparable to traditional feature-matching-based systems. However, most deep learning-based frameworks inevitably need labeled data as ground truth for training. On the other hand, monocular odometry systems are incapable of recovering absolute scale, so external or prior information has to be introduced for scale recovery. To solve these problems, we present a novel deep learning-based RGB-D visual odometry system. Our two main contributions are: (i) a dual-stream deep neural network is proposed in which, during both network training and pose estimation, the depth images are fed into the network alongside the RGB images to form a dual-stream structure; (ii) the system adopts an unsupervised end-to-end training method, so the labor-intensive data-labeling task is not required. We have tested our system on the KITTI dataset, and the results show that the proposed RGB-D Visual Odometry (VO) system has clear advantages over other state-of-the-art systems in terms of both translation and rotation errors.
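
A minimal sketch of contribution (i) follows: two convolutional branches, one for a pair of RGB frames and one for the corresponding depth pair, fused before a 6-DoF relative-pose head. The layer sizes, fusion scheme, and input resolution are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def branch(in_ch):
    # A small convolutional feature extractor for one input stream.
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 7, stride=2, padding=3), nn.ReLU(),
        nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

class DualStreamVO(nn.Module):
    def __init__(self):
        super().__init__()
        # Each branch sees two stacked frames (time t and t+1).
        self.rgb_branch = branch(in_ch=6)    # 2 x 3-channel RGB
        self.depth_branch = branch(in_ch=2)  # 2 x 1-channel depth
        self.pose_head = nn.Linear(32 + 32, 6)

    def forward(self, rgb_pair, depth_pair):
        fused = torch.cat([self.rgb_branch(rgb_pair),
                           self.depth_branch(depth_pair)], dim=1)
        return self.pose_head(fused)   # [tx, ty, tz, rx, ry, rz]

pose = DualStreamVO()(torch.rand(1, 6, 128, 416),
                      torch.rand(1, 2, 128, 416))
```

In the unsupervised setting of (ii), a network like this would be trained with a photometric reconstruction loss between warped frames rather than pose labels, which is what removes the data-labeling requirement.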


Energies, 2022, Vol 15(2), pp. 452
Author(s): Nour Alhuda Sulieman, Lorenzo Ricciardi Celsi, Wei Li, Albert Zomaya, Massimo Villari

Edge computing is a distributed computing paradigm in which client data are processed at the periphery of the network, as close as possible to the originating source. The 21st century has come to be known as the century of data because of the rapid worldwide increase in the quantity of exchanged data, especially in smart city applications such as autonomous vehicles. Collecting and processing such data from sensors and Internet of Things devices operating in real time, from remote locations and inhospitable environments almost anywhere in the world, is therefore an emerging need. Indeed, edge computing is reshaping information technology and business computing. In this respect, this paper aims to provide a comprehensive overview of what edge computing is, as well as the most relevant edge use cases, tradeoffs, and implementation considerations. In particular, this review article focuses on (i) the most recent trends in edge computing emerging in the research field and (ii) the main businesses moving operations to the edge, together with the most widely used edge computing platforms, both proprietary and open source. First, the paper summarizes the concept of edge computing and compares it with cloud computing. After that, we discuss the challenges of optimal server placement, data security in edge networks, hybrid edge-cloud computing, simulation platforms for edge computing, and state-of-the-art improved edge networks. Finally, we explain the applications of edge computing to 5G/6G networks and the Industrial Internet of Things. Several studies review a set of attractive edge features, system architectures, and edge application platforms that impact different industry sectors. The experimental results achieved in the cited works are reported to show how edge computing improves the efficiency of Internet of Things networks. On the other hand, the work highlights possible vulnerabilities and open issues emerging in the context of edge computing architectures, and proposes future directions to be investigated.


Sensors, 2019, Vol 19(22), pp. 5035
Author(s): Son, Jeong, Lee

When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, such systems cannot convey the fault self-diagnosis information and the instrument cluster information that indicate the current state of the vehicle while driving. This paper proposes an audification and visualization system (AVS) for autonomous vehicles, based on deep learning, that solves this problem for blind and deaf passengers. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user's speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and places the visualized data according to the size of the vehicle's display. The experiment shows that adjusting visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times faster than doing so on a cloud server. In addition, the overall computation time of the AVS was approximately 2 ms shorter than that of the existing instrument cluster. Therefore, because the proposed AVS lets blind and deaf people select only what they want to hear and see, it reduces transmission overhead and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can help prevent accidents involving disabled and other passengers.
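
A schematic of the three-module structure described above is sketched below as plain Python classes. The interfaces are illustrative assumptions (the paper does not publish code), and the STT/TTS engines are left abstract rather than bound to any real library.

```python
class DataCollectionAndManagementModule:     # DCMM
    """Stores and manages data collected from the vehicle."""
    def __init__(self):
        self.records = []

    def store(self, sensor_data):
        # Collected vehicle data (OBD readings, fault self-diagnosis, ...).
        self.records.append(sensor_data)

    def latest(self):
        return self.records[-1] if self.records else {}

class AudificationConversionModule:          # ACM
    """Wraps a user-supplied STT engine (STS) and TTS engine (TWS)."""
    def __init__(self, stt_engine, tts_engine):
        self.stt, self.tts = stt_engine, tts_engine

    def speech_to_text(self, audio):         # STS submodule
        return self.stt(audio)

    def text_to_wave(self, text):            # TWS submodule
        return self.tts(text)

class DataVisualizationModule:               # DVM
    """Places visualized components according to the display size."""
    def render(self, data, display_width, display_height):
        return {"size": (display_width, display_height),
                "widgets": list(data.keys())}
```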


Sensors, 2021, Vol 21(3), pp. 896
Author(s): Jeongsoo Park, Jungrae Kim, Jong Hwan Ko

Because of the limited resources of Internet of Things (IoT) edge devices, deep neural network (DNN) inference requires collaboration with cloud server platforms, where DNN inference is partitioned and offloaded to high-performance servers to reduce end-to-end latency. Since the data-intensive intermediate feature space at the partitioned layer must be transmitted to the servers, efficient compression of the feature space is imperative for high-throughput inference. However, the feature space at deeper layers has different characteristics from natural images, which limits the compression performance of conventional preprocessing and encoding techniques. To tackle this limitation, we introduce a new method for compressing the DNN intermediate feature space using a specialized autoencoder, called an auto-tiler. The proposed auto-tiler is designed to include the tiling process and to provide multiple input/output dimensions that support various partitioned layers and compression ratios. The results show that the auto-tiler achieves 18 to 67 percentage points higher accuracy than existing methods at the same bitrate, while reducing processing latency by 73% to 81%. The dimension variability of the auto-tiler also reduces storage overhead by 62% with negligible accuracy loss.
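
As a rough illustration of the "tile then compress" idea, the sketch below lays a partitioned layer's channels out as a single-channel 2D grid and passes the result through a small convolutional autoencoder. The grid size, architecture, and training objective here are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

def tile(features, grid=(8, 8)):
    # (N, C, H, W) -> (N, 1, gh*H, gw*W): lay channels out as 2D tiles.
    n, c, h, w = features.shape
    gh, gw = grid
    assert c == gh * gw, "channel count must fill the tile grid"
    return (features.view(n, gh, gw, h, w)
                    .permute(0, 1, 3, 2, 4)
                    .reshape(n, 1, gh * h, gw * w))

class TiledAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(        # downsample the tiled map 4x
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 4, 3, stride=2, padding=1))
        self.decoder = nn.Sequential(        # server-side reconstruction
            nn.ConvTranspose2d(4, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1))

    def forward(self, tiled):
        code = self.encoder(tiled)           # compact code sent to the server
        return code, self.decoder(code)

features = torch.rand(1, 64, 28, 28)         # a partitioned layer's output
code, recon = TiledAutoencoder()(tile(features))
```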

