Lightweight Deep Learning based Intelligent Edge Surveillance Techniques

2020 ◽ 
Author(s):  
Yu Zhao ◽  
Yue Yin ◽  
Guan Gui

Decentralized edge computing techniques have attracted strong attention in many applications of the intelligent internet of things (IIoT). Among these applications, intelligent edge surveillance (LEDS) techniques play a very important role in automatically recognizing object feature information from surveillance video, by virtue of edge computing combined with image processing and computer vision. Traditional centralized surveillance techniques recognize objects at the cost of high latency and high operating cost, and they also require large amounts of storage. In this paper, we propose a deep learning-based LEDS technique for a specific IIoT application. First, we introduce depthwise separable convolutions to build a lightweight neural network and reduce its computational cost. Second, we combine edge computing with cloud computing to reduce network traffic. Third, we apply the proposed LEDS technique to a practical construction site to validate the specific IIoT application. The detection speed of our proposed lightweight neural network reaches 16 frames per second on edge devices, and after fine detection on the cloud server, the detection precision reaches 89%. In addition, the operating cost at the edge device is only one-tenth of that of the centralized server.
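
The core cost-saving idea named in the abstract, depthwise separable convolution, factors a standard convolution into a per-channel spatial filter followed by a 1×1 channel mixer. Below is a minimal PyTorch sketch of such a block; the channel counts and the BatchNorm/ReLU arrangement are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Costs roughly (1/out_ch + 1/9) of a standard 3x3 convolution's multiply-adds.
x = torch.randn(1, 32, 128, 128)
y = DepthwiseSeparableConv(32, 64)(x)   # -> torch.Size([1, 64, 128, 128])
```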


2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is contextual mining of text which determines the viewpoint of users with respect to sentimental topics commonly discussed on social networking websites. Twitter is one of the social sites where people express their opinions about any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to find the opinion of users. Traditional sentiment analysis methods use manually extracted features for opinion classification. The manual feature extraction process is a complicated task since it requires predefined sentiment lexicons. On the other hand, deep learning methods automatically extract relevant features from data; hence, they provide better performance and richer representation competency than the traditional methods. Objective: The main aim of this paper is to enhance sentiment classification accuracy and to reduce computational cost. Method: To achieve the objective, a hybrid deep learning model based on a convolutional neural network and a bi-directional long short-term memory neural network has been introduced. Results: The proposed sentiment classification method achieves the highest accuracy on most of the datasets. Further, the efficacy of the proposed method has been validated through statistical analysis. Conclusion: Sentiment classification accuracy can be improved by creating veracious hybrid models. Moreover, performance can also be enhanced by tuning the hyperparameters of deep learning models.
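
As a concrete illustration, here is a minimal PyTorch sketch of the kind of CNN + bi-directional LSTM hybrid the abstract describes: a 1D convolution extracts local n-gram features from token embeddings, and a BiLSTM models longer-range context. The vocabulary size, embedding dimension, and layer widths are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, vocab=20000, emb=128, conv_ch=64, hidden=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        # 1D convolution over the sequence extracts local n-gram features
        self.conv = nn.Conv1d(emb, conv_ch, kernel_size=3, padding=1)
        # BiLSTM captures long-range context in both directions
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden, classes)

    def forward(self, tokens):                        # tokens: (batch, seq)
        x = self.embed(tokens).transpose(1, 2)        # (batch, emb, seq)
        x = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq, conv_ch)
        _, (h, _) = self.lstm(x)                      # h: (2, batch, hidden)
        x = torch.cat([h[0], h[1]], dim=1)            # final fwd + bwd states
        return self.fc(x)

logits = CNNBiLSTM()(torch.randint(0, 20000, (8, 50)))  # -> (8, 2)
```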


2018 ◽  
Vol 15 (2) ◽  
pp. 294-301
Author(s):  
Reddy Sreenivasulu ◽  
Chalamalasetti SrinivasaRao

Drilling is a hole-making process on machine components at the time of assembly work, and such holes are found everywhere. In precision applications, quality and accuracy play a vital role. Nowadays, industries suffer from the cost incurred during deburring, especially in precision assemblies such as aerospace/aircraft body structures, marine works, and automobile industries. Burrs produced during drilling cause dimensional errors, jamming of parts, and misalignment. Therefore, a deburring operation after drilling is often required, and reducing burr size is a serious topic. In this study, experiments were conducted by choosing various input parameters selected from previous research. The effect of altering drill geometry on thrust force and burr size of the drilled hole was investigated using the Taguchi design of experiments, and an optimum combination of the most significant input parameters, identified by ANOVA, was found to minimize burr size using Design-Expert software. Drill thrust has the greatest influence on burr size, and the clearance angle of the drill bit causes variation in thrust. Burr height is the response observed in this study. These output results are compared with the neural network software EasyNN-plus. Finally, it is concluded that increasing the number of nodes increases the computational cost while decreasing the neural network error. Good agreement was shown between the predictive model results and the experimental responses.
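
The concluding observation about node count can be illustrated with a small regression network trained at several hidden-layer sizes. The sketch below uses synthetic stand-in data, since the study's Taguchi-designed experimental runs are not reproduced here; the three input factors and the burr-height target are hypothetical placeholders.

```python
import torch
import torch.nn as nn

X = torch.rand(27, 3)                     # 27 synthetic runs, 3 input factors
y = (X @ torch.tensor([0.5, 0.3, 0.2])).unsqueeze(1)  # stand-in burr height

for nodes in (4, 8, 16, 32):
    net = nn.Sequential(nn.Linear(3, nodes), nn.Tanh(), nn.Linear(nodes, 1))
    opt = torch.optim.Adam(net.parameters(), lr=0.01)
    for _ in range(500):                  # more nodes: more compute per step
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(X), y)
        loss.backward()
        opt.step()
    print(nodes, loss.item())             # fit error typically falls with nodes
```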


Entropy ◽  
2022 ◽  
Vol 24 (1) ◽  
pp. 128
Author(s):  
Zhenwei Guan ◽  
Feng Min ◽  
Wei He ◽  
Wenhua Fang ◽  
Tao Lu

Forest fire detection from videos or images is vital to forest firefighting. Most deep-learning-based approaches rely on a converging image loss, which ignores the differing content of fire scenes. In fact, complex image content always has higher entropy. From this perspective, we propose a novel feature-entropy-guided neural network for forest fire detection, which is used to balance the content complexity of different training samples. Specifically, a larger weight is given to the features of a sample with a high-entropy source when calculating the classification loss. In addition, we also propose a color attention neural network, which mainly consists of several repeated multiple-blocks of color-attention modules (MCM). Each MCM module can adequately extract the color feature information of fire. The experimental results show that our proposed method outperforms the state-of-the-art methods.
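
A minimal sketch of the entropy-guided weighting idea follows, assuming the weight is derived from the Shannon entropy of each source image's intensity histogram and applied to a per-sample cross-entropy loss; the paper's exact entropy estimate and weighting scheme may differ.

```python
import torch
import torch.nn.functional as F

def image_entropy(imgs, bins=256):
    """Shannon entropy of the grayscale intensity histogram, per image."""
    ent = []
    for img in imgs:                      # imgs: (batch, C, H, W), values in [0,1]
        gray = img.mean(dim=0).flatten()
        hist = torch.histc(gray, bins=bins, min=0.0, max=1.0)
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins before taking logs
        ent.append(-(p * p.log2()).sum())
    return torch.stack(ent)

def entropy_weighted_loss(logits, targets, imgs):
    w = image_entropy(imgs)
    w = w / w.mean()                      # normalize weights around 1
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (w * per_sample).mean()        # high-entropy samples weigh more
```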


2021 ◽  
Vol 13 (19) ◽  
pp. 3859
Author(s):  
Joby M. Prince Czarnecki ◽  
Sathishkumar Samiappan ◽  
Meilun Zhou ◽  
Cary Daniel McCraine ◽  
Louis L. Wasson

The radiometric quality of remotely sensed imagery is crucial for precision agriculture applications because estimations of plant health rely on the underlying quality. Sky conditions, and specifically shadowing from clouds, are critical determinants in the quality of images that can be obtained from low-altitude sensing platforms. In this work, we first compare common deep learning approaches to classify sky conditions with regard to cloud shadows in agricultural fields using a visible spectrum camera. We then develop an artificial-intelligence-based edge computing system to fully automate the classification process. Training data consisting of 100 oblique angle images of the sky were provided to a convolutional neural network and two deep residual neural networks (ResNet18 and ResNet34) to facilitate learning two classes, namely (1) good image quality expected, and (2) degraded image quality expected. The expectation of quality stemmed from the sky condition (i.e., density, coverage, and thickness of clouds) present at the time of the image capture. These networks were tested using a set of 13,000 images. Our results demonstrated that ResNet18 and ResNet34 classifiers produced better classification accuracy when compared to a convolutional neural network classifier. The best overall accuracy was obtained by ResNet34, which was 92% accurate, with a Kappa statistic of 0.77. These results demonstrate a low-cost solution to quality control for future autonomous farming systems that will operate without human intervention and supervision.
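
The two-class setup described here can be reproduced in outline with torchvision: load a ResNet18 and replace its final fully connected layer with a two-way head. Using ImageNet-pretrained weights and these training hyperparameters are assumptions on our part, not details from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # good vs degraded quality

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One standard fine-tuning step on a stand-in batch of 224x224 RGB sky images
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```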


Author(s):  
R. I. Minu ◽  
G. Nagarajan

In the present-day scenario, computing is migrating from on-premises servers to the cloud, and now, progressively, from the cloud to edge servers, where data is gathered close to its point of origin. The clear objective is to support the execution and reliability of applications and services, and to decrease the cost of running them, by shortening the distance data needs to travel, thereby alleviating bandwidth and latency issues. This chapter provides an insight into how the internet of things (IoT) connects with edge computing.


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Nanliang Shan ◽  
Zecong Ye ◽  
Xiaolong Cui

With the development of mobile edge computing (MEC), more and more intelligent services and applications based on deep neural networks are deployed on mobile devices to meet the diverse and personalized needs of users. Unfortunately, deploying and inferencing deep learning models on resource-constrained devices is challenging. The traditional cloud-based method runs the deep learning model on the cloud server; since a large amount of input data must be transmitted to the server through the WAN, it incurs a large service latency, which is unacceptable for most current latency-sensitive and computation-intensive applications. In this paper, we propose Cogent, an execution framework that accelerates deep neural network inference through device-edge synergy. The Cogent framework operates in two stages: an automatic pruning and partition stage, and a containerized deployment stage. Cogent uses reinforcement learning (RL) to automatically predict pruning and partition strategies based on feedback from the hardware configuration and system conditions, so that the pruned and partitioned model can better adapt to the system environment and the user's hardware configuration. The model is then deployed in containers on the device and the edge server to accelerate inference. Experiments show that the learning-based, hardware-aware automatic pruning and partition scheme significantly reduces service latency and accelerates the overall model inference process while maintaining accuracy, achieving speedups of up to 8.89× with an accuracy loss of no more than 7%.
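
Cogent's device-edge synergy can be pictured with a simple partitioned forward pass: the first k layers run on the device and the remaining layers run on the edge server, with only the intermediate activation crossing the network. The backbone, split point, and transport below are illustrative assumptions; in Cogent, the partition (and pruning) strategy is predicted by reinforcement learning rather than fixed by hand.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
)

split = 4                                   # partition point (layer index)
device_part = backbone[:split]              # executes on the mobile device
edge_part = backbone[split:]                # executes on the edge server

x = torch.randn(1, 3, 224, 224)
intermediate = device_part(x)               # this tensor crosses the network
logits = edge_part(intermediate)            # -> (1, 10)
```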


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1446 ◽  
Author(s):  
Liang Huang ◽  
Xu Feng ◽  
Luxin Zhang ◽  
Liping Qian ◽  
Yuan Wu

This paper studies mobile edge computing (MEC) networks where multiple wireless devices (WDs) offload their computation tasks to multiple edge servers and one cloud server. Considering the different real-time computation tasks at different WDs, each task is either processed locally at its WD or offloaded to and processed at one of the edge servers or the cloud server. In this paper, we investigate low-complexity computation offloading policies that guarantee the quality of service of the MEC network and minimize the WDs' energy consumption. Specifically, both a linear programming relaxation-based (LR-based) algorithm and a distributed deep learning-based offloading (DDLO) algorithm are independently studied for MEC networks. We further propose a heterogeneous DDLO to achieve better convergence performance than DDLO. Extensive numerical results show that the DDLO algorithms guarantee better performance than the LR-based algorithm. Furthermore, the DDLO algorithm generates an offloading decision in less than 1 millisecond, several orders of magnitude faster than the LR-based algorithm.
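
In the spirit of DDLO, an offloading decision can be generated by a small neural network that maps the current task sizes to per-task offload probabilities, which are then thresholded into binary decisions. The network shape and thresholding below are illustrative assumptions; DDLO trains several such networks in parallel against an energy-consumption objective.

```python
import torch
import torch.nn as nn

n_devices = 10
policy = nn.Sequential(
    nn.Linear(n_devices, 64), nn.ReLU(),
    nn.Linear(64, n_devices), nn.Sigmoid(),   # per-task offload probability
)

task_sizes = torch.rand(1, n_devices)         # normalized task sizes
decision = (policy(task_sizes) > 0.5).int()   # 0 = local, 1 = offload
print(decision)
```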


Forecasting ◽  
2021 ◽  
Vol 4 (1) ◽  
pp. 1-25
Author(s):  
Thabang Mathonsi ◽  
Terence L. van Zyl

Hybrid methods have been shown to outperform pure statistical and pure deep learning methods at forecasting tasks and at quantifying the uncertainty associated with those forecasts (prediction intervals). One example is the Exponential Smoothing Recurrent Neural Network (ES-RNN), a hybrid between a statistical forecasting model and a recurrent neural network variant, which achieved a 9.4% improvement in absolute error in the Makridakis-4 Forecasting Competition. This improvement, and similar outperformance from other hybrid models, has primarily been demonstrated only on univariate datasets. Difficulties with applying hybrid forecast methods to multivariate data include (i) the high computational cost involved in hyperparameter tuning for models that are not parsimonious, (ii) challenges associated with auto-correlation inherent in the data, and (iii) complex dependency (cross-correlation) between the covariates that may be hard to capture. This paper presents Multivariate Exponential Smoothing Long Short-Term Memory (MES-LSTM), a generalized multivariate extension of ES-RNN that overcomes these challenges. MES-LSTM utilizes a vectorized implementation. We test MES-LSTM on several aggregated coronavirus disease of 2019 (COVID-19) morbidity datasets and find that our hybrid approach shows consistent, significant improvement over pure statistical and deep learning methods in forecast accuracy and prediction interval construction.
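
The hybrid idea can be sketched as follows: simple exponential smoothing extracts a series' level, and an LSTM models the de-leveled remainder before the forecast is recombined with the level. The smoothing factor, network sizes, and recombination step are illustrative assumptions and much simpler than MES-LSTM's actual multivariate design.

```python
import torch
import torch.nn as nn

def exp_smooth(series, alpha=0.3):
    """Simple exponential smoothing: level[t] = a*y[t] + (1-a)*level[t-1]."""
    level = series[0]
    levels = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        levels.append(level)
    return torch.stack(levels)

series = torch.cumsum(torch.rand(100), dim=0)   # stand-in univariate series
level = exp_smooth(series)
residual = series / level                       # de-leveled input for the LSTM

lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)
out, _ = lstm(residual[:-1].view(1, -1, 1))
pred_resid = head(out[:, -1])                   # next-step residual estimate
forecast = pred_resid * level[-1]               # recombine with the level
```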


Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2958
Author(s):  
Timotej Knez ◽  
Octavian Machidon ◽  
Veljko Pejović

Edge intelligence is currently facing several important challenges hindering its performance, the major one being that resource-constrained edge computing devices struggle to meet the high resource requirements of deep learning. The most recent adaptive neural network compression techniques have demonstrated, in theory, the potential to facilitate the flexible deployment of deep learning models in real-world applications. However, their actual suitability and performance in ubiquitous or edge computing applications have not, to this date, been evaluated. In this context, our work aims to bridge the gap between the theoretical resource savings promised by such approaches and the requirements of a real-world mobile application by introducing algorithms that dynamically guide the compression rate of a neural network according to the continuously changing context in which the mobile computation is taking place. Through an in-depth trace-based investigation, we confirm the feasibility of our adaptation algorithms in offering a scalable trade-off between inference accuracy and resource usage. We then implement our approach on real-world edge devices and, through a human activity recognition application, confirm that it offers efficient neural network compression adaptation in highly dynamic environments. The results of our experiment with 21 participants show that, compared to static network compression, our approach uses 2.18× less energy with only a 1.5% drop in average classification accuracy.
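
As a rough illustration of context-guided adaptation, the sketch below selects among pre-compressed model variants using the battery level and a latency budget. This rule-based controller, and all variant names, costs, and thresholds, are hypothetical placeholders; the paper's algorithms adapt the compression rate dynamically rather than by fixed rules.

```python
from dataclasses import dataclass

@dataclass
class Context:
    battery_pct: float        # remaining battery, 0-100
    latency_budget_ms: float  # time allowed per inference

# Hypothetical variants: (name, relative energy cost, expected accuracy)
VARIANTS = [("full", 1.00, 0.93), ("half", 0.45, 0.91), ("quarter", 0.20, 0.87)]

def pick_variant(ctx: Context) -> str:
    if ctx.battery_pct < 20 or ctx.latency_budget_ms < 30:
        return "quarter"      # aggressive compression under tight resources
    if ctx.battery_pct < 50:
        return "half"
    return "full"             # plenty of headroom: favor accuracy

print(pick_variant(Context(battery_pct=15, latency_budget_ms=50)))  # quarter
```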

