Analytical Study of Task Offloading Techniques using Deep Learning

Author(s):  
Mr Almelu ◽  
Dr. S. Veenadhari ◽  
Kamini Maheshwar

Internet of Things (IoT) systems create a large amount of sensing information, and the consistency of this information is essential to the quality of IoT services. IoT data, however, generally suffers from a variety of problems such as collisions, unstable network communication, noise, manual system shutdown, incomplete values, and equipment failure. Due to excessive latency, bandwidth limitations, and high communication costs, transferring all IoT data to the cloud to solve the missing data problem may have a detrimental impact on network performance and service quality. As a result, the issue of missing information should be addressed as close to the source as possible, by offloading tasks such as data prediction or estimation to the network's edge devices. In this work, we show how deep learning may be used to offload such tasks in IoT applications.
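The edge-side offloading idea above can be sketched minimally: a gateway fills gaps in a sensor stream locally instead of forwarding raw data to the cloud. A simple linear extrapolator stands in for the deep model the survey discusses; the function name and fallback rules are illustrative assumptions, not the surveyed methods.

```python
def impute_missing(readings):
    """Replace None gaps using the trend of the last two known values.

    A crude stand-in for an edge-deployed prediction model: each gap is
    filled from local history only, so nothing is sent to the cloud.
    """
    filled = []
    for value in readings:
        if value is not None:
            filled.append(value)
        elif len(filled) >= 2:
            # Linear extrapolation from the two most recent samples.
            filled.append(2 * filled[-1] - filled[-2])
        elif filled:
            filled.append(filled[-1])   # fall back to the last observation
        else:
            filled.append(0.0)          # no history yet: assumed default
    return filled
```

In a real deployment the extrapolator would be replaced by the trained network, but the control flow (detect gap, predict locally, never forward raw gaps) is the same.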

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Balamurugan Souprayen ◽  
Ayyasamy Ayyanar ◽  
Suresh Joseph K

Purpose: The purpose of food traceability is to retain the quality of the raw material supply, diminish losses, and reduce system complexity.
Design/methodology/approach: A hybrid algorithm is proposed for food traceability to make accurate predictions and enhance period data. The internet of things is used to track and trace food quality and to check the data acquired from manufacturers and consumers.
Findings: To cope with existing financial circumstances and the development of the global food supply chain, the authors propose efficient food traceability techniques using the internet of things and obtain a solution for data prediction.
Originality/value: The experimental analysis shows that the proposed algorithm has a high accuracy rate, low execution time, and a low error rate.


2021 ◽  
Author(s):  
Michael C. Welle ◽  
Anastasiia Varava ◽  
Jeffrey Mahler ◽  
Ken Goldberg ◽  
Danica Kragic ◽  
...  

AbstractCaging grasps limit the mobility of an object to a bounded component of configuration space. We introduce a notion of partial cage quality based on maximal clearance of an escaping path. As computing this is a computationally demanding task even in a two-dimensional scenario, we propose a deep learning approach. We design two convolutional neural networks and construct a pipeline for real-time planar partial cage quality estimation directly from 2D images of object models and planar caging tools. One neural network, CageMaskNN, is used to identify caging tool locations that can support partial cages, while a second network that we call CageClearanceNN is trained to predict the quality of those configurations. A partial caging dataset of 3811 images of objects and more than 19 million caging tool configurations is used to train and evaluate these networks on previously unseen objects and caging tool configurations. Experiments show that evaluation of a given configuration on a GeForce GTX 1080 GPU takes less than 6 ms. Furthermore, an additional dataset focused on grasp-relevant configurations is curated and consists of 772 objects with 3.7 million configurations. We also use this dataset for 2D Cage acquisition on novel objects. We study how network performance depends on the datasets, as well as how to efficiently deal with unevenly distributed training data. In further analysis, we show that the evaluation pipeline can approximately identify connected regions of successful caging tool placements and we evaluate the continuity of the cage quality score evaluation along caging tool trajectories. Influence of disturbances is investigated and quantitative results are provided.
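The ground-truth quantity the two networks approximate can be sketched on a toy occupancy grid: the clearance of the best escape path is the bottleneck (maximin) distance-to-obstacle along any route from the object cell to the workspace boundary. This grid model, function name, and 4-connected neighborhood are illustrative assumptions, not the paper's continuous formulation.

```python
import heapq

def escape_clearance(grid, start):
    """grid: 2D list, 1 = caging tool, 0 = free; start = (row, col).

    Returns the maximum, over all escape paths to the grid boundary,
    of the minimum obstacle distance along the path. 0 means every
    route touches a caging tool (a complete cage in this toy model).
    """
    rows, cols = len(grid), len(grid[0])
    obstacles = [(r, c) for r in range(rows) for c in range(cols) if grid[r][c]]
    if obstacles:
        # Multi-source BFS from the tools gives every cell its clearance.
        dist = [[None] * cols for _ in range(rows)]
        for r, c in obstacles:
            dist[r][c] = 0
        frontier = obstacles
        while frontier:
            nxt = []
            for r, c in frontier:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                        dist[nr][nc] = dist[r][c] + 1
                        nxt.append((nr, nc))
            frontier = nxt
    else:
        dist = [[rows + cols] * cols for _ in range(rows)]  # nothing blocks escape
    # Bottleneck (maximin) search: follow the widest route outward.
    best = [[-1] * cols for _ in range(rows)]
    best[start[0]][start[1]] = dist[start[0]][start[1]]
    heap = [(-best[start[0]][start[1]], start)]
    while heap:
        neg, (r, c) = heapq.heappop(heap)
        clearance = -neg
        if r in (0, rows - 1) or c in (0, cols - 1):
            return clearance  # reached the workspace boundary
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                cand = min(clearance, dist[nr][nc])
                if cand > best[nr][nc]:
                    best[nr][nc] = cand
                    heapq.heappush(heap, (-cand, (nr, nc)))
    return 0
```

Even this discrete version is a full search per configuration, which is why the paper amortizes the cost into a learned estimator that answers in milliseconds.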


2021 ◽  
Author(s):  
Saniya Zafar ◽  
sobia Jangsher ◽  
Arafat Al-Dweik

The deployment of mobile small cells (mScs) is widely adopted to improve quality-of-service (QoS) for high-mobility vehicles. However, the rapidly varying interference patterns among densely deployed mScs make resource allocation (RA) highly challenging. In such scenarios, the RA problem must be solved nearly in real time, which is a drawback for most existing RA algorithms. To overcome this constraint and solve the RA problem efficiently, we use deep learning (DL) in this work, owing to its ability to leverage historical data in the RA problem and to handle computationally expensive tasks offline. More specifically, this paper considers the RA problem in a vehicular environment comprising city buses, where DL is explored to optimize network performance. Simulation results reveal that RA using a Long Short-Term Memory (LSTM) algorithm outperforms other machine learning (ML) and DL-based RA mechanisms. RA using LSTM is less accurate than the existing Time Interval Dependent Interference Graph (TIDIG)-based and Threshold Percentage Dependent Interference Graph (TPDIG)-based RA, but improves on RA using the Global Positioning System Dependent Interference Graph (GPSDIG). The proposed scheme is, however, computationally less expensive than the TIDIG- and TPDIG-based algorithms.
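The interference-graph RA that schemes like TIDIG/TPDIG solve per time interval (and that the LSTM learns to approximate offline) can be sketched as greedy graph coloring: interfering mScs must never share a resource block. The graph representation and ordering heuristic below are illustrative assumptions, not the paper's exact algorithm.

```python
def allocate_resources(interference):
    """interference: dict mSc_id -> set of interfering mSc ids.

    Greedy coloring of the interference graph: cells with more
    interferers are allocated first, and each cell takes the lowest
    resource-block index unused by its neighbours.
    """
    allocation = {}
    for cell in sorted(interference, key=lambda c: -len(interference[c])):
        taken = {allocation[n] for n in interference[cell] if n in allocation}
        block = 0
        while block in taken:
            block += 1
        allocation[cell] = block
    return allocation
```

Because the interference graph changes as the buses move, this coloring must be recomputed every interval; the DL approach sidesteps that by predicting allocations from historical patterns.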


Author(s):  
Balamurugan Souprayen ◽  
Ayyasamy Ayyanar ◽  
Suresh Joseph K

To cope with existing financial circumstances and the development of the global food supply chain, the authors propose efficient food traceability techniques using the internet of things and obtain a solution for data prediction. The purpose of food traceability is to retain the quality of the raw material supply, diminish losses, and reduce system complexity. The primary issue is to tackle current limitations so that food defects do not exceed hazardous levels, and to inform customers of safety measures. A hybrid algorithm is proposed for food traceability to make accurate predictions and enhance period data. The internet of things is used to track and trace food quality and to check the data acquired from manufacturers and consumers. The experimental analysis shows that the proposed algorithm has a high accuracy rate, low execution time, and a low error rate.
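The track-and-trace idea can be sketched minimally: each IoT checkpoint appends an event to a batch's history, and a recall query walks that history to find where quality limits were exceeded. The class, record fields, and threshold semantics are hypothetical illustrations, not the authors' schema or hybrid algorithm.

```python
class TraceLedger:
    """Toy traceability store: one event list per food batch."""

    def __init__(self):
        self.events = {}  # batch_id -> list of (stage, quality_reading)

    def record(self, batch_id, stage, reading):
        """Called by an IoT checkpoint (farm, transport, retail, ...)."""
        self.events.setdefault(batch_id, []).append((stage, reading))

    def trace(self, batch_id, limit):
        """Return the stages where a quality reading exceeded the limit."""
        return [stage for stage, reading in self.events.get(batch_id, [])
                if reading > limit]
```

A production system would pair such a ledger with the prediction model so that limit violations can be anticipated rather than only traced after the fact.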


2021 ◽  
Vol 17 (3) ◽  
pp. 1-17
Author(s):  
Danfeng Sun ◽  
Jia Wu ◽  
Jian Yang ◽  
Huifeng Wu

The merging boundaries between edge computing and deep learning are forging a new blueprint for the Internet of Things (IoT). However, the low quality of data in many IoT platforms, especially those composed of heterogeneous devices, is hindering the development of high-quality applications for those platforms. The solution presented in this article is intelligent data collaboration, i.e., the concept of deep learning providing IoT with the ability to adaptively collaborate to accomplish a task. Here, we outline the concept of intelligent data collaboration in detail and present a mathematical model in general form. To demonstrate one possible case where intelligent data collaboration would be useful, we prepared an implementation called adaptive data cleaning (ADC), designed to filter noisy data out of temperature readings in an IoT base station network. ADC primarily consists of a denoising autoencoder LSTM for predictions and a four-level data processing mechanism to perform the filtering. Comparisons between ADC and a maximum slope method show that ADC has the lowest false error and the best filtering rates.
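The filtering step of an ADC-style cleaner can be sketched as follows: a predictor (here a moving average standing in for the denoising autoencoder LSTM) supplies an expected value, and readings that deviate beyond a tolerance are rejected as noise. The window size and tolerance are assumptions for illustration, not the paper's parameters or its four-level mechanism.

```python
def clean_stream(readings, window=3, tolerance=2.0):
    """Return (kept, rejected) lists for a temperature stream.

    A reading is rejected when it differs from the prediction (mean of
    the last `window` accepted readings) by more than `tolerance`.
    """
    kept, rejected = [], []
    for value in readings:
        history = kept[-window:]
        prediction = sum(history) / len(history) if history else value
        if abs(value - prediction) > tolerance:
            rejected.append(value)   # flagged as noise
        else:
            kept.append(value)
    return kept, rejected
```

Using only accepted readings as history keeps a single noise spike from dragging subsequent predictions toward the outlier.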


2015 ◽  
Vol 14 (6) ◽  
pp. 5809-5813
Author(s):  
Abhishek Prabhakar ◽  
Amod Tiwari ◽  
Vinay Kumar Pathak

Wireless security is the prevention of unauthorized access to computers over wireless networks. The growth of wireless networks over the last few years has mirrored the growth of the internet. Wireless networks have reduced the human intervention needed to access data at various sites, achieved by replacing wired infrastructure with wireless infrastructure. Key challenges in wireless networks include signal weakening, mobility, increasing data rates, minimizing size and cost, user security, and quality-of-service (QoS) parameters. The goal of this paper is to address the challenges that stand in the way of our understanding of wireless networks and wireless network performance.


2020 ◽  
Vol 71 (7) ◽  
pp. 868-880
Author(s):  
Nguyen Hong-Quan ◽  
Nguyen Thuy-Binh ◽  
Tran Duc-Long ◽  
Le Thi-Lan

Along with the strong development of camera networks, video analysis systems have become more and more popular and have been applied in various practical applications. In this paper, we focus on the person re-identification (person ReID) task, a crucial step in video analysis systems. The purpose of person ReID is to associate multiple images of a given person moving through a non-overlapping camera network. Many efforts have been devoted to person ReID. However, most studies of person ReID deal only with well-aligned bounding boxes that are detected manually and considered perfect inputs. In fact, when building a fully automated person ReID system, the quality of the two preceding steps, person detection and tracking, can strongly affect ReID performance. The contributions of this paper are two-fold. First, a unified framework for person ReID based on deep learning models is proposed. In this framework, a deep neural network for person detection is coupled with a deep-learning-based tracking method. In addition, features extracted from an improved ResNet architecture are proposed for person representation to achieve higher ReID accuracy. Second, our self-built dataset is introduced and employed to evaluate all three steps of the fully automated person ReID framework.
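The re-identification step itself reduces to embedding matching: given a feature vector for a query detection and a gallery of embeddings from other cameras, rank gallery identities by cosine similarity. The network producing the embeddings (an improved ResNet in the paper) is abstracted away here; the function names and toy vectors are illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def reidentify(query, gallery):
    """gallery: dict person_id -> embedding. Returns the best-matching id."""
    return max(gallery, key=lambda pid: cosine(query, gallery[pid]))
```

In a full pipeline the query embedding comes from the detector and tracker outputs, which is exactly why upstream detection and tracking quality propagates into ReID accuracy.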


2020 ◽  
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Ferdinand Filip ◽  
...  

This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis covers novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward advancements in sophisticated hybrid deep learning models.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 863
Author(s):  
Vidas Raudonis ◽  
Agne Paulauskaite-Taraseviciene ◽  
Kristina Sutiene

Background: Cell detection and counting are of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains challenging due to differences in cell size and shape, incomplete cell boundaries, and partially or fully overlapping cells. Moreover, the algorithm to be developed should process a large number of image data of varying quality in a reasonable amount of time. Methods: A multi-focus image fusion approach based on the deep learning U-Net architecture is proposed, which reduces the amount of data by up to a factor of 7 without losing the spectral information required for embryo enhancement in the microscopic image. Results: The experiment includes visual and quantitative analysis, estimating image similarity metrics and processing times, compared against two well-known techniques: the Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization. Conclusion: Comparatively, the image fusion time is substantially improved for different image resolutions, whilst ensuring the high quality of the fused image.
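A simplified, classical stand-in for the fusion the U-Net learns is per-pixel focus selection: take each pixel from whichever source image has the larger local contrast. This rule-based sketch is purely illustrative; the paper's method is learned, and the focus measure below is an assumption.

```python
def local_contrast(img, r, c):
    """Sum of absolute differences to 4-neighbours: a crude focus measure."""
    rows, cols = len(img), len(img[0])
    total = 0
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            total += abs(img[r][c] - img[nr][nc])
    return total

def fuse(img_a, img_b):
    """Fuse two same-sized grayscale images, preferring the sharper pixel."""
    return [[img_a[r][c]
             if local_contrast(img_a, r, c) >= local_contrast(img_b, r, c)
             else img_b[r][c]
             for c in range(len(img_a[0]))]
            for r in range(len(img_a))]
```

Per-pixel selection like this is fast but produces seams at focus boundaries, one motivation for replacing it with a learned fusion network.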

