caching mechanism
Recently Published Documents


TOTAL DOCUMENTS

112
(FIVE YEARS 26)

H-INDEX

12
(FIVE YEARS 4)

2022 ◽  
Author(s):  
Maninderpal Singh ◽  
Gagangeet Singh Aujla ◽  
Rasmeet Singh Bali

Abstract: Internet of Drones (IoD) facilitates the autonomous operation of drones across a wide range of applications (warfare, surveillance, photography, etc.) around the world. The data related to these applications is transmitted between the drones and the supporting infrastructure over wireless channels that must abide by stringent latency restrictions. However, relaying this data to the core cloud infrastructure may lead to a higher round-trip delay. Thus, we utilize the cloud close to the ground, i.e., edge computing, to realize an edge-envisioned IoD ecosystem. However, as this data is relayed over an open communication channel, it is often prone to different types of attacks due to its wider attack surface. Thus, we need a robust solution that can maintain the confidentiality, integrity, and authenticity of the data while providing the desired services. Blockchain technology is capable of handling these challenges owing to its distributed ledger that stores the data immutably. However, the conventional block architecture poses several challenges because of the limited computational capabilities of drones. As the size of the blockchain increases, the data flow also increases, and so do the associated challenges. Hence, to overcome these challenges, in this work we propose a derived blockchain architecture that decouples the data part (or block ledger) from the block header and shifts it to off-chain storage. In our approach, the registration of a new drone is performed to enable legitimate access control, thus ensuring identity management and traceability. Further, the interactions happen in the form of blockchain transactions. We propose a lightweight consensus mechanism based on stochastic selection followed by a transaction signing process to ensure that each drone is in control of its block. The proposed scheme also handles the expanding storage requirements with the help of data compression using a shrinking block mechanism. Lastly, the problem of additional delay anticipated due to drone mobility is handled using a multi-level caching mechanism. The proposed work has been validated in a simulated Gazebo environment and the results are promising in terms of different metrics. We have also provided numerical validations in the context of complexity, communication overheads, and computation costs.
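To make the idea of decoupling the block header from the off-chain data part more concrete, the following minimal Python sketch shows one possible representation; all names (BlockHeader, OFF_CHAIN_STORE, append_block, verify_block) are illustrative assumptions and not taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): the block header stays
# on-chain and carries only a digest of the payload, while the payload itself is
# shifted to an off-chain key-value store.
import hashlib
import json
import time

OFF_CHAIN_STORE = {}  # stand-in for the off-chain storage layer


class BlockHeader:
    def __init__(self, prev_hash: str, payload_hash: str):
        self.prev_hash = prev_hash
        self.payload_hash = payload_hash  # links the header to the off-chain data
        self.timestamp = time.time()

    def hash(self) -> str:
        raw = f"{self.prev_hash}{self.payload_hash}{self.timestamp}".encode()
        return hashlib.sha256(raw).hexdigest()


def append_block(prev_hash: str, payload: dict) -> BlockHeader:
    """Store the payload off-chain and keep only its digest in the header."""
    blob = json.dumps(payload, sort_keys=True).encode()
    payload_hash = hashlib.sha256(blob).hexdigest()
    OFF_CHAIN_STORE[payload_hash] = blob  # data part lives off-chain
    return BlockHeader(prev_hash, payload_hash)


def verify_block(header: BlockHeader) -> bool:
    """Integrity check: the off-chain data must match the on-chain digest."""
    blob = OFF_CHAIN_STORE.get(header.payload_hash)
    return blob is not None and hashlib.sha256(blob).hexdigest() == header.payload_hash
```

Keeping only fixed-size digests on-chain is what lets resource-constrained drones maintain the ledger while the bulky application data is stored and compressed elsewhere.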


2021 ◽  
Author(s):  
◽  
Phillip Lee-Ming Wong

One of the greatest issues in Genetic Programming (GP) is the computational effort required to run the evolution and discover a good solution. Phenomena such as program bloat (where genetic programs rapidly grow in size) can quickly exhaust available memory resources and slow down the evolutionary process, while the heavy cost of performing fitness evaluation can make problems with a lot of available data very slow to solve. These issues may limit the tasks to which GP can appropriately be applied, as well as inhibit its application in time- and space-sensitive environments. In this thesis, we look at developing solutions to some of these issues in GP computational cost. First, we develop an algebraic program simplification method based on simple rules and hashing techniques, and use this method in conjunction with standard GP on a variety of tasks. Our results suggest that program simplification can lead to a significant reduction in program size, while not significantly changing the effectiveness of the systems in finding solution programs. Secondly, we analyse the effects of program simplification on the internal GP "building blocks" to investigate whether simplification is a destructive or constructive force. Using two models for building blocks (numerical nodes and the more complex fixed-depth subtree), we track building blocks through GP runs on a symbolic regression problem, both with and without simplification. We find that the program simplification process can both disrupt and construct building blocks in the GP populations. However, GP systems using simplification appear to retain important building blocks, and the simplification process appears to lead to an increase in genetic diversity. These findings may help explain why using simplification does not reduce the effectiveness of GP systems in solving tasks. Lastly, we develop two methods of reducing the cost of fitness evaluation by reducing the number of node evaluations performed. The first method is elitism avoidance, which avoids re-evaluating programs that have been placed in the population using elitism reproduction. This method reduces the CPU time for evolving solutions for six different GP tasks. The second method is a subtree caching mechanism, which stores fitness evaluations for subtrees in a cache so that they may be fetched when these subtrees are encountered in future fitness evaluations. Results suggest that using this mechanism can significantly reduce both the number of node evaluations and the overall CPU time used in evolving solutions, without reducing the fitness of the solutions produced.
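As an illustration of the subtree-caching idea described above, the sketch below memoises subtree evaluations under a canonical key so that repeated subtrees are not re-evaluated; the nested-tuple tree representation and function names are assumptions, not the thesis implementation.

```python
# Hedged sketch: cache the per-input outputs of each subtree so that a subtree
# appearing many times across a population is evaluated only once.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

subtree_cache = {}  # canonical subtree string -> list of outputs, one per input


def canonical(tree) -> str:
    """Canonical key for a subtree given as nested tuples, e.g. ('+', 'x', 2.0)."""
    if isinstance(tree, tuple):
        return "(" + tree[0] + " " + " ".join(canonical(c) for c in tree[1:]) + ")"
    return str(tree)


def eval_subtree(tree, inputs):
    """Evaluate a subtree on all inputs, consulting the cache first."""
    key = canonical(tree)
    if key in subtree_cache:
        return subtree_cache[key]  # cache hit: node evaluations are skipped
    if isinstance(tree, tuple):
        left = eval_subtree(tree[1], inputs)
        right = eval_subtree(tree[2], inputs)
        out = [OPS[tree[0]](a, b) for a, b in zip(left, right)]
    elif tree == "x":
        out = list(inputs)
    else:
        out = [float(tree)] * len(inputs)
    subtree_cache[key] = out
    return out


# Example: the shared subtree ('+', 'x', 1.0) is evaluated once and reused.
inputs = [0.0, 1.0, 2.0]
prog = ("*", ("+", "x", 1.0), ("+", "x", 1.0))
print(eval_subtree(prog, inputs))  # [1.0, 4.0, 9.0]
```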


2021 ◽  
pp. 23-36
Author(s):  
Yan Zhang

Abstract: Edge caching is a key part of mobile edge computing. It not only supplies the data needed by edge computing tasks, but also enables powerful Internet of Things applications with massive amounts of data and various types of information in access networks. In this chapter, we present the architecture of the edge caching mechanism and introduce metrics for evaluating caching performance. We then discuss key issues in caching topology design, caching data scheduling, and caching server cooperation, and present a case study of artificial intelligence-empowered edge caching.
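As a concrete example of one commonly used caching-performance metric, the sketch below measures the cache hit ratio of a small LRU cache placed at the edge; the class, eviction policy, and request trace are illustrative assumptions rather than the chapter's design.

```python
# Minimal sketch of an edge cache with LRU eviction and a hit-ratio metric.
from collections import OrderedDict


class EdgeCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()  # content id -> content
        self.hits = 0
        self.requests = 0

    def get(self, content_id, fetch_from_cloud):
        """Serve from the edge if cached, otherwise fetch from the core and cache."""
        self.requests += 1
        if content_id in self.store:
            self.hits += 1
            self.store.move_to_end(content_id)  # mark as most recently used
            return self.store[content_id]
        content = fetch_from_cloud(content_id)  # miss: go back to the core cloud
        self.store[content_id] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used item
        return content

    def hit_ratio(self) -> float:
        return self.hits / self.requests if self.requests else 0.0


# Example trace: repeated requests for popular content raise the hit ratio,
# which serves as a proxy for reduced backhaul traffic and latency.
cache = EdgeCache(capacity=2)
for cid in ["a", "b", "a", "c", "a", "b"]:
    cache.get(cid, fetch_from_cloud=lambda c: f"payload-{c}")
print(f"hit ratio: {cache.hit_ratio():.2f}")
```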


2020 ◽  
Vol 175 (19) ◽  
pp. 36-46
Author(s):  
Jay Parekh ◽  
Apurv Moroney ◽  
Lavina Golani ◽  
Radha Shankarmani
