optimal node
Recently Published Documents

TOTAL DOCUMENTS: 129 (FIVE YEARS: 40)
H-INDEX: 12 (FIVE YEARS: 3)
Author(s): Shaobo Yang, Linyun Xiong, Sunhua Huang, Yalan He, Penghan Li, ...

2021, pp. 512-522
Author(s): Javier Díez-González, Rubén Álvarez, Paula Verde, Rubén Ferrero-Guillén, Alberto Martínez-Gutiérrez, ...

Over time, an enormous quantity of data is being generated, which requires effective techniques for handling such large databases and for streamlining data storage and dissemination. Storing and exploiting data at this scale also demands sufficiently capable systems with a proactive mechanism to meet the associated technological challenges. Traditional Distributed File Systems (DFS) struggle to handle dynamic variations and require an undefined settling time. To address these challenges, a proactive grid-based data management approach is proposed that arranges the data into many small chunks, called grids, and places them according to the currently available slots. Data durability and computation speed are balanced by the proposed data dissemination and data eligibility replacement algorithms, which improve both the durability of data access and the writing speed. The performance was evaluated on numerous grid datasets: chunks were analysed over several iterations by fixing the initial chunk statistics, making a predefined chunk suggestion, and then relocating the chunks after a substantial number of iterations. The results show that chunks reach an optimal node from the first replacement iteration, yielding an improvement of more than 21% in working clusters compared to the traditional approach.
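The placement step described above can be illustrated with a short sketch. The following Python snippet, with assumed names such as `GridPlacer`, shows one way to split data into small chunks ("grids") and greedily place each chunk on the node with the most currently available slots; it is an illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's implementation): split a dataset into
# fixed-size chunks ("grids") and place each chunk on the node that currently
# has the most free slots. `GridPlacer` and the node names are hypothetical.
from collections import defaultdict

class GridPlacer:
    def __init__(self, node_slots):
        # node_slots: mapping of node id -> number of currently available slots
        self.free = dict(node_slots)
        self.placement = defaultdict(list)

    def split(self, records, chunk_size):
        # Arrange the data into small chunks of at most `chunk_size` records.
        return [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]

    def place(self, chunks):
        # Greedily place each chunk on the node with the most free slots.
        for idx, _chunk in enumerate(chunks):
            node = max(self.free, key=self.free.get)
            if self.free[node] == 0:
                raise RuntimeError("no free slots left in the cluster")
            self.placement[node].append(idx)
            self.free[node] -= 1
        return dict(self.placement)

placer = GridPlacer({"node-a": 3, "node-b": 5, "node-c": 2})
chunks = placer.split(list(range(40)), chunk_size=8)   # 5 chunks of 8 records
print(placer.place(chunks))                            # {'node-b': [0, 1, 3], 'node-a': [2, 4]}
```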


2021, Vol 15 (1), pp. 46-58
Author(s): Xuanhe Zhou, Guoliang Li, Chengliang Chai, Jianhua Feng

Query rewrite transforms a SQL query into an equivalent one with higher performance. However, SQL rewrite is an NP-hard problem, and existing approaches adopt heuristics to rewrite the queries. These heuristics have two main limitations. First, the order in which rewrite rules are applied significantly affects query performance, yet the search space of all possible rewrite orders grows exponentially with the number of query operators and rules, making it hard to find the optimal rewrite order; existing methods apply a pre-defined order and may fall into a local optimum. Second, different rewrite rules have different benefits for different queries, and existing methods, which work on single plans, cannot effectively estimate the benefit of rewriting a query. To address these challenges, we propose a policy-tree-based query rewrite framework, in which the root is the input query and each node is a query rewritten from its parent. We aim to explore the nodes of the policy tree to find the optimal rewritten query. We use Monte Carlo Tree Search to navigate the policy tree and efficiently reach the optimal node. Moreover, we propose a learning-based model that estimates the expected performance improvement of each rewritten query, guiding the tree search more accurately. We also propose a parallel algorithm that explores the tree in parallel to further improve performance. Experimental results show that our method significantly outperforms existing approaches.
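As a rough illustration of the search procedure described above, the following Python sketch runs Monte Carlo Tree Search over a policy tree of rewritten queries. The rewrite rules and the `estimated_benefit` scorer (standing in for the paper's learned model) are assumptions, not the paper's actual components.

```python
# Minimal MCTS sketch over a policy tree of rewritten queries, assuming a toy
# benefit estimator instead of the learned model from the paper. Each node holds
# a query; children are produced by applying one rewrite rule to the parent.
import math
import random

class PolicyNode:
    def __init__(self, query, parent=None):
        self.query, self.parent = query, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def uct(self, c=1.4):
        # Upper Confidence bound for Trees: balance exploitation and exploration.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def mcts(root_query, rules, estimated_benefit, iterations=200):
    root = PolicyNode(root_query)
    for _ in range(iterations):
        node = root
        # Selection: descend along the child with the highest UCT score.
        while node.children:
            node = max(node.children, key=PolicyNode.uct)
        # Expansion: apply each rewrite rule to create child queries.
        if node.visits > 0:
            node.children = [PolicyNode(rule(node.query), node) for rule in rules]
            if node.children:
                node = random.choice(node.children)
        # Simulation: score the rewritten query with the benefit estimator.
        reward = estimated_benefit(node.query)
        # Backpropagation: update visit counts and values up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    best = max(root.children, key=lambda n: n.visits) if root.children else root
    return best.query
```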


PLoS ONE, 2021, Vol 16 (8), pp. e0256604
Author(s): Maurits H. W. Oostenbroek, Marco J. van der Leij, Quinten A. Meertens, Cees G. H. Diks, Heleen M. Wortelboer

The influence maximization problem (IMP) as classically formulated is based on the strong assumption that "chosen" nodes always adopt the new product. In this paper we propose a new influence maximization problem, referred to as the "Link-based Influence Maximization Problem" (LIM), which differs from IMP in that the spreader's decision variable changes from choosing an optimal seed to selecting an optimal node to influence in order to maximize the spread. Based on our proof that LIM is NP-hard with a monotonically increasing and submodular target function, we propose a greedy algorithm, GLIM, for optimizing LIM and use numerical simulation to explore its performance, in terms of spread and computation time, across different network types. The results indicate that the performance of LIM varies across network types, with GLIM outperforming the other methods in all network types at the cost of a higher computation time. We illustrate LIM by applying it in the context of a Dutch national health promotion program for the prevention of youth obesity within a network of Dutch schools. These results suggest that GLIM may be used to increase the effectiveness of health promotion programs.
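Because the target function is monotonically increasing and submodular, the greedy strategy underlying GLIM can be sketched generically: repeatedly add the node with the largest marginal gain. The following Python example is such a generic sketch; the `spread` oracle is a stand-in assumption (in the paper the spread would be estimated by simulating the diffusion process), not the authors' implementation.

```python
# Generic greedy maximization of a monotone submodular spread function.
# For such functions the greedy rule gives a (1 - 1/e) approximation guarantee.
def greedy_influence(nodes, k, spread):
    chosen = set()
    for _ in range(k):
        best_node, best_gain = None, float("-inf")
        current = spread(chosen)
        for v in nodes:
            if v in chosen:
                continue
            gain = spread(chosen | {v}) - current   # marginal gain of adding v
            if gain > best_gain:
                best_node, best_gain = v, gain
        chosen.add(best_node)
    return chosen

# Toy usage with a set-coverage spread function (itself monotone and submodular).
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 5, 6}}
spread = lambda s: len(set().union(*(coverage[v] for v in s)) if s else set())
print(greedy_influence(coverage.keys(), 2, spread))   # {'a', 'd'}
```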


Author(s): Amna Mubashar, Kalsoom Asghar, Abdul Rehman Javed, Muhammad Rizwan, Gautam Srivastava, ...

Centralized Personal Health Records (PHR) are mutable and their security can be compromised, since centralization introduces a single point of failure. Confidentiality, privacy, and security are common concerns in clinical record systems, and specific security and protection schemes are used to secure clinical records. Using the InterPlanetary File System (IPFS), a decentralized PHR can be maintained that allows patients to access their records without delay, while a Kademlia-based distributed hash table provides fault tolerance and enables patients to keep track of their medical history. However, a significant issue in IPFS is data availability: content remains available only while peers on the network continue to host it, and once no peer holds a copy the data is permanently lost. We propose an architecture that aims to provide faster retrieval and constant PHR availability using blockchain and IPFS. The results show that, in each iteration, an optimal node is selected from among all available adjacent nodes.
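The node-selection idea can be illustrated with a Kademlia-style sketch: among the reachable adjacent peers, pick the one whose ID is XOR-closest to the identifier of the requested content. The peer names and the `select_optimal_node` helper below are hypothetical, and the paper's actual selection criteria may differ.

```python
# Illustrative Kademlia-style node selection, not the paper's architecture:
# choose the adjacent peer whose ID minimizes the XOR distance to the content key.
import hashlib

def node_id(name: str) -> int:
    # Derive a 160-bit ID from a name, as Kademlia does with SHA-1.
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

def xor_distance(a: int, b: int) -> int:
    return a ^ b

def select_optimal_node(content_key: str, peers: list) -> str:
    target = node_id(content_key)
    return min(peers, key=lambda p: xor_distance(node_id(p), target))

peers = ["peer-1", "peer-2", "peer-3", "peer-4"]
print(select_optimal_node("patient-42/record.json", peers))
```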

