A Software Cache Mechanism for Reducing the OpenTSDB Query Time

Author(s):  
Thada Wangthammang ◽  
Pichaya Tandayya
2014 ◽  
Vol 08 (04) ◽  
pp. 515-544 ◽  
Author(s):  
Pavlos Fafalios ◽  
Panagiotis Papadakos ◽  
Yannis Tzitzikas

The integration of the classical Web of documents with the emerging Web of Data is a challenging vision. In this paper we focus on an integration approach applied during searching, which enriches the responses of non-semantic search systems with semantic information, i.e., Linked Open Data (LOD), and exploits the outcome to offer advanced exploratory search services that provide an overview of the search space and allow users to explore the related LOD. We use named entities identified in the search results to automatically connect search hits with LOD, and we consider a scenario where this entity-based integration is performed at query time, with no human effort and no a-priori indexing, which is beneficial in terms of configurability and freshness. However, the number of identified entities can be high, and the same is true for the semantic information about these entities that can be fetched from the available LOD. To this end, we propose a link-analysis-based method for ranking (and thus selecting to show) the most important semantic information related to the search results. We report the results of a user survey in the marine domain, together with comparative results that illustrate the effectiveness of the proposed (PageRank-based) ranking scheme. Finally, we report experimental results on efficiency, showing that the proposed functionality can be offered even at query time.
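
The PageRank-style ranking described above can be sketched with a small, self-contained example. This is not the authors' code: the power-iteration implementation, the damping factor, and the toy marine-domain entity graph are all illustrative assumptions; in the paper the graph would be built from the named entities detected in the search hits and the LOD facts fetched for them.

```python
# Minimal power-iteration PageRank over a dict: node -> list of outgoing links.
def pagerank(links, damping=0.85, iterations=50):
    nodes = set(links)
    for targets in links.values():
        nodes.update(targets)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node in nodes:
            targets = links.get(node, [])
            if not targets:                    # dangling node: spread rank evenly
                share = damping * rank[node] / len(nodes)
                for n in nodes:
                    new_rank[n] += share
            else:
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new_rank[t] += share
        rank = new_rank
    return rank

# Hypothetical entity graph extracted from marine-domain search results.
entity_links = {
    "Thunnus albacares": ["Scombridae", "FAO area 71"],
    "Scombridae":        ["Thunnus albacares"],
    "FAO area 71":       [],
}

for entity, score in sorted(pagerank(entity_links).items(),
                            key=lambda kv: kv[1], reverse=True):
    print(f"{score:.3f}  {entity}")
```

The top-ranked entities would then be the ones whose semantic information is shown to the user, keeping the amount of LOD fetched and displayed manageable.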


2007 ◽  
Vol 42 (7) ◽  
pp. 227-236
Author(s):  
Qin Wang ◽  
Junpu Chen ◽  
Weihua Zhang ◽  
Min Yang ◽  
Binyu Zang

2013 ◽  
Vol 380-384 ◽  
pp. 1969-1972
Author(s):  
Bo Yuan ◽  
Jin Dou Fan ◽  
Bin Liu

Traditional network processors (NPs) adopt either a local memory mechanism or a cache mechanism as their hierarchical memory structure. The local memory mechanism usually provides only a small on-chip memory space, which is not sufficient for varied and complicated applications. The cache mechanism is better at handling temporary data that must be read and written frequently, but in deep packet processing a cache miss occurs when reading each segment of a packet. We propose a cooperative mechanism of local memory and cache, in which packet data and temporary data are stored in local memory and cache, respectively. Analysis and experimental evaluation show that the cooperative mechanism can improve the performance of network processors and reduce processing latency at little extra resource cost.
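
The idea of the cooperative mechanism can be illustrated with a minimal software simulation. This is only a sketch, not the paper's hardware design: the buffer sizes, flow count, and LRU policy are assumptions made for illustration. Packet segments are written to a dedicated local-memory buffer so they never enter the cache, while per-flow temporary state is accessed through a small LRU cache.

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache that counts misses against a backing 'external memory'."""
    def __init__(self, capacity):
        self.capacity, self.store, self.misses = capacity, OrderedDict(), 0

    def get(self, key, load_from_memory):
        if key in self.store:
            self.store.move_to_end(key)        # cache hit
            return self.store[key]
        self.misses += 1                       # cache miss: fetch and insert
        value = load_from_memory(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least recently used
        return value

local_memory = {}                  # on-chip buffer reserved for packet segments
cache = LRUCache(capacity=8)       # small cache reserved for temporary data
flow_table = {f: {"pkts": 0} for f in range(8)}   # off-chip per-flow state

for pkt_id in range(100):
    flow = pkt_id % 8
    # Packet segments go straight to local memory: they never pollute the cache.
    local_memory[pkt_id % 16] = b"\x00" * 64
    # Temporary per-flow state is read and updated through the cache.
    state = cache.get(flow, lambda f: flow_table[f])
    state["pkts"] += 1

print("cache misses for temporary data:", cache.misses)   # only compulsory misses
```

Because the packet data bypasses the cache, only the eight compulsory misses for the flow state occur; routing packet segments through the same cache would instead evict the temporary data repeatedly, which is the behaviour the cooperative mechanism is designed to avoid.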


2012 ◽  
Vol 433-440 ◽  
pp. 3335-3339
Author(s):  
Bo Zhu Wu

Through an in-depth study of existing distributed database query processing technology, this paper proposes a distributed database query processing scheme. The scheme optimizes existing query processing by storing commonly used query results according to their query frequency, so that subsequent queries can reuse them directly or as intermediate results. This avoids the transmission of large amounts of data, thereby reducing query time and improving query efficiency.
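
A minimal sketch of this frequency-based result reuse is shown below. The abstract does not give implementation details, so the class, the `execute_remote` stand-in for running a query at remote sites, and the frequency threshold are all hypothetical.

```python
from collections import Counter

class QueryResultCache:
    def __init__(self, execute_remote, min_frequency=3):
        self.execute_remote = execute_remote   # runs the query at the remote site(s)
        self.min_frequency = min_frequency     # keep only "commonly used" queries
        self.frequency = Counter()
        self.results = {}

    def query(self, sql):
        self.frequency[sql] += 1
        if sql in self.results:                # reuse the stored result directly
            return self.results[sql]
        rows = self.execute_remote(sql)        # ship data from the remote site(s)
        if self.frequency[sql] >= self.min_frequency:
            self.results[sql] = rows           # keep for subsequent queries
        return rows

# Toy "remote execution" standing in for the distributed sites.
def execute_remote(sql):
    print("shipping data for:", sql)
    return [("row",)]

cache = QueryResultCache(execute_remote)
for _ in range(5):
    cache.query("SELECT * FROM orders WHERE region = 'EU'")
# After the third request the result is stored locally, so the last two
# calls are answered without any data transmission.
```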


1994 ◽  
Author(s):  
Leonidas I. Kontothanassis ◽  
Michael L. Scott
