Improved Performance by Combining Web Pre-Fetching Using Clustering with Web Caching Based on SVM Learning Method

Author(s):  
Kuttuva Rajendran Baskaran ◽  
Chellan Kalaiarasan

Combining Web caching and Web pre-fetching improves bandwidth utilization, reduces the load on the origin server, and reduces the delay incurred in accessing information. Web pre-fetching is the process of fetching from the origin server those Web objects that are most likely to be used in the future; the fetched contents are stored in the cache. Web caching is the process of storing popular objects "closer" to the user so that they can be retrieved faster. In the literature, many interesting works have been carried out separately for Web caching and Web pre-fetching. In this work, a clustering technique is used for pre-fetching and an SVM-LRU technique for Web caching, and performance is measured in terms of Hit Ratio (HR) and Byte Hit Ratio (BHR). Using real data, it is demonstrated that this approach is superior to combining clustering-based pre-fetching with the traditional LRU page-replacement method for Web caching.
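The abstract does not give implementation details, but the SVM-LRU idea can be sketched as an LRU cache whose eviction decision is steered by a learned revisit score. The class and the `score_fn` hook below are illustrative assumptions, with `score_fn` standing in for the trained SVM classifier:

```python
import time

class SVMLRUCache:
    """Sketch of an SVM-assisted LRU cache: on eviction, the entry whose
    learned revisit score is lowest is removed; recency breaks ties."""

    def __init__(self, capacity, score_fn):
        self.capacity = capacity          # max number of cached objects
        self.score_fn = score_fn          # stands in for the trained SVM
        self.store = {}                   # key -> (object, last_access)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None                   # cache miss
        obj, _ = entry
        self.store[key] = (obj, time.monotonic())   # refresh recency
        return obj

    def put(self, key, obj):
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the entry the classifier deems least likely to be
            # revisited; among equal scores, the least recently used goes.
            victim = min(self.store,
                         key=lambda k: (self.score_fn(k), self.store[k][1]))
            del self.store[victim]
        self.store[key] = (obj, time.monotonic())
```

With a constant `score_fn` this degenerates to plain LRU, which is the baseline the paper compares against.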

Author(s):  
Sathiyamoorthi ◽  
Murali Bhaskaran

Web caching and Web pre-fetching are two important techniques for improving the performance of Web-based information retrieval systems. The two techniques complement each other: Web caching exploits the temporal locality of Web objects, whereas Web pre-fetching exploits their spatial locality. However, if caching and pre-fetching are integrated inefficiently, network traffic and Web server load may both increase. Conventional replacement policies are suitable only for memory caching, which involves fixed-size pages; Web caching involves pages of differing sizes, so an efficient algorithm that works well in the Web cache environment is needed. Moreover, conventional replacement policies are unsuitable in a clustering-based pre-fetching environment, where multiple objects are pre-fetched at once and cannot be handled by conventional algorithms. Care must therefore be taken when integrating Web caching with Web pre-fetching in order to overcome these limitations. In this paper, novel algorithms are proposed for integrating Web caching with clustering-based pre-fetching, using Modified ART1 for the clustering. The proposed algorithm outperforms traditional algorithms in terms of hit rate and the number of objects to be pre-fetched, and hence saves bandwidth.
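The pre-fetching step of such a scheme can be sketched independently of how the clusters are obtained. In the snippet below the clusters are assumed to be precomputed (standing in for the Modified-ART1 output); the function name and signature are ours, not the paper's:

```python
def prefetch_candidates(requested_url, clusters, cache):
    """Given precomputed access clusters (stand-ins for the Modified-ART1
    output), return cluster-mates of the requested URL that are not yet
    cached -- the objects a cluster-based pre-fetcher would pull in."""
    for cluster in clusters:
        if requested_url in cluster:
            return [url for url in cluster
                    if url != requested_url and url not in cache]
    return []   # URL belongs to no known cluster: nothing to pre-fetch
```

Because a single request can trigger several pre-fetches, the cache's replacement policy must cope with bursts of multiple insertions, which is exactly the mismatch with conventional policies the abstract points out.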


Author(s):  
V. Sathiyamoorthi

Network congestion remains one of the main barriers to the continuing success of the Internet and Web-based services. In this setting, proxy caching is one of the most successful solutions for improving Web performance, since it reduces network traffic and Web server load and improves user-perceived response time. The most popular Web objects, those likely to be revisited in the near future, are stored in the proxy server, improving response time and saving network bandwidth. The main component of Web caching is its cache replacement policy, which plays a key role in replacing existing objects when there is no room for a new one, especially when the cache is full. Conventional replacement policies used in Web caching environments yield poor network performance: they are suited to memory caching, which involves fixed-size objects, whereas Web caching involves objects of varying size and hence needs a policy designed for the Web cache environment. Moreover, most existing Web caching policies consider only a few factors and ignore others that affect the efficiency of Web proxy caching. Hence, a novel policy for the Web cache environment is proposed. The proposed policy incorporates size, cost, frequency, ageing, time of entry into the cache, and popularity of Web objects into the cache removal decision, and it uses Web usage mining to improve the caching policy. Empirical analysis shows that the proposed policy performs better than existing policies in terms of performance metrics such as hit rate and byte hit rate.
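The abstract lists the factors the policy combines but not its formula. A minimal sketch, assuming a GDSF-style multiplicative form (the weighting below is our assumption, not the paper's), could score each cached object so that the lowest-scoring one is evicted:

```python
def replacement_score(obj, now):
    """Hypothetical utility combining the factors the policy names:
    frequency, cost, popularity, size, ageing, and time in the cache.
    Higher is better; the lowest-scoring object is evicted first."""
    age = now - obj['last_access']        # seconds since the last hit
    residency = now - obj['entry_time']   # time spent in the cache
    return (obj['frequency'] * obj['cost'] * obj['popularity']) / (
        obj['size'] * (1.0 + age) * (1.0 + residency))
```

Under this form, a small, cheap-to-miss, frequently and recently used object survives longest, which matches the intuition behind combining these factors.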


The Internet of Things (IoT) is one of the fastest-growing technology paradigms, used in every sector, and Quality of Service (QoS) is a critical component of such systems from the usage perspective of ProSumers (producers and consumers). Most recent research on QoS in IoT has used Machine Learning (ML) techniques as one of the computing methods for improved performance and solutions. The adoption of ML and its methodologies has become a common trend and need across technologies and domain areas, through open-source frameworks, task-specific algorithms, and AI and ML techniques. In this work we propose an ML-based prediction model for resource optimization in the IoT environment for QoS provisioning. The proposed methodology is implemented using a multi-layer neural network (MNN) for Long Short-Term Memory (LSTM) learning in a layered IoT environment. The model treats resources such as bandwidth and energy as QoS parameters and provides the required QoS through efficient utilization of these resources in the IoT environment. The performance of the proposed model is evaluated in a real field implementation on a civil construction project, where real data are collected using video sensors and mobile devices as edge nodes. The prediction model shows improved bandwidth and energy utilization, in turn providing the required QoS in the IoT environment.


2018 ◽  
Vol 13 (1) ◽  
pp. 160-168
Author(s):  
Nandalal Rana ◽  
Krishna P Bhandari ◽  
Surendra Shrestha

Bandwidth requirement prediction is an important part of network design and service planning. The natural way to predict bandwidth requirements for an existing network is to analyze past trends and apply an appropriate mathematical model to forecast the future. In this research, historical usage data from FWDR network nodes of Nepal Telecom were subjected to a univariate linear time-series ARIMA model after a logit transformation to predict future bandwidth requirements. The predictions were compared with real data obtained from the same network and found to be within 10% MAPE. The model reduces MAPE by 11.71% and 15.42%, respectively, compared with the non-logit-transformed ARIMA model at a 99% CI. These results imply that the logit-transformed ARIMA model performs better than the non-logit-transformed one. For more accurate and longer-term predictions, a larger dataset can be used, along with seasonal adjustments and consideration of long-term variations.

Journal of the Institute of Engineering, 2017, 13(1): 160-168
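The two transformations the study relies on are easy to make concrete. A minimal sketch, assuming bandwidth usage is normalized to a fraction of link capacity before the logit step (the ARIMA fit itself is omitted):

```python
import math

def logit(p):
    """Map a utilization fraction in (0, 1) to the real line; the ARIMA
    model is then fitted to the transformed series."""
    return math.log(p / (1.0 - p))

def inv_logit(x):
    """Map an ARIMA forecast back to a utilization fraction in (0, 1),
    which keeps predictions inside physically meaningful bounds."""
    return 1.0 / (1.0 + math.exp(-x))

def mape(actual, predicted):
    """Mean absolute percentage error, the accuracy metric used to
    compare the logit-transformed and plain ARIMA models."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)
```

Bounding forecasts in (0, 1) is the practical reason for the logit step: a plain ARIMA model can forecast utilization below 0% or above 100%.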


2020 ◽  
Author(s):  
Viraj Shah ◽  
Chinmay Hegde

We consider the problem of reconstructing a signal from under-determined modulo observations (or measurements). This observation model is inspired by a (relatively) less well-known imaging mechanism called modulo imaging, which can be used to extend the dynamic range of imaging systems; variations of this model have also been studied under the category of phase unwrapping. Signal reconstruction in the under-determined regime with modulo observations is a challenging ill-posed problem, and existing reconstruction methods cannot be used directly. In this paper, we propose a novel approach to solving the inverse problem, limited to two modulo periods, inspired by recent advances in algorithms for phase retrieval under sparsity constraints. We show that given a sufficient number of measurements, our algorithm perfectly recovers the underlying signal and outperforms existing algorithms. We also provide experiments on both synthetic and real data that validate our approach and demonstrate its superior performance.
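The forward model behind this problem is simple to state. A toy sketch of the modulo (folding) step is below; in the paper the samples are first produced by an under-determined linear map of a sparse signal, which is omitted here for brevity:

```python
def modulo_measure(samples, period):
    """Forward model of modulo imaging: each measurement is folded into
    [0, period). In the two-period regime the paper treats, the signal
    lies in [0, 2*period), so recovery amounts to estimating, per sample,
    a binary wrap bit z in {0, 1} with x = y + z * period."""
    return [s % period for s in samples]
```

The ill-posedness is visible even in this toy: a measurement of 0.5 with period 2.0 could come from either 0.5 or 2.5, and the reconstruction algorithm must disambiguate the wrap bits jointly with the sparse signal.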


2010 ◽  
Author(s):  
Neena Gupta ◽  
Manish Singh ◽  
R. B. Patel ◽  
B. P. Singh

Author(s):  
Mariana Damova ◽  
Atanas Kiryakov ◽  
Maurice Grinberg ◽  
Michael K. Bergman ◽  
Frédérick Giasson ◽  
...  

The chapter introduces the process of turning two upper-level ontologies, PROTON and UMBEL, into reference ontologies and integrating them into the so-called Reference Knowledge Stack (RKS). It is argued that the RKS is an important step in the efforts of the Linked Open Data (LOD) project to transform the Web into a global data space with diverse real data, available for review and analysis. The RKS is intended to make interoperability between published datasets much more efficient than it is now. The approach discussed in the chapter consists of developing reference layers of upper-level ontologies by mapping them to certain LOD schemata and assigning instance data to them so that they cover a reasonable portion of the LOD datasets. The chapter presents the methods (manual and semi-automatic) used in creating the RKS and gives examples that illustrate its advantages for managing highly heterogeneous data and its usefulness in real-life knowledge-intensive applications.


2010 ◽  
Vol 2010 ◽  
pp. 1-11 ◽  
Author(s):  
José V. Manjón ◽  
Pierrick Coupé ◽  
Antonio Buades ◽  
D. Louis Collins ◽  
Montserrat Robles

In typical clinical Magnetic Resonance Imaging settings, both low- and high-resolution images of different types are routinely acquired. In some cases, the acquired low-resolution images have to be upsampled to match other high-resolution images for later analysis or post-processing, such as registration or multimodal segmentation. However, classical interpolation techniques are not able to recover the high-frequency information lost during the acquisition process. In the present paper, a new super-resolution method is proposed to reconstruct high-resolution images from low-resolution ones, using information from coplanar high-resolution images acquired from the same subject. Furthermore, the reconstruction process is constrained to be physically plausible with respect to the MR acquisition model, which allows a meaningful interpretation of the results. Experiments on synthetic and real data show the effectiveness of the proposed approach, and a comparison with classical state-of-the-art interpolation techniques demonstrates the improved performance of the proposed methodology.
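The physical-plausibility constraint can be illustrated with a toy 1-D version of the acquisition model. Assuming, as is common for such models, that each low-resolution sample is the average of the high-resolution samples it covers (the exact operator in the paper may differ):

```python
def box_downsample(signal, factor):
    """Toy 1-D acquisition model: a low-resolution sample is the mean of
    the `factor` high-resolution samples it covers. A physically
    plausible super-resolved image must reproduce the observed
    low-resolution data when passed back through this map."""
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal), factor)]
```

Enforcing that the reconstruction, once re-downsampled, matches the acquired data is what distinguishes a constrained super-resolution method from unconstrained interpolation.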


2011 ◽  
Vol 2011 ◽  
pp. 1-15 ◽  
Author(s):  
Yang Chen ◽  
Weimin Yu ◽  
Yinsheng Li ◽  
Zhou Yang ◽  
Limin Luo ◽  
...  

Edge-preserving Bayesian restorations using nonquadratic priors are often inefficient at restoring continuous variations and tend to produce block artifacts around edges in ill-posed inverse image restorations. To overcome this, we previously proposed a spatially adaptive (SA) prior with improved performance. However, restoration with this SA prior suffers from high computational cost and an unguaranteed convergence problem. To address these issues, this paper proposes a Large-scale Total Patch Variation (LS-TPV) prior model for Bayesian image restoration. In this model, the prior for each pixel is defined as a singleton conditional probability, in the form of a mixture of a patch-similarity prior and a weight-entropy prior. A joint MAP estimation is thus built to ensure monotonic iteration. The intensive calculation of patch distances is greatly alleviated by parallelization with the Compute Unified Device Architecture (CUDA). Experiments with both simulated and real data validate the good performance of the proposed restoration.
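The patch-similarity term at the heart of such priors can be sketched in a few lines. This is a generic nonlocal-means-style weighting, not the paper's exact formulation; the bandwidth `h` and Gaussian kernel are our assumptions:

```python
import math

def patch_weights(center_patch, neighbor_patches, h):
    """Sketch of a patch-similarity weighting: each neighbouring patch
    gets a weight decaying with its squared distance to the centre patch
    (bandwidth h), and the weights are normalized to sum to one. These
    are the distances whose computation the paper parallelizes on CUDA."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    raw = [math.exp(-d2(center_patch, q) / (h * h)) for q in neighbor_patches]
    total = sum(raw)
    return [w / total for w in raw]
```

Computing these distances for every pixel's patch neighbourhood is the dominant cost, which is why a massively parallel implementation pays off.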


The Web platform is a promising candidate to provide an interoperability layer in an IoT-based system comprising various kinds of device specifications and client platforms, leading to the transformation of the IoT into the WoT (Web of Things). Implementing the Web platform in the IoT world requires Web-compatible middleware that still maintains lightweight and efficient machine-to-machine (M2M) communication. In this paper we propose a Web of Things middleware with publish-subscribe functionality, WoTPubSub. In contrast to existing solutions, this middleware uses the lightweight MQTT protocol to communicate with constrained devices while maintaining compatibility with the existing Web architecture. The proposed system consists of three actors: the user as a RESTful HTTP client; the sensing/actuating constrained device as both MQTT publisher and subscriber; and the proposed middleware acting as a communication bridge that translates the user's HTTP requests into MQTT publish-subscribe actions. We consider two data-flow scenarios in the proposed middleware: the user obtaining data from a sensing device, and the user issuing a command to an actuating device. From functional and performance testing, we conclude that the proposed middleware provides Web-compatible intermediary functionality between the user and sensing/actuating constrained devices, with improved performance compared to existing approaches.
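The bridge's translation step can be sketched without any network stack. The URL scheme, topic layout, and method mapping below are illustrative assumptions, not the paper's specification:

```python
def http_to_mqtt(method, path, body=None):
    """Sketch of the HTTP-to-MQTT translation a WoT bridge performs
    (naming is ours): a RESTful request on /devices/<id>/<resource>
    becomes an MQTT action on topic devices/<id>/<resource>.
    GET -> subscribe (user reads a sensor);
    PUT -> publish   (user commands an actuator)."""
    topic = path.strip('/')      # e.g. /devices/42/temp -> devices/42/temp
    if method == 'GET':
        return ('subscribe', topic, None)
    if method == 'PUT':
        return ('publish', topic, body)
    raise ValueError('unsupported method: ' + method)
```

This captures the two data-flow scenarios the paper evaluates: a GET covers the sensing path and a PUT covers the actuating path, with the middleware holding the MQTT connection on the device side.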

