Prefetching in HTTP to improve performance in the delivery of web pages

2021 ◽  
Author(s):  
Adam Serbinski

In this work, two enhancements are presented to improve the performance of complete web page delivery. The first enhancement is the addition of disk-to-memory prefetching in the Apache HTTP Server, which increases the number of files served from the web server cache by 27%. The second enhancement is an addition to the HTTP protocol that allows server-to-client prefetching, eliminating the need for the web client to request the embedded objects of web pages. This modification reduces page load times to as low as 24.5% of the non-prefetching page load times.
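A minimal sketch of the disk-to-memory prefetching idea, assuming a simple LRU cache in front of the file system; the PrefetchCache class, cache size, and eviction policy are illustrative stand-ins, not the Apache module's actual internals. A request for a page would call get() with the page's embedded objects so that subsequent requests for those objects are served from memory.

```python
# Hypothetical sketch: names and sizes are illustrative, not Apache internals.
import os
from collections import OrderedDict

class PrefetchCache:
    """LRU cache that eagerly loads a page's embedded objects from disk."""

    def __init__(self, root, max_bytes=16 * 1024 * 1024):
        self.root = root
        self.max_bytes = max_bytes
        self.used = 0
        self.store = OrderedDict()

    def _load(self, path):
        full = os.path.join(self.root, path.lstrip("/"))
        with open(full, "rb") as f:
            data = f.read()
        self.store[path] = data
        self.used += len(data)
        while self.used > self.max_bytes:            # evict least recently used
            _, old = self.store.popitem(last=False)
            self.used -= len(old)
        return data

    def get(self, path, embedded=()):
        data = self.store.pop(path, None)
        if data is not None:
            self.store[path] = data                  # refresh LRU position
        else:
            data = self._load(path)                  # cache miss: read from disk
        for obj in embedded:                         # prefetch embedded objects
            if obj not in self.store:                # so later requests hit memory
                self._load(obj)
        return data

# Usage (illustrative): cache = PrefetchCache("/var/www/html")
# cache.get("/index.html", embedded=["/style.css", "/logo.png"])
```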



Author(s):  
Carmen Domínguez-Falcón ◽  
Domingo Verano-Tacoronte ◽  
Marta Suárez-Fuentes

Purpose The strong regulation of the Spanish pharmaceutical sector encourages pharmacies to modify their business model, giving the customer a more relevant role by integrating 2.0 tools. However, the study of the implementation of these tools is still quite limited, especially in terms of customer-oriented web page design. This paper aims to analyze the online presence of Spanish community pharmacies by studying the profile of their web pages to classify them by their degree of customer orientation. Design/methodology/approach In total, 710 community pharmacies were analyzed, of which 160 had web pages. Using items drawn from the literature, content analysis was performed to evaluate the presence of these items on the web pages. Then, after analyzing the scores on the items, a cluster analysis was conducted to classify the pharmacies according to the degree of development of their online customer orientation strategy. Findings The number of pharmacies with a web page is quite low. The development of these websites is limited, and they play a more informational than relational role. The statistical analysis classifies the pharmacies into four groups according to their level of development. Practical implications Pharmacists should make greater use of their websites, incorporating Web 2.0 and social media (SM) platforms, to facilitate real two-way communication with customers and other stakeholders and to maintain relationships with them. Originality/value This study analyzes, from a marketing perspective, the degree of Web 2.0 adoption and the characteristics of the websites, in terms of aiding communication and interaction with customers, in the Spanish pharmaceutical sector.
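A minimal sketch of the classification step, assuming, hypothetically, that each pharmacy's web page has already been scored on binary content-analysis items; k-means with four clusters stands in for whatever clustering method the authors actually used, and the item scores are invented.

```python
# Hypothetical sketch: the score matrix is invented; k-means with k=4 mirrors
# the four groups reported in the findings but may not be the authors' method.
import numpy as np
from sklearn.cluster import KMeans

# Rows: pharmacies; columns: presence (1) / absence (0) of customer-oriented
# items on the web page (e.g., contact form, online ordering, social links).
scores = np.array([
    [1, 0, 0, 0],   # informational only
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 1],   # fully relational
    [1, 0, 1, 0],
    [0, 0, 0, 0],
])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)
for label, row in zip(km.labels_, scores):
    print(f"group {label}: items {row}")
```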


Information ◽  
2018 ◽  
Vol 9 (9) ◽  
pp. 228 ◽  
Author(s):  
Zuping Zhang ◽  
Jing Zhao ◽  
Xiping Yan

Web page clustering is an important technology for sorting network resources. By extraction and clustering based on the similarity of Web pages, a large amount of information on a Web page can be organized effectively. In this paper, after describing the extraction of Web feature words, methods for calculating the weights of feature words are studied in depth. Taking Web pages as objects and Web feature words as attributes, a formal context is constructed for formal concept analysis. An algorithm for constructing a concept lattice based on cross data links is proposed and successfully applied. This method can be used to cluster Web pages using the concept lattice hierarchy. Experimental results indicate that the proposed algorithm outperforms previous competitors with regard to time consumption and clustering quality.
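A minimal sketch of the formal-context step on a toy example: pages are objects, feature words are attributes, and the formal concepts (extent/intent pairs) are enumerated by brute-force closure. The context and the enumeration strategy are illustrative; the paper's cross-data-link lattice construction is not reproduced here.

```python
# Hypothetical toy context: pages as objects, feature words as attributes.
from itertools import combinations

context = {
    "page1": {"news", "sports"},
    "page2": {"news", "finance"},
    "page3": {"news", "sports", "finance"},
}
attributes = set().union(*context.values())

def extent(intent_set):
    """Objects having every attribute in the intent."""
    return {g for g, attrs in context.items() if intent_set <= attrs}

def intent(extent_set):
    """Attributes shared by every object in the extent."""
    shared = attributes.copy()
    for g in extent_set:
        shared &= context[g]
    return shared

# Brute-force enumeration: a formal concept is a closed (extent, intent) pair.
concepts = set()
for r in range(len(attributes) + 1):
    for subset in combinations(sorted(attributes), r):
        b = intent(extent(set(subset)))          # close the attribute subset
        concepts.add((frozenset(extent(b)), frozenset(b)))

for ext_, int_ in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(ext_), sorted(int_))
```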


Author(s):  
Satinder Kaur ◽  
Sunil Gupta

Information plays a very important role in life, and nowadays the world largely depends on the World Wide Web to obtain it. The Web comprises a lot of websites from every discipline, and websites consist of web pages interlinked with each other by hyperlinks. The success of a website largely depends on the design aspects of its web pages. Researchers have done a lot of work to appraise web pages quantitatively. Keeping in mind the importance of the design aspects of a web page, this paper aims at the design of an automated evaluation tool which evaluates these aspects for any web page. The tool takes the HTML code of the web page as input, then extracts the HTML tags and checks them for uniformity. The tool comprises normalized modules which quantify the measures of the design aspects. For validation, the tool has been applied to four web pages from distinct sites, and their design aspects have been reported for comparison. The tool will benefit web developers, who can predict the design quality of web pages and enhance it before and after implementation of the website, without user interaction.
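A minimal sketch of the tag-extraction and uniformity-check step, using Python's standard html.parser; the balanced open/close-tag check is one illustrative metric, not the paper's actual module set.

```python
# Hypothetical sketch: balanced open/close tags as one uniformity measure.
from collections import Counter
from html.parser import HTMLParser

VOID_TAGS = {"img", "br", "hr", "meta", "link", "input"}  # no closing tag needed

class TagAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.opened = Counter()
        self.closed = Counter()

    def handle_starttag(self, tag, attrs):
        self.opened[tag] += 1

    def handle_endtag(self, tag):
        self.closed[tag] += 1

html = "<html><body><h1>Title</h1><p>Text<img src='a.png'></body></html>"
auditor = TagAuditor()
auditor.feed(html)

for tag, n in auditor.opened.items():
    if tag not in VOID_TAGS and auditor.closed[tag] != n:
        print(f"non-uniform tag: <{tag}> opened {n}x, closed {auditor.closed[tag]}x")
```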


2015 ◽  
Vol 5 (1) ◽  
pp. 41-55 ◽  
Author(s):  
Sutirtha Kumar Guha ◽  
Anirban Kundu ◽  
Rana Duttagupta

In this paper the authors propose a new rank measurement technique by introducing a weightage factor based on the number of Web links available on a particular Web page. The available Web links are treated as an indicator of a page's importance. A distinct weightage factor, calculated from these links, is assigned to each Web page. Because each weightage factor is independent and unique, different Web pages are evaluated more accurately. Better Web page ranking is achieved because it depends on this specific weightage factor, and the impact of unwanted intruders is minimized by its introduction.
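A minimal sketch of the ranking idea, assuming the weightage factor is a normalized count of a page's Web links; the abstract does not give the exact formula, so this normalization is a stand-in.

```python
# Hypothetical sketch: the weightage formula (normalized link count) is a
# stand-in; the paper's actual factor may be defined differently.
pages = {
    "a.html": 12,   # number of Web links found on the page
    "b.html": 3,
    "c.html": 7,
}

total_links = sum(pages.values())

# Each page gets a distinct weightage factor derived from its own links.
weightage = {url: n / total_links for url, n in pages.items()}

# Rank pages by weightage factor, highest first.
for url, w in sorted(weightage.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{url}: weightage={w:.3f}")
```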


2009 ◽  
pp. 2616-2631
Author(s):  
Davide Mula ◽  
Mirko Luca Lobina

Nowadays the Web page is one of the most common media used by people, institutions, and companies to promote themselves, to share knowledge, and to reach everybody in every part of the world. In spite of that, the Web page as such does not enjoy specific legal protection, and because of this, the investment of time and money that stays off-stage is not protected against unlawful use. Seeing that no country in the world has specific legislation on this issue, in this chapter we develop a theory that aims to give legal protection to Web pages using laws and treaties that already exist. In particular, we have developed a theory that considers Web pages as databases, and so extends a database's legal protection to Web pages. We start by analyzing each component of a database and finding its counterpart in a Web page, so that we can compare the two juridical goods. After that, we analyze the present legislation concerning databases, in particular the World Intellectual Property Organization Copyright Treaty and European Directive 96/9/EC, which we consider the best legislation in this field. In the end, we outline future trends that seem to endorse and apply our theory.


Author(s):  
John DiMarco

Web authoring is the process of developing Web pages. The Web development process requires you to use software to create functional pages that will work on the Internet. Adding Web functionality means creating specific components within a Web page that do something. Adding links, rollover graphics, and interactive multimedia items to a Web page are examples of enhanced functionality. This chapter demonstrates Web-based authoring techniques using Macromedia Dreamweaver. The focus is on adding Web functions to pages generated from Macromedia Fireworks and on an overview of creating Web pages from scratch using Dreamweaver. Dreamweaver and Fireworks are professional Web applications, and using professional Web software will benefit you tremendously. There are other ways to create Web pages using applications not specifically made for the purpose, such as Microsoft Word and Microsoft PowerPoint. The use of Microsoft applications for Web page development is not covered in this chapter; however, steps on how to use these applications for Web page authoring are provided in the appendix of this text. If you feel more comfortable using the Microsoft applications, or the Macromedia applications simply aren't available to you yet, follow the same process for Web page conceptualization and content creation and use the programs available to you. You should try to gain Web page development skills using Macromedia Dreamweaver because it helps you expand your software skills beyond basic office applications. The ability to create a Web page using professional Web development software is important to building a high-end computer skill set. The main objectives of this chapter are to involve you in the technical processes you'll need to create the Web portfolio. The focus is on guiding you through opening your sliced pages, adding links, using tables, creating pop-up windows for content, and using layers and timelines for dynamic HTML. The coverage will not try to provide a complete tutorial set for Macromedia Dreamweaver, but will highlight essential techniques. Along the way you will get pieces of hand-coded action scripts and JavaScript. You can decide which pieces you want to use in your own Web portfolio pages. The techniques provided are a concentrated workflow for creating Web pages. Let us begin to explore Web page authoring.


Author(s):  
Paolo Giudici ◽  
Paola Cerchiello

The aim of this contribution is to show how the information concerning the order in which the pages of a Web site are visited can be profitably used to predict visit behaviour at the site. Usually every click corresponds to the viewing of a Web page. Thus, a Web clickstream is the sequence of Web pages requested by a user; such a sequence identifies a user session.
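A minimal sketch of one standard way to exploit page order, not necessarily the authors' model: estimate first-order transition probabilities between pages from session clickstreams and use them to predict the next page. The sessions are invented.

```python
# Hypothetical sketch: a first-order Markov chain over clickstream sessions,
# a common stand-in for clickstream prediction; sessions are invented.
from collections import Counter, defaultdict

sessions = [
    ["home", "products", "cart"],
    ["home", "about"],
    ["home", "products", "products", "cart"],
]

transitions = defaultdict(Counter)
for session in sessions:
    for src, dst in zip(session, session[1:]):   # consecutive page pairs
        transitions[src][dst] += 1

def next_page_probs(page):
    """Probability of each next page given the current one."""
    counts = transitions[page]
    total = sum(counts.values())
    return {dst: n / total for dst, n in counts.items()}

print(next_page_probs("home"))   # -> {'products': 0.67, 'about': 0.33} (approx.)
```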


Author(s):  
Jie Zhao ◽  
Jianfei Wang ◽  
Jia Yang ◽  
Peiquan Jin

Company acquisition relations reflect a company's development intent and competitive strategies, and are an important type of enterprise competitive intelligence. In the traditional environment, the acquisition of competitive intelligence mainly relies on newspapers, internal reports, and so on, but the rapid development of the Web introduces a new way to extract company acquisition relations. In this paper, the authors study the problem of extracting company acquisition relations from huge amounts of Web pages, and propose a novel algorithm for company acquisition relation extraction. The algorithm considers the tense feature of Web content and uses semantic-strength classification when extracting company acquisition relations from Web pages. It first determines the tense of each sentence in a Web page, which is then applied in sentence classification to evaluate the semantic strength of the candidate sentences in describing a company acquisition relation. After that, the authors rank the candidate acquisition relations and return the top-k company acquisition relations. They run experiments on 6144 pages crawled through Google and measure the performance of their algorithm under different metrics. The experimental results show that the algorithm is effective in determining the tense of sentences as well as extracting company acquisition relations.
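A minimal sketch of the tense-aware ranking stage, assuming candidate sentences with acquirer/acquiree pairs have already been extracted; the keyword-based tense cues and scores are invented stand-ins for the paper's tense detection and semantic-strength classifiers.

```python
# Hypothetical sketch: tense cues and strength scores are invented stand-ins
# for the paper's tense detection and semantic-strength classifiers.
PAST_CUES = {"acquired", "bought", "completed"}
FUTURE_CUES = {"will", "plans", "intends"}

candidates = [
    ("AlphaCo", "BetaInc", "AlphaCo acquired BetaInc last year."),
    ("AlphaCo", "GammaLtd", "AlphaCo plans to acquire GammaLtd."),
    ("DeltaCo", "BetaInc", "DeltaCo will reportedly buy BetaInc."),
]

def strength(sentence):
    """Completed (past-tense) acquisitions rank above announced ones."""
    words = set(sentence.lower().replace(".", "").split())
    if words & PAST_CUES:
        return 1.0
    if words & FUTURE_CUES:
        return 0.5
    return 0.1

def top_k(cands, k=2):
    return sorted(cands, key=lambda c: strength(c[2]), reverse=True)[:k]

for acquirer, acquiree, sent in top_k(candidates):
    print(acquirer, "->", acquiree, "|", sent)
```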


2018 ◽  
Vol 7 (3.6) ◽  
pp. 106
Author(s):  
B J. Santhosh Kumar ◽  
Kankanala Pujitha

The application uses the URL as input for recognizing Web application vulnerabilities. If the URL is too long, it consumes more time to scan (Ain Zubaidah et al., 2014). Existing systems can examine individual web pages but not the overall web application; this application tests URLs of any length using a string-matching algorithm. To avoid XSS and CSRF, and to detect attacks that try to sidestep browser-enforced policies, it applies whitelisting and DOM sandboxing techniques (Elias Athanasopoulos et al., 2012). The web application incorporates a list of cryptographic hashes of legitimate (trusted) client-side scripts; if a script's hash is found in the whitelist, the script is considered trusted, otherwise it is not. The application uses SHA-1 to compute the message digest. The web server stores trusted scripts inside div or span HTML elements that are attributed as trusted, and DOM sandboxing helps to identify the script or code by partitioning program symbols into code and non-code. This helps to identify any hidden code in a trusted tag that bypasses the web server. The website is scanned to detect injection locations, malicious XSS attack vectors are injected at those injection points, and the vulnerable web application is checked for these attacks (Shashank Gupta et al., 2015). The proposed application improves the false negative rate.
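A minimal sketch of the script-whitelisting mechanism described above, using Python's standard hashlib; the whitelist contents and example scripts are invented, and only the SHA-1 digest lookup follows the description.

```python
# Hypothetical sketch: whitelist entries and scripts are invented; only the
# mechanism (SHA-1 digest lookup) follows the description above.
import hashlib

# Whitelist of SHA-1 digests of trusted client-side scripts,
# populated by the server for each script it legitimately serves.
TRUSTED_HASHES = set()

def sha1_digest(script):
    return hashlib.sha1(script.encode("utf-8")).hexdigest()

def register_trusted(script):
    TRUSTED_HASHES.add(sha1_digest(script))

def is_trusted(script):
    """A script is trusted only if its digest appears in the whitelist."""
    return sha1_digest(script) in TRUSTED_HASHES

register_trusted("alert('hello');")              # legitimate inline script
print(is_trusted("alert('hello');"))             # True
print(is_trusted("steal(document.cookie);"))     # False: blocked as untrusted
```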

