Perceived and Actual Web Page Loading Delay

2010 ◽  
Vol 3 (2) ◽  
pp. 50-66 ◽  
Author(s):  
Mohamed El Louadi ◽  
Imen Ben Ali

The major complaint users have about using the Web is that they must wait for information to load onto their screen. This is more acute in countries where bandwidth is limited and fees are high. Given bandwidth limitations, Web pages are often hard to accelerate. Predictive feedback information is assumed to distort Internet users’ perception of time, making them more tolerant of low speed. This paper explores the relationship between actual and perceived Web page loading delay and two aspects of user satisfaction: the Internet user’s satisfaction with the Web page loading delay and satisfaction with the Web page displayed. It also investigates whether predictive feedback information can alter Internet users’ perception of time. The results show that, though related, perceived time and actual time differ slightly in their effect on satisfaction; in this case, it is the perception of time that counts. The results also show that the predictive feedback information displayed on the Web page has an effect on the Internet user’s perception of time, especially in the case of slow Web pages.
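To make the idea of predictive feedback concrete, the sketch below shows one way a page could display an explicit estimate of the remaining loading time instead of a bare spinner. It is a minimal illustration only, assuming a browser environment; the element id, the fixed transfer-rate estimate, and the function name are illustrative assumptions, not taken from the paper.

```typescript
// Minimal sketch of predictive feedback during a page load (browser environment).
// The "load-status" element id and the constant transfer rate are assumptions of
// this sketch, not details from the study.

function showPredictedDelay(totalBytes: number, bytesPerSecond: number): void {
  const status = document.getElementById("load-status");
  if (!status) return;

  let loadedBytes = 0;
  const timer = window.setInterval(() => {
    loadedBytes = Math.min(loadedBytes + bytesPerSecond, totalBytes);
    const remainingSeconds = (totalBytes - loadedBytes) / bytesPerSecond;

    // The predictive cue: an explicit estimate of the time left.
    status.textContent = `Loading... about ${Math.ceil(remainingSeconds)} s remaining`;

    if (loadedBytes >= totalBytes) {
      status.textContent = "Done";
      window.clearInterval(timer);
    }
  }, 1000);
}

// Example: a 500 KB page on a ~50 KB/s connection.
showPredictedDelay(500_000, 50_000);
```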

Author(s):  
John DiMarco

Web authoring is the process of developing Web pages. The Web development process requires you to use software to create functional pages that will work on the Internet. Adding Web functionality means creating specific components within a Web page that do something. Adding links, rollover graphics, and interactive multimedia items to a Web page are examples of enhanced functionality. This chapter demonstrates Web-based authoring techniques using Macromedia Dreamweaver. The focus is on adding Web functions to pages generated from Macromedia Fireworks and on giving an overview of creating Web pages from scratch in Dreamweaver. Dreamweaver and Fireworks are professional Web applications, and using professional Web software will benefit you tremendously. There are other ways to create Web pages using applications not specifically made for that purpose, such as Microsoft Word and Microsoft PowerPoint. The use of Microsoft applications for Web page development is not covered in this chapter; however, I do provide steps on how to use these applications for Web page authoring in the appendix of this text. If you feel that you are more comfortable using the Microsoft applications, or the Macromedia applications simply aren’t available to you yet, follow the same process for Web page conceptualization and content creation and use the programs available to you. You should try to gain Web page development skills with Macromedia Dreamweaver because it helps you expand your software skills beyond basic office applications. The ability to create a Web page using professional Web development software is important to building a high-end computer skill set. The main objective of this chapter is to get you involved in the technical processes you’ll need to create the Web portfolio. The focus will be on guiding you through opening your sliced pages, adding links, using tables, creating pop-up windows for content, and using layers and timelines for dynamic HTML. The coverage does not try to provide a complete tutorial set for Macromedia Dreamweaver, but highlights essential techniques. Along the way you will get pieces of hand-coded action scripts and JavaScript; you can decide which pieces you want to use in your own Web portfolio pages. The techniques provided are a concentrated workflow for creating Web pages. Let us begin to explore Web page authoring.
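As a small example of the kind of hand-coded script the chapter refers to, the sketch below opens a fixed-size pop-up window for a single portfolio item. The function name, window options, and the linked file are illustrative assumptions, not code from the chapter.

```typescript
// Minimal sketch of a hand-coded pop-up window script for a portfolio page.
// The function name and window features are assumptions of this sketch.

function openPortfolioPopup(url: string, title: string): Window | null {
  // Fixed-size, scrollable window for displaying one portfolio item.
  const features = "width=640,height=480,scrollbars=yes,resizable=yes";
  return window.open(url, title, features);
}

// Typical usage from a thumbnail link (hypothetical file name):
// <a href="#" onclick="openPortfolioPopup('work/poster.html', 'poster'); return false;">Poster</a>
```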


Author(s):  
Bouchra Frikh ◽  
Brahim Ouhbi

The World Wide Web has emerged to become the biggest and most popular means of communication and information dissemination. Every day the Web is expanding, and people generally rely on search engines to explore it. Because of its rapid and chaotic growth, the resulting network of information lacks organization and structure. It is a challenge for service providers to deliver proper, relevant, and quality information to internet users by using web page contents and the hyperlinks between web pages. This paper deals with the analysis and comparison of web page ranking algorithms based on various parameters, to find out their advantages and limitations for ranking web pages and to indicate the further scope of research on web page ranking algorithms. Six important algorithms are presented and their performances discussed: PageRank, Query Dependent-PageRank, HITS, SALSA, Simultaneous Terms Query Dependent-PageRank (SQD-PageRank), and Onto-SQD-PageRank.
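For reference, the sketch below shows the core of the first algorithm the paper compares, PageRank, as a simple power iteration over a link graph. It is a minimal sketch only; the damping factor, iteration count, and the tiny example graph are illustrative assumptions, and dangling pages are simply skipped rather than redistributed.

```typescript
// Minimal PageRank power-iteration sketch. The damping factor and the example
// graph are assumptions of this sketch; dangling pages are ignored for brevity.

function pageRank(links: number[][], damping = 0.85, iterations = 50): number[] {
  const n = links.length;
  let rank: number[] = new Array(n).fill(1 / n);

  for (let it = 0; it < iterations; it++) {
    const next: number[] = new Array(n).fill((1 - damping) / n);
    for (let page = 0; page < n; page++) {
      const outDegree = links[page].length;
      if (outDegree === 0) continue; // dangling page: contributes nothing here
      for (const target of links[page]) {
        next[target] += (damping * rank[page]) / outDegree;
      }
    }
    rank = next;
  }
  return rank;
}

// Tiny 3-page graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0.
console.log(pageRank([[1, 2], [2], [0]]));
```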


2019 ◽  
Vol 8 (2S11) ◽  
pp. 2011-2016

With the boom in the number of web pages, it is very hard to find the desired records easily and quickly among the thousands of web pages retrieved by a search engine. There is a growing requirement for automatic classification techniques with greater classification accuracy. There are situations today in which it is necessary to have an efficient and reliable classification of a web page from the information contained in the URL (Uniform Resource Locator) alone, without the need to visit the page itself. We want to know whether the URL can be used without having to look at and visit the page, for a variety of reasons. Retrieving the page content and sorting it to determine the genre of the web page is very time consuming and requires the user to know the structure of the page to be classified. To avoid this time-consuming process, we propose an alternative method that identifies the genre of an entered URL based on the URL itself and on the site's metadata, i.e., the description, the keywords used in the website, and the title of the web page. This approach therefore relies not only on the URL but also on content from the web application. The proposed system can be evaluated using several available datasets.
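The sketch below illustrates the general idea of classifying a page's genre from its URL tokens and metadata (title, description, keywords) without fetching the full body. It is a minimal keyword-scoring sketch, not the proposed method; the genre labels, keyword lists, and example page are illustrative assumptions.

```typescript
// Minimal sketch: genre classification from URL tokens plus metadata.
// Genre keyword lists and the example input are assumptions of this sketch.

interface PageMeta {
  url: string;
  title: string;
  description: string;
  keywords: string[];
}

const GENRE_KEYWORDS: Record<string, string[]> = {
  news: ["news", "breaking", "headline", "report"],
  shopping: ["shop", "cart", "price", "buy", "store"],
  academic: ["journal", "paper", "university", "research"],
};

function classifyGenre(page: PageMeta): string {
  // Tokenize the URL and metadata into lowercase words.
  const text = [page.url, page.title, page.description, ...page.keywords]
    .join(" ")
    .toLowerCase();
  const tokens = text.split(/[^a-z0-9]+/).filter(t => t.length > 0);

  // Score each genre by how many of its keywords occur among the tokens.
  let best = "unknown";
  let bestScore = 0;
  for (const [genre, words] of Object.entries(GENRE_KEYWORDS)) {
    const score = tokens.filter(t => words.includes(t)).length;
    if (score > bestScore) {
      best = genre;
      bestScore = score;
    }
  }
  return best;
}

console.log(classifyGenre({
  url: "https://example.com/store/buy-shoes",
  title: "Buy running shoes",
  description: "Best price on shoes in our online store",
  keywords: ["shop", "shoes"],
})); // -> "shopping"
```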


2005 ◽  
Vol 5 (3) ◽  
pp. 255-268 ◽  
Author(s):  
Russell Williams ◽  
Rulzion Rattray

Organisations increasingly use the internet and web to communicate with the marketplace. Indeed, the hotel industry seems particularly suited to the use of these technologies. Many sites are not accessible to large segments of the disabled community, however, or to individuals using particular hardware and software. Identifying the competitive and legal mandates for website accessibility, the study looks at the accessibility of UK-based hotel websites. Utilising the accessibility software Bobby, as well as making some additional manual accessibility checks, the study finds disappointingly low levels of website accessibility. If organisations want to make more effective use of the web, they need to ensure that their web pages are designed from the outside in, from the user's perspective.
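To show what an automated accessibility check of this kind does, the sketch below runs two WCAG-style rules against a live DOM in the browser. The rule set is an illustrative assumption of this sketch and does not reproduce Bobby's own checks.

```typescript
// Minimal sketch of automated accessibility checks (browser environment).
// Only two illustrative rules are shown; this is not Bobby's rule set.

interface AccessibilityIssue {
  rule: string;
  element: string;
}

function checkBasicAccessibility(doc: Document): AccessibilityIssue[] {
  const issues: AccessibilityIssue[] = [];

  // Rule 1: every image should carry an alt attribute.
  doc.querySelectorAll("img:not([alt])").forEach(img => {
    issues.push({ rule: "img-missing-alt", element: img.outerHTML.slice(0, 80) });
  });

  // Rule 2: every form control should have an associated label.
  doc.querySelectorAll("input, select, textarea").forEach(ctrl => {
    const id = ctrl.getAttribute("id");
    const hasLabel = id !== null && doc.querySelector(`label[for="${id}"]`) !== null;
    if (!hasLabel) {
      issues.push({ rule: "control-missing-label", element: ctrl.outerHTML.slice(0, 80) });
    }
  });

  return issues;
}

// Usage in a browser console on a hotel booking page:
// console.table(checkBasicAccessibility(document));
```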


2018 ◽  
Vol 173 ◽  
pp. 03020
Author(s):  
Lu Xing-Hua ◽  
Ye Wen-Quan ◽  
Liu Ming-Yuan

In order to improve the user's ability to access websites and web pages, a personalized recommendation design based on the user's interest preferences is carried out, and a personalized recommendation model for web page visits is established to meet the user's personalized interest in browsing web pages. A web page personalized recommendation algorithm based on association rule mining is proposed. Based on the semantic features of web pages, user browsing behavior is characterized through similarity computation, and a web crawler algorithm is constructed to extract the semantic features of web pages. An autocorrelation matching method is used to match web page features against user browsing behavior, and the association-rule feature quantities of users' website browsing behavior are mined. According to the semantic relevance and semantic information of the users' search words, fuzzy registration is applied, and a personalized Web recommendation is obtained that meets the users' browsing needs. The simulation results show that the method is accurate and that user satisfaction is high.
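The sketch below illustrates the association-rule mining step in its simplest form: for pairs of pages seen in browsing sessions, it computes support and confidence and keeps the rules above fixed thresholds. It is a minimal sketch only, not the paper's algorithm; the session data and thresholds are illustrative assumptions.

```typescript
// Minimal sketch of pairwise association-rule mining over browsing sessions.
// Session data and thresholds are assumptions of this sketch.

interface Rule {
  from: string;
  to: string;
  support: number;
  confidence: number;
}

function minePairRules(sessions: string[][], minSupport = 0.3, minConfidence = 0.6): Rule[] {
  const total = sessions.length;
  const pageCount = new Map<string, number>();
  const pairCount = new Map<string, number>();

  sessions.forEach(session => {
    const pages = Array.from(new Set(session)); // count each page once per session
    pages.forEach(p => pageCount.set(p, (pageCount.get(p) ?? 0) + 1));
    pages.forEach(a => pages.forEach(b => {
      if (a === b) return;
      const key = `${a}=>${b}`;
      pairCount.set(key, (pairCount.get(key) ?? 0) + 1);
    }));
  });

  const rules: Rule[] = [];
  pairCount.forEach((count, key) => {
    const [from, to] = key.split("=>");
    const support = count / total;                      // P(from and to)
    const confidence = count / (pageCount.get(from) ?? 1); // P(to | from)
    if (support >= minSupport && confidence >= minConfidence) {
      rules.push({ from, to, support, confidence });
    }
  });
  return rules;
}

const sessions = [
  ["home", "sports", "news"],
  ["home", "sports"],
  ["home", "news"],
  ["sports", "news"],
];
console.log(minePairRules(sessions));
```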


Author(s):  
Francisco Yus

In this chapter the author analyzes, from a cognitive pragmatics point of view and, more specifically, from a relevance-theoretic approach, the way Internet users assess the qualities of web pages in their search for optimally relevant interpretive outcomes. The relevance of a web page is measured as a balance between the interest that information provides (the so-called “positive cognitive effects” in relevance theory terminology) and the mental effort involved in their extraction. On paper, optimal relevance is achieved when the interest is high and the effort involved is low. However, as the relevance grid in this chapter shows, there are many possible combinations when measuring the relevance of content on web pages. The author also addresses how the quality and design of web pages may influence the way balances of interest (cognitive effects) and mental effort are assessed by users when processing the information contained on the web page. The analysis yields interesting implications on how web pages should be designed and on web usability in general.


Author(s):  
Vijay Kasi ◽  
Radhika Jain

In the context of the Internet, a search engine can be defined as a software program designed to help one access information, documents, and other content on the World Wide Web. The adoption and growth of the Internet in the last decade has been unprecedented. The World Wide Web has always been applauded for its simplicity and ease of use, which is evident from how little knowledge one requires to build a Web page. The flexible nature of the Internet has enabled its rapid growth and adoption, but it has also made it hard to search for relevant information on the Web. The number of Web pages has been increasing at an astronomical pace, from around 2 million registered domains in 1995 to 233 million registered domains in 2004 (Consortium, 2004). The Internet, considered a distributed database of information, has the CRUD (create, retrieve, update, and delete) rule applied to it. While the Internet has been effective at creating, updating, and deleting content, it has lagged considerably in enabling the retrieval of relevant information. After all, there is no point in having a Web page that has little or no visibility on the Web. Since the 1990s, when the first search program was released, we have come a long way in terms of searching for information. Although we are currently witnessing tremendous growth in search engine technology, the growth of the Internet has overtaken it, leaving the existing search engine technology falling short. When we apply the metrics of relevance, rigor, efficiency, and effectiveness to the search domain, it becomes very clear that we have progressed on the rigor and efficiency metrics by utilizing abundant computing power to produce faster searches over a lot of information. Rigor and efficiency are evident in the large number of pages indexed by the leading search engines (Barroso, Dean, & Holzle, 2003). However, more research needs to be done to address the relevance and effectiveness metrics. Users typically type in two to three keywords when searching, only to end up with a search result of thousands of Web pages! This has made it increasingly hard to find useful, relevant information effectively. Search engines face a number of challenges today that require them to perform rigorous searches with relevant results efficiently so that they are effective. These challenges include the following (“Search Engines,” 2004):

1. The Web is growing at a much faster rate than any present search engine technology can index.
2. Web pages are updated frequently, forcing search engines to revisit them periodically.
3. Dynamically generated Web sites may be slow or difficult to index, or may result in excessive results from a single Web site.
4. Many dynamically generated Web sites cannot be indexed by search engines.
5. The commercial interests of a search engine can interfere with the order of relevant results the search engine shows.
6. Content that is behind a firewall or that is password protected is not accessible to search engines (such as content found in several digital libraries).
7. Some Web sites have started using tricks such as spamdexing and cloaking to manipulate search engines into displaying them as the top results for a set of keywords. This can pollute the search results, with more relevant links being pushed down the result list. It is a consequence of the popularity of Web searches and the business potential search engines can generate today.
8. Search engines index all the content of the Web without any bounds on the sensitivity of information, which has raised security and privacy flags.

With the above background and challenges in mind, we lay out the article as follows. In the next section, we begin with a discussion of search engine evolution. To facilitate the examination and discussion of the progress of search engine development, we break this discussion down into the three generations of search engines. Figure 1 depicts this evolution pictorially and highlights the need for better search engine technologies. Next, we present a brief discussion of the contemporary state of search engine technology and the various types of content searches available today. With this background, the following section documents various concerns about existing search engines, setting the stage for better search engine technology. These concerns include information overload, relevance, representation, and categorization. Finally, we briefly address the research efforts under way to alleviate these concerns and then present our conclusion.
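To ground the retrieval step the article discusses, the sketch below builds an inverted index mapping terms to the pages that contain them and answers a two-keyword query with a simple AND over the postings. It is a minimal sketch only; the document set is an illustrative assumption and no ranking is applied.

```typescript
// Minimal sketch of an inverted index and a conjunctive keyword query.
// The example documents are assumptions of this sketch; results are unranked.

type InvertedIndex = Map<string, Set<number>>;

function buildIndex(pages: string[]): InvertedIndex {
  const index: InvertedIndex = new Map();
  pages.forEach((text, id) => {
    text.toLowerCase().split(/\W+/).filter(t => t).forEach(term => {
      if (!index.has(term)) index.set(term, new Set());
      index.get(term)!.add(id);
    });
  });
  return index;
}

// Return ids of pages containing every query keyword (a simple AND query).
function search(index: InvertedIndex, query: string): number[] {
  const terms = query.toLowerCase().split(/\W+/).filter(t => t);
  let result: Set<number> | null = null;
  for (const term of terms) {
    const postings = index.get(term) ?? new Set<number>();
    result = result === null
      ? new Set(postings)
      : new Set(Array.from(result).filter(id => postings.has(id)));
  }
  return Array.from(result ?? new Set<number>());
}

const pages = [
  "hotel booking and room prices",
  "university research on web search engines",
  "search engine optimization for hotel web sites",
];
const index = buildIndex(pages);
console.log(search(index, "hotel search")); // -> [2]
```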


2008 ◽  
Vol 23 (5) ◽  
pp. 438-446 ◽  
Author(s):  
Daniela B. Friedman ◽  
Manju Tanwar ◽  
Jane V.E. Richter

Introduction: Increasingly, individuals are relying on the Internet as a major source of health information. When faced with sudden or pending disasters, people resort to the Internet in search of clear, current, and accurate instructions on how to prepare for and respond to such emergencies. Research about online health resources ascertained that information was written at the secondary education and college levels and was extremely difficult for individuals with limited literacy to comprehend. This content analysis is the first to assess the reading difficulty level and format suitability of a large number of disaster and emergency preparedness Web pages intended for the general public. Objectives: The aims of this study were to: (1) assess the readability and suitability of disaster and emergency preparedness information on the Web; and (2) determine whether the reading difficulty level and suitability of online resources differ by the type of disaster or emergency and/or Website domain. Methods: Fifty Websites containing information on disaster and/or emergency preparedness were retrieved using the Google™ search engine. Readability testing was conducted on the first Web page, suggested by Google™, addressing preparedness for the general public. The reading level was assessed using Flesch-Kincaid (F-K) and Flesch Reading Ease (FRE) measures. The Suitability Assessment of Materials (SAM) instrument was used to evaluate additional factors such as graphics, layout, and cultural appropriateness. Results: The mean F-K readability score of the 50 Websites was Grade 10.74 (95% CI = 9.93, 11.55). The mean FRE score was 45.74 (95% CI = 41.38, 50.10), a score considered “difficult”. A Web page with content about both risk and preparedness supplies was the most difficult to read according to F-K (Grade level = 12.1). Web pages with general disaster and emergency information and preparedness supplies were considered most difficult according to the FRE (38.58, 95% CI = 30.09, 47.08). The average SAM score was 48% or 0.48 (95% CI = 0.45, 0.51), implying below-average suitability of these Websites. Websites on pandemics and bioterrorism were the most difficult to read (F-K: p = 0.012; FRE: p = 0.014) and least suitable (SAM: p = 0.035) compared with other disasters and emergencies. Conclusions: The results suggest the need for readily accessible preparedness resources on the Web that are easy to read and visually appropriate. Interdisciplinary collaborations between public health educators, risk communication specialists, and Web page creators and writers are recommended to ensure the development and dissemination of disaster and emergency resources that consider the literacy abilities of the general public.
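For reference, the sketch below computes the two readability measures used in the study from their standard formulas. The formulas themselves (Flesch Reading Ease and Flesch-Kincaid Grade Level) are the published ones; the syllable counter is a rough vowel-group heuristic and is an assumption of this sketch.

```typescript
// Minimal sketch of the Flesch Reading Ease and Flesch-Kincaid Grade Level scores.
// The syllable counter is a crude heuristic, an assumption of this sketch.

function countSyllables(word: string): number {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 0);
}

function readabilityScores(text: string): { fleschReadingEase: number; fleschKincaidGrade: number } {
  const sentences = text.split(/[.!?]+/).filter(s => s.trim().length > 0).length || 1;
  const words = text.split(/\s+/).filter(w => /[a-zA-Z]/.test(w));
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);

  const wordsPerSentence = words.length / sentences;
  const syllablesPerWord = syllables / words.length;

  return {
    // FRE = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    fleschReadingEase: 206.835 - 1.015 * wordsPerSentence - 84.6 * syllablesPerWord,
    // F-K Grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    fleschKincaidGrade: 0.39 * wordsPerSentence + 11.8 * syllablesPerWord - 15.59,
  };
}

console.log(readabilityScores(
  "Store water and non-perishable food. Keep a battery-powered radio and a first aid kit."
));
```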


1998 ◽  
Vol 59 (6) ◽  
pp. 534-542 ◽  
Author(s):  
Xue-Ming Bao

This survey aims to collect data to enable Seton Hall University librarian faculty and administration to analyze user satisfaction with information services provided through the Internet’s World Wide Web. Seton Hall faculty and students completed 786 questionnaires. About 80 percent of the respondents reported that they used the Web on a daily or weekly basis. The results reveal valuable information about the Internet users’ search strategies and their levels of satisfaction in using the Web. Analysis of the data suggests three challenges for academic librarians and five opportunities in providing Internet information services.


Think India ◽  
2019 ◽  
Vol 22 (2) ◽  
pp. 174-187
Author(s):  
Harmandeep Singh ◽  
Arwinder Singh

Nowadays, the internet provides people with services in many different fields. Profit as well as non-profit organizations use the internet for various business purposes, one of the major ones being the communication of financial as well as non-financial information on their websites. This study is conducted on the top 30 BSE-listed public sector companies to measure the extent of governance disclosure (non-financial information) on their web pages. The disclosure index approach was used to examine the extent of governance disclosure on the internet. A governance index was constructed and broadly categorized into three dimensions, i.e., organization and structure; strategy and planning; and accountability, compliance, philosophy and risk management. The empirical evidence of the study reveals that all the Indian public sector companies have a website and that, on average, 67% of companies disclose some kind of governance information directly on their websites. Further, we found extreme variation in web disclosure between the three categories, i.e., the Maharatnas, the Navratnas, and the Miniratnas. However, the result of the Kruskal-Wallis test indicates that there is no significant difference between the three categories. The study provides valuable insights into the Indian economy. It shows that Indian public sector companies use the internet for governance disclosure to some extent, but that the disclosure lacks consistency, because there is no regulation for web disclosure. Thus, the study recommends a regulatory framework for web disclosure so that stakeholders can be assured of the transparency and reliability of the information.

