Review dan Analisa Faktor-Faktor Yang Mempengaruhi Kecepatan Akses Halaman Website (Review and Analysis of the Factors Affecting Website Page Access Speed)

2019 ◽  
Vol 11 (1) ◽  
pp. 38-45
Author(s):  
Himawan Wijaya

The shift in internet users' behavior from computers and laptops to mobile devices has changed the way browsers and web pages display information. Internet users generally expect fast access times when visiting a website to obtain the information they need. In the research reported in this journal, the researchers set out to present and explain several important factors that influence the access speed of a website page, and to analyze them from a technical perspective. The main discussion in this study focuses on evaluating technical factors on the programming side (server-side and client-side programming) and in the design of the user interface, in particular the use of minified CSS together with AJAX technology. The aim of this study is to identify how much influence the technical factors mentioned above have on visitors' access speed to a web page, apart from other factors such as internet connection speed, the devices used, and the locations from which users access the website.
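As a rough illustration of one of the technical factors above, CSS minification, the following sketch strips comments and whitespace from a stylesheet; the function and regular expressions are illustrative and are not taken from the paper.

```python
import re

def minify_css(css: str) -> str:
    """Very rough CSS minifier: strips comments and collapses whitespace.

    Illustrative only; production minifiers handle many more cases.
    """
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)    # drop /* comments */
    css = re.sub(r"\s+", " ", css)                      # collapse whitespace
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)        # tighten punctuation
    return css.strip()

if __name__ == "__main__":
    sample = """
    /* navigation bar */
    .nav {
        color: #333;
        margin: 0 auto;
    }
    """
    print(minify_css(sample))   # .nav{color:#333;margin:0 auto;}
```

Fewer bytes on the wire is one lever the study evaluates, alongside server-side rendering choices and asynchronous (AJAX) loading.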

Author(s):  
Bouchra Frikh ◽  
Brahim Ouhbi

The World Wide Web has emerged as the biggest and most popular means of communication and information dissemination. The Web expands every day, and people generally rely on search engines to explore it. Because of its rapid and chaotic growth, the resulting network of information lacks organization and structure. It is a challenge for service providers to deliver proper, relevant and high-quality information to internet users by using web page contents and the hyperlinks between web pages. This paper analyzes and compares web page ranking algorithms based on various parameters to identify their advantages and limitations for ranking web pages and to indicate the further scope of research in this area. Six important algorithms are presented and their performances are discussed: PageRank, Query-Dependent PageRank, HITS, SALSA, Simultaneous Terms Query-Dependent PageRank (SQD-PageRank) and Onto-SQD-PageRank.
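For reference, a minimal sketch of the classic PageRank iteration that the compared algorithms build on; the damping factor and iteration count are conventional defaults, not values taken from the paper.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iterative PageRank over a dict {page: [outgoing links]}.

    Minimal sketch of the classic algorithm; the query-dependent and
    ontology-based variants discussed in the paper add further weighting.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                       # dangling node: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
    print(pagerank(graph))
```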


Mobile-specific websites differ significantly from their desktop counterparts in content, layout and functionality. Consequently, existing techniques for detecting malicious websites are unlikely to work for such webpages. In this paper, we design and implement a mechanism that distinguishes between malicious and benign mobile webpages. The mechanism makes this determination based on static features of a webpage, ranging from the number of iframes to the presence of known fraudulent mobile phone numbers. First, we experimentally demonstrate the need for mobile-specific techniques and then identify a range of new static features that strongly correlate with malicious mobile pages. We then apply the mechanism to a dataset of over 350,000 known benign and malicious mobile webpages and demonstrate 90% classification accuracy. In addition, we discover, identify and report a number of websites missed by Google Safe Browsing and VirusTotal but detected by our system. Finally, we build a browser extension that uses this mechanism to protect users from malicious mobile websites in real time. In doing so, we provide the first static assessment technique for detecting malicious mobile webpages.
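A simplified sketch of the kind of static-feature classification described above might look as follows; the feature set and the hand-tuned scoring are illustrative stand-ins for the paper's trained model.

```python
from dataclasses import dataclass

@dataclass
class PageFeatures:
    num_iframes: int
    num_meta_redirects: int
    has_known_fraud_phone_number: bool
    num_external_scripts: int

def extract_features(html: str, fraud_numbers: set) -> PageFeatures:
    """Toy static-feature extraction from raw HTML (illustrative only)."""
    lower = html.lower()
    return PageFeatures(
        num_iframes=lower.count("<iframe"),
        num_meta_redirects=lower.count('http-equiv="refresh"'),
        has_known_fraud_phone_number=any(n in html for n in fraud_numbers),
        num_external_scripts=lower.count("<script src="),
    )

def score(f: PageFeatures) -> float:
    """Hand-tuned linear score standing in for a trained classifier."""
    s = 0.4 * f.num_iframes + 0.3 * f.num_meta_redirects + 0.2 * f.num_external_scripts
    if f.has_known_fraud_phone_number:
        s += 2.0
    return s

def is_malicious(html: str, fraud_numbers: set, threshold: float = 2.0) -> bool:
    return score(extract_features(html, fraud_numbers)) >= threshold
```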


2017 ◽  
Vol 23 (4) ◽  
pp. 192-197 ◽  
Author(s):  
Lori Northrup ◽  
Ed Cherry ◽  
Della Darby

Frustrated by the time-consuming process of updating subject Web pages, librarians at Samford University Library (SUL) developed a process for streamlining updates using Server-Side Include (SSI) commands. They created text files on the library server corresponding to each of 143 online resources. Include commands within the HTML document for each subject page refer to these text files, which are pulled into the page as it loads in the user's browser. For the user, the process is seamless. For librarians, the time spent updating Web pages is greatly reduced; changes to the text files on the server result in simultaneous changes to the edited resources across the library's Web site. For small libraries with limited online resources, this process may provide an elegant solution to an ongoing problem.
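A minimal sketch of the mechanism, with the server-side include expansion emulated in Python since the exact SUL server configuration is not shown: each subject page's HTML carries an include directive, and the server splices in the shared text file before the page reaches the browser.

```python
import re
from pathlib import Path

INCLUDE_RE = re.compile(r'<!--#include file="(?P<file>[^"]+)"\s*-->')

def expand_includes(html: str, root: Path) -> str:
    """Replace each <!--#include file="..." --> directive with the contents
    of the referenced text file, as the web server would before delivery."""
    def _splice(match):
        return (root / match.group("file")).read_text(encoding="utf-8")
    return INCLUDE_RE.sub(_splice, html)

# A subject page might contain, for each shared resource:
#   <li><!--#include file="resources/pubmed.txt" --></li>
# Editing resources/pubmed.txt then updates every subject page including it.
```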


2014 ◽  
Vol 614 ◽  
pp. 558-561
Author(s):  
Dan Mei You ◽  
Hui Qin Wei

When Ajax technology is used on web pages, client-side JavaScript code must be executed to present dynamic information. As a result, a lot of useful data cannot be retrieved by search engines. This paper introduces a method based on calling a browser API, adds a preprocessing stage, and puts forward an effective element specification to apply before triggering the Ajax page events. Several experiments are presented to verify its feasibility and effectiveness.
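A minimal sketch of the general idea, rendering an Ajax-driven page through a real browser engine before indexing it; Selenium is used here purely as an illustrative browser API, and the paper's own preprocessing and element-specification steps are not reproduced.

```python
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def fetch_rendered_html(url: str, wait_seconds: float = 3.0) -> str:
    """Load a page in a headless browser so client-side JavaScript runs,
    then return the resulting DOM for indexing."""
    options = Options()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        time.sleep(wait_seconds)     # crude wait for Ajax content to arrive
        return driver.page_source    # DOM after client-side JS has executed
    finally:
        driver.quit()
```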


Author(s):  
Francisco Yus

In this chapter the author analyzes, from a cognitive pragmatics point of view and, more specifically, from a relevance-theoretic approach, the way Internet users assess the qualities of web pages in their search for optimally relevant interpretive outcomes. The relevance of a web page is measured as a balance between the interest that the information provides (the so-called "positive cognitive effects" in relevance theory terminology) and the mental effort involved in its extraction. In principle, optimal relevance is achieved when the interest is high and the effort involved is low. However, as the relevance grid in this chapter shows, there are many possible combinations when measuring the relevance of content on web pages. The author also addresses how the quality and design of web pages may influence the way users weigh interest (cognitive effects) against mental effort when processing the information contained on the web page. The analysis yields interesting implications for how web pages should be designed and for web usability in general.


2013 ◽  
Vol 4 (2) ◽  
pp. 74-78
Author(s):  
Gina Akmalia ◽  
Elvyna Tunggawan ◽  
Kevin Sungiardi ◽  
Alfian Lazuardi

A proxy server is an intermediary between clients and the Internet. Using a proxy server is one of several ways to avoid excessive access to the Internet. The proxy server's cache saves every web page that has been accessed, so clients can save bandwidth when they access the same web pages repeatedly. A distributed proxy is a collection of connected proxy servers that share a cache. This research demonstrates that a distributed proxy server can shorten data access time and minimize bandwidth usage. In this research, we use Squid Proxy Server on Windows 7 as the main tool. Index Terms - distributed proxy, shared cache, Squid
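A toy model of the shared-cache idea: each proxy checks its local cache, then its siblings' caches, and only fetches from the origin server on a full miss. This is a conceptual sketch, not Squid's actual sibling/ICP protocol or configuration.

```python
class Proxy:
    def __init__(self, name):
        self.name = name
        self.cache = {}        # url -> response body
        self.siblings = []     # other Proxy instances sharing their caches

    def fetch_from_origin(self, url):
        print(f"{self.name}: full miss, fetching {url} from origin")
        return f"<html>content of {url}</html>"

    def get(self, url):
        if url in self.cache:                  # local cache hit
            return self.cache[url]
        for sibling in self.siblings:          # sibling hit saves bandwidth
            if url in sibling.cache:
                self.cache[url] = sibling.cache[url]
                return self.cache[url]
        self.cache[url] = self.fetch_from_origin(url)
        return self.cache[url]

p1, p2 = Proxy("proxy-1"), Proxy("proxy-2")
p1.siblings, p2.siblings = [p2], [p1]
p1.get("http://example.com/")   # fetched from origin, cached on proxy-1
p2.get("http://example.com/")   # served from proxy-1's shared cache
```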


Author(s):  
Fabio Boldrin ◽  
Chiara Taddia ◽  
Gianluca Mazzini

This article proposes a new approach to distributed computing. The main novelty consists in the exploitation of Web browsers as clients, thanks to the availability of JavaScript, AJAX and Flex. The described solution has two main advantages: it is client-free, so no additional programs have to be installed to perform the computation, and it requires low CPU usage, so the client-side computation is not invasive for users. The solution is developed using both AJAX and Adobe® Flex® technologies, embedding a pseudo-client into a Web page that hosts the computation. While users browse the hosting Web page, computation takes place, resolving single sub-problems and sending the solutions to the server-side part of the system. Our client-free solution is an example of a highly resilient and self-administered system that is able to organize the scheduling of the processes and the error management in an autonomic manner. A mathematical model has been developed for this solution. The main goals of the model are to describe and classify different categories of problems on the basis of their feasibility, and to find the limits in the dimensioning of the scheduling system within which this approach remains advantageous. The new architecture has been tested with different performance metrics by implementing two examples of distributed computing: the cracking of an RSA cryptosystem through factorization of the public key, and the correlation index between samples in genetic data sets. Results have shown good feasibility of this approach both in a closed environment and in an Internet environment, in a typical real situation.
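A conceptual sketch of the server-side scheduling such a system needs: sub-problems are handed out to the pseudo-clients embedded in web pages, and tasks whose results never arrive (for example because the user left the page) are reassigned after a timeout. The class and its names are illustrative, not the authors' implementation.

```python
import time

class Scheduler:
    """Hands out sub-problems to browser pseudo-clients and collects results."""

    def __init__(self, subproblems, timeout=60.0):
        self.pending = dict(enumerate(subproblems))   # task id -> sub-problem
        self.assigned = {}                            # task id -> time assigned
        self.results = {}                             # task id -> result
        self.timeout = timeout

    def next_task(self):
        """Return an unsolved sub-problem; reassign ones that timed out."""
        now = time.time()
        for task_id, payload in self.pending.items():
            if task_id in self.results:
                continue
            assigned_at = self.assigned.get(task_id)
            if assigned_at is None or now - assigned_at > self.timeout:
                self.assigned[task_id] = now
                return task_id, payload
        return None                                    # nothing left to assign

    def submit(self, task_id, result):
        self.results[task_id] = result

    def done(self):
        return len(self.results) == len(self.pending)

# Example: candidate factor ranges for an RSA modulus, split into sub-problems.
sched = Scheduler([(2, 10**4), (10**4, 2 * 10**4)])
```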


Author(s):  
Ayotokunbo I. Ajewole

This chapter discusses the basic software that should be found in a typical cybercafé setup. 'Software requirements' are broadly divided into requirements for the server side and the client side of the network. As a commercial venture, it is important that only necessary software is installed in a cybercafé, to meet café users' needs. Starting with an introduction, a general overview of café users' needs is set forth, leading to the division into two broad areas: server side and client side. An outline of the chapter is given. The background section gives some details on basic terminology (server, client, operating system, etc.) that a cybercafé user or prospective operator might want to get acquainted with. Software requirements for the server side of the café network are discussed first: general features of timing software, and notes on Internet security, viruses, and spyware. As the café server is a very important element in café management, it is necessary that the server is not overwhelmed by unnecessary tasks, which would lead to a generally slow network. Software for the client side of the café network is discussed next, with emphasis on basic software applications often used or requested by café users, such as word processing applications and graphics viewing software. Since most computer-literate people are familiar with the Windows operating system, all software discussed for client use is treated from that perspective. Some security issues necessary for maintaining crisp client computers are also discussed. Because of the lack of in-depth knowledge about information security among internet users, the future trends section discusses the applicability of the personal Internet communicator in the Nigerian environment, owing to its portability and built-in security. Other possible trends in security and cyber-crime are also discussed. The chapter ends with a note that café users will continue to demand faster Internet speeds, so operators must keep searching for the latest software to meet their needs and the latest security software to keep their café network clean and secure. Future research directions include software development research to allow café users to modify or design their desktops to their own taste while in the café. Complete café solution software is also proposed, catering for everything from the operating system to end-user applications, installable once and from a single source. The literature used for this chapter is sourced mainly from the Internet and from the author's personal experience, as there is little literature dealing specifically with 'cybercafé software'. A wide range of software is, and could be, used in a café; this depends on the clients' general requirements and/or the operator's know-how and preferences, which may vary across different environments.


Phishing attacks play a significant role against web-based applications. A great deal of work has been carried out over the years to find a solution to this problem, but no complete solution has yet been found. Existing solutions suffer from drawbacks such as the potential to compromise consumer privacy, which is one reason why detecting phishing attacks on websites is difficult. In addition, website content changes dynamically, and confidence depends on the features of the specific data provided. To address these issues, a new direction for detecting phishing attacks in web pages is proposed here. The proposed system exploits the inherent limits phishers face when building a phishing web page. The implementation of our approach, Off-the-Hook, focuses on high precision, brand independence and semantic independence. Off-the-Hook is built as a fully client-side browser add-on, which preserves user privacy. Additionally, Off-the-Hook identifies the target website that a phishing page is attempting to imitate and includes this target in its warning. The proposed method and our genetic algorithm are evaluated in the user studies below.
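As a minimal sketch of one client-side heuristic in the spirit of this approach, the following flags a page whose prominent brand keywords do not match the domain it is served from, and names the likely target; the brand list and matching rule are illustrative and far simpler than the actual Off-the-Hook feature set.

```python
from urllib.parse import urlparse

# Tiny illustrative mapping of brand keywords to their legitimate domains.
KNOWN_BRANDS = {"paypal": "paypal.com", "google": "google.com"}

def likely_phishing(page_url, page_text):
    """Return (is_suspicious, likely_target_domain)."""
    domain = urlparse(page_url).netloc.lower()
    text = page_text.lower()
    for brand, legit_domain in KNOWN_BRANDS.items():
        if brand in text and not domain.endswith(legit_domain):
            return True, legit_domain       # warn, naming the likely target
    return False, None

print(likely_phishing("http://secure-paypa1-login.example.com/",
                      "PayPal login - enter your password"))
# -> (True, 'paypal.com')
```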

