Valuation and analysis of drug addiction web sites / Criterios de valoración y análisis de sitios web sobre drogodependencias

2004 ◽  
Vol 4 (1) ◽  
Author(s):  
David Carabantes Alarcón ◽  
Carmen García Carrión ◽  
Juan Vicente Beneit Montesinos

Quality is of great value on the Internet, all the more so for a health-related web page such as a resource on drug addiction. This article reviews the most prominent indicators and systems for web quality and uses them to develop a specific system for assessing the quality of web resources on drug addiction. A feasibility test was carried out by analysing the main web pages on this subject (n=60), gathering user-oriented assessments of the quality of the resources. Areas for improvement were identified concerning the accuracy and reliability of the information, authorship, and the development of descriptions and ratings of external links.

Author(s):  
Satinder Kaur ◽  
Sunil Gupta

Information plays a very important role in life, and the world now largely depends on the World Wide Web to obtain it. The Web comprises a great many websites from every discipline, and each website consists of web pages interlinked by hyperlinks. The success of a website depends largely on the design of its pages, and researchers have done considerable work to appraise web pages quantitatively. Keeping in mind the importance of a web page's design aspects, this paper presents an automated evaluation tool that evaluates those aspects for any web page. The tool takes the HTML code of the page as input, then extracts the HTML tags and checks them for uniformity. It comprises normalized modules that quantify the measures of the design aspects. To demonstrate it, the tool was applied to four web pages from distinct sites, and their design aspects are reported for comparison. The tool should benefit web developers, who can predict the design quality of web pages and improve it before and after a website's implementation, without user interaction.
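
As a rough illustration of how such a tool might ingest raw HTML and quantify a design aspect, the following sketch uses Python's standard html.parser to score heading-level uniformity and to flag deprecated presentational tags. It is a hypothetical reconstruction under assumed checks, not the authors' implementation.

```python
# A minimal sketch of an HTML design-aspect checker; the tag lists and
# scoring below are illustrative assumptions, not the paper's modules.
from html.parser import HTMLParser

DEPRECATED = {"font", "center", "marquee", "blink"}  # assumed checklist

class DesignAspectChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.heading_levels = []   # heading levels in document order
        self.deprecated_hits = 0   # deprecated presentational tags seen

    def handle_starttag(self, tag, attrs):
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self.heading_levels.append(int(tag[1]))
        elif tag in DEPRECATED:
            self.deprecated_hits += 1

    def heading_uniformity(self):
        """Fraction of heading transitions that do not skip a level."""
        pairs = list(zip(self.heading_levels, self.heading_levels[1:]))
        if not pairs:
            return 1.0
        return sum(b <= a + 1 for a, b in pairs) / len(pairs)

checker = DesignAspectChecker()
checker.feed("<h1>Title</h1><h3>Skips a level</h3><font>dated</font>")
print(checker.heading_uniformity(), checker.deprecated_hits)  # 0.0 1
```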


Author(s):  
Dimitrios Xanthidis ◽  
David Nicholas ◽  
Paris Argyrides

This chapter is the result of a two-year effort to design a template aimed at standardizing, as far as such a task is feasible, the evaluation of Web sites. It is the product of several publications in international conferences and journals. A thorough review of the international literature on the subject led the authors to conclude that there is a very large number of opinions, thoughts, and criteria from the different professionals involved, directly or indirectly, in the process of designing a good Web site. To complicate matters further, scholars, scientists, and professionals around the world use a number of different terms that often refer to similar, if not identical, attributes of a Web site. However, all these differences can be distilled into a systematic approach, here called an evaluation template, of 53 points against which the design strategies of Web sites can be checked. The template was tested on a significant number (232) of Web sites of Greek companies and proved usable for evaluating the quality of Web sites by technology experts and non-experts alike. The evaluation template suggested here is by no means the definitive solution to the problem of standardizing Web site evaluation, but, compared with other work done on the subject worldwide, it is a step ahead.
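
A checklist template of this kind lends itself to a very simple scoring scheme. The sketch below shows one plausible shape for it; the sample criteria are invented for illustration, as the chapter's actual 53 points are not reproduced here.

```python
# Illustrative scoring against a fixed evaluation template. The sample
# criteria are hypothetical; the chapter's 53 points are not listed here.
def template_score(checklist: dict[str, bool]) -> float:
    """Fraction of evaluated template points that the site satisfies."""
    return sum(checklist.values()) / len(checklist)

sample = {  # stand-ins for three of the template's 53 points
    "contact information present": True,
    "navigation consistent across pages": True,
    "site search facility available": False,
}
print(f"{template_score(sample):.0%} of evaluated points satisfied")  # 67%
```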


Author(s):  
Paolo Giudici ◽  
Paola Cerchiello

The aim of this contribution is to show how information about the order in which the pages of a Web site are visited can be profitably used to predict visit behaviour at the site. Usually every click corresponds to the viewing of a Web page, so a Web clickstream defines the sequence of Web pages requested by a user. Such a sequence identifies a user session.
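
To make the definition concrete, the following sketch groups a raw clickstream into user sessions. The 30-minute inactivity timeout is a common convention assumed here for illustration; the chapter itself defines a session only as the sequence of pages a user requests.

```python
# Grouping a raw clickstream into sessions. The 30-minute inactivity
# timeout is a common convention, assumed here for illustration.
from collections import defaultdict

TIMEOUT = 30 * 60  # seconds of inactivity that close a session (assumed)

def sessionize(clicks):
    """clicks: iterable of (user_id, timestamp, url), in any order."""
    by_user = defaultdict(list)
    for user, ts, url in clicks:
        by_user[user].append((ts, url))
    sessions = []
    for user, events in by_user.items():
        events.sort()                      # order each user's clicks by time
        current, last_ts = [], None
        for ts, url in events:
            if last_ts is not None and ts - last_ts > TIMEOUT:
                sessions.append((user, current))  # gap too long: new session
                current = []
            current.append(url)
            last_ts = ts
        sessions.append((user, current))
    return sessions

log = [("u1", 0, "/home"), ("u1", 40, "/products"), ("u1", 4000, "/home")]
print(sessionize(log))  # the 3960 s gap splits u1's clicks into two sessions
```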


2018 ◽  
Vol 7 (2.7) ◽  
pp. 320
Author(s):  
JKR Sastry ◽  
N Sreenidhi ◽  
K Sasidhar

Information dissemination today relies heavily on web sites hosted on the Internet. The effectiveness and efficiency of a web site's design greatly affect how the content hosted on it can be accessed. The quality of a web site plays a vital role in making the required information available to end users easily while satisfying their content requirements. A framework comprising 42 quality metrics has previously been proposed for measuring the quality of a web site; however, its computation procedures were not stated in realistic terms. This paper presents computational procedures for measuring the "usability" of a web site, which can be included in the overall computation of the site's quality.
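
As an indication of what such a computational procedure might look like, the sketch below normalizes a few measurable usability inputs onto [0, 1] and combines them with weights. The chosen metrics, ranges, and weights are placeholders, not the paper's actual procedure.

```python
# Hypothetical usability computation: each raw measurement is normalized
# onto [0, 1] (1 = best) and combined with weights summing to 1. The
# metrics, ranges, and weights are illustrative assumptions.

def normalize(value, worst, best):
    """Linearly map value onto [0, 1], clamped, where `best` scores 1."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

def usability_score(load_time_s, broken_link_ratio, max_click_depth):
    parts = {
        "speed":      (0.4, normalize(load_time_s, worst=10, best=1)),
        "links":      (0.3, normalize(broken_link_ratio, worst=0.2, best=0.0)),
        "navigation": (0.3, normalize(max_click_depth, worst=8, best=2)),
    }
    return sum(weight * score for weight, score in parts.values())

print(f"usability = {usability_score(3.0, 0.05, 4):.2f}")  # 0.74
```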


2015 ◽  
Vol 12 (1) ◽  
pp. 91-114 ◽  
Author(s):  
Víctor Prieto ◽  
Manuel Álvarez ◽  
Víctor Carneiro ◽  
Fidel Cacheda

Search engines use crawlers to traverse the Web in order to download web pages and build their indexes. Keeping these indexes up to date is essential to ensure the quality of search results. However, changes in web pages are unpredictable, and identifying the moment a web page changes, as soon as possible and with minimal computational cost, is a major challenge. In this article we present the Web Change Detection system, which, in the best case, can detect almost in real time when a web page changes. In the worst case it requires, on average, 12 minutes to detect a change on a web site with low PageRank and about one minute on a web site with high PageRank. Current search engines, by contrast, require more than a day on average to detect a modification in a web page (in both cases).
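
A minimal form of change detection can be sketched as polling plus content hashing, as below. The real system described here is more sophisticated; the PageRank-tiered polling interval is an assumption made for illustration.

```python
# Polling-based change detection via content hashing; the PageRank-tiered
# polling interval is an assumption, not the system's actual scheduler.
import hashlib
import urllib.request

last_seen: dict[str, str] = {}  # url -> last observed content digest

def fingerprint(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def has_changed(url: str) -> bool:
    """True if the page's content differs from the last observed digest."""
    digest = fingerprint(url)
    changed = last_seen.get(url) not in (None, digest)
    last_seen[url] = digest
    return changed

def poll_interval(pagerank_tier: str) -> int:
    """Seconds between checks; tighter for high-PageRank sites (assumed)."""
    return 60 if pagerank_tier == "high" else 12 * 60
```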


2018 ◽  
Vol 7 (2.7) ◽  
pp. 138
Author(s):  
Y Venkata Raghavarao ◽  
K Sasidhar ◽  
JKR Sastry ◽  
V Chandra Prakash

Information dissemination takes place extensively through the Web and the use of the Internet, yet the quality and reliability of the information hosted on the Web are questionable. Many factors must be considered when assessing the quality of a web site, and the information hosted there becomes valuable only when its quality is maintained at a high level. Each quality factor can have many dimensions. There should be a mechanism for computing the quality of a web site quantitatively, so that it can be measured realistically; any subjective way of measuring quality is questionable and can be prejudiced at times. Every factor must be measured, and the overall quality of the web site must be computed considering all the factors. One can develop a norm for the quality of a factor, and any deviation from the norm should be rectified and controlled. Among these factors, the quality of the content hosted on the Web plays a vital role. This paper presents a computational method for computing the quality of a web site that considers all the dimensions of the content-related quality factor.
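
One plausible shape for such a computation is sketched below: each content dimension is scored in [0, 1], the scores are combined with weights, and any dimension falling below a norm is flagged for rectification. The dimension names, weights, and norm value are assumptions, not the paper's actual method.

```python
# Illustrative computation of a content quality factor from its dimensions,
# with a norm check: dimensions scoring below the norm are flagged.
# Dimension names, weights, and the norm value are assumptions.
CONTENT_NORM = 0.75  # assumed acceptable quality threshold

def content_quality(scores: dict[str, float],
                    weights: dict[str, float]) -> tuple[float, list[str]]:
    total = sum(weights[d] * scores[d] for d in weights)
    below_norm = [d for d in weights if scores[d] < CONTENT_NORM]
    return total, below_norm

scores  = {"accuracy": 0.9, "currency": 0.6, "completeness": 0.8}
weights = {"accuracy": 0.5, "currency": 0.2, "completeness": 0.3}
quality, flagged = content_quality(scores, weights)
print(f"content quality = {quality:.2f}; below norm: {flagged}")
# content quality = 0.81; below norm: ['currency']
```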


2001 ◽  
Vol 20 (4) ◽  
pp. 11-18 ◽  
Author(s):  
Cleborne D. Maddux

The Internet and the World Wide Web are growing at unprecedented rates, and more and more teachers are authoring school or classroom web pages. Such pages have particular potential in rural areas for special educators, children with special needs, and those children's parents. The quality of many of these pages, however, leaves much to be desired. All web pages, especially those authored by special educators, should be accessible to people with disabilities. Many other problems complicate use of the web for all users, whether or not they have disabilities, and by taking some simple steps beginning webmasters can avoid them. This article discusses practical solutions to common accessibility problems and to other problems commonly seen on the web.
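
One of the simple steps alluded to can be sketched in code: scanning a page for images that lack alternative text, one of the most common accessibility problems. This is an illustrative check, not a full accessibility audit.

```python
# Count <img> tags missing alt text, a basic accessibility check.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.images = 0
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images += 1
            if not dict(attrs).get("alt"):  # absent or empty alt attribute
                self.missing_alt += 1

checker = AltTextChecker()
checker.feed('<img src="a.png" alt="Class photo"><img src="b.png">')
print(f"{checker.missing_alt} of {checker.images} images lack alt text")
```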


Author(s):  
Kai-Hsiang Yang

This chapter addresses Uniform Resource Locator (URL) correction techniques in proxy servers. Proxy servers are increasingly important in the World Wide Web (WWW): they provide Web page caches for browsing pages quickly and reduce unnecessary network traffic. Traditional proxy servers use the URL to identify cached entries, so a cache miss occurs whenever the requested URL is absent from the cache. For most users, however, browsing follows some regularity and stays within a limited scope. It would be very convenient if users did not need to enter a whole long URL, or could still reach the Web content even after forgetting part of the URL, especially for personal favorite Web sites. We introduce a URL correction mechanism into the personal proxy server to achieve this goal.
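
To illustrate the idea, the sketch below corrects a mistyped or partially remembered URL against a proxy's cache using standard-library fuzzy matching. difflib's similarity matching stands in for the chapter's actual mechanism, which is not reproduced here.

```python
# A minimal sketch of URL correction against a proxy's cache; the cutoff
# value and the use of difflib are illustrative assumptions.
import difflib

cache = [
    "http://example.com/news/index.html",
    "http://example.com/sports/scores.html",
    "http://example.com/weather/today.html",
]

def correct_url(requested, cached_urls):
    """Return the closest cached URL, or None if nothing is similar enough."""
    matches = difflib.get_close_matches(requested, cached_urls,
                                        n=1, cutoff=0.8)
    return matches[0] if matches else None

# The user forgot part of the URL; the proxy suggests the cached page.
print(correct_url("http://example.com/news/index.htm", cache))
```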


2018 ◽  
Vol 7 (2.7) ◽  
pp. 980 ◽  
Author(s):  
V Sai Virajitha ◽  
JKR Sastry ◽  
V Chandra Prakash ◽  
P Srija ◽  
M Varun

Web sites play a vital role in information dissemination, and most businesses use their web sites to promote, market, and conduct business. The quality of a web site has an indirect relationship with the volume of business conducted by an industrial establishment. That quality depends on a number of characteristics, and computing it in quantitative terms is a complex process. The structure of a web site plays a vital role in hosting content in the most comprehensive manner. This paper presents how a web site can be subjected to data mining to determine the structures it contains. The structures are evaluated to find their quality, both individually and combined across all the structures that are mined. A method is presented by which the quality of a web site is computed from the structure of the site alone.
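
As an example of evaluating one structural property of a mined site, the sketch below takes the site's internal-link graph and computes the click depth of every page from the home page. The premise that shallower structures indicate better quality is an assumption made for illustration, not the paper's method.

```python
# Click depth of every page reachable from the home page, computed by
# breadth-first search over a mined internal-link graph (hypothetical data).
from collections import deque

def click_depths(links: dict[str, list[str]], home: str) -> dict[str, int]:
    """Breadth-first depth of each reachable page from `home`."""
    depth = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depth:
                depth[target] = depth[page] + 1
                queue.append(target)
    return depth

site = {  # hypothetical mined structure
    "/": ["/products", "/about"],
    "/products": ["/products/item1", "/products/item2"],
    "/about": [],
}
depths = click_depths(site, "/")
print(max(depths.values()), "clicks to the deepest page")  # 2
```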


2018 ◽  
Vol 8 (4) ◽  
pp. 1-13
Author(s):  
Rajnikant Bhagwan Wagh ◽  
Jayantrao Bhaurao Patil

Recommendation systems are growing very rapidly. While surfing, users frequently miss the goal of their search and get lost in information overload. To overcome this problem, the authors propose a novel web page recommendation system that saves users surfing time. Users are analyzed as they surf a particular web site, and a relationship matrix and a frequency matrix are used to capture the connectivity among the web pages visited by similar users. These web pages are divided into clusters using an enhanced graph-based partitioning concept, and active users are then classified against the discovered clusters. Threshold values are used in both the clustering and classification stages for more appropriate results. Experimental results show about 61% accuracy, 37% coverage, and a 46% F1 measure, which helps improve users' surfing experience.
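
The following sketch shows the kind of frequency matrix such a system can build: counts of how often users move from one page to another across sessions. It is an illustrative stand-in; the paper's exact matrix construction is not reproduced.

```python
# Build a page-to-page transition frequency matrix from user sessions;
# the sample sessions are hypothetical.
from collections import defaultdict

def frequency_matrix(sessions: list[list[str]]) -> dict[tuple[str, str], int]:
    """Count page-to-page transitions over all user sessions."""
    freq = defaultdict(int)
    for session in sessions:
        for src, dst in zip(session, session[1:]):
            freq[(src, dst)] += 1
    return freq

sessions = [["/home", "/laptops", "/laptops/x1"],
            ["/home", "/laptops", "/accessories"]]
for (src, dst), count in frequency_matrix(sessions).items():
    print(f"{src} -> {dst}: {count}")  # e.g. /home -> /laptops: 2
```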

