Handbook of Research on Web Information Systems Quality
Published by IGI Global
ISBN: 9781599048475, 9781599048482
Total documents: 30 (five years: 0) · H-index: 2 (five years: 0)

Author(s):  
Xiannong Meng

This chapter surveys the technologies involved in a Web search engine, with an emphasis on performance analysis issues. The aspects of a general-purpose search engine covered in this survey include system architectures, information retrieval theories as the basis of Web search, indexing and ranking of Web documents, relevance feedback and machine learning, personalization, and performance measurement. The objectives of the chapter are to review the theories and technologies pertaining to Web search, and to help readers understand how Web search engines work and how to use them more effectively and efficiently.
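
The indexing and ranking of Web documents mentioned above can be illustrated with a minimal TF-IDF ranking sketch. This is a generic textbook formulation, not the chapter's own method; the toy corpus and tokenization are invented for illustration.

```python
import math
from collections import Counter

# A toy corpus; in a real engine these would be crawled Web documents.
docs = {
    "d1": "web search engines index web documents",
    "d2": "machine learning improves ranking",
    "d3": "search ranking uses term weights",
}

def tf_idf_scores(query, docs):
    """Score each document against the query with a basic TF-IDF sum."""
    n = len(docs)
    tokenized = {d: text.split() for d, text in docs.items()}
    # Document frequency: number of documents containing each term.
    df = Counter()
    for terms in tokenized.values():
        df.update(set(terms))
    scores = {}
    for d, terms in tokenized.items():
        tf = Counter(terms)
        score = 0.0
        for t in query.split():
            if t in tf:
                score += tf[t] * math.log(n / df[t])
        scores[d] = score
    return scores

scores = tf_idf_scores("search ranking", docs)
best = max(scores, key=scores.get)  # the document matching both query terms
```

Real engines add many refinements on top of this core (link analysis, relevance feedback, personalization), which are exactly the topics the survey covers.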


Author(s):  
Mª Ángeles Moraga ◽  
Ignacio García-Rodríguez de Guzmán ◽  
Coral Calero ◽  
Mario Piattini

The use of Web portals continues to rise, underlining their importance in the current information society. This chapter focuses specifically on portlet-based portals. Portlets are Web components that can be thought of as COTS components in a Web setting. Recently, the Web Services for Remote Portlets (WSRP) standard has emerged; its aim is to provide a common interface that allows communication between portals and portlets. With this in mind, in this chapter we propose an ontology for the standard. This ontology offers an understandable summary of the standard, and thus allows both portlet and portal developers to focus their effort on developing the portlet's domain logic rather than on implementing its communication.


Author(s):  
Sergej Sizov ◽  
Stefan Siersdorfer

This chapter addresses the problem of automatically organizing heterogeneous collections of Web documents for the generation of thematically focused expert search engines and portals. As a possible application scenario for our techniques, we consider a focused Web crawler that aims to populate topics of interest by automatically categorizing newly fetched documents. A higher accuracy of the underlying supervised (classification) and unsupervised (clustering) methods is achieved by leaving out uncertain documents rather than assigning them to inappropriate topics or clusters with low confidence. We introduce a formal probabilistic model for ensemble-based meta methods and explain how it can be used for constructing estimators and for quality-oriented tuning. Furthermore, we provide a comprehensive experimental study of the proposed meta methodology and realistic use-case examples.
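
The core idea of trading coverage for accuracy by leaving out uncertain documents can be sketched as a voting meta-classifier with abstention. The threshold value, the label names, and the three base classifiers are all invented for illustration; the chapter's actual model is a formal probabilistic one.

```python
from collections import Counter

def meta_classify(votes, threshold=0.7):
    """Return the majority label if its vote share reaches the threshold,
    otherwise None (abstain: leave the document unassigned)."""
    counts = Counter(votes)
    label, hits = counts.most_common(1)[0]
    if hits / len(votes) >= threshold:
        return label
    return None

# Three hypothetical base classifiers vote on a newly fetched document.
confident = meta_classify(["sports", "sports", "sports"])    # unanimous
uncertain = meta_classify(["sports", "politics", "sports"])  # 2/3 < 0.7
```

Documents on which the ensemble abstains are simply not assigned to any topic, which raises the precision of the documents that are assigned.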


Author(s):  
Marta Fernández de Arriba ◽  
Eugenia Díaz ◽  
Jesús Rodríguez Pérez

This chapter presents the structure of an index that supports a development team in creating the specification of the context-of-use document for the development of Web applications, taking into account usability and accessibility characteristics; each point of the index is explained in detail. Correct preparation of this document helps ensure the quality of the developed Web applications. The international rules and standards related to the identification of the context of use have been taken into account. The chapter also describes the functionality limitations (sensorial, physical, or cognitive) that affect access to the Web, as well as the technological environment used by disabled people (assistive technologies or alternative browsers) to facilitate their access to Web content. By following the developed specification of the context of use, usable and accessible Web applications, with their corresponding benefits, can be created.


Author(s):  
Thomas Mandl

Automatic quality assessment of Web pages is needed to complement human information work in the current situation of information overload. Several systems for this task have been developed and evaluated. Automatic quality assessments are most often based on the features of a Web page itself or on external information. Promising results have been achieved by systems that learn to associate human judgments with Web page features. Automatic evaluation of Internet resources according to various quality criteria is a new research field emerging from several disciplines. This chapter presents the most prominent systems and prototypes implemented so far and analyzes the knowledge sources exploited by these approaches.
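
Learning to associate human judgments with page features can be sketched with the simplest possible learner, a perceptron. The two features, their scaling, and the labeled samples below are all invented for illustration; the systems the chapter surveys use richer feature sets and learners.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (feature_vector, label) with label in {+1, -1}."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # misclassified: nudge weights toward the label
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Hypothetical features: (outgoing links / 10, text-to-markup ratio);
# labels are human quality judgments (+1 good page, -1 poor page).
samples = [
    ((0.1, 0.9), +1), ((0.2, 0.8), +1),
    ((0.9, 0.1), -1), ((0.8, 0.2), -1),
]
w, b = train_perceptron(samples)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

Once trained, such a model can score unseen pages automatically, which is exactly how the surveyed systems scale human quality judgments to the Web.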


Author(s):  
Emilia Mendes ◽  
Silvia Abrahão

Effort models and effort estimates help project managers allocate resources, control costs and schedules, and improve current practices, leading to projects that are finished on time and within budget. In the context of Web development and maintenance, these issues are also crucial, and very challenging, given that Web projects have short schedules and a highly fluid scope. The objective of this chapter is therefore to introduce the concepts related to Web effort estimation and effort estimation techniques. In addition, the chapter details and compares, by means of a case study, three effort estimation techniques, chosen because to date they have been the ones most widely used for Web effort estimation: multivariate regression, case-based reasoning, and classification and regression trees. The case study uses data on industrial Web projects from Spanish Web companies.
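
Of the three techniques, case-based reasoning is the most direct to sketch: a new project's effort is estimated from the efforts of its most similar past projects. The feature set (pages, images, features), the project values, and the choice of Euclidean distance with k=2 below are all invented for illustration.

```python
import math

# Hypothetical past projects: (web pages, images, features) -> person-hours
cases = [
    ((10, 20, 3), 120.0),
    ((40, 80, 10), 600.0),
    ((15, 25, 4), 160.0),
]

def estimate_effort(project, cases, k=2):
    """Mean effort of the k past projects nearest to the new one."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(cases, key=lambda c: dist(c[0], project))[:k]
    return sum(effort for _, effort in nearest) / k

effort = estimate_effort((12, 22, 3), cases)  # mean of the two small projects
```

Regression and regression trees instead fit a global model over all past projects; the chapter's case study compares the predictive accuracy of the three on the same industrial data.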


Author(s):  
Jengchung V. Chen ◽  
Wen-Hsiang Lu ◽  
Kuan-Yu He ◽  
Yao-Sheng Chang

With the fast growth of the Web, users often suffer from information overload, since many existing search engines respond to queries with many nonrelevant documents that merely contain the query terms, owing to the conventional search mechanism of keyword matching. In fact, both users and search engine developers anticipate that information overload could be reduced by understanding user goals more clearly. In this chapter, we introduce some past research in Web search and current trends focusing on how to improve search quality from the perspectives of “what”, “how”, “where”, “when”, and “why”. Additionally, we briefly introduce some effective search-quality improvements using link-structure-based search algorithms, such as PageRank and HITS. At the end of the chapter, we introduce our proposed approach to improving search quality, which employs syntactic structures (verb-object pairs) to automatically identify potential user goals from search-result snippets. We believe that understanding user goals more clearly and reducing information overload will become one of the major developments in commercial search engines, since the amount of information and resources continues to increase rapidly and user needs are becoming more and more diverse.
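
The link-structure-based scoring that PageRank performs can be sketched with a few lines of power iteration on a toy graph. This is the standard textbook formulation; the three-page graph and the damping factor of 0.85 are illustrative choices.

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over a dict: page -> list of out-links."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Toy graph: A and C link to B; B links to C.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["B"]})
```

Pages with many incoming links from well-linked pages accumulate rank, so B scores highest here; this is the notion of link-based importance that keyword matching alone cannot capture.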


Author(s):  
Tony C. Shan ◽  
Winnie W. Hua

This article defines a comprehensive set of guiding principles, called philosophy of architecture design (PAD), as a means of coping with architecture design complexity and managing the architectural assets of Web information systems in a service-oriented paradigm. This coherent model comprises a multidimensional collection of key guiding principles and criteria in system analysis, modeling, design, development, testing, deployment, operations, management, and governance. The systematic framework provides a multidisciplinary view of the design tenets, idioms, principles, and styles (TIPS) in IT architecting practices for engineering process and quality assurance. There are 26 constituent elements defined in the scheme, whose names form an A-Z array using their first letters. The characteristics and applicability of all 26 dimensions in the PAD model are articulated in detail. Recommendations and future trends are also presented in context. This overarching model has been extensively leveraged in one form or another to design a wide range of Web-based systems in various industry sectors.


Author(s):  
M.J. Escalona ◽  
G. Aragón

The increasing complexity of Web systems, and the many different aspects that must be treated at the same time, require flexible but powerful methodologies to support the development process. Requirements treatment in Web environments is becoming an ever more critical phase, because developers need suitable methods to capture, define, and validate requirements; it is also very important that these methods assure the quality of those requirements. Model-driven engineering is opening a new way to define methodological approaches that allow developers to control and relate the concepts that have to be treated. This chapter presents a Web methodological approach to requirements, NDT (navigational development techniques), based on model-driven engineering. As presented here, NDT proposes a set of procedures, techniques, and models to assure the quality of results in Web requirements treatment.


Author(s):  
Fernando Bellas ◽  
Iñaki Paz ◽  
Alberto Pan ◽  
Óscar Díaz

Portlets are interactive Web mini-applications that can be plugged into a portal. This chapter focuses on “portletizing” existing Web applications, that is, wrapping them as portlets without requiring any modification. After providing some background on portlet technology, we discuss two kinds of approaches to portletization: automatic and annotation-based. Automatic approaches use heuristics to automatically choose the fragments of the Web application's pages to be displayed in the space available in the portlet's window. In turn, in annotation-based approaches, it is the portal administrator who annotates each page of the portletized Web application to specify which fragments should be displayed. Annotation-based approaches also make it possible to supplement the functionality of the original Web application. Each approach is explained using a sample scenario based on the same Web application. We also pinpoint the advantages and shortcomings of each approach and outline future trends in portletization.
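
One plausible heuristic for the automatic approach is to score candidate page fragments by how much visible text they carry relative to markup and pick the best one that fits the portlet window. The density measure, the character budget, and the pre-extracted fragments below are all assumptions for illustration, not the chapter's actual heuristics.

```python
import re

def text_density(fragment):
    """Ratio of visible text to total fragment length (markup included)."""
    text = re.sub(r"<[^>]+>", "", fragment)
    return len(text.strip()) / max(len(fragment), 1)

def pick_fragment(fragments, budget):
    """Highest text-density fragment that fits the window's budget."""
    fitting = [f for f in fragments if len(f) <= budget]
    if not fitting:
        return None
    return max(fitting, key=text_density)

fragments = [
    "<div><a href='/'>Home</a> <a href='/x'>About</a></div>",  # navigation
    "<div>Latest news: the portal was upgraded today.</div>",  # content
]
chosen = pick_fragment(fragments, budget=100)  # the content fragment wins
```

A navigation bar is mostly markup, so a content-bearing fragment wins; annotation-based approaches skip this guesswork by letting the administrator mark the fragments explicitly.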

