Information Modeling for Internet Applications
Published by IGI Global
ISBN: 9781591400509, 9781591400943
Total documents: 14
H-index: 2

Author(s):  
Filippo Ricca ◽  
Paolo Tonella

The World Wide Web has become an attractive channel for companies to deliver services and products at a distance. Correspondingly, the quality of the Web applications responsible for the related transactions has become a crucial factor. Quality can be improved by properly modeling the application during its design, but if the whole life cycle is considered, the availability of a consistent model of the application is also fundamental during maintenance and testing. This chapter addresses the problem of recovering a model of a Web application from its implementation. Algorithms are provided to obtain such a model even in the presence of a highly dynamic structure. Based on this model, several static analysis techniques, including reaching definitions and slicing, are considered, as well as some restructuring techniques. White-box testing exploits the model in that the related coverage levels are defined on it, while statistical testing assumes that transitions in the model are labeled with the conditional probabilities of being traversed.
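
To make the statistical-testing idea concrete, the sketch below (not taken from the chapter; all page names and probabilities are invented) represents a recovered application model as a directed graph whose transitions carry conditional traversal probabilities, and draws random navigation sessions from it:

```python
import random

# Recovered model: each page maps to its outgoing transitions,
# labeled with the conditional probability of being traversed.
model = {
    "home":    [("login", 0.7), ("catalog", 0.3)],
    "login":   [("account", 0.9), ("home", 0.1)],
    "catalog": [("home", 1.0)],
    "account": [],  # exit page: no outgoing transitions
}

def random_session(start="home", max_steps=10):
    """Draw one navigation path, weighting each step by its probability."""
    path, page = [start], start
    for _ in range(max_steps):
        edges = model[page]
        if not edges:
            break
        targets, probs = zip(*edges)
        page = random.choices(targets, weights=probs)[0]
        path.append(page)
    return path

print(random_session())  # e.g. ['home', 'login', 'account']
```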


Author(s):  
Abad Shah

Today, the Internet and the Web are among the most dynamically growing computer technologies, and the number of users accessing the Web is growing exponentially all over the world. The Web has become a popular environment for a new generation of interactive computer applications called Web (or hypermedia) applications. Web applications (WAs) have special characteristics that distinguish them from traditional applications. Hence, many design methodologies for the development of WAs have been proposed. However, most of these methodologies concentrate on the design aspects of applications and often do not strictly follow any software development life-cycle model, such as the waterfall model. In this chapter, we propose an object-oriented design methodology for the development of WAs. Its main features are that it follows the waterfall model and that it captures the operations in the objects of the application, which is what makes it an object-oriented methodology.


Author(s):  
Jaime Gomez ◽  
Cristina Cachero

The largely “creative” authoring process used to develop many Web applications in recent years has proven unable to cope with their increasing complexity, in terms of both user and technical requirements. This has nurtured a mushrooming of proposals, most of them based on conceptual models, that aim to facilitate the development, maintenance and assessment of Web applications, thus improving the reliability of the Web development process. In this chapter, we show how traditional software engineering approaches can be extended to deal with the idiosyncrasy of the Web, taking advantage of proven notations and techniques for common tasks, while adding the models and constructs needed to capture the nuances of the Web environment. In this context, our proposal, the Object-Oriented Hypermedia (OO-H) Method, developed at the University of Alicante, provides a set of new views that extend UML to provide a Web interface model. A code generation process, starting from these diagrams and their associated tagged values, produces a Web interface capable of connecting to the underlying business modules.
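
As a rough illustration of diagram-driven code generation (this is not the OO-H toolset; the link structure and the showInMenu tagged value are hypothetical), a navigation view can be held as plain data whose tagged values steer what the generator emits:

```python
# A navigation view as plain data, with UML-style tagged values.
navigation_links = [
    {"label": "Products", "target": "products.html",
     "tagged_values": {"showInMenu": True}},
    {"label": "Checkout", "target": "checkout.html",
     "tagged_values": {"showInMenu": False}},
]

def generate_menu(links):
    """Emit an HTML fragment for every link tagged as a menu entry."""
    items = [f'<li><a href="{l["target"]}">{l["label"]}</a></li>'
             for l in links if l["tagged_values"].get("showInMenu")]
    return "<ul>\n" + "\n".join(items) + "\n</ul>"

print(generate_menu(navigation_links))  # only 'Products' appears
```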


Author(s):  
Joao Cavalcanti ◽  
David Robertson

The continuing increase in the size and complexity of Web sites has turned their design and construction into a challenging problem. Systematic approaches can bring many benefits to Web site construction, making development more methodical and maintenance less time consuming. Computational logic can be successfully used to tackle this problem, as it supports declarative specifications and natural reasoning about them. Computational logic also offers metaprogramming capabilities that can be used to develop methods for automated Web site synthesis. This chapter presents an approach to Web site synthesis based on computational logic and discusses in more detail two important features of the proposed approach: support for property checking, and integrity constraint specification and verification.
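
A minimal sketch of integrity-constraint verification over a declarative site specification, written here in Python rather than the chapter's computational-logic notation; the pages, links, and the "every linked page must exist" constraint are invented for illustration:

```python
# Declarative site specification: pages as facts, links as relations.
pages = {"index.html", "about.html", "papers.html"}
links = [("index.html", "about.html"),
         ("index.html", "papers.html"),
         ("papers.html", "staff.html")]  # dangling link, on purpose

def violations(pages, links):
    """Return links that break the 'link targets must exist' constraint."""
    return [(src, dst) for src, dst in links if dst not in pages]

print(violations(pages, links))  # [('papers.html', 'staff.html')]
```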


Author(s):  
Athman Bouguettaya ◽  
Boualem Benatallah ◽  
Brahim Medjahed ◽  
Mourad Ouzzani ◽  
Lily Hendra

The evolution toward a global information infrastructure, and the concomitant increase in the information available on the Web, offer a powerful distribution vehicle for organizations that need to coordinate the use of multiple information sources. However, the technology to organize, search, integrate, and evolve these sources has not kept pace with the rapid growth of the available information space. In this chapter, we present our work on the WebFINDIT project. WebFINDIT aims to achieve the scalable integration and efficient querying of Web-accessible databases through the incremental, data-driven discovery and formation of interrelationships between information sources. WebFINDIT uses an ontological organization of the information space to filter interactions and accelerate service searches. More precisely, the information space is organized into domain-specific groups. Each group forms a database community representing the domain of interest of the related databases. Additionally, WebFINDIT provides a monitoring mechanism to dynamically alter relationships between different database communities. This is achieved by using distributed agents that work as background processes; they continually gather and evaluate information about the intercommunity relationships to recommend changes. A prototype has been fully implemented in the context of a healthcare application.
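
The community idea can be pictured with a small sketch (not WebFINDIT code; domains and database names are invented): grouping databases into domain-specific communities lets a query be routed to one community instead of the whole information space:

```python
# Domain-specific database communities over the information space.
communities = {
    "healthcare": ["hospital_db", "insurance_db"],
    "finance":    ["bank_db", "tax_db"],
}

def route(query_domain):
    """Return only the candidate databases for a domain-specific query."""
    return communities.get(query_domain, [])

print(route("healthcare"))  # ['hospital_db', 'insurance_db']
```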


Author(s):  
Roelof van Zwol ◽  
Peter M.G. Apers

The main objective of this chapter is to present the Webspace Method for modeling and querying Web-based document collections. When focusing on limited domains of the Internet, such as intranets, digital libraries, and large Web sites, one finds document collections that have a highly multimedia and semistructured character. Furthermore, the content of such document collections can be assumed to contain related information. Based on conceptual modeling, the Webspace Method introduces a new schema-based approach to enhance precision when searching such document collections. The Webspace Method allows queries to be composed that combine information stored in several documents to satisfy the user’s information need, whereas traditional search engines are only capable of querying a single document at a time. Furthermore, a query over a Webspace lets the user obtain the requested information directly as the query result, rather than as a collection of URLs pointing to possibly relevant documents.
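
A toy illustration of such a schema-based query (an invented author/publication schema, not the Webspace notation): the answer joins information held in two separate documents, which a page-at-a-time engine could not produce:

```python
# Records extracted from two different documents of one collection.
authors = [        # from a staff page
    {"name": "A. Smith", "room": "B-12"},
]
publications = [   # from a separate publication list
    {"title": "Webspaces", "author": "A. Smith"},
]

# "Which room hosts the author of 'Webspaces'?" spans both documents.
result = [a["room"]
          for p in publications if p["title"] == "Webspaces"
          for a in authors if a["name"] == p["author"]]
print(result)  # ['B-12']
```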


Author(s):  
Evimaria Terzi ◽  
Mohand-Said Hacid ◽  
Athena Vakali

The efficient and sophisticated representation of the structure of the documents circulated over the Internet allows for effective querying and reasoning over them. This is a major goal for large information resources like the World Wide Web (WWW). Constraints are a valuable tool for managing information. In this work, we consider how constraint-based technology can be used to query and reason about semistructured data represented using constraint-logic representation models. The constraint system FT≤ provides information-ordering constraints interpreted over feature trees. Based on this approach, we show how a generalization of FT≤ combined with path constraints can be used to formally represent and state constraints and reason over semistructured data. The proposed query language is extended to facilitate query relaxation when the exact solution to the query cannot be obtained from the data repository. The applicability of this framework, proposed for semistructured data, is examined for the particular case of XML documents circulated and stored on the Web.
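
As a loose illustration of the information ordering behind FT≤ (feature trees are modeled here as nested dicts; this is not the formal constraint system), one tree is below another when every feature path it defines is also present there, with matching leaves:

```python
def leq(t1, t2):
    """FT-style ordering: t1 <= t2 iff t2 carries at least t1's information."""
    if not isinstance(t1, dict):      # leaf: values must agree
        return t1 == t2
    return (isinstance(t2, dict) and
            all(f in t2 and leq(sub, t2[f]) for f, sub in t1.items()))

partial = {"book": {"title": "XML"}}
full    = {"book": {"title": "XML", "year": 2003}}
print(leq(partial, full))  # True: 'full' refines 'partial'
print(leq(full, partial))  # False: 'partial' lacks the 'year' feature
```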


Author(s):  
Yin Leng Theng

This chapter reviews continuing usability problems with hypertext and Web applications and highlights new issues, in particular cultural and ethical ones, brought about by internationalisation. It argues for a move from treatment to prevention: from treating the end-user’s symptoms, themselves a reaction to bad design, to avoiding the bad design in the first place. The way hypertexts and the Web are designed and built therefore needs to be re-examined. The chapter suggests that new approaches to Web modelling are required to address usability issues that may be due to human errors or design problems, and concludes by suggesting several practical and theoretical contributions to address the deficiencies in current hypertext and Web design.


Author(s):  
Christos Bouras ◽  
Agisilaos Konidaris

This chapter presents a step-by-step approach to the design, implementation and management of a Data-Intensive Web Site (DIWS). The approach introduces five data formulation and manipulation graphs, which are presented analytically. The core concept behind the modeling approach is that of “Web fragments,” an information decomposition technique that aids the design, implementation and management of a DIWS. We then present the steps that must be followed in order to “build” a DIWS based on Web fragments. Finally, we show how our approach can be used to ensure the basic DIWS user requirements of personalization, integrity and performance.
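
A minimal sketch of fragment-based page assembly (fragment names and contents are invented; this is not the chapter's graph notation): because each fragment is maintained independently, personalization reduces to choosing a per-user fragment list, and an update touches one fragment rather than every page embedding it:

```python
# Independently maintained content fragments.
fragments = {
    "header":  "<div>Site header</div>",
    "news":    "<div>Today's news items</div>",
    "weather": "<div>Local weather</div>",
}

def assemble(fragment_ids):
    """Compose a page from the selected fragments, in order."""
    return "\n".join(fragments[f] for f in fragment_ids)

# A personalized page is just a different fragment selection per user.
print(assemble(["header", "weather"]))
```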


Author(s):  
Hyoil Han ◽  
Ramez Elmasri

Much work has been done on extracting data content from the Web, but less attention has been given to extracting the conceptual schemas or ontologies underlying Web pages. The goal of the WebOntEx (Web ontology extraction) project is to make progress toward semiautomatically extracting Web ontologies by analyzing a set of Web pages from the same application domain. The ontology is considered a complete schema of the domain concepts. Our ontology metaconcepts are based on the extended entity-relationship (EER) model: concepts are classified into entity types, relationships, attributes, and superclass/subclass hierarchies. WebOntEx attempts to extract ontology concepts by analyzing the use of HTML tags and by utilizing part-of-speech tagging. WebOntEx applies heuristic rules and machine learning techniques, in particular inductive logic programming (ILP).
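
One such heuristic might look like the sketch below (an invented rule for illustration, not WebOntEx's actual rule set): text emphasized by HTML tags such as headings or bold is collected as candidate ontology concepts:

```python
import re

html = "<h2>Apartment</h2> <p>rent: $900</p> <b>Location</b> downtown"

def candidate_concepts(page):
    """Collect text wrapped in emphasizing tags as concept candidates."""
    return re.findall(r"<(?:h\d|b)>\s*([^<]+?)\s*</(?:h\d|b)>", page)

print(candidate_concepts(html))  # ['Apartment', 'Location']
```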

