Handbook of Research on Innovations in Database Technologies and Applications
Published by IGI Global. ISBN 9781605662428, 9781605662435.
Total documents: 93 (five years: 0). H-index: 3 (five years: 0).

Latest Publications

Author(s):  
Christoph Bussler

As long as a business has only one enterprise application or back-end application system, there is no need to share data with any other system in the company: all data to be managed is contained within that one back-end application system and its database. However, as businesses grow, more back-end application systems find their way into the information technology infrastructure, each managing different, specialized business data and introduced mainly because of that growth. These back-end application systems are not independent of each other; in general they contain similar or overlapping business data or take part in the same business processes. Keeping the data in the various application systems consistent requires their integration so that data can be exchanged or synchronized. The technology that supports the integration of application systems and their databases is called Enterprise Application Integration (EAI) technology. EAI technology can connect to back-end application systems in order to retrieve and insert data. Once connected, it supports the definition of how extracted data is propagated to the back-end application systems, thereby solving the general integration problem.
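
To make the propagation step concrete, the following is a minimal Python sketch of the EAI pattern described above: connectors extract records from one back-end application system and insert them into others according to a transformation. All class names, field names, and data are illustrative, not taken from any specific EAI product.

```python
class Connector:
    """Adapter that can read from and write to one back-end application system."""
    def __init__(self, name, store):
        self.name = name
        self.store = store          # in-memory stand-in for the system's database

    def extract(self):
        return list(self.store)     # retrieve data from the back-end system

    def insert(self, record):
        self.store.append(record)   # insert data into the back-end system


def propagate(source, targets, transform=lambda r: r):
    """Define how extracted data is propagated to other back-end systems."""
    for record in source.extract():
        for target in targets:
            target.insert(transform(record))


# Hypothetical example: keep a billing system in sync with customer data from a CRM.
crm = Connector("CRM", [{"customer_id": 1, "name": "Acme Corp"}])
billing = Connector("Billing", [])
propagate(crm, [billing],
          transform=lambda r: {"cust": r["customer_id"], "label": r["name"]})
print(billing.store)   # [{'cust': 1, 'label': 'Acme Corp'}]
```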


Author(s):  
Michael Zoumboulakis ◽  
George Roussos

The concept of Pervasive and Ubiquitous Computing was introduced in the early nineties as the third wave of computing, following the eras of the mainframe and the personal computer. Unlike previous technology generations, Pervasive and Ubiquitous Computing recedes into the background of everyday life: “it activates the world, makes computers so imbedded, so fitting, so natural, that we use it without even thinking about it, and is invisible, everywhere computing that does not live on a personal device of any sort, but is in the woodwork everywhere” (Weiser, 1991). Pervasive and Ubiquitous Computing is referred to by different terms in different contexts: pervasive computing, 4G mobile computing, sentient computing, and ambient intelligence all denote the same computing paradigm. Several technical developments come together to create this novel type of computing; the main ones are summarized in Table 1 (Davies & Gellersen, 2002; Satyanarayanan, 2001).


Author(s):  
Sergio Greco ◽  
Cristian Molinaro ◽  
Irina Trubitsyna ◽  
Ester Zumpano

It is well known that NP search and optimization problems can be formulated as DATALOG¬ (Datalog with unstratified negation; Abiteboul, Hull, & Vianu, 1994) queries under nondeterministic stable-model semantics, so that each stable model corresponds to a possible solution (Gelfond & Lifschitz, 1988; Greco & Saccà, 1997; Kolaitis & Thakur, 1994). Although the use of (declarative) logic languages facilitates the process of writing complex applications, unstratified negation allows programs to be written that in some cases are neither intuitive nor efficiently evaluable. This article presents the logic language NP Datalog, a restricted version of DATALOG¬ that admits only controlled forms of negation, such as stratified negation, exclusive disjunction, and constraints. NP Datalog has the same expressive power as DATALOG¬, enables a simpler and more intuitive formulation of search and optimization problems, and can easily be translated into other formalisms. As an example, the vertex cover problem can be expressed in NP Datalog by guessing a subset of the vertices and constraining every edge to have at least one endpoint in that subset.
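
The NP Datalog program itself is not reproduced in this abstract, so the following Python sketch only mirrors the guess-and-check structure of such a formulation: "guess" a subset of vertices (the role of the exclusive disjunction) and "check" the constraint that every edge is covered. It illustrates the problem, not NP Datalog syntax.

```python
from itertools import combinations

def vertex_covers(nodes, edges, k):
    """Guess subsets of at most k vertices and check the covering constraint."""
    for size in range(k + 1):
        for guess in combinations(sorted(nodes), size):    # "guess" a candidate cover
            cover = set(guess)
            if all(u in cover or v in cover for u, v in edges):  # "check" every edge
                yield cover

nodes = {"a", "b", "c", "d"}
edges = [("a", "b"), ("b", "c"), ("c", "d")]
print(next(vertex_covers(nodes, edges, k=2)))   # a cover of size 2, e.g. {'a', 'c'}
```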


Author(s):  
Yingyuan Xiao

Recently, the demand for real-time data services has been increasing (Aslinger & Son, 2005). Many applications such as online stock trading, agile manufacturing, traffic control, target tracking, and network management require the support of a distributed real-time database system (DRTDBS). Typically, these applications need predictable response times and often have to process various kinds of queries in a timely fashion. A DRTDBS is a distributed database system in which transactions and data have timing characteristics or explicit timing constraints, and in which system correctness depends not only on the logical results but also on the time at which those results are produced. As in conventional real-time systems, transactions in a DRTDBS are usually associated with timing constraints; at the same time, a DRTDBS must maintain databases of useful information, support the manipulation of those databases, and process transactions. Timing constraints of transactions in a DRTDBS are typically specified as deadlines that require a transaction to be completed by a specified time. For soft real-time transactions, failure to meet a deadline causes the results to lose value; for firm or hard real-time transactions, a result produced too late may be useless or harmful. DRTDBSs often process both temporal data, which lose validity after their period of validity, and persistent data, which remain valid regardless of time. In order to meet the timing constraints of transactions and data, a DRTDBS usually adopts a main memory database (MMDB) as its underlying storage. In an MMDB the working copy of a database is placed in main memory, while a secondary copy on disk serves as backup. Adopting an MMDB eliminates data I/O during transaction execution, so a substantial performance improvement can be achieved. We define a DRTDBS integrating an MMDB as a distributed real-time main memory database system (DRTMMDBS).
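
As one hedged illustration of deadline handling in this setting (not a mechanism prescribed by the article), the sketch below schedules transactions earliest-deadline-first and discards a firm transaction whose deadline has already passed; all names and policy details are assumptions made for the example.

```python
import heapq
import time

class Transaction:
    def __init__(self, tx_id, deadline, firm=True):
        self.tx_id = tx_id
        self.deadline = deadline   # absolute time by which the result is still useful
        self.firm = firm           # firm: a late result is useless, so drop the transaction

def run_edf(transactions, execute):
    """Run pending transactions earliest-deadline-first, dropping firm ones that are late."""
    queue = [(t.deadline, t.tx_id, t) for t in transactions]
    heapq.heapify(queue)
    while queue:
        _, _, tx = heapq.heappop(queue)
        if tx.firm and time.time() > tx.deadline:
            print(f"dropping {tx.tx_id}: firm deadline already missed")
            continue
        execute(tx)                # in a real DRTDBS this would run against the MMDB

now = time.time()
run_edf([Transaction("t1", now + 0.5), Transaction("t2", now + 0.1)],
        execute=lambda tx: print(f"committing {tx.tx_id}"))   # t2 runs before t1
```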


Author(s):  
R. Manjunath

Expert systems have been applied to many areas of research to handle problems effectively. Designing and implementing an expert system is a difficult job, and it usually takes experimentation and experience to achieve high performance. An important feature of an expert system is that it should be easy to modify: expert systems evolve gradually, and this evolutionary or incremental development technique has come to be recognized as the dominant methodology in the expert-system area. A simple evolutionary model of an expert system is provided in Tomic, Jovanovic, and Devedzic (2006). Knowledge acquisition for expert systems poses many problems. Expert systems depend on a human expert to formulate knowledge in symbolic rules, and users can maintain an expert system by updating those rules through user interfaces (Jovanovic, Gasevic, & Devedzic, 2004). However, it is almost impossible for an expert to describe knowledge entirely in the form of rules, so an expert system may fail to diagnose a case that the expert could. The question is how to extract experience from a set of examples for the use of expert systems.
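
A minimal sketch of the symbolic-rule view discussed above, assuming a toy rule base and a plain forward-chaining loop; the rules and predicates are invented for illustration. Incremental development then amounts to adding or editing entries in the rule list.

```python
rules = [
    ({"fever", "cough"}, "flu_suspected"),                     # IF fever AND cough THEN flu_suspected
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}, rules))
# Updating the knowledge base amounts to appending or editing entries in `rules`.
```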


Author(s):  
Denis Shestakov

Finding information on the Web using a web search engine is one of the primary activities of today’s web users. For a majority of users, the results returned by conventional search engines seem to be an essentially complete set of links to all pages on the Web relevant to their queries. However, current-day search engines do not crawl and index a significant portion of the Web, and hence web users who rely on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results as a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale.
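
For illustration, a query issued through a GET-based search form reduces to fetching the form's action URL with the user-supplied parameters, as in the hedged sketch below; the URL and parameter name are hypothetical placeholders, and the snippet uses the third-party requests package.

```python
import requests   # third-party package: pip install requests

def query_web_database(form_action_url, query_terms):
    """Fill the form's parameter(s) and fetch the dynamically generated result page."""
    response = requests.get(form_action_url, params={"q": query_terms}, timeout=10)
    response.raise_for_status()
    return response.text   # dynamic page embedding results from the back-end database

# Hypothetical usage; "https://example.org/search" and the parameter "q" are placeholders.
# html = query_web_database("https://example.org/search", "real-time databases")
```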


Author(s):  
Edgard Benítez-Guerrero ◽  
Omar Nieva-García

The vast amounts of digital information stored in databases and other repositories represent a challenge for finding useful knowledge. Traditional methods for turning data into knowledge that rely on manual analysis reach their limits in this context, and for this reason computer-based methods are needed. Knowledge Discovery in Databases (KDD) is the semi-automatic, nontrivial process of identifying valid, novel, potentially useful, and understandable knowledge (in the form of patterns) in data (Fayyad, Piatetsky-Shapiro, Smyth, & Uthurusamy, 1996). KDD is an iterative and interactive process with several steps: understanding the problem domain, data preprocessing, pattern discovery, and pattern evaluation and usage. For discovering patterns, Data Mining (DM) techniques are applied.
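
A compact sketch of those steps using scikit-learn as one possible data mining toolkit; the dataset and classifier are arbitrary choices for illustration and are not tied to the chapter's own examples.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                         # data selection / understanding
X_train, X_test, y_train, y_test = train_test_split(      # preprocessing: hold-out split
    X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)    # pattern discovery (data mining)
accuracy = accuracy_score(y_test, model.predict(X_test))  # pattern evaluation
print(f"accuracy of the discovered model: {accuracy:.2f}")
```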


Author(s):  
Leonid Stoimenov

Research in information systems interoperability is motivated by the ever-increasing heterogeneity of the computer world. New generations of applications, such as geographic information systems (GIS), place far greater demands on data than legacy information systems and traditional database technology can meet. The popularity of GIS in governmental and municipal institutions has led to increasing amounts of available information (Stoimenov, Ðordevic-Kajan, & Stojanovic, 2000). In a local community environment (city services, local offices, local telecom, public utilities, water and power supply services, etc.), different information systems deal with huge amounts of available information, and most data in their databases are geo-referenced. GIS applications often have to process geo-data obtained from various geo-information communities, and information that exists in different spatial databases may be useful for many other GIS applications. Numerous legacy systems also have to be coupled with GIS, which presents additional difficulties in developing end-user applications.


Author(s):  
Luciano Caroprese ◽  
Sergio Greco ◽  
Ester Zumpano

Recently, there have been several proposals that consider the integration of information and the computation of queries in an open-ended network of distributed peers (Bernstein, Giunchiglia, Kementsietsidis, Mylopulos, Serafini, & Zaihrayen, 2002; Calvanese, De Giacomo, Lenzerini, & Rosati, 2004; Franconi, Kuper, Lopatenko, & Zaihrayeu, 2003), as well as the problem of schema mediation and query optimization in P2P (peer-to-peer) environments (Gribble, Halevy, Ives, Rodrig, & Suciu, 2001; Halevy, Ives, Suciu, & Tatarinov, 2003; Madhavan & Halevy, 2003; Tatarinov & Halevy, 2004). Generally, peers can both provide and consume data, and the only information a peer participating in a P2P system has is about its neighbors, that is, the peers that are reachable and can provide data of interest. More specifically, each peer joining a P2P system exhibits a set of mapping rules, in other words a set of semantic correspondences to peers that are already part of the system (its neighbors). Thus, in a P2P system the entry of a new source, or peer, is extremely simple, as it just requires the definition of the mapping rules. By means of these mapping rules, as soon as a peer enters the system it can access all data available in its neighborhood, and through its neighborhood it becomes accessible to all the other peers in the system.
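
The neighborhood idea can be sketched as follows, assuming nothing beyond the description above: a joining peer only declares mappings to its neighbors, yet data becomes reachable transitively through them. Peer names and data are illustrative.

```python
class Peer:
    def __init__(self, name, data):
        self.name = name
        self.data = set(data)
        self.neighbors = []        # peers this peer has mapping rules to

    def join(self, neighbors):
        """Entering the system only requires declaring mappings to some existing peers."""
        self.neighbors = list(neighbors)

    def reachable_data(self):
        """Collect data from the whole neighborhood, following mappings transitively."""
        seen, stack, collected = {self.name}, [self], set()
        while stack:
            peer = stack.pop()
            collected |= peer.data
            for n in peer.neighbors:
                if n.name not in seen:
                    seen.add(n.name)
                    stack.append(n)
        return collected

p1, p2, p3 = Peer("p1", {"a"}), Peer("p2", {"b"}), Peer("p3", {"c"})
p2.join([p3])       # p2 maps to p3
p1.join([p2])       # a new peer p1 only needs a mapping to its neighbor p2
print(p1.reachable_data())   # {'a', 'b', 'c'}: p3's data is reached through p2
```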


Author(s):  
G. Shankaranarayanan ◽  
Adir Even

Maintaining data at high quality is critical to organizational success. Firms, aware of the consequences of poor data quality, have adopted methodologies and policies for measuring, monitoring, and improving it (Redman, 1996; Eckerson, 2002). Today’s quality measurements are typically driven by physical characteristics of the data (e.g., item counts, time tags, or failure rates) and assume an objective quality standard, disregarding the context in which the data is used. The alternative is to derive quality metrics from data content and evaluate them within specific usage contexts. The former approach is termed structure-based (or structural) and the latter content-based (Ballou & Pazer, 2003). In this chapter we propose a novel framework to assess data quality within specific usage contexts and link it to data utility, a measure of the value contribution associated with data within specific usage contexts. Our utility-driven framework addresses the limitations of structural measurements and offers alternative measurements for evaluating completeness, validity, accuracy, and currency, as well as a single measure that aggregates these data quality dimensions.
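
As a hedged illustration of a content-based, utility-weighted measurement (the chapter's actual formulas are not reproduced here), the sketch below computes completeness by weighting each record with its utility in a given usage context rather than counting missing fields uniformly; the field names and weights are assumptions.

```python
def utility_weighted_completeness(records, required_fields, utility):
    """Share of total utility carried by records whose required fields are all present."""
    total = sum(utility(r) for r in records)
    if total == 0:
        return 0.0
    complete = sum(utility(r) for r in records
                   if all(r.get(f) not in (None, "") for f in required_fields))
    return complete / total

# Hypothetical customer records; in a marketing context, weight each record by revenue.
customers = [
    {"id": 1, "email": "a@example.com", "revenue": 900},
    {"id": 2, "email": "",              "revenue": 100},
]
print(utility_weighted_completeness(customers, ["email"], lambda r: r["revenue"]))  # 0.9
```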

