Agent Technologies and Web Engineering
Latest Publications


TOTAL DOCUMENTS: 28 (five years: 0)

H-INDEX: 1 (five years: 0)

Published by: IGI Global

ISBN: 9781605666181, 9781605666198

Author(s):  
Dunren Che

This article reports the results of the author’s recent work on XML query processing and optimization, an important issue in XML data management. To handle XML queries involving pure and/or negated containments more effectively and efficiently, a previously proposed deterministic optimization approach is substantially adapted. The approach applies heuristic-based deterministic transformations to algebraic query expressions in order to achieve the best possible optimization efficiency. Specialized transformation rules are developed, and efficient implementation algorithms for pure and negated containments are presented. An experimental study confirms the validity and effectiveness of the presented approach and algorithms in processing XML queries involving pure and/or negated containments.
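
To make the idea of deterministic, heuristic-based transformation concrete, the sketch below rewrites an algebraic query tree with a single De Morgan-style rule for a negated containment over a union. The node types and the sample rule are illustrative assumptions for this listing, not the article’s actual algebra or rule set.

```python
# Minimal sketch of heuristic, rule-based rewriting of an algebraic
# query expression tree. Node types and the single sample rule are
# illustrative assumptions, not the paper's actual transformation rules.

class Node:
    def __init__(self, op, children=(), label=None):
        self.op, self.children, self.label = op, list(children), label

    def __repr__(self):
        inner = ", ".join(map(repr, self.children))
        return f"{self.op}({self.label or inner})"

def rewrite(node, rules):
    """Apply rules bottom-up, deterministically: first matching rule wins."""
    node.children = [rewrite(c, rules) for c in node.children]
    for rule in rules:
        replacement = rule(node)
        if replacement is not None:
            return replacement
    return node

# Sample heuristic: a negated containment over a union distributes into
# a conjunction of negated containments (De Morgan-style rewriting).
def push_negation_over_union(node):
    if node.op == "NOT_CONTAINS" and node.children and node.children[0].op == "UNION":
        return Node("AND", [Node("NOT_CONTAINS", [c]) for c in node.children[0].children])
    return None

query = Node("NOT_CONTAINS", [Node("UNION", [Node("path", label="//a"), Node("path", label="//b")])])
print(rewrite(query, [push_negation_over_union]))
# AND(NOT_CONTAINS(path(//a)), NOT_CONTAINS(path(//b)))
```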


Author(s):  
Maytham Safar, Dariush Ebrahimi

The continuous K nearest neighbor (CKNN) query is an important type of query that continuously finds the KNNs of a query point moving along a given path. We focus on moving queries issued on stationary objects in a Spatial Network Database (SNDB). The result of this type of query is a set of intervals (delimited by split points) and their corresponding KNNs: the KNNs of an object traveling along one interval of the path remain the same throughout that interval, until the object reaches a split point where its KNNs change. Existing methods for CKNN are based on Euclidean distances. In this paper we propose a new algorithm for answering CKNN queries in an SNDB, where the relevant measure for the shortest path is network distance rather than Euclidean distance. We propose the DAR and eDAR algorithms, which address CKNN queries based on the progressive incremental network expansion (PINE) technique. Our experiments show that the eDAR approach has better response time and requires fewer shortest-distance computations and KNN queries than approaches based on VN3 using IE.
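
As a point of reference, the brute-force sketch below computes KNNs by network distance (Dijkstra) at successive positions along a path and records where the answer set changes, producing the interval/split-point structure described above. It is a naive baseline for illustration only, not the DAR/eDAR or PINE algorithms.

```python
# Naive illustration of a continuous KNN result as intervals opened at
# split points: compute the KNN (by network distance, via Dijkstra) at
# sample positions along a path and record where the KNN set changes.
import heapq

def dijkstra(graph, source):
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def knn(graph, source, objects, k):
    dist = dijkstra(graph, source)
    return tuple(sorted(objects, key=lambda o: dist.get(o, float("inf")))[:k])

def cknn_intervals(graph, path, objects, k):
    """Return (start_node, knn_set) pairs; each pair opens a new interval."""
    intervals, current = [], None
    for node in path:
        answer = knn(graph, node, objects, k)
        if answer != current:
            intervals.append((node, answer))
            current = answer
    return intervals

graph = {  # undirected road network: node -> [(neighbour, length)]
    "a": [("b", 1), ("c", 4)], "b": [("a", 1), ("c", 1), ("d", 3)],
    "c": [("a", 4), ("b", 1), ("d", 1)], "d": [("b", 3), ("c", 1)],
}
print(cknn_intervals(graph, path=["a", "b", "c", "d"], objects=["a", "d"], k=1))
# [('a', ('a',)), ('c', ('d',))]  -- the KNN changes between b and c
```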


Author(s):  
Emilie Conté, Guy Gouardères

In vocational and educational training, new trends point toward social learning and, more precisely, informal learning. In this setting, the article introduces a process, e-Qualification, to manage informal learning on the “Learning Grid”. It argues that this process must occur in a social context such as virtual communities. On the one hand, it describes their necessary characteristics and properties, which lead to the creation of a new kind of virtual community: the Virtual Learning Grid Community. On the other hand, e-Qualification cannot occur without a kind of user profile, called an e-Portfolio. The e-Portfolio is moreover itself a process, which is used to manage Virtual Learning Grid Communities. E-Qualification and the management of Virtual Learning Grid Communities will probably rely on the cooperation of distributed, autonomous, goal-oriented entities: mobile peer-to-peer agents. The authors hope that implementing these services will address the lack of support for informal learning on the Grid and become the basis for new services on the Learning Grid.


Author(s):  
Vedran Podobnik, Krunoslav Trzec, Gordan Jezic

This paper presents an application of a multi-agent system to the ubiquitous computing scenarios characteristic of next-generation networks. Next-generation networks will create environments populated with a vast number of consumers possessing diverse types of context-aware devices. In such environments, a consumer should be able to access all available services anytime, from any place, using any of their communication-enabled devices. Consequently, next-generation networks will require efficient mechanisms that match consumers’ demands (requested services) to network operators’ supplies (available services). The authors propose an agent-based approach for enabling autonomous coordination among all the entities across the telecom value chain, thus enabling automated context-aware service provisioning for consumers. The authors hope that the proposed service discovery model will not only be interesting from a scientific point of view but also amenable to real-world applications.
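
A minimal sketch of the demand/supply matching idea follows: the consumer’s context (device capabilities) filters the operator’s catalogue down to the services that can actually be delivered. The attribute names (bandwidth, screen) and data model are assumptions made for illustration, not the paper’s service model.

```python
# Hedged sketch of demand/supply matchmaking between a consumer's
# context-aware device and operator-advertised services.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    min_bandwidth_kbps: int
    needs_screen: bool

@dataclass
class Context:
    device: str
    bandwidth_kbps: int
    has_screen: bool

def match(demanded: str, catalogue: list[Service], ctx: Context) -> list[Service]:
    """Return services whose name matches the demand and whose
    requirements are satisfied by the consumer's current context."""
    return [s for s in catalogue
            if demanded.lower() in s.name.lower()
            and ctx.bandwidth_kbps >= s.min_bandwidth_kbps
            and (ctx.has_screen or not s.needs_screen)]

catalogue = [Service("video news", 1500, True),
             Service("audio news", 64, False)]
ctx = Context(device="phone", bandwidth_kbps=128, has_screen=True)
print([s.name for s in match("news", catalogue, ctx)])  # ['audio news']
```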


Author(s):  
John Gekas, Maria Fasli

The Web services paradigm has enabled an increasing number of providers to host remotely accessible services. However, the true potential of such a distributed infrastructure can only be reached when such autonomic services can be combined as parts of a workflow in order to collectively achieve combined functionality. In this paper we present our work in the area of automatic workflow composition among Web services with semantically described functional capabilities. For this purpose, we use a set of heuristics derived from the connectivity structure of the service repository in order to guide the composition process effectively. The methodologies presented in this paper are inspired by research in areas such as graph analysis, social network analysis and bibliometrics. In addition, we present comparative experimental results in order to evaluate the presented techniques.
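
The sketch below illustrates one way connectivity-derived heuristics can guide composition: services are modelled as edges between input and output concepts, and a best-first search prefers services whose outputs are well connected in the repository. The tiny repository and the degree-based heuristic are illustrative assumptions; the paper’s actual heuristics and service model differ.

```python
# Heuristic-guided workflow composition over a toy service repository:
# a best-first search chains services from a start concept to a goal,
# preferring well-connected (hub-like) output concepts.
import heapq
from collections import defaultdict

services = [  # (name, input_concept, output_concept) -- illustrative
    ("geocode", "address", "coordinates"),
    ("weather", "coordinates", "forecast"),
    ("route", "coordinates", "directions"),
    ("summarize", "forecast", "report"),
]

out_degree = defaultdict(int)
for _, _, out in services:
    out_degree[out] += 1  # connectivity of each concept in the repository

def compose(start, goal):
    """Best-first search over chains of services from start to goal."""
    heap = [(0, start, [])]
    seen = set()
    while heap:
        _, concept, chain = heapq.heappop(heap)
        if concept == goal:
            return chain
        if concept in seen:
            continue
        seen.add(concept)
        for name, inp, out in services:
            if inp == concept:
                # lower cost for well-connected outputs: explore hubs first
                heapq.heappush(heap, (-out_degree[out], out, chain + [name]))
    return None

print(compose("address", "report"))  # ['geocode', 'weather', 'summarize']
```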


Author(s):  
Le Duy Ngane, Angela Goh, Cao Hoang Tru

Web services form the core of e-business and have hence experienced rapid development in the past few years. This has led to a demand for discovery mechanisms for Web services. Discovery is the most important task in the Web service model, because Web services are useless if they cannot be discovered. A large number of Web service discovery systems have been developed. Universal Description, Discovery and Integration (UDDI) is a typical mechanism that stores indexes to Web services, but it does not support semantics. Semantic Web service discovery systems that have been developed include systems that match Web services using the same ontology, systems that match Web services using different ontologies, and systems that address the limitations of UDDI. This paper presents a survey of Web service discovery systems, focusing on those that support semantics. The paper also elaborates on open issues relating to such discovery systems.


Author(s):  
H. A. Ali, Ali I. El Desouky, Ahmed I. Saleh

Web page classification is considered one of the most challenging research areas. The Web has a huge volume of unstructured, distributed documents covering a variety of domains, so relying on a single basis for classification is extremely difficult. In addition, the Web is full of noise that harms classifier performance, especially when it appears in the training data. It is therefore more valuable to build domain-oriented (vertical) classifiers that classify pages belonging to a specific domain, and to complement them with novel learning techniques for better performance. The contribution of this paper is threefold. First, a novel learning technique called “continuous learning” is introduced. Second, the paper presents a new direction for Web page classification through domain-oriented (vertical) classifiers; a new way of applying the Bayes and K-nearest neighbor algorithms is introduced in order to build Domain-Oriented Naïve Bayes (DONB) and Domain-Oriented K-Nearest Neighbor (DOKNN) classifiers. The third contribution combines both ideas in a novel classification strategy that adds continuous learning to the Bayes theorem, yielding a continuous-learning domain-oriented Naïve Bayes (CLNB) classifier. Since overfitting strongly affects most Web page classification techniques, continuous learning can be seen as a proposed remedy: it allows the classifier to adapt itself continuously to achieve better performance. The proposed classifiers were tested; experimental results show that CLNB achieves a significant performance improvement over both DONB and DOKNN, with accuracy exceeding 94.1% after testing 1,000 pages.
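
Since CLNB’s key property is that the classifier keeps adapting after deployment, the toy sketch below shows a count-based Naïve Bayes whose model can absorb newly flagged pages incrementally rather than being retrained from scratch. It illustrates the general idea of continuous, count-based updating only; it is not the paper’s CLNB algorithm, and the sample documents are invented.

```python
# Toy Naive Bayes text classifier that keeps its word counts and can
# therefore "continue learning" from newly flagged pages incrementally.
import math
from collections import defaultdict

class IncrementalNB:
    def __init__(self):
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.class_counts = defaultdict(int)
        self.vocab = set()

    def learn(self, words, label):
        """Update counts with one (document, label) pair -- also usable
        to absorb a user-corrected misclassification later on."""
        self.class_counts[label] += 1
        for w in words:
            self.word_counts[label][w] += 1
            self.vocab.add(w)

    def predict(self, words):
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for c, n in self.class_counts.items():
            lp = math.log(n / total)
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for w in words:
                lp += math.log((self.word_counts[c][w] + 1) / denom)  # Laplace
            if lp > best_lp:
                best, best_lp = c, lp
        return best

nb = IncrementalNB()
nb.learn(["stock", "market", "shares"], "finance")
nb.learn(["match", "goal", "league"], "sport")
print(nb.predict(["market", "shares"]))              # finance
nb.learn(["transfer", "market", "league"], "sport")  # correction absorbed
```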


Author(s):  
Zhiyong Weng, Thomas Tran

This paper proposes a mobile, intelligent agent-based e-business architecture that allows buyers and sellers to do business from remote locations. An e-business participant can generate a mobile, intelligent agent via a mobile device (such as a personal digital assistant or a mobile phone) and dispatch the agent to the Internet to do business on her behalf. The proposed architecture promises a number of benefits. First, it provides great convenience for traders, as business can be conducted anytime and anywhere. Second, since the task of finding and negotiating with appropriate traders is handled by a mobile, intelligent agent, the user is freed from this time-consuming task. Third, the architecture addresses the problem of limited and expensive connection time for mobile devices: a trader can disconnect her mobile device from its server after generating and launching a mobile, intelligent agent, and can later reconnect and call the agent back for results, thereby minimizing connection time. Finally, by complying with the FIPA standards, this flexible architecture increases interoperability between agent systems and provides a highly scalable design for moving swiftly across the network.
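
The dispatch/disconnect/call-back cycle at the heart of this architecture can be sketched as below. The class names and the thread standing in for the dispatched agent are purely illustrative assumptions; a real deployment would run on a FIPA-compliant agent platform rather than a local thread.

```python
# Tiny sketch of the dispatch/disconnect/call-back pattern: the device
# hands a task to a server-hosted agent, goes offline, and later polls
# for the result.
import threading, time

class AgentHost:
    """Stands in for the server that hosts the dispatched agent."""
    def __init__(self):
        self.results = {}

    def dispatch(self, agent_id, task):
        def run():  # the "mobile agent" working while the device is offline
            time.sleep(0.1)                 # negotiate, search, ...
            self.results[agent_id] = f"best offer for {task!r}: $42"
        threading.Thread(target=run).start()

    def call_back(self, agent_id):
        return self.results.get(agent_id)   # None until the agent is done

host = AgentHost()
host.dispatch("agent-1", "used laptop")     # device may now disconnect
time.sleep(0.2)                             # ... device reconnects later
print(host.call_back("agent-1"))
```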


Author(s):  
Carsten Stolz, Michael Barth

With the growing importance of the Internet, Web sites have to be continuously improved, and Web metrics help to identify improvement potential. Success metrics for e-commerce sites based on transaction analysis are commonly available and well understood. In contrast to transaction-based sites, the success of Web sites geared toward information delivery is harder to quantify, since there is no direct feedback from the user. We propose a generic success measure for information-driven Web sites. The measure is based on observing user behaviour in the context of the Web site’s semantics: we follow users on their way through the Web site and assign positive and negative scores to their actions. The value of a score depends on the transition between page types and its contribution to the Web site’s objectives. To derive a generic view of the metric’s construction, we introduce a formal meta environment that derives success measures from the relations and dependencies among the usage, content and structure of a Web site. In a case study we noticed that, in some cases, unsatisfied users were evaluated positively; this divergence could be explained by the user’s intentions not having been considered. We therefore propose to integrate search queries carried within referrer information, freely available evidence of the user’s intentions, into our meta model of Web site structure, content and author intention. We then apply well-understood techniques such as PLSA and, based on the latent semantics, construct a new indicator that evaluates the Web site with respect to user intention. A case study shows that this indicator assesses the quality and usability of a Web site more accurately by taking the user’s goals into consideration, and that the initially mentioned diverging user sessions can now be assessed according to the user’s perception.
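
The core scoring idea can be captured in a few lines: each transition between page types contributes a positive or negative score, and a session’s value is the sum over its transitions. The page types and score values below are invented for illustration; the paper derives them from the site’s semantics rather than hard-coding them.

```python
# Minimal sketch of transition scoring: a session's success value is the
# sum of the scores of its page-type transitions.

# score[(from_type, to_type)]: contribution to the site's objectives
SCORES = {
    ("overview", "product"): +1.0,   # drilling down toward content
    ("product", "detail"): +2.0,     # reaching objective pages
    ("detail", "overview"): -0.5,    # backing out again
    ("overview", "exit"): -1.0,      # leaving without content contact
}

def session_score(page_types):
    """Sum transition scores over one user session (a list of page types)."""
    pairs = zip(page_types, page_types[1:])
    return sum(SCORES.get(pair, 0.0) for pair in pairs)

print(session_score(["overview", "product", "detail"]))   # 3.0 (satisfied)
print(session_score(["overview", "exit"]))                # -1.0 (bounced)
```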


Author(s):  
Ding-Yi Chen, Xue Li, Zhao Yang Dong, Xia Chen

In this paper, we propose a framework, namely Prediction-Learning-Distillation (PLD), for interactive document classification and for distilling misclassified documents. Whenever a user points out misclassified documents, PLD learns from the mistakes and identifies the same mistakes in all other classified documents. PLD then enforces this learning in future classifications. If the classifier fails to accept relevant documents or to reject irrelevant documents in certain categories, PLD assigns those documents as new positive/negative training instances. The classifier can then strengthen its weaknesses by learning from these new training instances. Our experimental results demonstrate that the proposed algorithm can learn from user-identified misclassified documents and then successfully distill the rest.
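
A hedged sketch of the learn-then-distill step follows: a user-flagged document is absorbed as a new training instance, and the already-classified collection is scanned for documents that resemble the mistake so they can be re-classified. Token-overlap (Jaccard) similarity and the threshold are illustrative choices, not the paper’s actual distillation criterion.

```python
# Sketch of the PLD-style loop: learn from a flagged misclassification,
# then find similar documents among those already classified.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def distill(flagged_doc, correct_label, classified, retrain, threshold=0.3):
    """classified: list of (doc_tokens, assigned_label) pairs.
    retrain: callback that absorbs the new (doc, label) instance."""
    retrain(flagged_doc, correct_label)          # learn from the mistake
    suspects = [(doc, old) for doc, old in classified
                if jaccard(doc, flagged_doc) >= threshold]
    return suspects                               # candidates to re-classify

classified = [(["grid", "agent", "service"], "networking"),
              (["agent", "service", "discovery"], "networking"),
              (["xml", "query", "index"], "databases")]
flagged = ["agent", "service", "matchmaking"]
print(distill(flagged, "agents", classified, retrain=lambda d, l: None))
```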

