A DSL for the development of software agents working within a semantic web environment

2013 ◽  
Vol 10 (4) ◽  
pp. 1525-1556 ◽  
Author(s):  
Sebla Demirkol ◽  
Moharram Challenger ◽  
Sinem Getir ◽  
Tomaz Kosar ◽  
Geylani Kardas ◽  
...  

Software agents have become popular in the development of complex software systems, especially those requiring autonomous and proactive behavior. Agents interact with each other within a Multi-agent System (MAS) in order to perform certain defined tasks in a collaborative and/or selfish manner. However, the autonomous, proactive and interactive structure of a MAS causes difficulties when developing such software systems. It is within this context that the use of a Domain-specific Language (DSL) may support an easier and quicker MAS development process. The impact of such DSL usage is clearer when considering the development of MASs working in new and challenging environments such as the Semantic Web. Hence, this paper introduces a new DSL for Semantic Web enabled MASs, called Semantic web Enabled Agent Language (SEA_L). Both the user aspects of SEA_L and the way of implementing SEA_L are discussed in the paper. The practical use of SEA_L is also demonstrated with a case study on the modeling of a multi-agent-based e-barter system. Regarding the language implementation, we first discuss the syntax of SEA_L and show how SEA_L specifications can be utilized during the code generation of real MAS implementations. The syntax of SEA_L is supported by textual modeling toolkits developed with Xtext, and code generation for instance models is supplied by the Xpand tool.

2021 ◽  
Vol 35 (2) ◽  
Author(s):  
Matteo Baldoni ◽  
Federico Bergenti ◽  
Amal El Fallah Seghrouchni ◽  
Michael Winikoff

Author(s):  
Federico Bergenti ◽  
Enrico Franchi ◽  
Agostino Poggi

In this chapter, the authors describe the relationships between multi-agent systems, social networks, and the Semantic Web within collaborative work; they also review how the integration of multi-agent systems and Semantic Web technologies and techniques can be used to enhance social networks at all scales. The chapter first reviews relevant work on the application of agent-based models and abstractions to the key ingredients of this work: collaborative systems, the Semantic Web, and social networks. Then, the chapter discusses the reasons why current multi-agent systems, and their foreseen evolution, might be a fundamental means for the realization of future Semantic Social Networks. Finally, some conclusions are drawn.


2012 ◽  
pp. 211-218 ◽  
Author(s):  
Agostino Poggi ◽  
Michele Tomaiuolo

Expert systems have been successfully applied to a number of domains. Often built on generic rule-based systems, they can also exploit optimized algorithms. On the other hand, being based on loosely coupled components and peer-to-peer infrastructures for asynchronous messaging, multi-agent systems allow code mobility, adaptability, and ease of deployment and reconfiguration, thus fitting distributed and dynamic environments. They also have good support for domain-specific ontologies, an important feature when modelling human experts’ knowledge. The possibility of obtaining the best features of both technologies is concretely demonstrated by the integration of JBoss Rules, a rule engine efficiently implementing the Rete-OO algorithm, into JADE, a FIPA-compliant multi-agent system.
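The core of the integration the abstract describes is an agent platform feeding incoming messages into a forward-chaining rule engine. The following is a minimal conceptual sketch in Python; all names are illustrative and do not correspond to the actual JADE or JBoss Rules (Drools) APIs:

```python
# Sketch of rule-driven agent behaviour: an agent asserts incoming
# messages as facts in a forward-chaining rule engine, which fires
# every rule whose condition matches the current working memory.

class RuleEngine:
    def __init__(self):
        self.rules = []          # list of (condition, action) pairs
        self.facts = set()       # working memory

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def assert_fact(self, fact):
        """Insert a fact and run one forward-chaining pass."""
        self.facts.add(fact)
        fired = []
        for condition, action in self.rules:
            if condition(self.facts):
                fired.append(action(self.facts))
        return fired

class Agent:
    def __init__(self, name, engine):
        self.name = name
        self.engine = engine

    def receive(self, message):
        # Each incoming message becomes a fact in working memory.
        return self.engine.assert_fact(message)

engine = RuleEngine()
engine.add_rule(lambda facts: "temperature:high" in facts,
                lambda facts: "start-cooling")
agent = Agent("monitor", engine)
print(agent.receive("temperature:high"))  # -> ['start-cooling']
```

In the real systems the agent side would be a JADE behaviour receiving ACL messages, and the rule side a Rete-based engine; the sketch only shows how the two responsibilities divide.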


2009 ◽  
pp. 781-799
Author(s):  
David Camacho

The last decade has shown the e-business community and computer science researchers that there can be serious problems and pitfalls when e-companies are created. One of these problems is the need to manage knowledge (data, information, or other electronic resources) from different companies. This chapter focuses on two research fields that are currently working to solve this problem: Information Gathering (IG) techniques and Web-enabled Agent technologies. IG techniques address the retrieval, extraction and integration of data from different (usually heterogeneous) sources into new forms. Agent and Multi-Agent technologies have been successfully applied in domains such as the Web. This chapter shows, using a specific IG Multi-Agent system called MAPWeb, how information gathering techniques have been successfully combined with agent technologies to build new Web agent-based systems. These systems can be migrated to Business-to-Consumer (B2C) scenarios using several technologies related to the Semantic Web, such as SOAP, UDDI or Web services.
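The integration step of information gathering (normalising records from heterogeneous sources into one shared schema) can be sketched in a few lines of Python. The sources, field names, and schema mappings below are hypothetical, not taken from MAPWeb:

```python
# Sketch of the "integration" step in information gathering: records
# retrieved from heterogeneous sources are renamed into a shared
# schema, then combined and ranked.

def normalise(record, mapping):
    """Rename source-specific fields to the shared schema."""
    return {mapping.get(k, k): v for k, v in record.items()}

# Two sources describing the same kind of offer with different fields.
source_a = [{"price_eur": 120, "carrier": "X"}]
source_b = [{"fare": 95, "airline": "Y"}]

schema_a = {"price_eur": "price", "carrier": "provider"}
schema_b = {"fare": "price", "airline": "provider"}

integrated = ([normalise(r, schema_a) for r in source_a] +
              [normalise(r, schema_b) for r in source_b])
integrated.sort(key=lambda r: r["price"])
print(integrated[0])  # cheapest offer across both sources
```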


2011 ◽  
pp. 236-276 ◽  
Author(s):  
Juan Pavon ◽  
Jorge J. Gomez-Sanz ◽  
Rubén Fuentes

INGENIAS provides a notation for modeling multi-agent systems (MAS) and a well-defined collection of activities to guide the development process of an MAS in the tasks of analysis, design, verification, and code generation, supported by an integrated set of tools—the INGENIAS Development Kit (IDK). These tools, as well as the INGENIAS notation, are based on five meta-models that define the different views and concepts from which a multi-agent system can be described. Using meta-models has the advantage of flexibility for evolving the methodology and adopting changes to the notation. In fact, one of the purposes in the conception of this methodology is to integrate progressive advances in agent technology, towards a standard for agent-based systems modeling that could facilitate the adoption of the agent approach by the software industry. The chapter presents a summary of the INGENIAS notation, development process, and support tools. The use of INGENIAS is demonstrated in an e-business case study. This case study includes concerns about the development process, modeling with agent concepts, and implementation with automated code generation facilities.
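The automated code generation such model-driven approaches rely on can be sketched in Python: a model instance (here a plain dict standing in for an INGENIAS model) is rendered into skeleton source code through a template. Everything below is illustrative and is not the actual IDK API:

```python
# Sketch of template-based code generation from an agent model.
# The model structure and template are hypothetical stand-ins for
# what a model-driven toolkit like the IDK produces.

AGENT_TEMPLATE = """class {name}Agent:
    def setup(self):
{tasks}
"""

def generate(agent_model):
    """Render an agent model dict into skeleton source code."""
    tasks = "\n".join(f"        self.add_task('{t}')"
                      for t in agent_model["tasks"])
    return AGENT_TEMPLATE.format(name=agent_model["name"], tasks=tasks)

model = {"name": "Broker", "tasks": ["match_offers", "notify_buyer"]}
print(generate(model))
```

The point of the meta-model-based design is that both the model structure and the templates can evolve independently of hand-written agent code.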


2016 ◽  
Vol 29 (5) ◽  
pp. 706-727 ◽  
Author(s):  
Mihalis Giannakis ◽  
Michalis Louis

Purpose: Decision support systems are becoming an indispensable tool for managing complex supply chains. The purpose of this paper is to develop a multi-agent-based supply chain management system that incorporates big data analytics and can exert autonomous corrective control actions. The effects of the system on supply chain agility are explored.
Design/methodology/approach: For the development of the architecture of the system, a sequential approach is adopted. First, three fundamental dimensions of supply chain agility are identified: responsiveness, flexibility and speed. Then the organisational design of the system is developed. The roles for each of the agents within the framework are defined and the interactions among these agents are modelled.
Findings: Applications of the model are discussed to show how the proposed model can potentially provide enhanced levels in each of the dimensions of supply chain agility.
Research limitations/implications: The study shows how multi-agent systems can assist in overcoming the trade-off between supply chain agility and the complexity of global supply chains. It also opens up a new research agenda for the incorporation of big data and semantic web applications in the design of supply chain information systems.
Practical implications: The proposed information system provides integrated capabilities for production, supply chain event and disruption risk management on a collaborative basis.
Originality/value: A novel aspect in the design of multi-agent systems is introduced for inter-organisational processes, which incorporates semantic web information and a big data ontology in the agent society.
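The autonomous corrective control the paper describes amounts to a monitoring loop: agents watch supply chain events and trigger an action when a disruption crosses a threshold. A minimal conceptual sketch in Python, with hypothetical event fields and thresholds:

```python
# Sketch of an autonomous corrective control loop: a monitoring agent
# scans supply chain events and emits a corrective action for any
# shipment whose delay exceeds a threshold. All fields are illustrative.

def corrective_actions(events, delay_threshold=2):
    """Return one corrective action per event breaching the threshold."""
    actions = []
    for event in events:
        if event["delay_days"] > delay_threshold:
            actions.append(f"reroute:{event['shipment']}")
    return actions

events = [{"shipment": "S1", "delay_days": 1},
          {"shipment": "S2", "delay_days": 4}]
print(corrective_actions(events))  # -> ['reroute:S2']
```

In the proposed architecture this logic would be distributed across several specialised agents rather than one function, with big data analytics supplying the event stream.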


2021 ◽  
Vol 33 (5) ◽  
pp. 181-204
Author(s):  
Vladimir Frolov ◽  
Vadim Sanzharov ◽  
Vladimir Galaktionov ◽  
Alexander Shcherbakov

In this paper we propose a high-level approach to developing GPU applications based on the Vulkan API. The purpose of the work is to reduce the complexity of developing and debugging applications that implement complex algorithms on the GPU using Vulkan. The proposed approach uses code generation technology, translating a C++ program into an optimized Vulkan implementation, which includes automatic shader generation, resource binding, and the use of synchronization mechanisms (Vulkan barriers). The proposed solution is not a general-purpose programming technology, but specializes in specific tasks. At the same time, it is extensible, which allows the solution to be adapted to new problems. For a single input C++ program, we can generate several implementations for different cases (via translator options) or for different hardware. For example, calls to virtual functions can be implemented either through a switch construct in a kernel, through sorting threads and indirect dispatch via different kernels, or through the so-called callable shaders in Vulkan. Instead of creating a universal programming technology for building various software systems, we offer an extensible technology that can be customized for a specific class of applications. Unlike, for example, Halide, we do not use a domain-specific language: the necessary knowledge is extracted from ordinary C++ code. Therefore, we do not extend C++ with any new language constructs or directives, and the input source code is assumed to be normal C++ source code (albeit with some restrictions) that can be compiled by any C++ compiler. We use pattern matching to find specific patterns in C++ code and convert them to efficient GPU code using Vulkan. Patterns are expressed through classes, member functions, and the relationships between them.
Thus, the proposed technology makes it possible to provide a cross-platform solution by generating different implementations of the same algorithm for different GPUs. At the same time, it provides access to the specific hardware functionality required in computer graphics applications. Patterns are divided into architectural and algorithmic. An architectural pattern defines the domain and the behavior of the translator as a whole (for example, image processing, ray tracing, neural networks, computational fluid dynamics, etc.). Algorithmic patterns express knowledge of data and control flow and define a narrower class of algorithms that can be efficiently implemented in hardware; examples include parallel reduction, compaction (parallel append), sorting, prefix sums, histogram calculation, and map-reduce. Algorithmic patterns can occur within architectural patterns. The proposed generator works on the principle of code morphing: given a certain class in the program and a set of transformation rules, one can automatically generate another class with the desired properties (for example, a GPU implementation of the algorithm). The generated class inherits from the input class and thus has access to all of its data and functions. Overriding virtual functions in the generated class helps the user connect the generated code to other Vulkan code written by hand. Shaders can be generated in two variants: OpenCL shaders for the Google "clspv" compiler, and GLSL shaders for an arbitrary GLSL compiler. The clspv variant is better for code that makes intensive use of pointers, while the GLSL generator is better when specific hardware features are used (such as hardware-accelerated ray tracing). We have demonstrated our technology on several examples related to image processing and ray tracing, obtaining a 30-100x speedup over multithreaded CPU implementations.
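The code-morphing principle (generate a derived class with desired properties from an input class plus a transformation rule, with the generated class inheriting all data and functions of the input class) can be illustrated in Python. Here a simple tracing transformation stands in for GPU code generation, and all names are illustrative:

```python
# Sketch of code morphing: from an input class and a transformation
# rule, generate a derived class whose public methods are rewritten.
# In the paper the rewrite produces a Vulkan GPU implementation; here
# it merely records which methods ran, to keep the sketch runnable.

def morph(cls, transform):
    """Create a subclass of `cls` with each public method transformed."""
    members = {}
    for name, member in vars(cls).items():
        if callable(member) and not name.startswith("_"):
            members[name] = transform(member)
    return type(cls.__name__ + "Morphed", (cls,), members)

def traced(fn):
    def wrapper(self, *args):
        result = fn(self, *args)
        self.trace.append(fn.__name__)   # record the call
        return result
    return wrapper

class Filter:                            # the "input class"
    def __init__(self):
        self.trace = []
    def brightness(self, x):
        return x + 10

GPUFilter = morph(Filter, traced)        # generated class inherits Filter
f = GPUFilter()
print(f.brightness(5), f.trace)          # -> 15 ['brightness']
```

Because the generated class is a subclass, overriding its methods lets hand-written code hook into the generated implementation, mirroring the mechanism the abstract describes.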


Author(s):  
Gunjan Kalra

This chapter discusses the process of providing information in its most accurate, complete form to its users and the difficulties faced by the users of the current information systems. The chapter describes the impact of prevalent technologies such as the Multi-Agent Systems and the Semantic Web in the area of information supply via an example implementation and a model use case. The chapter offers a potentially more efficient and robust approach to information integration and supply process. The chapter intends to highlight the complexities inherent in the process of information supply and the role of emerging information technologies in solving these challenges.


Author(s):  
Maria João Viamonte

With the increasing importance of e-commerce across the Internet, the need for software agents to support both customers and suppliers in buying and selling goods/services is growing rapidly. It is becoming increasingly evident that in a few years the Internet will host a large number of interacting software agents. Most of them will be economically motivated and will negotiate a variety of goods and services. It is therefore important to consider the economic incentives and behaviours of e-commerce software agents, and to use all available means to anticipate their collective interactions. Even more fundamental than these issues, however, is the very nature of the various actors involved in e-commerce transactions. This leads to different conceptualizations of their needs and capabilities, giving rise to semantic incompatibilities between them. Ontologies have an important role in Multi-Agent Systems communication, as they provide a vocabulary to be used in the communication between agents. It is hard to find two agents using precisely the same vocabulary; they usually have heterogeneous private vocabularies defined in their own private ontologies. In order to support conversations among different agents, we propose what we call ontology services to facilitate agents’ interoperability. More specifically, we propose an ontology-based information integration approach that exploits the ontology mapping paradigm by aligning consumer needs with market capacities in a semi-automatic mode. We propose a new approach combining agent-based electronic markets with Semantic Web technology, improved by the application and exploitation of the information and trust relationships captured by social networks.
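At its simplest, the ontology-mapping service described above rewrites a message from one agent's private vocabulary into another's using an alignment table. A small Python sketch with hypothetical vocabularies:

```python
# Sketch of an ontology-mapping service mediating between two agents
# with different private vocabularies. The terms and the alignment
# table are hypothetical.

ALIGNMENT = {"notebook": "laptop", "cost": "price"}  # buyer term -> seller term

def translate(message, alignment):
    """Rewrite a message into the receiving agent's vocabulary;
    terms with no alignment entry pass through unchanged."""
    return {alignment.get(k, k): v for k, v in message.items()}

buyer_request = {"notebook": "13-inch", "cost": 800}
print(translate(buyer_request, ALIGNMENT))  # -> {'laptop': '13-inch', 'price': 800}
```

Real ontology mapping additionally handles structural mismatches and partial matches, which is why the paper proposes a semi-automatic alignment process rather than a fixed table.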

