MAPSS: An Intelligent Architecture for the Pedagogical Support

Author(s):  
Najoua Hrich ◽  
Mohamed Lazaar ◽  
Mohamed Khaldi

Multi-agent systems (MAS) are a branch of artificial intelligence (AI) that has become prominent in the development of major e-learning platforms. Their integration has given new impetus to learning environments by making it possible to incorporate new parameters (psychological, pedagogical, ergonomic…) that favor better adaptation to the learner. In addition, the multi-agent approach makes it possible to design flexible solutions built from a set of agents in continuous communication to accomplish the tasks entrusted to them. In this paper, we propose a model of pedagogical support based on a coupling of ontologies and multi-agent systems, drawing on the synergy of their strengths and the important contribution they can make to improving the teaching-learning process. Previous work laid the theoretical foundation for competency evaluation and developed an ontology and an algorithm for evaluating competency. As a continuation, we present the design of the Multiagent Pedagogical Support System (MaPSS) and the different scenarios of its utilization.

Author(s):  
Mehdi Dastani ◽  
Paolo Torroni ◽  
Neil Yorke-Smith

Abstract The concept of a norm is found widely across fields including artificial intelligence, biology, computer security, cultural studies, economics, law, organizational behaviour and psychology. The concept is studied with different terminology and perspectives, including individual, social, legal and philosophical. If a norm is an expected behaviour in a social setting, then this article considers how it can be determined whether an individual is adhering to this expected behaviour. We call this process monitoring, and again it is a concept known by different terminology in different fields. Monitoring of norms is foundational for processes of accountability, enforcement, regulation and sanctioning. Starting with a broad focus and narrowing to the multi-agent systems literature, this survey addresses four key questions: what is monitoring, what is monitored, who does the monitoring and how the monitoring is accomplished.
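If a norm is an expected behaviour and monitoring is checking adherence to it, the core loop can be sketched as follows. This is a minimal illustrative sketch, not the survey's formalism; the `Norm` and `Monitor` names and the payload-size norm are hypothetical.

```python
# Hypothetical sketch of norm monitoring: a monitor observes agents'
# actions, checks each against the expected behaviours (norms), and
# records violations as input to enforcement or sanctioning.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Norm:
    """An expected behaviour, expressed as a predicate over an action."""
    name: str
    complies: Callable[[dict], bool]

@dataclass
class Monitor:
    """Observes actions and records (agent, norm, action) violations."""
    norms: list
    violations: list = field(default_factory=list)

    def observe(self, agent: str, action: dict) -> None:
        for norm in self.norms:
            if not norm.complies(action):
                self.violations.append((agent, norm.name, action))

# Example norm (invented for illustration): payloads must stay under 100.
size_norm = Norm("max-payload-100", lambda a: a.get("size", 0) <= 100)
monitor = Monitor(norms=[size_norm])
monitor.observe("agent-1", {"size": 50})    # compliant, nothing recorded
monitor.observe("agent-2", {"size": 150})   # violation recorded
print(monitor.violations)
```

The record of violations is what downstream processes of accountability and sanctioning, as discussed in the survey, would consume.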


2012 ◽  
pp. 1225-1233
Author(s):  
Christos N. Moridis ◽  
Anastasios A. Economides

During recent decades there has been extensive progress on several Artificial Intelligence (AI) concepts, such as that of the intelligent agent. Meanwhile, it has been established that emotions play a crucial role in human reasoning and learning. Thus, developing an intelligent agent able to recognize and express emotions has been considered an enormous challenge for AI researchers. Embedding a computational model of emotions in intelligent agents can be beneficial in a variety of domains, including e-learning applications. However, until recently the emotional aspects of human learning were not taken into account when designing e-learning platforms. Various issues arise when considering the development of affective agents in e-learning environments, such as issues relating to agents’ appearance, as well as ways for those agents to recognize learners’ emotions and express emotional support. Embodied conversational agents (ECAs) with empathetic behaviour have been suggested as one effective way for agents to provide emotional feedback on learners’ emotions. There has been some valuable research in this direction, but much work remains to advance scientific knowledge.


2020 ◽  
Vol 35 (1) ◽  
Author(s):  
Roberta Calegari ◽  
Giovanni Ciatto ◽  
Viviana Mascardi ◽  
Andrea Omicini

Abstract Precisely when the success of sub-symbolic artificial intelligence (AI) techniques leads many non-computer-scientists and non-technical media to identify them with the whole of AI, symbolic approaches are getting more and more attention as those that could make AI amenable to human understanding. Given the recurring cycles in AI history, we expect that a revamp of technologies often tagged as “classical AI”—in particular, logic-based ones—will take place in the next few years. On the other hand, agents and multi-agent systems (MAS) have been at the core of the design of intelligent systems since their very beginning, and their long-term connection with logic-based technologies, which characterised their early days, might open new ways to engineer explainable intelligent systems. This is why understanding the current status of logic-based technologies for MAS is nowadays of paramount importance. Accordingly, this paper aims at providing a comprehensive view of those technologies by making them the subject of a systematic literature review (SLR). The resulting technologies are discussed and evaluated from two different perspectives: the MAS one and the logic-based one.


Author(s):  
Krenare Pireva ◽  
Petros Kefalas ◽  
Dimitris Dranidis ◽  
Thanos Hatziapostolou ◽  
Anthony Cowling

Author(s):  
Antonio Fernández-Caballero ◽  
Victor López-Jaquero ◽  
Francisco Montero ◽  
Pascual González

AI Magazine ◽  
2018 ◽  
Vol 39 (4) ◽  
pp. 29-35
Author(s):  
Christopher Amato ◽  
Haitham Bou Ammar ◽  
Elizabeth Churchill ◽  
Erez Karpas ◽  
Takashi Kido ◽  
...  

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, presented the 2018 Spring Symposium Series, held Monday through Wednesday, March 26–28, 2018, on the campus of Stanford University. The seven symposia held were AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents; Artificial Intelligence for the Internet of Everything; Beyond Machine Intelligence: Understanding Cognitive Bias and Humanity for Well-Being AI; Data Efficient Reinforcement Learning; The Design of the User Experience for Artificial Intelligence (the UX of AI); Integrated Representation, Reasoning, and Learning in Robotics; Learning, Inference, and Control of Multi-Agent Systems. This report, compiled from organizers of the symposia, summarizes the research of five of the symposia that took place.


2012 ◽  
Vol 4 (1) ◽  
pp. 59-76 ◽  
Author(s):  
Haibin Zhu ◽  
Ming Hou ◽  
Mengchu Zhou

Adaptive Collaboration (AC) is essential for maintaining optimal team performance on collaborative tasks. However, little research has discussed AC in multi-agent systems. This paper introduces AC within the context of solving real-world team performance problems using computer-based algorithms. Based on the authors’ previous work on the Environment-Class, Agent, Role, Group, and Object (E-CARGO) model, a theoretical foundation for AC using a simplified model of role-based collaboration (RBC) is proposed. Several parameters that affect team performance are defined and integrated into a theorem, which shows that dynamic role assignment yields better performance than static role assignment. The benefits of implementing AC are further demonstrated by simulating a “future battlefield” of remotely-controlled robotic vehicles; in this scenario, team performance clearly benefits from shifting vehicles (or roles) using a single controller. Related research directions for future studies are also discussed.
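The intuition behind the theorem that dynamic role assignment outperforms static assignment can be illustrated with a toy example. This sketch is not the E-CARGO formalism; the qualification matrices and the brute-force assignment solver are invented for illustration.

```python
# Toy illustration of dynamic vs. static role assignment: agents'
# qualifications for roles drift over time, and re-solving the
# assignment at each step tracks the optimum, while a static
# assignment fixed at t0 can only fall behind.
from itertools import permutations

def best_assignment(Q):
    """Exhaustively find the agent->role mapping maximizing total
    qualification (square matrix: rows = agents, columns = roles)."""
    n = len(Q)
    best_score, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        score = sum(Q[agent][perm[agent]] for agent in range(n))
        if score > best_score:
            best_score, best_perm = score, perm
    return best_score, best_perm

# Hypothetical qualification matrices at two time steps.
Q_t0 = [[0.9, 0.1], [0.2, 0.8]]
Q_t1 = [[0.3, 0.7], [0.9, 0.2]]

# Static: keep the assignment chosen at t0 for both steps.
s0, static_perm = best_assignment(Q_t0)
static_total = s0 + sum(Q_t1[a][static_perm[a]] for a in range(2))

# Dynamic: re-solve the assignment at every step.
dynamic_total = best_assignment(Q_t0)[0] + best_assignment(Q_t1)[0]

assert dynamic_total >= static_total  # dynamic can never do worse
print(round(static_total, 1), round(dynamic_total, 1))
```

Since the dynamic strategy includes the static one as a special case (re-solving may return the same assignment), it can never do worse; it does strictly better whenever qualifications drift, which is the essence of the theorem's claim.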


2019 ◽  
Vol 3 (2) ◽  
pp. 21 ◽  
Author(s):  
David Manheim

An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents additional failure modes for interactions within multi-agent systems that are closely related. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how extant literature on multi-agent AI fails to address these failure modes, and identifies work which may be useful for the mitigation of these failure modes.
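The single-agent root of these failure modes, Goodhart's law, can be shown in a few lines. This sketch is not from the paper; the saturating true goal and the unbounded proxy are invented for illustration.

```python
# Minimal illustration of specification gaming / Goodhart's law: an
# agent optimizes a measurable proxy, and past some point the proxy
# and the true goal diverge, so proxy-optimal behaviour overshoots.
def true_goal(effort):
    # The true value saturates: effort beyond 10 adds nothing.
    return min(effort, 10)

def proxy_metric(effort):
    # The measured proxy keeps rewarding raw effort without bound.
    return effort

# The agent picks the effort level that maximizes the *proxy*.
chosen = max(range(0, 31), key=proxy_metric)

print(chosen, proxy_metric(chosen), true_goal(chosen))
assert proxy_metric(chosen) > true_goal(chosen)  # proxy and goal diverge
```

The multi-agent failure modes the paper catalogues (input spoofing, adversarial misalignment, goal co-option) compound this effect, because other agents can deliberately widen the gap between an agent's proxy and its true goal.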

