Prof. Watson: A Pedagogic Conversational Agent to Teach Programming in Primary Education

Proceedings ◽  
2019 ◽  
Vol 31 (1) ◽  
pp. 84
Author(s):  
Yeves-Martínez ◽  
Pérez-Marín

Teaching programming in Primary Education has recently attracted a great deal of research interest. One global trend is using multimedia languages such as Scratch. However, we believe that Pedagogic Conversational Agents that dialogue with students make them think about how to solve a given problem and write the code that solves it. In particular, the MECOPROG methodology was applied to design the student-agent dialogue in Prof. Watson. An experiment with 19 students (11–12 years old) showed the viability of the approach and sheds some light on alternative procedures for teaching programming in Primary Education.

Author(s):  
José Miguel Ocaña ◽  
Elizabeth K. Morales-Urrutia ◽  
Diana Pérez-Marín ◽  
Silvia Tamayo-Moreno

Pedagogic conversational agents are computer applications that can interact with students in natural language. They have been used with satisfactory results in the instruction of several domains. The authors believe that they could also be useful for teaching computer science programming. Therefore, this chapter describes the MEDIE methodology, which explains how to create an agent to teach programming to primary education children and develop their computational thinking. The main steps are to communicate with the teacher team, to validate the interface, and to validate the functionality, practical sessions, and evaluation. The first two steps are covered in this chapter.


2021 ◽  
Author(s):  
Marciane Mueller ◽  
Rejane Frozza ◽  
Liane Mählmann Kipper ◽  
Ana Carolina Kessler

BACKGROUND This article presents the modeling and development of a knowledge-based system supported by a virtual conversational agent called Dóris. Using natural language processing resources, Dóris collects the clinical data of patients receiving care in urgent and hospital emergency settings. OBJECTIVE The main objective is to validate the use of virtual conversational agents to collect, properly and accurately, the data needed to run the evaluation flowcharts used to classify patients' degree of urgency and determine their priority for medical care. METHODS The agent's knowledge base was modeled using the rules provided in the evaluation flowcharts of the Manchester Triage System. The agent also supports simple, objective, and complete communication through dialogues that assess signs and symptoms according to the criteria established by this standardized, validated, and internationally recognized system. RESULTS In addition to verifying the applicability of Artificial Intelligence techniques in a complex healthcare domain, the work presents a tool that helps not only to improve organizational processes but also to improve human relationships, bringing professionals and patients closer together. The system's knowledge base was modeled on the IBM Watson platform. CONCLUSIONS The results obtained from simulations carried out by the human specialist show that a knowledge-based system supported by a virtual conversational agent is feasible for the domain of risk classification and priority determination of medical care for patients in urgent and hospital emergency care.
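
The abstract does not include the rule encoding itself. As a minimal sketch of how a triage flowchart might be represented as an ordered list of discriminators evaluated against answers collected by the agent, consider the following Python fragment; the flowchart name, discriminators, thresholds, and priority labels are simplified placeholders, not the actual Manchester Triage System content or the authors' IBM Watson model.

```python
# Minimal sketch of a rule-based triage step, loosely inspired by the idea of
# encoding triage flowcharts as ordered discriminators. All rules and
# thresholds below are illustrative placeholders only.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Discriminator:
    question: str                      # asked via the conversational agent
    applies: Callable[[Dict], bool]    # evaluates the collected answers
    priority: str                      # colour category assigned if it matches


# Ordered list: the first matching discriminator determines the priority.
CHEST_PAIN_FLOWCHART: List[Discriminator] = [
    Discriminator("Is the airway compromised?",
                  lambda a: a.get("airway_compromised", False), "RED (immediate)"),
    Discriminator("Is there severe pain?",
                  lambda a: a.get("pain_score", 0) >= 8, "ORANGE (very urgent)"),
    Discriminator("Is there moderate pain?",
                  lambda a: a.get("pain_score", 0) >= 5, "YELLOW (urgent)"),
    Discriminator("Has the problem appeared recently?",
                  lambda a: a.get("recent_onset", False), "GREEN (standard)"),
]


def classify(answers: Dict) -> str:
    """Return the first matching priority, or the lowest category by default."""
    for disc in CHEST_PAIN_FLOWCHART:
        if disc.applies(answers):
            return disc.priority
    return "BLUE (non-urgent)"


if __name__ == "__main__":
    print(classify({"pain_score": 6, "recent_onset": True}))  # YELLOW (urgent)
```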


2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Giovanni Pilato ◽  
Agnese Augello ◽  
Salvatore Gaglio

The paper illustrates a system that implements a framework oriented to the development of a modular knowledge base for a conversational agent. This solution improves the flexibility of intelligent conversational agents in managing conversations. The modularity of the system allows the concurrent and synergistic use of different knowledge representation techniques, making it possible to apply the most suitable methodology for managing a conversation in a specific domain, taking into account particular features of the dialogue or of the user's behavior. We illustrate the implementation of a proof-of-concept prototype: a set of modules exploiting different knowledge representation methodologies and capable of managing different conversation features has been developed. Each module is automatically triggered through a component, named corpus callosum, that selects in real time the most suitable chatbot knowledge module to activate.
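
The corpus callosum component is described only at a high level. The sketch below shows one plausible shape for such a dispatcher, scoring each knowledge module against the current user turn and activating the best one; the module names, the keyword-based relevance scoring, and the replies are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of a "corpus callosum"-style dispatcher that picks the
# most suitable knowledge module for each user turn. Modules and scoring
# are placeholders, not the system described in the paper.

from typing import List, Protocol


class KnowledgeModule(Protocol):
    name: str
    def relevance(self, utterance: str) -> float: ...
    def answer(self, utterance: str) -> str: ...


class KeywordModule:
    """Toy module that reacts to a fixed vocabulary."""
    def __init__(self, name: str, keywords: List[str], reply: str):
        self.name, self.keywords, self.reply = name, keywords, reply

    def relevance(self, utterance: str) -> float:
        words = utterance.lower().split()
        return sum(w in words for w in self.keywords) / max(len(self.keywords), 1)

    def answer(self, utterance: str) -> str:
        return self.reply


def corpus_callosum(modules: List[KnowledgeModule], utterance: str) -> str:
    """Select the module with the highest relevance for this turn and let it answer."""
    best = max(modules, key=lambda m: m.relevance(utterance))
    return f"[{best.name}] {best.answer(utterance)}"


if __name__ == "__main__":
    modules = [
        KeywordModule("ontology", ["define", "meaning"], "Here is a definition..."),
        KeywordModule("chit-chat", ["hello", "thanks"], "Nice talking to you!"),
    ]
    print(corpus_callosum(modules, "hello there"))  # routed to the chit-chat module
```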


Author(s):  
Diana Pérez-Marín ◽  
Antonio Boza

Pedagogic Conversational Agents are computer applications that can interact with students in natural language. They have been used with satisfactory results in the instruction of several domains. The authors believe that they could also be useful for Secondary Physics and Chemistry Education. Therefore, in this paper, the authors present a procedure to create an agent for that domain. First, teachers introduce the exercises together with their correct answers. Second, the exercises are presented to the students: if a student answers correctly, more difficult exercises are presented; otherwise, step-by-step natural language support guides the student towards the solution. It is the authors' hypothesis that this innovative teaching method will be satisfactory and useful for teachers and students, and that by following the procedure more computer programmers can be encouraged to develop agents for other domains to be used by teachers and students in class.
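
As a rough illustration of the exercise loop described above (correct answers unlock harder exercises, incorrect answers trigger step-by-step hints), here is a minimal Python sketch; the exercises, hints, and difficulty handling are invented for illustration and do not come from the paper.

```python
# Minimal sketch of the adaptive exercise loop described in the abstract:
# correct answers advance the student to harder exercises, wrong answers
# trigger step-by-step natural language hints. All content is illustrative.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Exercise:
    question: str
    answer: str
    difficulty: int
    hints: List[str] = field(default_factory=list)


def run_session(exercises: List[Exercise], get_reply=input) -> None:
    # Present exercises in increasing order of difficulty.
    for ex in sorted(exercises, key=lambda e: e.difficulty):
        reply = get_reply(ex.question + " ")
        hint_idx = 0
        while reply.strip().lower() != ex.answer.lower():
            if hint_idx < len(ex.hints):
                print("Hint:", ex.hints[hint_idx])   # step-by-step support
                hint_idx += 1
            else:
                print("The answer was:", ex.answer)
                break
            reply = get_reply("Try again: ")
        else:
            print("Correct! Moving to a harder exercise.")


if __name__ == "__main__":
    run_session([
        Exercise("What is the unit of force?", "newton", 1,
                 ["It is named after a famous physicist."]),
        Exercise("Balancing H2 + O2 -> H2O: how many H2 molecules are needed?", "2", 2,
                 ["Count the hydrogen atoms on each side."]),
    ])
```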


2020 ◽  
pp. 070674372096642
Author(s):  
Aditya Nrusimha Vaidyam ◽  
Danny Linggonegoro ◽  
John Torous

Objective: The need for digital tools in mental health is clear, given insufficient access to mental health services. Conversational agents, also known as chatbots or voice assistants, are digital tools capable of holding natural language conversations. Since our last review in 2018, many new conversational agents and studies have emerged, and we aimed to reassess the conversational agent landscape in this updated systematic review. Methods: A systematic literature search was conducted in January 2020 using the PubMed, Embase, PsycINFO, and Cochrane databases. Studies were included if they involved a conversational agent assessing serious mental illness: major depressive disorder, schizophrenia spectrum disorders, bipolar disorder, or anxiety disorder. Results: Of the 247 references identified from the selected databases, 7 studies met the inclusion criteria. Overall, experiences with conversational agents were generally positive in regard to diagnostic quality, therapeutic efficacy, or acceptability. There continues to be, however, a lack of standard measures that allow easy comparison of studies in this space. Several populations lacked representation, such as the pediatric population and those with schizophrenia or bipolar disorder. While comparing 2018 to 2020 research offers useful insight into changes and growth, the high degree of heterogeneity among studies in this space makes direct comparison challenging. Conclusions: This review revealed few but generally positive outcomes regarding conversational agents' diagnostic quality, therapeutic efficacy, and acceptability, suggesting they may augment mental health care. Despite this increase in research activity, there continues to be a lack of standard measures for evaluating conversational agents, as well as several neglected populations. We recommend that standardization of conversational agent studies include patient adherence and engagement, therapeutic efficacy, and clinician perspectives.


2020 ◽  
Vol 34 (10) ◽  
pp. 13710-13711
Author(s):  
Billal Belainine ◽  
Fatiha Sadat ◽  
Hakim Lounis

Chatbots, or conversational agents, have enjoyed great popularity in recent years and now perform surprisingly sensitive tasks in modern societies. However, although they offer help, support, and companionship, one task is not yet mastered: dealing with complex emotions and simulating human sensations. This research aims to design an architecture for an emotional conversational agent for long, multi-turn conversations. The agent is intended to work in areas where the analysis of users' feelings plays a leading role. This work addresses both natural language understanding and response generation.


2021 ◽  
Vol 11 (3-4) ◽  
pp. 1-35
Author(s):  
Sam Hepenstal ◽  
Leishi Zhang ◽  
Neesha Kodagoda ◽  
B. L. William Wong

The adoption of artificial intelligence (AI) systems in environments that involve high-risk and high-consequence decision-making is severely hampered by critical design issues. These issues include system transparency and brittleness, where transparency relates to (i) the explainability of results and (ii) the ability of a user to inspect and verify system goals and constraints, and brittleness relates to (iii) the ability of a system to adapt to new user demands. Transparency is a particular concern for criminal intelligence analysis, where significant ethical and trust issues arise when algorithmic and system processes are not adequately understood by a user. This prevents the adoption of potentially useful technologies in policing environments. In this article, we present a novel approach to designing a conversational agent (CA) AI system for intelligence analysis that tackles these issues. We discuss the results and implications of three different studies: a Cognitive Task Analysis to understand analyst thinking when retrieving information in an investigation, an Emergent Themes Analysis to understand the explanation needs of different system components, and an interactive experiment with a prototype conversational agent. Our prototype conversational agent, named Pan, demonstrates transparency provision and mitigates brittleness by evolving new CA intentions. We encode interactions with the CA using human factors principles for situation recognition and use interactive visual analytics to support analyst reasoning. Our approach enables complex AI systems, such as Pan, to be used in sensitive environments, and our research has broader application than the use case discussed.


2021 ◽  
Vol 3 ◽  
Author(s):  
Astrid Carolus ◽  
Carolin Wienrich ◽  
Anna Törke ◽  
Tobias Friedel ◽  
Christian Schwietering ◽  
...  

Conversational agents and smart speakers have grown in popularity, offering a variety of options for use available through intuitive speech operation. In contrast to the standard dyad of a single user and a device, voice-controlled operations can be observed by further attendees, resulting in new, more social usage scenarios. Referring to the concept of 'media equation' and to research on the idea of 'computers as social actors,' which describes the potential of technology to trigger emotional reactions in users, this paper asks whether smart speakers can elicit empathy in observers of interactions. In a 2 × 2 online experiment, 140 participants watched a video of a man talking to an Amazon Echo either rudely or neutrally (factor 1), addressing it as 'Alexa' or 'Computer' (factor 2). Controlling for participants' trait empathy, the rude treatment resulted in significantly higher ratings of empathy with the device compared to the neutral treatment. The form of address had no significant effect. Results were independent of the participants' gender and usage experience, indicating a rather universal effect, which confirms the basic idea of the media equation. Implications for users, developers, and researchers are discussed in the light of (future) omnipresent voice-based technology interaction scenarios.


Author(s):  
Robert R Morris ◽  
Kareem Kouddous ◽  
Rohan Kshirsagar ◽  
Stephen M Schueller

BACKGROUND Conversational agents cannot yet express empathy in nuanced ways that account for the unique circumstances of the user. Agents that possess this faculty could be used to enhance digital mental health interventions. OBJECTIVE We sought to design a conversational agent that could express empathic support in ways that might approach, or even match, human capabilities. Another aim was to assess how users might appraise such a system. METHODS Our system used a corpus-based approach to simulate expressed empathy. Responses from an existing pool of online peer support data were repurposed by the agent and presented to the user. Information retrieval techniques and word embeddings were used to select historical responses that best matched a user’s concerns. We collected ratings from 37,169 users to evaluate the system. Additionally, we conducted a controlled experiment (N=1284) to test whether the alleged source of a response (human or machine) might change user perceptions. RESULTS The majority of responses created by the agent (2986/3770, 79.20%) were deemed acceptable by users. However, users significantly preferred the efforts of their peers (P<.001). This effect was maintained in a controlled study (P=.02), even when the only difference in responses was whether they were framed as coming from a human or a machine. CONCLUSIONS Our system illustrates a novel way for machines to construct nuanced and personalized empathic utterances. However, the design had significant limitations and further research is needed to make this approach viable. Our controlled study suggests that even in ideal conditions, nonhuman agents may struggle to express empathy as well as humans. The ethical implications of empathic agents, as well as their potential iatrogenic effects, are also discussed.
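
The abstract describes selecting historical peer responses with information retrieval techniques and word embeddings. The sketch below shows one conventional way to do this (averaged word vectors plus cosine similarity); the tiny embedding table and response pool are placeholders, not the authors' corpus, model, or retrieval pipeline.

```python
# Hedged sketch of retrieving a peer-support response via averaged word
# embeddings and cosine similarity. The toy embedding table and response
# pool are placeholders, not the system evaluated in the study.

import numpy as np

# Toy word vectors; a real system would load pretrained embeddings.
EMBEDDINGS = {
    "anxious": np.array([0.9, 0.1, 0.0]),
    "worried": np.array([0.8, 0.2, 0.1]),
    "exam":    np.array([0.1, 0.9, 0.0]),
    "sleep":   np.array([0.0, 0.1, 0.9]),
}

# Candidate responses, each tagged with keywords used for matching.
RESPONSE_POOL = {
    "I felt worried before my exam too; preparing a small plan helped me.": "worried exam",
    "When I can't sleep, a short breathing exercise calms me down.": "sleep",
}


def embed(text: str) -> np.ndarray:
    """Average the vectors of known words; zero vector if none are known."""
    vecs = [EMBEDDINGS[w] for w in text.lower().split() if w in EMBEDDINGS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0


def retrieve(concern: str) -> str:
    """Return the pooled response whose keywords best match the user's concern."""
    q = embed(concern)
    return max(RESPONSE_POOL, key=lambda r: cosine(q, embed(RESPONSE_POOL[r])))


if __name__ == "__main__":
    print(retrieve("I am anxious about my exam"))  # matches the exam-related response
```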


Author(s):  
Cristina Catalán Aguirre ◽  
Nuria González-Castro ◽  
Carlos Delgado Kloos ◽  
Carlos Alario-Hoyos ◽  
Pedro Muñoz-Merino

One important problem in MOOCs is the lack of personalized support from teachers. Conversational agents arise as one possible solution to assist MOOC learners and help them study. For example, conversational agents can help review key concepts of the MOOC by asking questions to the learners and providing examples. JavaPAL is a voice-based conversational agent for supporting learners of a MOOC on programming with Java offered on edX. This paper evaluates JavaPAL from different perspectives. First, the usability of JavaPAL is analyzed, obtaining a score of 74.41 on the System Usability Scale (SUS). Second, learners' performance when answering questions through JavaPAL is compared with their performance through the equivalent web interface on edX, yielding similar results. Finally, interviews with JavaPAL users reveal that this conversational agent can be helpful as a complementary tool for the MOOC due to its portability and flexibility compared to accessing the MOOC contents through the web interface.
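
For reference, the SUS score reported above follows the standard System Usability Scale computation: odd items are scored as the response minus 1, even items as 5 minus the response, and the sum is multiplied by 2.5 to give a 0-100 score. The sketch below applies that formula to a made-up respondent, not to the paper's data.

```python
# Standard System Usability Scale (SUS) scoring applied to one hypothetical
# respondent's ten answers (1 = strongly disagree ... 5 = strongly agree).
# The answers below are invented; they are not JavaPAL study data.

from typing import List


def sus_score(responses: List[int]) -> float:
    """Compute a 0-100 SUS score from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)   # odd vs. even item scoring
    return total * 2.5


if __name__ == "__main__":
    print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # 80.0 for this made-up respondent
```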

