Conversational Agents: Recently Published Documents

2022 · Vol 40 (4) · pp. 1-24
Yongqi Li, Wenjie Li, Liqiang Nie

In recent years, conversational agents have provided natural and convenient access to useful information in people's daily lives, opening up a broad new research topic: conversational question answering (QA). Building on conversational QA, we study the conversational open-domain QA problem, where users' information needs are presented in a conversation and exact answers must be extracted from the Web. Despite its significance and value, building an effective conversational open-domain QA system is non-trivial due to the following challenges: (1) precisely understanding conversational questions based on the conversation context; (2) extracting exact answers by capturing the answer dependency and transition flow in a conversation; and (3) deeply integrating question understanding and answer extraction. To address these issues, we propose an end-to-end Dynamic Graph Reasoning approach to Conversational open-domain QA (DGRCoQA for short). DGRCoQA comprises three components: a dynamic question interpreter (DQI), a graph reasoning enhanced retriever (GRR), and a typical Reader. The first component is developed to understand and reformulate conversational questions, while the other two are responsible for extracting an exact answer from the Web. In particular, the DQI understands conversational questions by using the QA context, sourced from predicted answers returned by the Reader, to dynamically attend to the most relevant information in the conversation context. The GRR then attempts to capture the answer flow and select the passage most likely to contain the answer by reasoning over answer paths in a dynamically constructed context graph. Finally, the Reader, a reading comprehension model, predicts a text span from the selected passage as the answer. DGRCoQA demonstrates its strength in extensive experiments conducted on a benchmark dataset: it significantly outperforms existing methods and achieves state-of-the-art performance.
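The three-stage flow the abstract describes (interpret the question with conversation context, retrieve a passage, read off an answer span) can be sketched as a simple pipeline. The sketch below is purely illustrative: the component internals, tokenization, and overlap scoring are stand-ins invented here, not DGRCoQA's actual attention, graph reasoning, or reading comprehension models.

```python
import re
from dataclasses import dataclass, field

def _tokens(text: str) -> set:
    """Naive tokenizer used by all three stand-in components."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

@dataclass
class ConversationState:
    history: list = field(default_factory=list)  # prior (question, answer) turns

def interpret_question(question: str, state: ConversationState) -> str:
    # DQI stand-in: enrich the question with the most recent predicted answer,
    # mimicking how the Reader's output feeds back into question understanding.
    if state.history:
        _, last_answer = state.history[-1]
        return f"{question} (context: {last_answer})"
    return question

def retrieve_passage(query: str, corpus: list) -> str:
    # GRR stand-in: score passages by token overlap with the enriched query.
    return max(corpus, key=lambda p: len(_tokens(query) & _tokens(p)))

def read_answer(passage: str, query: str) -> str:
    # Reader stand-in: return the passage sentence sharing most query tokens.
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(_tokens(query) & _tokens(s)))

def answer_turn(question: str, state: ConversationState, corpus: list) -> str:
    query = interpret_question(question, state)
    passage = retrieve_passage(query, corpus)
    answer = read_answer(passage, query)
    state.history.append((question, answer))  # feeds the next turn's DQI step
    return answer
```

The point of the sketch is the wiring, not the models: each turn's answer is stored so the next turn's question interpretation can condition on it, which is the feedback loop the abstract attributes to the DQI.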

2022 · Vol 22 (1) · pp. 1-22
Juanan Pereira, Óscar Díaz

Capstone projects usually represent the most significant academic endeavor students have undertaken. Time management tends to be one of the hurdles, and university students are prone to procrastination. Inexperience and procrastination combine to make students miss deadlines. Supervisors strive to help, yet heavy workloads frequently prevent tutors from continuous involvement. This article looks into the extent to which conversational agents (a.k.a. chatbots) can tackle procrastination in single-student capstone projects. Specifically, the chatbot enablers put into play include (1) alerts, (2) advice, (3) automatic rescheduling, (4) motivational messages, and (5) references to previous capstone projects. Informed by Cognitive Behavioural Theory, these enablers are framed within the three phases involved in self-regulation misalignment: pre-actional, actional, and post-actional. To motivate this research, we first analyzed 77 capstone-project reports. We found that students' Gantt charts (1) fail to acknowledge review meetings (70%) and milestones (100%) and (2) suffer deviations from the initially planned effort (16.28%). On these grounds, we developed GanttBot, a Telegram chatbot that is configured from the student's Gantt diagram. GanttBot reminds students about approaching milestones, informs tutors when intervention might be required, and learns common pitfalls from previous projects, advising students accordingly. For evaluation purposes, the 17/18 course acts as the control group (N = 28) while the 18/19 course acts as the treatment group (N = 25). Using "overdue days" as the proxy for procrastination, results indicate that the 17/18 course accounted for an average of 19 days of delay (SD = 5), whereas delays went down to 10 days for the intervention group in the 18/19 course (SD = 4). GanttBot is available for public use as a Telegram chatbot.
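The core reminder logic the abstract attributes to GanttBot (flagging milestones that are overdue or due soon, based on the student's Gantt chart) can be sketched as follows. The function name, data shape, and the three-day warning window are assumptions for illustration, not GanttBot's actual implementation.

```python
from datetime import date

# Illustrative sketch: classify a student's Gantt milestones into those
# approaching within a warning window and those already overdue.
def classify_milestones(milestones, today, warn_days=3):
    """milestones is a dict of {name: due_date}.
    Returns (upcoming, overdue) lists of (name, days) pairs."""
    upcoming, overdue = [], []
    for name, due in milestones.items():
        if due < today:
            overdue.append((name, (today - due).days))   # days late
        elif (due - today).days <= warn_days:
            upcoming.append((name, (due - today).days))  # days remaining
    return upcoming, overdue
```

A bot built on this would message the student for each `upcoming` entry and escalate `overdue` entries to the tutor, matching the alert and tutor-notification enablers described above.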

2022 · Vol 63 · pp. 102469
Stefan Stieglitz, Lennart Hofeditz, Felix Brünker, Christian Ehnis, Milad Mirbabaie

Knowledge · 2022 · Vol 2 (1) · pp. 55-87
Sargam Yadav, Abhishek Kaushik

Conversational systems are now applicable to almost every business domain. Evaluation is an important step in the creation of dialog systems so that they may be readily tested and prototyped. There is no universally agreed-upon metric for evaluating all dialog systems. Human evaluation, which is not automated, is currently the most effective and complete evaluation approach. Data gathering and analysis are evaluation activities that require human intervention. In this work, we discuss the many types of dialog systems and the assessment methods that may be used with them. The benefits and drawbacks of each type of evaluation approach are also explored, which could better help us understand the expectations associated with developing an automated evaluation system. The objective of this study is to investigate conversational agents, their design approaches, and their evaluation metrics. This approach can help us better understand the overall process of dialog system development and future possibilities for enhancing user experience. Because human assessment is costly and time-consuming, we emphasize the need for a generally recognized and automated evaluation model for conversational systems, which could significantly reduce the time required for analysis.
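As a concrete example of the kind of automated metric such a survey covers, token-level F1 between a system response and a reference is a common automatic stand-in for human judgment in dialog and QA evaluation. This is a generic sketch of that widely used overlap metric, not a metric prescribed by the study above.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token precision and recall between two strings."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    common = Counter(pred) & Counter(ref)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Metrics like this are cheap and repeatable, which is exactly why automated evaluation is attractive, but they reward surface overlap rather than adequacy, which is why human evaluation remains the reference standard the abstract describes.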

2022 · Vol 6 (GROUP) · pp. 1-22
Damaris Schmid, Dario Staehelin, Andreas Bucher, Mateusz Dolata, Gerhard Schwabe

Conversational agents (CAs) have drawn increasing interest from HCI research. They have become popular in different aspects of our lives, for example, in the form of chatbots serving as the primary point of contact when interacting with an insurance company online. Additionally, CAs find their way into collaborative settings in education, at work, and in financial advisory services. Researchers and practitioners are searching for ways to enhance the customer's experience in service encounters by deploying CAs. Since competence is an important trait of a financial advisor, advisors only accept a CA in their interactions with clients if it does not harm the impression they make on the client. However, we do not know how the social presence of the CA affects this perceived competence. We explore this by evaluating three prototypes with different social presences in a video-based online survey. In contrast to prior studies focusing on single human-computer interaction, our study explores a CA in a dyadic setting of two humans and one CA. First, our results support the Computers-Are-Social-Actors paradigm, as the CA with a strong social presence was perceived as more competent than the other two designs. Second, our data show a positive correlation between the CA's and the advisor's competence. This implies a positive impact of the CA on the service encounter, as the CA and advisor can be seen as a competent team.

2022 · Vol 6 (GROUP) · pp. 1-16
Aadesh Bagmar, Kevin Hogan, Dalia Shalaby, James Purtilo

The problems associated with open-ended group discussion are well documented in sociology research. We seek to alleviate these issues using technology that autonomously serves as a discussion moderator. Building on top of an extensible framework called Diplomat, we develop a conversational agent, ArbiterBot, to promote efficiency, fairness, and professionalism in otherwise unstructured discussions. To evaluate the effectiveness of this agent, we recruited university students to participate in a study involving a series of prompted discussions over the Slack messenger app. The results of this study suggest that the conversational agent is effective at balancing contributions across participants, encouraging a timely consensus, and promoting higher coverage of topics. We believe these results motivate further investigation into how conversational agents can be used to improve group discussion and cooperation.
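One moderation rule implied by the abstract, balancing contributions across participants, can be sketched as a counter that nudges anyone falling well behind the group average. The class name, threshold, and nudge criterion are assumptions for illustration; they are not ArbiterBot's actual logic.

```python
from collections import Counter

class DiscussionModerator:
    """Hypothetical sketch: track message counts and flag quiet participants."""

    def __init__(self, participants, nudge_ratio=0.5):
        self.counts = Counter({p: 0 for p in participants})
        self.nudge_ratio = nudge_ratio  # nudge if below this share of the mean

    def record_message(self, participant):
        self.counts[participant] += 1

    def participants_to_nudge(self):
        mean = sum(self.counts.values()) / len(self.counts)
        return [p for p, n in self.counts.items() if n < mean * self.nudge_ratio]
```

In a Slack deployment like the one studied, `record_message` would be called from the message event handler, and the moderator would periodically prompt everyone returned by `participants_to_nudge`.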

2022 · Vol 2 · pp. 7
Tessa Beinema, Harm op den Akker, Dennis Hofs, Boris van Schooten

Health coaching applications can include (embodied) conversational agents as coaches. The development of these agents requires interdisciplinary cooperation between eHealth application developers, interaction designers, and domain experts. Therefore, proper dialogue authoring tools, and tools to integrate these dialogues into a conversational agent system, are essential in the process of creating successful agent-based applications. However, we found no existing open-source, easy-to-use authoring tools that support multidisciplinary agent development. To that end, we developed the WOOL Dialogue Platform. It provides the eHealth and conversational agent communities with an open-source platform consisting of a set of easy-to-use tools that facilitate virtual agent development: a dialogue definition language, an editor, application development libraries, and a web service. To illustrate the platform's possibilities and use in practice, we describe two use cases from EU Horizon 2020 research projects. The WOOL Dialogue Platform is an 'easy to use, and powerful if needed' platform for the development of conversational agent applications that is seeing a slow but steady increase in uptake in the eHealth community. Developed to support dialogue authoring for embodied conversational agents in the health coaching domain, the platform's strong points are its ease of use and its ability to let domain experts and agent technology experts work together by providing all parties with tools that support their work effectively.
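To make the idea of a dialogue definition concrete, the sketch below runs one step of a minimal node-based dialogue of the kind such authoring tools produce: each node has an agent utterance and a set of user replies linking to follow-up nodes. The node format and function are invented here for illustration; this is not the WOOL dialogue definition language.

```python
# Minimal node-based dialogue engine (illustrative, not WOOL syntax).
# Each node maps a node id to the agent's line and a replies table
# linking each user reply to the next node id.

def run_step(dialogue, node_id, choice=None):
    """Return (agent_line, available_replies, next_node_id) for one step."""
    node = dialogue[node_id]
    replies = list(node["replies"])
    next_id = node["replies"][choice] if choice is not None else None
    return node["say"], replies, next_id

# A tiny health-coaching dialogue in this hypothetical format.
dialogue = {
    "greet": {"say": "How are you feeling today?",
              "replies": {"Good": "encourage", "Tired": "rest"}},
    "encourage": {"say": "Great! Shall we review your activity goal?",
                  "replies": {}},
    "rest": {"say": "Remember that recovery matters too.",
             "replies": {}},
}
```

The separation this illustrates is the platform's selling point: domain experts author the `dialogue` data, while engineers maintain the engine and the application around it.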

2022 · Vol 9
Joseph Ollier, Marcia Nißen, Florian von Wangenheim

Background: Conversational agents (CAs) are a novel approach to delivering digital health interventions. In human interactions, terms of address often change depending on the context or the relationship between interlocutors. In many languages, this encompasses the T/V distinction, formal and informal forms of the second-person pronoun "You", which conveys different levels of familiarity. Yet few research articles have examined whether CAs' use of the T/V distinction across language contexts affects users' evaluations of digital health applications.
Methods: In an online experiment (N = 284), we manipulated a public health CA prototype to use either informal or formal T/V distinction forms in French ("tu" vs. "vous") and German ("du" vs. "Sie") language settings. A MANCOVA and post-hoc tests were performed to examine the effects of the independent variables (i.e., T/V distinction and Language) and the moderating role of users' demographic profile (i.e., Age and Gender) on eleven user evaluation variables related to four themes: (i) Sociability, (ii) CA-User Collaboration, (iii) Service Evaluation, and (iv) Behavioral Intentions.
Results: Results showed a four-way interaction between T/V Distinction, Language, Age, and Gender, influencing user evaluations across all outcome themes. For French speakers, when the informal T form ("tu") was used, higher user evaluation scores were generated for younger women and older men (e.g., the CA felt more humanlike, or individuals were more likely to recommend the CA), whereas when the formal V form ("vous") was used, higher user evaluation scores were generated for younger men and older women. For German speakers, when the informal T form ("du") was used, younger users' evaluations were comparable regardless of Gender; however, as Age increased, the use of "du" resulted in lower user evaluation scores, with this effect more pronounced in men. When the formal V form ("Sie") was used, user evaluation scores were relatively stable regardless of Gender, increasing only slightly with Age.
Conclusions: Results highlight how user evaluations of CAs vary based on the T/V distinction used and the language setting, and show that even within a culturally homogeneous language group, evaluations vary with user demographics, highlighting the importance of personalizing CA language.
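The experimental manipulation itself reduces to a small lookup: given the language setting and the assigned register, pick the second-person form the CA should use. The table below covers only the two languages from the experiment; the function name and structure are assumptions for illustration, not the study's prototype code.

```python
# T/V form lookup for the two language settings in the experiment.
SECOND_PERSON = {
    "fr": {"T": "tu", "V": "vous"},
    "de": {"T": "du", "V": "Sie"},
}

def address_form(language: str, register: str) -> str:
    """Return the pronoun for a language ('fr'/'de') and register ('T'/'V')."""
    return SECOND_PERSON[language][register]
```

The conclusion above suggests that in a personalized deployment the `register` argument would be chosen per user (by demographics or preference) rather than fixed per language.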
