Conversational Agent
Recently Published Documents

Total documents: 490 (last five years: 233)
H-index: 21 (last five years: 6)

2022 · Vol 6 (GROUP) · pp. 1-16
Author(s): Aadesh Bagmar, Kevin Hogan, Dalia Shalaby, James Purtilo

The problems associated with open-ended group discussion are well documented in sociology research. We seek to alleviate these issues using technology that autonomously serves as a discussion moderator. Building on an extensible framework called Diplomat, we develop a conversational agent, ArbiterBot, to promote efficiency, fairness, and professionalism in otherwise unstructured discussions. To evaluate the effectiveness of this agent, we recruited university students to participate in a study involving a series of prompted discussions over the Slack messenger app. The results of this study suggest that the conversational agent is effective at balancing contributions across participants, encouraging timely consensus, and promoting broader coverage of topics. We believe these results motivate further investigation into how conversational agents can be used to improve group discussion and cooperation.
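
The abstract gives no implementation details, but the contribution-balancing behavior it describes can be illustrated. Below is a minimal Python sketch of one way a moderator might track per-participant message counts and prompt quiet members; the class name, imbalance threshold, and prompt text are invented for illustration and are not taken from Diplomat or ArbiterBot.

```python
from collections import Counter

class DiscussionModerator:
    """Toy moderator that tracks per-participant message counts and
    nudges under-contributors. Hypothetical sketch, not the actual
    ArbiterBot/Diplomat implementation."""

    def __init__(self, participants, imbalance_ratio=3.0):
        self.counts = Counter({p: 0 for p in participants})
        self.imbalance_ratio = imbalance_ratio

    def record_message(self, author):
        self.counts[author] += 1

    def quiet_participants(self):
        # Flag anyone contributing far less than the most active speaker.
        top = max(self.counts.values()) or 1
        return [p for p, n in self.counts.items()
                if top / max(n, 1) >= self.imbalance_ratio]

    def prompt(self):
        # Address the quiet participants directly, as a moderator might.
        quiet = self.quiet_participants()
        if quiet:
            return f"@{', @'.join(quiet)} - what do you think so far?"
        return None
```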


2022 · Vol 2 · pp. 7
Author(s): Tessa Beinema, Harm op den Akker, Dennis Hofs, Boris van Schooten

Health coaching applications can include (embodied) conversational agents as coaches. The development of these agents requires interdisciplinary cooperation between eHealth application developers, interaction designers, and domain experts. Proper dialogue authoring tools, and tools to integrate these dialogues into a conversational agent system, are therefore essential to creating successful agent-based applications. However, we found no existing open-source, easy-to-use authoring tools that support multidisciplinary agent development. To that end, we developed the WOOL Dialogue Platform, which provides the eHealth and conversational agent communities with an open-source platform consisting of a set of easy-to-use tools that facilitate virtual agent development. The platform consists of a dialogue definition language, an editor, application development libraries, and a web service. To illustrate the platform’s possibilities and use in practice, we describe two use cases from EU Horizon 2020 research projects. The WOOL Dialogue Platform is an ‘easy to use, and powerful if needed’ platform for developing conversational agent applications, and it is seeing a slow but steady increase in uptake in the eHealth community. Developed to support dialogue authoring for embodied conversational agents in the health coaching domain, the platform’s strong points are its ease of use and its ability to let domain experts and agent technology experts work together, by providing all parties with tools that support their work effectively.
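
The abstract describes a node-based dialogue definition language at the platform's core. As a rough illustration of how such branching coaching dialogues are structured and traversed, here is a generic Python sketch; the node format and field names are assumptions made for illustration and do not reflect WOOL's actual syntax.

```python
# Generic sketch of a branching health-coaching dialogue, in the spirit
# of dialogue definition languages like WOOL's. Node structure, field
# names, and dialogue text are illustrative assumptions only.

dialogue = {
    "start": {
        "agent": "How have you been feeling since our last session?",
        "replies": [("Quite well, thanks", "well"),
                    ("Not so great", "not_well")],
    },
    "well": {
        "agent": "Great to hear! Shall we review your activity goal?",
        "replies": [],
    },
    "not_well": {
        "agent": "I'm sorry to hear that. Would you like to talk about it?",
        "replies": [],
    },
}

def run(node_id="start"):
    """Walk the dialogue graph, printing coach lines and following
    the user's chosen reply until a node has no replies left."""
    while True:
        node = dialogue[node_id]
        print("COACH:", node["agent"])
        if not node["replies"]:
            break
        for i, (text, _) in enumerate(node["replies"]):
            print(f"  [{i}] {text}")
        choice = int(input("> "))
        node_id = node["replies"][choice][1]

if __name__ == "__main__":
    run()
```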


2022 · Vol 2161 (1) · pp. 012039
Author(s): S Moulya, T R Pragathi

The aim of this work was to create a fully functional AI/ML-based conversational agent that behaves like a real-time therapist, analysing the user’s emotion at every step and providing appropriate responses and feedback. AI chatbots, although fairly new to the domain of mental health, can help destigmatize seeking help and are more easily accessible to everyone, at any time. Chatbots provide an effective way to communicate with a user and offer helpful emotional support more economically. While regular psychiatric visits require a fixed appointment of limited duration, which can be time-consuming and is restricted to a fraction of the day, the proposed chatbot can keep track of the user’s mental health on the go, at any time. The application also includes a self-healing kit suggesting various exercises, both mental and physical, that the user can incorporate into day-to-day life. The study goes into further detail on the major implications for future chatbot agent design and assessment.
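
As a concrete illustration of the per-message emotion analysis this abstract describes, here is a minimal Python sketch in which each user message is classified and mapped to a response strategy. The keyword lexicon and response table are invented placeholders, not the authors' model; a real system would use a trained classifier rather than keyword matching.

```python
# Illustrative sketch only: classify each user message into a coarse
# emotion, then select a response strategy. Lexicon and responses are
# invented placeholders, not taken from the paper.

EMOTION_KEYWORDS = {
    "sad": {"sad", "down", "hopeless", "lonely"},
    "anxious": {"anxious", "worried", "panic", "nervous"},
    "angry": {"angry", "furious", "annoyed"},
}

RESPONSES = {
    "sad": "I'm sorry you're feeling low. Would a short breathing exercise help?",
    "anxious": "That sounds stressful. Let's try grounding: name five things you can see.",
    "angry": "It's okay to feel angry. Want to tell me what happened?",
    "neutral": "Thanks for sharing. How has the rest of your day been?",
}

def detect_emotion(message: str) -> str:
    # Flag the first emotion whose keywords overlap the message.
    words = set(message.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def respond(message: str) -> str:
    return RESPONSES[detect_emotion(message)]

print(respond("I feel so hopeless lately"))
```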


2021 · Vol 11 (3-4) · pp. 1-35
Author(s): Sam Hepenstal, Leishi Zhang, Neesha Kodagoda, B. L. William Wong

The adoption of artificial intelligence (AI) systems in environments that involve high-risk, high-consequence decision-making is severely hampered by critical design issues, including system transparency and brittleness. Transparency relates to (i) the explainability of results and (ii) the ability of a user to inspect and verify system goals and constraints; brittleness relates to (iii) the ability of a system to adapt to new user demands. Transparency is a particular concern for criminal intelligence analysis, where significant ethical and trust issues arise when algorithmic and system processes are not adequately understood by a user; this prevents the adoption of potentially useful technologies in policing environments. In this article, we present a novel approach to designing a conversational agent (CA) AI system for intelligence analysis that tackles these issues. We discuss the results and implications of three studies: a Cognitive Task Analysis to understand analyst thinking when retrieving information in an investigation, an Emergent Themes Analysis to understand the explanation needs of different system components, and an interactive experiment with a prototype conversational agent. Our prototype conversational agent, named Pan, demonstrates transparency provision and mitigates brittleness by evolving new CA intentions. We encode interactions with the CA using human factors principles for situation recognition, and we use interactive visual analytics to support analyst reasoning. Our approach enables complex AI systems, such as Pan, to be used in sensitive environments, and our research has broader application than the use case discussed.
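
The abstract's notion of "evolving new CA intentions" can be sketched as a fallback path: when no existing intent matches a query confidently, the query is stored as a candidate new intent class pending analyst review. The matching method, threshold, and intent names below are simplistic placeholders, not Pan's actual mechanism.

```python
# Hedged sketch of intent evolution: unmatched queries become candidates
# for new intent classes. String-similarity matching and the threshold
# are placeholders, not the method used by Pan.

from difflib import SequenceMatcher

class IntentStore:
    def __init__(self, threshold=0.6):
        # Hypothetical seed intent with an example utterance.
        self.intents = {"find_person": "find records about a person"}
        self.candidates = []  # unmatched queries awaiting analyst review
        self.threshold = threshold

    def match(self, query):
        best, score = None, 0.0
        for name, example in self.intents.items():
            s = SequenceMatcher(None, query.lower(), example).ratio()
            if s > score:
                best, score = name, s
        if score >= self.threshold:
            return best
        # No confident match: record the query as a candidate new
        # intention, keeping the analyst in the loop for verification.
        self.candidates.append(query)
        return None
```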


2021
Author(s): Olga iCognito group, Andrey Zakharov

BACKGROUND: In recent years there has been a growth of psychological chatbots performing important functions, from checking symptoms to providing psychoeducation and guiding self-help exercises. Technologically, these chatbots are based on traditional decision-tree algorithms with limited keyword recognition. A key challenge for conversational artificial intelligence is intent recognition: understanding the goal that the user wants to accomplish. A user query on a psychological topic is often emotional, highly contextual, and not goal-oriented, and may therefore contain vague, mixed, or multiple intents.

OBJECTIVE: In this study we attempt to identify and categorize user intents relating to psychological topics.

METHODS: We collected a dataset of 43,000 logs from the iCognito Anti-depression chatbot, consisting of user answers to the chatbot's questions about the reason for their emotional distress. The data was labeled manually. A BERT model was used for classification.

RESULTS: We identified 24 classes of user intents that can be grouped into larger categories, such as: (a) intents to improve emotional state; (b) intents to improve interpersonal relations; (c) intents to improve physical condition; (d) intents to solve practical problems; (e) intents to make a decision; (f) intents to harm oneself or commit suicide; (g) intents to blame or criticize oneself.

CONCLUSIONS: This classification may be used for the development of conversational artificial intelligence in the field of psychotherapy.
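
As an illustration of the classification setup this abstract describes (BERT mapping user messages to 24 intent classes), here is a minimal Hugging Face sketch. The label count follows the abstract; the checkpoint choice and inference details are assumptions, and the classification head would first need fine-tuning on the labeled chatbot logs before its predictions mean anything.

```python
# Sketch of BERT-based intent classification with 24 classes, per the
# abstract. The checkpoint is an assumption (the paper does not name
# one); the freshly initialized head must be fine-tuned on the labeled
# logs before use.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=24)
model.eval()

def classify(message: str) -> int:
    """Return the index (0-23) of the predicted intent class."""
    inputs = tokenizer(message, return_tensors="pt",
                       truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))
```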

