User Expectations of Social Robots in Different Applications: An Online User Study

Author(s):  
Xiao Dou ◽  
Chih-Fu Wu ◽  
Xi Wang ◽  
Jin Niu
Author(s):  
Karen Fucinato ◽  
Elena Lavinia Leustean ◽  
Lilla Fekecs ◽  
Tünde Tárnoková ◽  
Rosalyn M. Langedijk ◽  
...  

Author(s):  
Frank Hegel ◽  
Manja Lohse ◽  
Agnes Swadzba ◽  
Sven Wachsmuth ◽  
Katharina Rohlfing ◽  
...  

2019 ◽  
Vol 10 (1) ◽  
pp. 140-159 ◽  
Author(s):  
Zachary Henkel ◽  
Kenna Baugus ◽  
Cindy L. Bethel ◽  
David C. May

Abstract. This article describes ethical issues related to the design and use of social robots in sensitive contexts such as psychological interventions, and provides insights from one user design study and two controlled experiments with adults and children. User expectations regarding privacy with a therapeutic robotic dog, Therabot, gathered from a 16-participant design study, are presented. Furthermore, results from 142 forensic interviews about bullying experiences, conducted with children (ages 8 to 17) using three different social robots (Nao, Female RoboKind, Male RoboKind) and humans (female and male) as forensic interviewers, are examined to provide insights into children's beliefs about privacy and social judgment in sensitive interactions with social robots. The data collected indicate that adult participants felt a therapeutic robotic dog would be most useful for children compared with other age groups, and that it should include privacy safeguards. Data obtained from children after a forensic interview about their bullying experiences show that they perceive social robots as providing significantly more socially protective factors than adult humans. These findings provide insight into how children perceive social robots and illustrate the need for careful consideration when designing social robots that will be used in sensitive contexts with vulnerable users such as children.


Crisis ◽  
2020 ◽  
pp. 1-9
Author(s):  
Kelly Mazzer ◽  
Megan O'Riordan ◽  
Alan Woodward ◽  
Debra Rickwood

Abstract. Background: Crisis support services play an important role in providing free, immediate support to people in the community experiencing a personal crisis. Recently, services have expanded from telephone to digital modalities, including online chat and text message services. This raises the question of what outcomes are being achieved for increasingly diverse service users across different modalities. Aims: This systematic review aimed to determine the expectations and outcomes of users of crisis support services across three modalities (telephone, online chat, and text message/SMS). Method: Online databases (CINAHL, MEDLINE, PsycARTICLES, PsycINFO, Psychological and Behavioural Sciences Collection) and gray literature were searched for studies measuring expectations and outcomes of crisis support services. Results: A total of 31 studies were included in the review, the majority of which were telephone-based. Similar user expectations were found for the telephone and online chat modalities, as well as consistently positive outcomes, measured by changes in emotional state, satisfaction, and referral plans. Limitations/Conclusion: There is a paucity of consistent outcome measures across and within modalities, and research on users of text message/SMS services remains limited.


2020 ◽  
Author(s):  
Chiara de Jong ◽  
Rinaldo Kühne ◽  
Jochen Peter ◽  
Caroline L. van Straten ◽  
Alex Barco

Author(s):  
Alistair M. C. Isaac ◽  
Will Bridewell

It is easy to see that social robots will need the ability to detect and evaluate deceptive speech; otherwise, they will be vulnerable to manipulation by malevolent humans. More surprisingly, we argue that effective social robots must also be able to produce deceptive speech. Many forms of technically deceptive speech perform a positive pro-social function, and the social integration of artificial agents will be possible only if they participate in this market of constructive deceit. We demonstrate that a crucial condition for detecting and producing deceptive speech is possession of a theory of mind. Furthermore, strategic reasoning about deception requires identifying a type of goal distinguished by its priority over the norms of conversation, which we call an ulterior motive. We argue that this goal, not the veridicality of speech per se, is the appropriate target for ethical evaluation. Consequently, deception-capable robots are compatible with the most prominent programs to ensure that robots behave ethically.


2008 ◽  
Vol 66 (5) ◽  
pp. 318-332 ◽  
Author(s):  
Jaka Sodnik ◽  
Christina Dicke ◽  
Sašo Tomažič ◽  
Mark Billinghurst
