Of Mental States and Machine Learning

Author(s):  
Andrew Best ◽  
Samantha F. Warta ◽  
Katelynn A. Kapalo ◽  
Stephen M. Fiore

Using research in social cognition as a foundation, we studied rapid versus reflective mental state attributions and the degree to which machine learning classifiers can be trained to make such judgments. We observed differences in response times between conditions, but did not find significant differences in the accuracy of mental state attributions. We additionally demonstrate how to train machine classifiers to identify mental states. We discuss advantages of using an interdisciplinary approach to understand and improve human-robot interaction and to further the development of social cognition in artificial intelligence.
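The abstract's idea of training a classifier to make mental state attributions can be illustrated with a minimal sketch. The features (gaze duration, smile intensity), labels, and nearest-centroid method here are hypothetical stand-ins, not the study's actual data or model:

```python
from statistics import mean

# Hypothetical training data: each sample is (gaze_duration_s, smile_intensity),
# labeled with the mental state a human rater attributed to the agent.
train = [
    ((2.1, 0.8), "happy"),
    ((1.9, 0.9), "happy"),
    ((0.4, 0.1), "distracted"),
    ((0.5, 0.2), "distracted"),
]

def centroids(samples):
    """Average the feature vectors for each mental-state label."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: tuple(mean(dim) for dim in zip(*vecs))
            for label, vecs in by_label.items()}

def classify(features, cents):
    """Assign the label whose centroid is nearest (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cents, key=lambda label: dist(features, cents[label]))

cents = centroids(train)
print(classify((2.0, 0.7), cents))  # resembles the "happy" cluster
```

Any supervised learner could replace the nearest-centroid rule; the point is only that human attribution judgments become the training labels.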

2020 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Jairo Pérez-Osorio ◽  
Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI’20 conference workshop “Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction” (Wykowska, Perez-Osorio & Kopp, 2020). Unfortunately, due to the rapid spread of the novel coronavirus at the beginning of 2020, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the quality and presence of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing our understanding of the factors and consequences of social interactions with artificial agents.


AI Magazine ◽  
2015 ◽  
Vol 36 (3) ◽  
pp. 107-112
Author(s):  
Adam B. Cohen ◽  
Sonia Chernova ◽  
James Giordano ◽  
Frank Guerin ◽  
Kris Hauser ◽  
...  

The AAAI 2014 Fall Symposium Series was held Thursday through Saturday, November 13–15, at the Westin Arlington Gateway in Arlington, Virginia, adjacent to Washington, DC. The titles of the seven symposia were Artificial Intelligence for Human-Robot Interaction; Energy Market Prediction; Expanding the Boundaries of Health Informatics Using AI; Knowledge, Skill, and Behavior Transfer in Autonomous Robots; Modeling Changing Perspectives: Reconceptualizing Sensorimotor Experiences; Natural Language Access to Big Data; and The Nature of Humans and Machines: A Multidisciplinary Discourse. The highlights of each symposium are presented in this report.


AI Magazine ◽  
2017 ◽  
Vol 37 (4) ◽  
pp. 83-88
Author(s):  
Christopher Amato ◽  
Ofra Amir ◽  
Joanna Bryson ◽  
Barbara Grosz ◽  
Bipin Indurkhya ◽  
...  

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2016 Spring Symposium Series on Monday through Wednesday, March 21-23, 2016, at Stanford University. The titles of the seven symposia were (1) AI and the Mitigation of Human Error: Anomalies, Team Metrics and Thermodynamics; (2) Challenges and Opportunities in Multiagent Learning for the Real World; (3) Enabling Computing Research in Socially Intelligent Human-Robot Interaction: A Community-Driven Modular Research Platform; (4) Ethical and Moral Considerations in Non-Human Agents; (5) Intelligent Systems for Supporting Distributed Human Teamwork; (6) Observational Studies through Social Media and Other Human-Generated Content; and (7) Well-Being Computing: AI Meets Health and Happiness Science.


2021 ◽  
Author(s):  
Hazel Darney

With the rapid uptake of machine learning and artificial intelligence in our daily lives, we are beginning to realise the risks involved in implementing this technology in high-stakes decision making. This risk arises because machine learning decisions are based on human-curated datasets, meaning these decisions are not bias-free. Machine learning datasets put women at a disadvantage due to factors including (but not limited to) the historical exclusion of women from data collection, research, and design, as well as the low participation of women in artificial intelligence fields. These factors mean that applications of machine learning may fail to treat the needs and experiences of women as equal to those of men.

Research into gender biases in machine learning frequently occurs within the computer science field. This has often resulted in research where bias is inconsistently defined and proposed techniques do not engage with relevant literature outside of the artificial intelligence field. This research proposes a novel, interdisciplinary approach to the measurement and validation of gender biases in machine learning. The approach translates methods of human-based gender bias measurement from psychology into a gender bias questionnaire for use on a machine rather than a human.

The final output system of this research, as a proof of concept, demonstrates the potential of this new approach to gender bias investigation. The system takes advantage of the qualitative nature of language to provide a new way of understanding gender data biases, outputting both quantitative and qualitative results. These results can then be meaningfully translated into their real-world implications.
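The questionnaire idea, with its paired quantitative and qualitative outputs, can be sketched as follows. The probe items, the stub word-association "model", and the scoring rule are all hypothetical illustrations of the approach, not the thesis's actual instrument:

```python
# Stand-in for a trained word-association model: occupation -> top associations.
stub_model = {
    "doctor": ["he", "hospital", "nurse"],
    "nurse": ["she", "hospital", "care"],
    "engineer": ["he", "build", "math"],
}

GENDERED = {"he": "male", "she": "female"}

def administer(model, items):
    """Return a quantitative bias score plus the qualitative evidence per item."""
    evidence = {}
    for item in items:
        evidence[item] = [GENDERED[w] for w in model.get(item, []) if w in GENDERED]
    male = sum(g.count("male") for g in evidence.values())
    female = sum(g.count("female") for g in evidence.values())
    total = male + female
    score = (male - female) / total if total else 0.0  # +1 all-male, -1 all-female
    return score, evidence

score, evidence = administer(stub_model, ["doctor", "nurse", "engineer"])
print(score)     # net male-leaning score across the three items
print(evidence)  # the qualitative associations behind the number
```

The quantitative score summarizes the skew, while the per-item evidence preserves the language-level detail the abstract argues is needed to interpret it.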


2019 ◽  
Vol 30 (1) ◽  
pp. 7-8
Author(s):  
Dora Maria Ballesteros

Artificial intelligence (AI) is an interdisciplinary subject in science and engineering that makes it possible for machines to learn from data. Artificial Intelligence applications include prediction, recommendation, classification and recognition, object detection, natural language processing, autonomous systems, among others. The topics of the articles in this special issue include deep learning applied to medicine [1, 3], support vector machine applied to ecosystems [2], human-robot interaction [4], clustering in the identification of anomalous patterns in communication networks [5], expert systems for the simulation of natural disaster scenarios [6], real-time algorithms of artificial intelligence [7] and big data analytics for natural disasters [8].


Author(s):  
Sophia von Salm-Hoogstraeten ◽  
Jochen Müsseler

Objective: The present study investigated whether and how different human–robot interactions in a physically shared workspace influenced human stimulus–response (SR) relationships.
Background: Human work is increasingly performed in interaction with advanced robots. Since human–robot interaction often takes place in physical proximity, it is crucial to investigate the effects of the robot on human cognition.
Method: In two experiments, we compared conditions in which humans interacted with a robot that they either remotely controlled or monitored under otherwise comparable conditions in the same shared workspace. The cognitive extent to which the participants took the robot’s perspective served as a dependent variable and was evaluated with an SR compatibility task.
Results: The results showed pronounced compatibility effects from the robot’s perspective when participants had to take the perspective of the robot during the task, but significantly reduced compatibility effects when human and robot did not interact. In both experiments, compatibility effects from the robot’s perspective resulted in statistically significant differences in response times and in error rates between compatible and incompatible conditions.
Conclusion: We concluded that SR relationships from the perspective of the robot need to be considered when designing shared workspaces that require users to take the perspective of the robot.
Application: The results indicate changed compatibility relationships when users share their workplace with an interacting robot and therefore have to take its perspective from time to time. The perspective-dependent processing times are expected to be accompanied by corresponding error rates, which might affect, for instance, safety and efficiency in a production process.
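The compatibility effect reported above is conventionally computed as the mean response-time difference between incompatible and compatible trials. A minimal sketch, using made-up response times rather than the study's data:

```python
from statistics import mean

# Illustrative response times in milliseconds (invented values).
compatible_rts = [412, 398, 430, 405]
incompatible_rts = [455, 470, 448, 462]

def compatibility_effect(compatible, incompatible):
    """Mean RT on incompatible trials minus mean RT on compatible trials.
    A larger positive value indicates a stronger compatibility effect."""
    return mean(incompatible) - mean(compatible)

print(compatibility_effect(compatible_rts, incompatible_rts))  # 47.5 ms
```

The same subtraction applied to error rates gives the accuracy-based effect; the study reports significant differences on both measures.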


2020 ◽  
Vol 43 (6) ◽  
pp. 373-384 ◽  
Author(s):  
Anna Henschel ◽  
Ruud Hortensius ◽  
Emily S. Cross
