Intergroup preference, not dehumanization, explains social biases in emotion attribution

Cognition ◽  
2021 ◽  
Vol 216 ◽  
pp. 104865
Author(s):  
Florence E. Enock ◽  
Steven P. Tipper ◽  
Harriet Over


Author(s):  
Asuka Kaneko ◽  
Yui Asaoka ◽  
Young-A Lee ◽  
Yukiori Goto

Abstract Background Decision-making and judgments in our social activities are often erroneous and irrational, a phenomenon known as social biases. However, the cognitive and affective processes that produce such biases remain largely unknown. In this study, we investigated associations between social schemas entailing social biases, such as social judgment and conformity, and psychological measurements relevant to cognitive and affective functions. Method Forty-two healthy adult subjects were recruited for this study. A psychological test and a questionnaire were administered to assess biased social judgments based on superficial attributes and social conformity as adherence to social norms, respectively, along with additional questionnaires and psychological tests for cognitive and affective measurements, including negative affects, autistic traits, and Theory of Mind (ToM). Associations of social judgment and conformity with cognitive and affective functions were examined with multiple regression analysis and structural equation modeling. Results Anxiety and the cognitive realm of ToM were associated with both social judgments and conformity, although social judgments and conformity remained independent processes. Social judgments were also associated with autistic traits and the affective realm of ToM, whereas social conformity was associated with negative affects other than anxiety and with an intuitive decision-making style. Conclusions These results suggest that ToM and negative affects may play important roles in social judgments and conformity, and in the social biases connoted in these social schemas.
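
As a rough illustration of the regression step named in the Method (the structural equation model is omitted), the sketch below fits the kind of multiple regressions the abstract describes. The data file and all column names are hypothetical placeholders, not the study's actual variables.

```python
# A minimal sketch, assuming hypothetical column names, of regressing social
# judgment and conformity scores on cognitive/affective measures as described
# in the abstract (multiple regression only; the SEM step is not shown).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("participants.csv")  # hypothetical file, one row per subject (N = 42)

# Social judgment regressed on anxiety, autistic traits, and both realms of ToM.
judgment_model = smf.ols(
    "social_judgment ~ anxiety + autistic_traits + tom_cognitive + tom_affective",
    data=df,
).fit()
print(judgment_model.summary())

# Social conformity regressed on the measures the abstract links to it.
conformity_model = smf.ols(
    "social_conformity ~ anxiety + negative_affect + tom_cognitive + intuitive_style",
    data=df,
).fit()
print(conformity_model.summary())
```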


Author(s):  
Eva Wiese ◽  
Patrick P. Weis ◽  
Yochanan Bigman ◽  
Kyra Kapsaskis ◽  
Kurt Gray

Abstract Robots are becoming more available for workplace collaboration, but many questions remain. Are people actually willing to assign collaborative tasks to robots? And if so, exactly which tasks will they assign to what kinds of robots? Here we leverage psychological theories on person-job fit and mind perception to investigate task assignment in human–robot collaborative work. We propose that people will assign robots to jobs based on their “perceived mind,” and also that people will show predictable social biases in their collaboration decisions. In this study, participants performed an arithmetic (i.e., calculating differences) and a social (i.e., judging emotional states) task, either alone or by collaborating with one of two robots: an emotionally capable robot or an emotionally incapable robot. Rates of collaboration (i.e., assigning the robot to generate the answer) were high across all trials, especially for tasks that participants found challenging (i.e., the arithmetic task). Collaboration was predicted by perceived robot-task fit, such that the emotional robot was assigned the social task. Interestingly, the arithmetic task was assigned more often to the emotionally incapable robot, despite the emotionally capable robot being equally capable of computation. This is consistent with social biases (e.g., gender bias) in mind perception and person-job fit. The theoretical and practical implications of this work for HRI are discussed.
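
To make the collaboration analysis concrete, here is a minimal sketch of one way such trial-level decisions could be modeled: a logistic regression of the choice to delegate a trial, with task type, robot type, and their interaction as predictors. The data file, column names, and model specification are assumptions, not the authors' pipeline, and repeated measures per participant are ignored for brevity.

```python
# A minimal sketch, with hypothetical data and column names, of modeling the
# delegation decisions described in the abstract as a logistic regression.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("collaboration_trials.csv")
# assumed columns:
#   delegated      1 if the participant assigned the trial to the robot, else 0
#   task           "arithmetic" or "social"
#   robot          "emotional" or "non_emotional"
#   perceived_fit  participant's rating of robot-task fit

model = smf.logit("delegated ~ C(task) * C(robot) + perceived_fit", data=trials).fit()
print(model.summary())
# Under the reported pattern, the task-by-robot interaction should show the
# social task going to the emotionally capable robot and the arithmetic task
# disproportionately to the emotionally incapable one.
```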


2017 ◽  
Vol 114 (37) ◽  
pp. 9848-9853 ◽  
Author(s):  
Bruno Abrahao ◽  
Paolo Parigi ◽  
Alok Gupta ◽  
Karen S. Cook

To provide social exchange on a global level, sharing-economy companies leverage interpersonal trust between their members on a scale unimaginable even a few years ago. A challenge to this mission is the presence of social biases among a large heterogeneous and independent population of users, a factor that hinders the growth of these services. We investigate whether and to what extent a sharing-economy platform can design artificially engineered features, such as reputation systems, to override people’s natural tendency to base judgments of trustworthiness on social biases. We focus on the common tendency to trust others who are similar (i.e., homophily) as a source of bias. We test this argument through an online experiment with 8,906 users of Airbnb, a leading hospitality company in the sharing economy. The experiment is based on an interpersonal investment game, in which we vary the characteristics of recipients to study trust through the interplay between homophily and reputation. Our findings show that reputation systems can significantly increase the trust between dissimilar users and that risk aversion has an inverse relationship with trust given high reputation. We also present evidence that our experimental findings are confirmed by analyses of 1 million actual hospitality interactions among users of Airbnb.
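
A minimal sketch of the homophily-by-reputation logic, assuming hypothetical variable names: regress the amount invested (the behavioral trust proxy) on recipient similarity, displayed reputation, and their interaction. This illustrates the design, not the authors' analysis code.

```python
# A minimal sketch, with hypothetical column names, of testing whether displayed
# reputation offsets the homophily effect on trust in the investment game.
import pandas as pd
import statsmodels.formula.api as smf

games = pd.read_csv("investment_game.csv")
# assumed columns:
#   invested    amount sent to the recipient (the behavioral trust measure)
#   similar     1 if the recipient resembles the participant, else 0
#   reputation  "none", "low", or "high" reputation shown for the recipient

model = smf.ols("invested ~ similar * C(reputation)", data=games).fit()
print(model.summary())
# Under the paper's conclusion, high reputation should raise investment in
# dissimilar recipients enough to narrow or close the homophily gap.
```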


2018 ◽  
Vol 45 (8) ◽  
pp. 1232-1251 ◽  
Author(s):  
Jordan R. Axt ◽  
Grace Casola ◽  
Brian A. Nosek

Social judgment is shaped by multiple biases operating simultaneously, but most bias-reduction interventions target only a single social category. In seven preregistered studies (total N > 7,000), we investigated whether asking participants to avoid one social bias affected that and other social biases. Participants selected honor society applicants based on academic credentials. Applicants also differed on social categories irrelevant for selection: attractiveness and ingroup status. Participants asked to avoid potential bias in one social category showed small but reliable reductions in bias for that category (r = .095), but showed near-zero bias reduction on the unmentioned social category (r = .006). Asking participants to avoid many possible social biases or alerting them to bias without specifically identifying a category did not consistently reduce bias. The effectiveness of interventions for reducing social biases may be highly specific, perhaps even contingent on explicitly and narrowly identifying the potential source of bias.
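
The reported effect sizes can be read as correlations between the warning condition and a per-participant bias score, computed separately for the mentioned and the unmentioned category. A minimal sketch of that comparison, with hypothetical file and column names, follows.

```python
# A minimal sketch, with hypothetical column names, of quantifying how specific
# a bias warning is: correlate the warning condition with bias scores for the
# mentioned category and for the unmentioned category.
import pandas as pd
from scipy import stats

d = pd.read_csv("honor_society_judgments.csv")
# assumed columns:
#   warned_attractiveness   1 if warned about attractiveness bias, else 0
#   bias_attractiveness     selection-rate advantage for attractive applicants
#   bias_ingroup            selection-rate advantage for ingroup applicants

# Effect of the warning on the mentioned category (the abstract reports r = .095)
r_target, p_target = stats.pointbiserialr(d["warned_attractiveness"],
                                          d["bias_attractiveness"])
# Effect of the same warning on the unmentioned category (reported r = .006)
r_other, p_other = stats.pointbiserialr(d["warned_attractiveness"],
                                        d["bias_ingroup"])
print(f"mentioned category:   r = {r_target:.3f}, p = {p_target:.3f}")
print(f"unmentioned category: r = {r_other:.3f}, p = {p_other:.3f}")
```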


2018 ◽  
Vol 270 ◽  
pp. 554-559 ◽  
Author(s):  
Verónica Romero-Ferreiro ◽  
Luis Aguado ◽  
Iosune Torío ◽  
Eva M. Sánchez-Morla ◽  
Montserrat Caballero-González ◽  
...  


2010 ◽  
Vol 39 (4) ◽  
pp. 437-456 ◽  
Author(s):  
Molly Babel

Abstract Recent research has been concerned with whether speech accommodation is an automatic process or determined by social factors (e.g. Trudgill 2008). This paper investigates phonetic accommodation in New Zealand English (NZE) when speakers of NZE are responding to an Australian English (AuE) talker in a speech production task. NZ participants were randomly assigned to either a Positive or Negative group, where they were either flattered or insulted by the Australian. Overall, the NZE speakers accommodated to the speech of the AuE speaker. The flattery/insult manipulation did not influence degree of accommodation, but accommodation was predicted by participants' scores on an Implicit Association Task that measured Australia and New Zealand biases. Participants who scored with a pro-Australia bias were more likely to accommodate to the speech of the AuE speaker. Social biases, reflecting how a participant feels about a speaker, predicted the extent of accommodation. These biases are, crucially, simultaneously automatic and social. (Speech accommodation, phonetic convergence, New Zealand English, dialect contact)
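
For readers unfamiliar with implicit-association scoring, the sketch below computes a conventional D-style score (block latency difference divided by a pooled standard deviation) and correlates it with a per-participant convergence measure. The data layout, file names, and scoring details are assumptions; Babel's own scoring procedure may differ.

```python
# A minimal sketch of a D-style implicit association score and its correlation
# with a hypothetical per-participant phonetic convergence measure.
import pandas as pd

iat = pd.read_csv("iat_trials.csv")
# assumed columns:
#   participant, block ("compatible" or "incompatible"), rt_ms (response latency)

def d_score(trials: pd.DataFrame) -> float:
    """Mean latency difference between blocks divided by the pooled SD."""
    compatible = trials.loc[trials["block"] == "compatible", "rt_ms"]
    incompatible = trials.loc[trials["block"] == "incompatible", "rt_ms"]
    pooled_sd = pd.concat([compatible, incompatible]).std()
    return (incompatible.mean() - compatible.mean()) / pooled_sd

scores = iat.groupby("participant").apply(d_score)

# Hypothetical convergence measure per participant (e.g. an acoustic shift score
# from the production task); under the reported result, a pro-Australia bias
# should predict greater accommodation.
convergence = pd.read_csv("convergence.csv", index_col="participant")["shift"]
print(scores.corr(convergence))
```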


PLoS ONE ◽  
2016 ◽  
Vol 11 (11) ◽  
pp. e0165546 ◽  
Author(s):  
Junghee Lee ◽  
William P. Horan ◽  
Jonathan K. Wynn ◽  
Michael F. Green

2019 ◽  
Vol 34 (1) ◽  
pp. 81-103 ◽  
Author(s):  
Rina M. Hirsch

ABSTRACT Due to limitations in IT expertise, auditors frequently rely upon IT specialists during audit engagements. Does social similarity between the auditor and an IT specialist induce social biases that affect the auditor's reliance on the specialist? Using an experiment with 60 auditors, I examine how financial auditors' reliance on IT specialists is affected by two dimensions of social similarity: the IT specialist's spatial distance (in-house office location versus sourcing from another office) and domain knowledge distinctiveness (distinct versus overlapping) relative to financial auditors. My findings provide evidence of a possible boundary condition to the widely accepted social identity theory by documenting the interaction of two dimensions of social similarity on auditor behavior. Specifically, when IT specialists possess distinct (overlapping) domain knowledge, auditors place greater (similar) reliance on out-of-office specialists relative to in-house specialists.
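
A minimal sketch of the 2 × 2 design described here, assuming hypothetical column names: a two-way ANOVA of reliance on spatial distance and knowledge distinctiveness, where the interaction term carries the reported effect. This is an illustration of the design, not the author's analysis.

```python
# A minimal sketch, with hypothetical column names, of the 2 x 2 analysis implied
# by the design: reliance as a function of spatial distance, domain knowledge
# distinctiveness, and their interaction.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

audits = pd.read_csv("auditor_reliance.csv")
# assumed columns:
#   reliance   auditor's reliance on the IT specialist's recommendation
#   distance   "in_house" or "out_of_office"
#   knowledge  "distinct" or "overlapping"

model = smf.ols("reliance ~ C(distance) * C(knowledge)", data=audits).fit()
print(anova_lm(model, typ=2))
# The reported pattern is an interaction: greater reliance on out-of-office
# specialists only when their domain knowledge is distinct from the auditor's.
```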

