Algorithms as discrimination detectors

2020 ◽  
Vol 117 (48) ◽  
pp. 30096-30100 ◽  
Author(s):  
Jon Kleinberg ◽  
Jens Ludwig ◽  
Sendhil Mullainathan ◽  
Cass R. Sunstein

Preventing discrimination requires that we have means of detecting it, and this can be enormously difficult when human beings are making the underlying decisions. As applied today, algorithms can increase the risk of discrimination. But as we argue here, algorithms by their nature require a far greater level of specificity than is usually possible with human decision making, and this specificity makes it possible to probe aspects of the decision in additional ways. With the right changes to legal and regulatory systems, algorithms can thus potentially make it easier to detect—and hence to help prevent—discrimination.
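The specificity the authors describe can be made concrete: because an algorithmic decision rule is a fixed function, it can be probed directly, for example by changing only a protected attribute and re-scoring. The sketch below is a hypothetical illustration (the `score_applicant` rule and its features are invented, not from the article).

```python
# Hypothetical illustration: with a fixed algorithmic decision rule, we can
# probe it counterfactually, e.g. by flipping a protected attribute and
# re-scoring. `score_applicant` is an invented stand-in for a deployed model.

def score_applicant(features: dict) -> float:
    # Toy linear rule; a real model would be fit to data.
    score = 0.5 * features["credit_history"] + 0.3 * features["income"]
    if features["group"] == "B":          # a discriminatory dependence
        score -= 0.2
    return score

def counterfactual_gap(features: dict) -> float:
    """Change only the protected attribute and measure the score change."""
    flipped = dict(features, group="B" if features["group"] == "A" else "A")
    return score_applicant(features) - score_applicant(flipped)

applicant = {"credit_history": 0.8, "income": 0.6, "group": "B"}
print(round(counterfactual_gap(applicant), 2))  # a nonzero gap flags dependence
```

No comparable probe exists for an opaque human decision, which is the asymmetry the abstract turns on.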

2022 ◽  
pp. 231-246
Author(s):  
Swati Bansal ◽  
Monica Agarwal ◽  
Deepak Bansal ◽  
Santhi Narayanan

Artificial intelligence is already here in all facets of work life. Its integration into human resources is a necessary process with far-reaching benefits. It has its challenges, but to survive in the current Industry 4.0 environment and prepare for the future Industry 5.0, organisations must embed AI in their HR systems. AI can benefit every HR function, from talent acquisition and onboarding through to off-boarding. Its importance only grows given the needs and career aspirations of Generations Y and Z entering the workforce. Though employees have apprehensions about privacy and job loss, AI, if implemented effectively, is the present and the future. AI will not cost people their jobs; instead, it will require HR professionals to upgrade their skills and spend their time in more strategic roles. In the end, it is HR professionals who will make the final decisions from the information they get from AI tools. The right mix of human decision-making skills and AI will give organisations the right direction to move forward.


2018 ◽  
Vol 1 (1) ◽  
Author(s):  
Asbjørn Sonne Nørgaard

Both Herbert A. Simon and Anthony Downs borrowed heavily from psychology to develop more accurate theories of Administrative Behavior outside and Inside Bureaucracy: Simon, to explicate the cognitive shortcomings in human rationality and their implications; and Downs, to argue that public officials, like other human beings, vary in their psychological needs and motivations and, therefore, behave differently in similar situations. I examine how recent psychological research adds important nuances to the psychology of human decision-making and behavior and points in somewhat different directions from those taken by Simon and Downs. Cue-taking, fast and intuitive thinking, and emotions play a larger role in human judgment and decision-making than Simon suggested with his notion of bounded rationality. Personality trait theory provides a more general and solid underpinning for understanding individual differences in behavior, both inside and outside bureaucracy, than the 'types of officials' that Downs discussed. I present an agenda for a behavioral public administration that takes key issues in cognitive psychology and personality psychology into account.


Author(s):  
Adrian F. Loera-Castro ◽  
Jaime Sanchez ◽  
Jorge Restrepo ◽  
Angel Fabián Campoya Morales ◽  
Julian I. Aguilar-Duque

The latter includes customizing the user interface, as well as the way the system retrieves and processes cases afterward. The resulting cases may be shown to the user in different ways, and/or the retrieved cases may be adapted. This chapter presents an intelligent model for decision making based on case-based reasoning to solve an existing problem in the planning of distribution in the supply chain between a distribution center and a chain of supermarkets. First, the authors discuss the need for intelligent systems in decision-making processes, which arises from the limitations of conventional human decision making: human expertise is scarce, people tire under the burden of physical or mental work, they forget crucial details of a problem, and they are often inconsistent in their daily decisions.
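The retrieve-and-adapt cycle at the heart of case-based reasoning can be sketched in a few lines. This is a minimal illustration with an invented case schema, not the authors' model: past distribution-planning cases are stored as (features, solution) pairs, the nearest case is retrieved by similarity, and its solution is reused.

```python
# Minimal case-based reasoning sketch (hypothetical case schema, not the
# authors' implementation): retrieve the most similar past case, then reuse
# its solution for the new problem.

from math import dist

# Each case: (problem features, solution). The two features might encode,
# say, normalised demand level and route distance.
case_base = [
    ((0.9, 0.2), "full truckload, direct route"),
    ((0.3, 0.8), "partial load, consolidated route"),
    ((0.5, 0.5), "partial load, direct route"),
]

def retrieve(problem):
    """Nearest-neighbour retrieval by Euclidean distance over features."""
    return min(case_base, key=lambda case: dist(case[0], problem))

def solve(problem):
    features, solution = retrieve(problem)
    # Trivial "adaptation": reuse the retrieved solution as-is; a full CBR
    # system would also revise it for the new context and retain the result.
    return solution

print(solve((0.8, 0.3)))
```

A full system would add the revise and retain steps of the classic CBR cycle; the sketch covers only retrieval and reuse.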


Author(s):  
Joseph Kizza ◽  
Florence Migga Kizza

We closed the last chapter on a note about building a good ethical framework and its central role in securing the information infrastructure. A good ethical framework is essential for good decision making, and decision making is a staple of human life. As we grow more and more dependent on computer technology, we are slowly delegating to it our right to make rational decisions and our right to reason. In so doing, we are abdicating our responsibilities as human beings. Human autonomy, the ability to make rational decisions, is the essence of life. If you cannot make personal decisions for your day-to-day living, based on the principle of duty of care, you may as well be called the living dead. This chapter focuses on decision making and on how character education, that is, ethics education, together with codes of conduct, helps create an ethical framework essential for good decision making.


2021 ◽  
pp. 1-23
Author(s):  
Lisa Herzog

Abstract More and more decisions in our societies are made by algorithms. What are such decisions like, and how do they compare to human decision-making? I contrast central features of algorithmic decision-making with three key elements—plurality, natality, and judgment—of Hannah Arendt's political thought. In “Arendtian practices,” human beings come together as equals, exchange arguments, and make joint decisions, sometimes bringing something new into the world. With algorithmic decision-making taking over more and more areas of life, opportunities for “Arendtian practices” are under threat. Moreover, there is the danger that algorithms are tasked with decisions for which they are ill-suited. Analyzing the contrast with Arendt's thinking can be a starting point for delineating realms in which algorithmic decision-making should or should not be welcomed.


2019 ◽  
Author(s):  
Lei Zhang ◽  
Jan P. Gläscher

Abstract Humans learn from their own trial-and-error experience and from observing others. However, it remains unanswered how brain circuits compute expected values when direct learning and social learning coexist in an uncertain environment. Using a multi-player reward learning paradigm with 185 participants (39 being scanned) in real-time, we observed that individuals succumbed to the group when confronted with dissenting information, but increased their confidence when observing confirming information. Leveraging computational modeling and fMRI we tracked direct valuation through experience and vicarious valuation through observation, and their dissociable, but interacting neural representations in the ventromedial prefrontal cortex and the anterior cingulate cortex, respectively. Their functional coupling with the right temporoparietal junction representing instantaneous social information instantiated a hitherto uncharacterized social prediction error, rather than a reward prediction error, in the putamen. These findings suggest that an integrated network involving the brain’s reward hub and social hub supports social influence in human decision-making.
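The distinction between the two error signals can be sketched with a toy trial update. This is an invented simplification (parameters and the form of the social term are assumptions, not the paper's actual computational model): a reward prediction error drives direct learning from one's own outcome, while a social prediction error tracks how far the group's behavior deviates from one's current valuation.

```python
# Toy single-trial update combining direct and social learning (invented
# parameters; a crude simplification of the kind of model the paper fits).

def update(v, reward, group_frac_same, alpha=0.3, beta=0.3):
    """One trial for the chosen option's value v in [0, 1].

    reward: obtained outcome (0 or 1).
    group_frac_same: fraction of the group that chose the same option.
    """
    rpe = reward - v            # reward prediction error (direct learning)
    spe = group_frac_same - v   # social prediction error (sketch of the
                                # vicarious, observation-driven signal)
    v_new = v + alpha * rpe + beta * spe
    return v_new, rpe, spe

# A dissenting group (few others choosing the same option) pulls the value
# down even when one's own outcome was rewarded:
v_new, rpe, spe = update(v=0.6, reward=1.0, group_frac_same=0.25)
```

The point of the sketch is only that the two error terms are computed from different inputs (own outcome vs. observed group behavior), mirroring the paper's dissociation between reward and social prediction errors.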




2019 ◽  
Vol 8 (10) ◽  
pp. 281 ◽  
Author(s):  
Emily Keddell

Algorithmic tools are increasingly used in child protection decision-making. Fairness considerations of algorithmic tools usually focus on statistical fairness, but there are broader justice implications relating to the data used to construct source databases and to how algorithms are incorporated into complex sociotechnical decision-making contexts. This article explores how the data that inform child protection algorithms are produced and relates this production to both traditional notions of statistical fairness and broader justice concepts. Predictive tools face a number of challenging problems in the child protection context: the data they draw on do not represent child abuse incidence across the population, and child abuse itself is difficult to define, so the key decisions that become data are variable and subjective. Algorithms using these data have distorted feedback loops and can contain inequalities and biases. The challenge to justice concepts is that individual and group rights to non-discrimination are threatened as the algorithm itself becomes skewed, leading to inaccurate risk predictions that draw on spurious correlations. The right to be treated as an individual is threatened when statistical risk is based on a group categorisation, and the right of families to understand and participate in the decisions made about them is difficult to uphold when they have not consented to data linkage and the function of the algorithm is obscured by its complexity. The use of uninterpretable algorithmic tools may create ‘moral crumple zones’, where practitioners are held responsible for decisions even when those decisions are partially determined by an algorithm. Many of these criticisms can also be levelled at human decision makers in the child protection system, but the reification of these processes within algorithms renders their articulation even more difficult and can diminish other important relational and ethical aims of social work practice.
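The "statistical fairness" the article contrasts with broader justice concerns is typically a group disparity metric. A minimal sketch, with invented data (not from the article): compare the false-positive rate of a risk tool across groups, i.e. how often unsubstantiated cases were flagged as high risk in each group.

```python
# Minimal statistical-fairness check of the kind the article contrasts with
# broader justice concerns: false-positive rates of a risk tool by group.
# The records below are invented for illustration.

from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, flagged_high_risk, substantiated)."""
    fp = defaultdict(int)   # flagged but not substantiated
    neg = defaultdict(int)  # all non-substantiated cases
    for group, flagged, substantiated in records:
        if not substantiated:
            neg[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

records = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", False, False),
]
rates = false_positive_rates(records)
print(rates)  # group B is flagged at twice group A's rate in this toy data
```

Equal rates by this metric would still leave untouched the article's broader concerns: how the underlying records were produced, and whether families consented to the data linkage at all.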


2020 ◽  
Vol 6 (34) ◽  
pp. eabb4159
Author(s):  
Lei Zhang ◽  
Jan Gläscher

Humans learn from their own trial-and-error experience and from observing others. However, it remains unknown how brain circuits compute expected values when direct learning and social learning coexist in uncertain environments. Using a multiplayer reward learning paradigm with 185 participants (39 being scanned) in real time, we observed that individuals succumbed to the group when confronted with dissenting information but increased their confidence when observing confirming information. Leveraging computational modeling and functional magnetic resonance imaging, we tracked direct valuation through experience and vicarious valuation through observation and their dissociable, but interacting neural representations in the ventromedial prefrontal cortex and the anterior cingulate cortex, respectively. Their functional coupling with the right temporoparietal junction representing instantaneous social information instantiated a hitherto uncharacterized social prediction error, rather than a reward prediction error, in the putamen. These findings suggest that an integrated network involving the brain’s reward hub and social hub supports social influence in human decision-making.


2018 ◽  
Vol 10 ◽  
pp. 113-174 ◽  
Author(s):  
Jon Kleinberg ◽  
Jens Ludwig ◽  
Sendhil Mullainathan ◽  
Cass R Sunstein

Abstract The law forbids discrimination. But the ambiguity of human decision-making often makes it hard for the legal system to know whether anyone has discriminated. To understand how algorithms affect discrimination, we must understand how they affect the detection of discrimination. With the appropriate requirements in place, algorithms create the potential for new forms of transparency and hence opportunities to detect discrimination that are otherwise unavailable. The specificity of algorithms also makes transparent tradeoffs among competing values. This implies that algorithms are not only a threat to be regulated; with the right safeguards, they can also be a positive force for equity.

