The Future Impact of Artificial Intelligence on Humans and Human Rights

2019 ◽  
Vol 33 (02) ◽  
pp. 141-158 ◽  
Author(s):  
Steven Livingston ◽  
Mathias Risse

Abstract What are the implications of artificial intelligence (AI) for human rights in the next three decades? Precise answers to this question are made difficult by the rapid rate of innovation in AI research and by the effects of human practices on the adoption of new technologies. Precise answers are also challenged by imprecise usages of the term “AI.” There are several types of research that all fall under this general term. We begin by clarifying what we mean by AI. Most of our attention is then focused on the implications of artificial general intelligence (AGI), which entails that an algorithm or group of algorithms would achieve something like superintelligence. While acknowledging that the feasibility of superintelligence is contested, we consider the moral and ethical implications of such a potential development. What do machines owe humans and what do humans owe superintelligent machines?

2020 ◽  
Vol 6 (2) ◽  
pp. 135-161
Author(s):  
Diego Alejandro Borbón Rodríguez ◽  
Luisa Fernanda Borbón Rodríguez ◽  
Jeniffer Laverde Pinzón

Advances in neurotechnologies and artificial intelligence have led to an innovative proposal to establish ethical and legal limits on the development of these technologies: Human NeuroRights. The article first addresses some advances in neurotechnologies and artificial intelligence, as well as their ethical implications. Second, it presents the state of the art on the innovative Human NeuroRights proposal, specifically the proposal of the NeuroRights Initiative at Columbia University. Third, it critically analyzes the proposed rights to free will and to equitable access to augmentation technologies, concluding that, although new regulation of neurotechnologies and artificial intelligence is needed, the debate is still too premature to justify incorporating a new category of human rights, which may prove inconvenient or unnecessary. Finally, some considerations on how to regulate new technologies are offered and the conclusions of the work are presented.


2021 ◽  
pp. 1-27
Author(s):  
Tiberiu Dragu ◽  
Yonatan Lupu

Abstract How will advances in digital technology affect the future of human rights and authoritarian rule? Media figures, public intellectuals, and scholars have debated this relationship for decades, with some arguing that new technologies facilitate mobilization against the state and others countering that the same technologies allow authoritarians to strengthen their grip on power. We address this issue by analyzing the first game-theoretic model that accounts for the dual effects of technology within the strategic context of preventive repression. Our game-theoretic analysis suggests that technological developments may not be detrimental to authoritarian control and may, in fact, strengthen it by facilitating a wide range of human rights abuses. We show that technological innovation leads to greater levels of abuse aimed at preventing opposition groups from mobilizing and increases the likelihood that authoritarians will succeed in preventing such mobilization. These results have broad implications for the human rights regime, democratization efforts, and the interpretation of recent declines in violent human rights abuses.
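
To make the strategic logic concrete, the sketch below is a minimal, purely illustrative model in Python. It is not the formal model analyzed by Dragu and Lupu; the functional forms, parameter values, and the grid search are assumptions chosen only to show how a "dual effect" of technology can be represented: technology makes it easier for the opposition to organize, while also lowering the regime's cost of preventive repression (for example, through cheap digital surveillance).

```python
# Illustrative sketch only: a stylized, hypothetical model of preventive
# repression, NOT the game-theoretic model in Dragu and Lupu's article.
# All functional forms and parameter values are assumptions.

import numpy as np

def mobilization_success(repression, tech):
    """Probability that the opposition mobilizes successfully: rises with
    the technology level (easier coordination), falls with repression."""
    organizing_capacity = 1.0 - np.exp(-tech)
    repression_effect = np.exp(-repression)
    return organizing_capacity * repression_effect

def regime_payoff(repression, tech, value_of_power=10.0, base_cost=1.0):
    """Regime payoff: value of retaining power times the chance that
    mobilization fails, minus the cost of preventive repression, which
    technology is assumed to make cheaper."""
    p = mobilization_success(repression, tech)
    marginal_cost = base_cost / (1.0 + tech)
    return value_of_power * (1.0 - p) - marginal_cost * repression

def optimal_repression(tech, grid=np.linspace(0.0, 10.0, 2001)):
    """Grid search for the regime's payoff-maximizing repression level."""
    payoffs = [regime_payoff(r, tech) for r in grid]
    return float(grid[int(np.argmax(payoffs))])

if __name__ == "__main__":
    for tech in (0.5, 1.0, 2.0, 4.0):
        r_star = optimal_repression(tech)
        p_star = mobilization_success(r_star, tech)
        print(f"tech={tech:.1f}  repression chosen={r_star:.2f}  "
              f"P(successful mobilization)={p_star:.3f}")
```

Under these assumed forms, the regime's chosen level of repression rises with the technology parameter while the probability of successful mobilization falls, echoing the qualitative pattern the abstract describes.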


2021 ◽  
Author(s):  
Armstrong Lee Agbaji

Abstract Historically, the oil and gas industry has been slow and extremely cautious in adopting emerging technologies. But in the Age of Artificial Intelligence (AI), the industry has broken from tradition. It has not only embraced AI; it is leading the pack. AI has not only changed what it now means to work in the oil industry, it has changed how companies create, capture, and deliver value. Thanks, or no thanks, to automation, traditional oil industry skills and talents are now being threatened and, in most cases, rendered obsolete. Day-to-day work in the oil and gas industry is progressively gravitating towards software and algorithms, and today’s workers are resigning themselves to the fact that computers and robots will one day "take over" and do much of their work. The adoption of AI and how it might affect career prospects is currently causing a lot of anxiety among industry professionals. This paper details how artificial intelligence, automation, and robotics have redefined what it now means to work in the oil industry, as well as the new challenges and responsibilities that the AI revolution presents. It takes a deep dive into human-robot interaction and underscores what AI can, and cannot, do. It also identifies several traditional oilfield positions that have become endangered by automation, addresses the premonitions of professionals in these endangered roles, and lays out a roadmap for how to survive and thrive in a digitally transformed world. The future of work is evolving, and new technologies are changing how talent is acquired, developed, and retained. That robots will someday "take our jobs" is not a far-fetched possibility; it is more reality than exaggeration. Automation in the oil industry has achieved outcomes that go beyond human capabilities. In fact, the odds are overwhelming that AI that functions at a level comparable to humans will soon become ubiquitous in the industry. The big question is: how long will it take? The oil industry of the future will not need large office complexes or a large workforce. Most of the work will be automated. Drilling rigs, production platforms, refineries, and petrochemical plants will not go away, but how work is done at these locations will be totally different. While the industry will never entirely lose its human touch, AI will be the foundation of the workforce of the future. How we react to the AI revolution today will shape the industry for generations to come. What should we do when AI changes our job functions and workforce? Should we be training AI, or should we be training humans?


2020 ◽  
Vol 5 (3-4) ◽  
pp. 129-133
Author(s):  
Benjamin Shestakofsky

Some researchers have warned that advances in artificial intelligence will increasingly allow employers to replace human workers with software and robotic systems, heralding an impending wave of technological unemployment. By attending to the particular contexts in which new technologies are developed and implemented, others have revealed that there is nothing inevitable about the future of work, and that there is instead the potential for a diversity of models for organizing the relationship between work and artificial intelligence. Although these social constructivist approaches allow researchers to identify sources of contingency in technological outcomes, they are less useful in explaining how aims and outcomes can converge across diverse settings. In this essay, I make the case that researchers of work and technology should endeavor to link the outcomes of artificial intelligence systems not only to their immediate environments but also to less visible—but nevertheless deeply influential—structural features of societies. I demonstrate the utility of this approach by elaborating on how finance capital structures technology choices in the workplace. I argue that investigating how the structure of ownership influences a firm’s technology choices can open our eyes to alternative models and politics of technological development, improving our understanding of how to make innovation work for everyone instead of allowing the benefits generated by technological change to be hoarded by a select few.


AJIL Unbound ◽  
2018 ◽  
Vol 112 ◽  
pp. 324-328 ◽  
Author(s):  
Alexandra Huneeus

The seventieth anniversary of the Universal Declaration of Human Rights (UDHR) comes at a time of more contestation than usual over the future of human rights. A sense of urgency animates debates over whether the institutions and ideas of human rights can, or should, survive current geopolitical changes. This symposium, by contrast, shifts the lens to a more slow-moving but equally profound challenge to human rights law: how technology and its impacts on our social and physical environments are reshaping the debate on what it means to be human. Can the UDHR be recast for a time in which new technologies are continually altering how humans interact, and the legal status of robots, rivers, and apes alike is at times argued in the language of rights?


2019 ◽  
Vol 76 ◽  
pp. 283-296
Author(s):  
Ryszard Piotrowski

The rapid development of information and communication technology has made it imperative that new human rights be spelled out to cope with an array of expected threats associated with this process. With artificial intelligence being increasingly put to practical use, the prospect arises of humans becoming more and more AI-dependent in multiple walks of life. This necessitates that a constitutional and international dimension be imparted to a right stipulating that key state-level decisions affecting the human condition, life, and freedom must be made by humans, not by automated systems or other AI contraptions. But if artificial intelligence were to make such decisions, it should be properly equipped with value-based criteria. The culture of abdication of privacy protection may breed consent to the creation and practical use of technologies capable of penetrating an individual’s consciousness without his or her consent. Evidence based on such thought interference must be barred from court proceedings. Everyone’s right to intellectual identity and integrity, the right to keep one’s thoughts free from technological interference, is as essential for the survival of the democratic system as the right to privacy – and it may well prove equally endangered.


2020 ◽  
Vol 1 (12) ◽  
pp. 36-42
Author(s):  
M. V. Shmeleva

The paper is devoted to the issues of digitalization in state and municipal procurement. Every year the field of state and municipal procurement becomes more technology-intensive: new technologies and solutions are being introduced, and procurement processes are becoming increasingly automated. Rapid changes in this field compel procurement participants to master technologies such as chatbots, artificial intelligence, and blockchain. As a result of the research, the author concludes that the existing regulation of state and municipal procurement is already sufficient for smart contracts to be successfully integrated into the Russian legal system.
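
As a purely illustrative aside, the toy sketch below shows the kind of conditional, self-executing logic that a procurement smart contract automates: payment is released only when the coded delivery condition is met. It is a hypothetical Python illustration, not tied to any procurement platform or to the author's legal analysis; the class, names, and conditions are invented.

```python
# Toy sketch of smart-contract-style escrow logic for a procurement deal;
# purely illustrative and hypothetical, not drawn from the article.

from dataclasses import dataclass

@dataclass
class ProcurementContract:
    buyer: str
    supplier: str
    price: float
    escrowed: float = 0.0
    delivered: bool = False
    paid: bool = False

    def deposit(self, amount: float) -> None:
        """Buyer locks funds in escrow before performance."""
        self.escrowed += amount

    def confirm_delivery(self) -> None:
        """An agreed confirmation (e.g., an acceptance act) marks delivery."""
        self.delivered = True
        self._settle()

    def _settle(self) -> None:
        """Payment is released automatically once the coded condition holds."""
        if self.delivered and self.escrowed >= self.price and not self.paid:
            self.paid = True
            self.escrowed -= self.price

if __name__ == "__main__":
    contract = ProcurementContract(buyer="Municipality", supplier="Vendor", price=100.0)
    contract.deposit(100.0)
    contract.confirm_delivery()
    print(f"paid={contract.paid}, escrow remaining={contract.escrowed}")
```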


2021 ◽  
Vol 15 (1) ◽  
pp. 25-52
Author(s):  
Kelly Blount

The justice system is increasingly reliant on new technologies such as artificial intelligence (AI). In the field of criminal law this also extends to the methods police use to prevent crime. Though policing is not explicitly covered by Article 6 of the European Convention on Human Rights, this article will demonstrate that policing can have adverse effects on fair trial rights, drawing an analogy to criminal investigations as a recognized pre-trial process. Specifically, it will argue that policing that relies on AI to predict crime has direct effects on fair trial processes such as the equality of arms, the presumption of innocence, and the right to confront the evidence produced against a defendant. It will conclude by challenging the notion that AI is always an appropriate tool for legal processes.
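
For readers unfamiliar with how such tools produce their outputs, the sketch below is a deliberately simplified, hypothetical risk-scoring function in Python. It is not any system discussed in the article; the features, weights, and inputs are invented for illustration only.

```python
# Hypothetical, minimal illustration of a "predictive policing" risk score;
# NOT a real deployed system. Features, weights, and inputs are invented.

import numpy as np

# Invented area-level features: prior incidents in the area, days since the
# last incident, and distance (km) to a previously flagged location.
weights = np.array([0.8, -0.1, -0.6])  # assumed for illustration, not learned
bias = -1.0

def risk_score(features: np.ndarray) -> float:
    """Collapse the feature vector into a single 0-1 'risk' number via a
    logistic function; the score by itself does not show which inputs drove it."""
    z = float(np.dot(weights, features) + bias)
    return 1.0 / (1.0 + np.exp(-z))

if __name__ == "__main__":
    area = np.array([3.0, 2.0, 0.4])  # invented inputs for one area
    print(f"predicted risk for this area: {risk_score(area):.2f}")
```

Deployed systems are far more complex and often proprietary; the sketch only makes concrete the minimal sense in which an AI tool "predicts crime" by reducing its inputs to a single score.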

