Rethinking agency in language and society

2021, Vol. 2021 (267-268), pp. 271-275
Author(s): Lionel Wee

Abstract: The notion of agency is typically understood as stemming from the goals and desires of human actors. This is an assumption that has been taken on board in the study of language in society as well. In this article, I point out the problems with this assumption as well as another: the tendency to downplay if not dismiss the roles of non-human entities. I argue that these points about agency carry serious implications for the study of language in society. It is undeniable that various technological advancements ranging from relatively simple computer programs to highly developed artificial intelligence (AI) are increasingly involved in our use of language for communication. These are cases where the human element is increasingly distant from the use of language for communicative purposes. They pose conceptual challenges for the study of language in society and require a willingness to rethink the nature of agency.

2021, Vol. 5 (9), pp. RV1-RV5
Author(s): Sahrish Tariq, Nidhi Gupta, Preety Gupta, Aditi Sharma

Educational needs must drive the development of appropriate technology; new technologies should not be viewed as toys for enthusiasts. Nevertheless, the human element must never be dismissed. Scientific research will continue to offer exciting technologies and effective treatments. For the profession, and the patients it serves, to benefit fully from modern science, new knowledge and technologies must be incorporated into the mainstream of dental education. The technologies of modern science have astonished and intrigued our imagination. Correct diagnosis is the key to a successful clinical practice. In this regard, adequately trained neural networks can be a boon to diagnosticians, especially in conditions with multifactorial etiology.
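As a minimal sketch of what an "adequately trained neural network" could look like in such a diagnostic setting, the following Python example trains a small feed-forward classifier on synthetic data; the feature semantics, dataset, and labels are illustrative assumptions, not material from the article:

```python
# Minimal sketch: a small neural-network classifier for a condition with
# multifactorial etiology. All data here is synthetic and hypothetical; the
# features stand in for clinical measurements (e.g., age, plaque index,
# smoking status, HbA1c, probing depth).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic feature matrix: 500 hypothetical patients, 5 measurements each.
X = rng.normal(size=(500, 5))
# Hypothetical label: condition present/absent, driven by several factors
# at once, which is what "multifactorial etiology" implies.
signal = 0.8 * X[:, 1] + 0.6 * X[:, 2] + 0.5 * X[:, 3]
y = ((signal + rng.normal(scale=0.5, size=500)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale inputs, then fit a small multilayer perceptron.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In practice such a model would be trained on curated clinical records and validated against expert diagnoses before any chairside use.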


2020, pp. 447-456
Author(s): Г. В. Луцька

The article considers the problem of applying artificial intelligence in the law of Ukraine in general, and in notarial and civil procedure in particular. It outlines the legal consequences of the regime of temporary occupation of certain territories of Ukraine and identifies ways to remove obstacles to the protection and defense of the rights of Ukrainian citizens in those territories. The legal construction of "artificial intelligence" is examined and its types are proposed. The article substantiates the expediency of using intelligent computer programs and intelligent information technologies, as types of artificial intelligence, in notarial and enforcement proceedings. It proposes that the use of artificial intelligence in notarial and civil proceedings for citizens of Ukraine living in the Autonomous Republic of Crimea or in the occupied territories of the Donetsk and Luhansk regions be permitted within the limits and in the manner prescribed by the law of Ukraine. It is argued that introducing artificial intelligence into the mechanism for protecting and defending human and civil rights and freedoms in civil procedure must be adapted to existing social relations, must not violate the constitutional rights and freedoms of individuals and citizens in Ukraine, and must have a legal basis. Based on a scientific and practical analysis of the Civil Procedure Code of Ukraine, it is proposed that, for citizens of Ukraine living in the Autonomous Republic of Crimea or in the occupied territories of the Donetsk and Luhansk regions, action, separate, and injunctive proceedings be conducted entirely online. The procedure and features of such proceedings involving various types of artificial intelligence (such as chatbots and other intelligent information technologies) should be defined in the Civil Procedure Code of Ukraine. It is noted that implementing this mechanism through intelligent computer programs will require proper maintenance and support of such programs to prevent leaks of information and personal data. The article concludes that e-litigation and remote notarial proceedings will increase the effectiveness of notarial and judicial forms of protection of rights and make these state forms of protection more flexible, able to accommodate the peculiarities of procedural actions involving residents of the temporarily occupied territories.


Author(s): Ekaterina Jussupow, Kai Spohrer, Armin Heinzl, Joshua Gawlitza

Systems based on artificial intelligence (AI) increasingly support physicians in diagnostic decisions, but they are not free of errors and biases. Failure to detect those errors may result in wrong diagnoses and medical errors. Compared with rule-based systems, however, AI-based systems are less transparent and their errors less predictable. Thus, it is difficult, yet critical, for physicians to carefully evaluate AI advice. This study uncovers the cognitive challenges that medical decision makers face when they receive potentially incorrect advice from AI-based diagnosis systems and must decide whether to follow or reject it. In experiments with 68 novice and 12 experienced physicians, both novice physicians (with and without clinical experience) and experienced radiologists made more inaccurate diagnostic decisions when provided with incorrect AI advice than with no advice at all. We elicit five decision-making patterns and show that wrong diagnostic decisions often result from shortcomings in utilizing metacognitions related to decision makers' own reasoning (self-monitoring) and metacognitions related to the AI-based system (system monitoring). As a result, physicians either base decisions on beliefs rather than actual data or evaluate the AI advice in an unsuitably superficial way. Our study has implications for the training of physicians and spotlights the crucial role of human actors in compensating for AI errors.


Author(s): Peter R Slowinski

The core of artificial intelligence (AI) applications is software of one sort or another. But while available data and computing power are important for the recent quantum leap in AI, there would not be any AI without computer programs or software. Therefore, the rise in importance of AI forces us to take—once again—a closer look at software protection through intellectual property (IP) rights, but it also offers us a chance to rethink this protection, and while perhaps not undoing the mistakes of the past, at least to adapt the protection so as not to increase the dysfunctionality that we have come to see in this area of law in recent decades. To be able to establish the best possible way to protect—or not to protect—the software in AI applications, this chapter starts with a short technical description of what AI is, with readers referred to other chapters in this book for a deeper analysis. It continues by identifying those parts of AI applications that constitute software to which legal software protection regimes may be applicable, before outlining those protection regimes, namely copyright and patents. The core part of the chapter analyses potential issues regarding software protection with respect to AI using specific examples from the fields of evolutionary algorithms and of machine learning. Finally, the chapter draws some conclusions regarding the future development of IP regimes with respect to AI.
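To make concrete the kind of program code whose protection the chapter analyses, here is a minimal Python sketch of an evolutionary algorithm; the fitness target and parameters are illustrative assumptions rather than examples drawn from the chapter:

```python
# Minimal sketch of an evolutionary algorithm: the population, mutation, and
# selection logic below is the kind of human-written software whose IP status
# (copyright vs. patent) is at issue. The fitness target is hypothetical.
import random

random.seed(0)
TARGET = 42.0  # hypothetical optimum the population evolves toward

def fitness(x: float) -> float:
    """Higher is better: negative distance from the target."""
    return -abs(x - TARGET)

def mutate(x: float) -> float:
    """Perturb a candidate with small Gaussian noise."""
    return x + random.gauss(0.0, 1.0)

# Start from random candidate solutions.
population = [random.uniform(0.0, 100.0) for _ in range(20)]

for generation in range(100):
    # Selection: keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Variation: refill the population with mutated copies of survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(f"best candidate after 100 generations: {max(population, key=fitness):.3f}")
```

The interplay between a hand-written loop like this and the machine-generated candidates it evolves is precisely where the boundary between protectable software and its outputs becomes hard to draw.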


2020, Vol. 2 (3)
Author(s): Simon Lindgren, Jonny Holmström

In this article, we discuss and outline a research agenda for social science research on artificial intelligence. We present four overlapping building blocks that we see as keys for developing a perspective on AI able to unpack the rich complexities of sociotechnical settings. First, the interaction between humans and machines must be studied in its broader societal context. Second, technological and human actors must be seen as social actors on equal terms. Third, we must consider the broader discursive settings in which AI is socially constructed as a phenomenon with related hopes and fears. Fourth, we argue that constant and critical reflection is needed over how AI, algorithms and datafication affect social science research objects and methods. This article serves as the introduction to this JDSR special issue about social science perspectives on AI.


AI Magazine, 2020, Vol. 41 (2), pp. 93-95
Author(s): Lara Streiff

A 100-year-long study of artificial intelligence, known as the AI100, is now working toward its second report to reflect on, and predict, the societal impacts of AI technologies. When the project was launched in 2014, an interdisciplinary group of experts gathered to assess the effects AI has on its users and their communities, as well as the technology itself. The first report, titled Artificial Intelligence and Life in 2030, is a reference for those in government and industry, as well as for the general public, on how to interact with AI. It covers eight sectors spanning from transportation and healthcare to entertainment. As we enter the next decade, a second report looms on the horizon. This follow-up report presents an opportunity to reflect on the booming changes to the industry and resultant impacts on society since the first study findings were released. While maintaining a level of continuity, this next report is expected to aim a broader lens on the influences of these technologies worldwide. It will also explore human-centric applications in greater depth, to touch upon the personal connections between individuals and AI technologies. The human element is increasingly important as interactions with artificial intelligence expand through applications like autonomous vehicles, increasingly capable search engines, and electronic personal assistants. Debating ethics, purpose, intention, and deployment of these technologies will remain an ongoing challenge for this study. To reflect these realities, the committee is expected to include scholars from disciplines such as philosophy, anthropology, sociology, and critical studies in addition to AI scientists and engineers.


2020, Vol. 26 (6), pp. 3121-3141
Author(s): Sebastian Köhler

Abstract: Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own (in some sense). These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the question of responsibility. Instead, or so argues Nyholm, because supervised agency is a form of collaborative agency—of acting together—the right place to look is the theory of collaborative responsibility—responsibility in cases of acting together. This paper concedes that current AI will possess supervised agency, but argues that it is nevertheless wrong to think of the relevant human-AI interactions as a form of collaborative agency and, hence, that responsibility in cases of collaborative agency is not the right place to look for the responsibility-grounding relation in human-AI interactions. It also suggests that the right place to look for this responsibility-grounding relation in human-AI interactions is the use of certain sorts of agents as instruments.


2021
Author(s): Andrew Creegan, Michael Roberts

Abstract: The use of artificial intelligence (AI) in drilling optimization is a rapidly evolving endeavor and is becoming increasingly prevalent. In many applications the goal is process automation and optimization, with the intent to reduce cost, improve yield and outcomes, and address risk. Real-world experience, however, has taught us that the correct application, configuration, and real-time management of an AI system are just as important as the underlying algorithms. This paper posits that the implementation of an automated AI drilling system must consider the human element of acceptance in order to succeed: proper onboarding and user acceptance are prerequisites to proper system configuration and performance. The paper sets forth guidelines that can serve as a standard for initiating an AI drilling program.

