Social Implications of AI

'Social implications' generally refers to anything that affects an individual, a community, or wider society. The social implications of artificial intelligence (AI) are an immensely important field of study, since AI technology will steadily continue to permeate other technologies and, inevitably, society as a whole. Many of the social implications of this technological progress are non-obvious and surprising. We should ask ourselves: What kind of society do we want, and what role will AI play in influencing and shaping our lives? Will people simply become consumers served by intelligent systems that respond to our every whim? Are we reaching a tipping point between convenience and dependency? How will AI affect social issues relating to housing, finance, privacy, poverty, and so on? Do we want a society in which machines supplement (or augment) humans, or perhaps even substitute for them? If AI is truly to benefit individuals and society, we must be as clear as possible about its likely social implications.

Author(s):  
Christian List

Abstract: The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.


First Monday ◽  
2019 ◽  
Author(s):  
Katrin Etzrodt ◽  
Sven Engesser

Research on the social implications of technological developments is highly relevant. However, a broader comprehension of current innovations and their underlying theoretical frameworks is limited by their rapid evolution, as well as a plethora of different terms and definitions. The terminology used to describe current innovations varies significantly among disciplines, such as social sciences and computer sciences. This article contributes to systematic and cross-disciplinary research on current technological applications in everyday life by identifying the most relevant concepts (i.e., Ubiquitous Computing, Internet of Things, Smart Objects and Environments, Ambient Environments and Artificial Intelligence) and relating them to each other. Key questions, core aspects, similarities and differences are identified. Theoretically disentangling terminology results in four distinct analytical dimensions (connectivity, invisibility, awareness, and agency) that facilitate and address social implications. This article provides a basis for a deeper understanding, precise operationalisations, and an increased anticipation of impending developments.


AI & Society ◽  
2021 ◽  
Author(s):  
Nello Cristianini ◽  
Teresa Scantamburlo ◽  
James Ladyman

Abstract: Social machines are systems formed by material and human elements interacting in a structured way. The use of digital platforms as mediators allows large numbers of humans to participate in such machines, which have interconnected AI and human components operating as a single system capable of highly sophisticated behaviour. Under certain conditions, such systems can be understood as autonomous goal-driven agents. Many popular online platforms can be regarded as instances of this class of agent. We argue that autonomous social machines provide a new paradigm for the design of intelligent systems, marking a new phase in AI. After describing the characteristics of goal-driven social machines, we discuss the consequences of their adoption, for the practice of artificial intelligence as well as for its regulation.


2021 ◽  
pp. 11-25
Author(s):  
Daniel W. Tigard

Abstract: Technological innovations in healthcare, perhaps now more than ever, are posing decisive opportunities for improvements in diagnostics, treatment, and overall quality of life. The use of artificial intelligence and big data processing, in particular, stands to revolutionize healthcare systems as we once knew them. But what effect do these technologies have on human agency and moral responsibility in healthcare? How can patients, practitioners, and the general public best respond to potential obscurities in responsibility? In this paper, I investigate the social and ethical challenges arising with newfound medical technologies, specifically the ways in which artificially intelligent systems may be threatening moral responsibility in the delivery of healthcare. I argue that if our ability to locate responsibility becomes threatened, we are left with a difficult choice of trade-offs. In short, it might seem that we should exercise extreme caution or even restraint in our use of state-of-the-art systems, but thereby lose out on such benefits as improved quality of care. Alternatively, we could embrace novel healthcare technologies but in doing so we might need to loosen our commitment to locating moral responsibility when patients come to harm; for even if harms are fewer – say, as a result of data-driven diagnostics – it may be unclear who or what is responsible when things go wrong. What is clear, at least, is that the shift toward artificial intelligence and big data calls for significant revisions in expectations on how, if at all, we might locate notions of responsibility in emerging models of healthcare.


2018 ◽  
Author(s):  
Amelia Fiske ◽  
Peter Henningsen ◽  
Alena Buyx

BACKGROUND: Research in embodied artificial intelligence (AI) has increasing clinical relevance for therapeutic applications in mental health services. With innovations ranging from 'virtual psychotherapists' to social robots in dementia and autism spectrum disorder care, and robots for sexual disorders, artificially intelligent virtual and robotic agents are increasingly taking on high-level therapeutic interventions that used to be offered exclusively by highly trained, skilled health professionals. To enable responsible clinical implementation, the ethical and social implications of the increasing use of embodied AI in mental health need to be identified and addressed.
OBJECTIVE: This paper assesses the ethical and social implications of translating embodied AI applications into mental health care across the fields of psychiatry, psychology, and psychotherapy. Building on this analysis, it develops a set of preliminary recommendations on how to address ethical and social challenges in current and future applications of embodied AI.
METHODS: Based on a thematic literature search and established principles of medical ethics, an analysis of the ethical and social aspects of current embodied AI applications was conducted across the fields of psychiatry, psychology, and psychotherapy. To enable a comprehensive evaluation, the analysis was structured around three steps: assessment of potential benefits; analysis of overarching ethical issues and concerns; and discussion of specific ethical and social issues of the interventions.
RESULTS: From an ethical perspective, important benefits of embodied AI applications in mental health include new modes of treatment, opportunities to engage hard-to-reach populations, better patient response, and freeing up time for physicians. Overarching ethical issues and concerns include: harm prevention and various questions of data ethics; a lack of guidance on the development of AI applications, their clinical integration, and the training of health professionals; 'gaps' in ethical and regulatory frameworks; and the potential for misuse, including using the technologies to replace established services and thereby potentially exacerbating existing health inequalities. Specific challenges identified and discussed in the application of embodied AI include: matters of risk assessment, referral, and supervision; the need to respect and protect patient autonomy; the role of non-human therapy; transparency in the use of algorithms; and specific concerns regarding the long-term effects of these applications on understandings of illness and the human condition.
CONCLUSIONS: We argue that embodied AI is a promising approach across the field of mental health; however, further research is needed to address the broader ethical and societal concerns of these technologies and to negotiate best research and medical practices in innovative mental health care. We conclude by indicating areas of future research and developing recommendations for high-priority areas in need of concrete ethical guidance.


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0259928
Author(s):  
Darius-Aurel Frank ◽  
Christian T. Elbæk ◽  
Caroline Kjær Børsting ◽  
Panagiotis Mitkidis ◽  
Tobias Otterbring ◽  
...  

The COVID-19 pandemic continues to impact people worldwide, steadily depleting scarce resources in healthcare. Medical artificial intelligence (AI) promises much-needed relief, but only if the technology is adopted at scale. The present research investigates people's intention to adopt medical AI, as well as the drivers of this adoption, in a representative study of two European countries (Denmark and France, N = 1068) during the initial phase of the COVID-19 pandemic. Results reveal AI aversion: only 1 in 10 individuals chose medical AI over human physicians in a hypothetical pre-hospital triage phase of COVID-19. Key predictors of medical AI adoption are people's trust in medical AI and, to a lesser extent, the trait of open-mindedness. More importantly, our results reveal that mistrust of and perceived uniqueness neglect from human physicians, as well as a lack of social belonging, significantly increase people's medical AI adoption. These results suggest that for medical AI to be widely adopted, people may need to express less confidence in human physicians and even feel disconnected from humanity. We discuss the social implications of these findings and propose that successful medical AI adoption policy should focus on trust-building measures, without eroding trust in human physicians.


1973 ◽  
Vol 52 (3) ◽  
pp. 93
Author(s):  
J.D. Radford ◽  
D.B. Richardson

Author(s):  
M. G. Koliada ◽  
T. I. Bugayova

The article discusses the history of the problem of using artificial intelligence systems in education and pedagogy. Two directions of its development are shown, "Computational Pedagogy" and "Educational Data Mining", in which poorly studied aspects of the internal mechanisms of artificial intelligence systems in this field of activity are revealed. The main task is the problem of interfacing the system's kernel with blocks of pedagogical and subject-matter databases, as well as with blocks of pedagogical diagnostics of the student and the teacher. The role of pedagogical diagnosis as a visible reflection of the complex interplay of factors and causes is shown: it provides the intelligent system with timely and reliable information on how various causes intertwine in their interaction, which of them are currently dangerous, and where a decline in performance is expected. All components of the teaching and educational system are subject to diagnosis; without it, no pedagogical situation can be managed optimally. The means of obtaining information about students are considered, as well as the "mechanisms" by which intelligent systems work, based on innovative ideas from advanced pedagogical experience in diagnosing a teacher's professionalism. Ways of realizing teacher skill on the basis of ideas developed by American researchers are shown. Among them are the approaches of D. Rajonz and U. Bronfenbrenner, who put the teacher's attitude towards students, their views, and their intellectual and emotional characteristics at the forefront. An assessment of the teacher's work according to N. Flanders's system, the so-called "Interaction Analysis", is also proposed, through the mechanism of recording such elements as the teacher's verbal behavior and the events of the lesson and their sequence. A system for assessing the professionalism of a teacher according to B. O. Smith and M. O. Meux is examined, through the study of the logic of teaching and the use of logical operations in the lesson. Samples of forms of external communication between the intelligent system and the learning environment are given. It is indicated that the productive solutions found can be delivered in the forms most acceptable and comfortable for both students and the teacher, via three approaches: the first represents artificial intelligence in this area as a robotized, human-shaped being; the second confines itself to specially organized input-output systems for the targeted delivery of effective methodological recommendations and instructions to both students and teachers; the third holds that life will force the invention of completely new hybrid forms of interaction between the two sides, in the form of interactive educational environments somewhat resembling the educational spaces of virtual reality.


2020 ◽  
Vol 3 (1) ◽  
pp. 1
Author(s):  
Shindy Lestari

This analysis of mathematics subject matter in elementary school addresses a very important field of study taught at every level of education. The 2013 curriculum separates mathematics from thematic instruction, so that it stands alone as a subject. Mathematics taught in elementary school can train students to think critically, rationally, logically, and innovatively, so that they are competitive. The problems discussed concern elementary school mathematics subject matter as seen from the alignment of the teacher's book and the student's book, covering: 1) the scope of grade 3 elementary school mathematics material; 2) the characteristics of mathematics subject matter in elementary school; 3) the relevance of elementary school mathematics subject matter to the scientific structure, namely student character, HOTS, the 4C skills, numeracy literacy, digital literacy, financial literacy, and character education; 4) learning innovation based on integration-interconnection, in accordance with developments in science and technology and the needs of the community in the era of Industrial Revolution 4.0.

