Conservatism predicts aversion to consequential Artificial Intelligence

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261467
Author(s):  
Noah Castelo ◽  
Adrian F. Ward

Artificial intelligence (AI) has the potential to revolutionize society by automating tasks as diverse as driving cars, diagnosing diseases, and providing legal advice. The degree to which AI can improve outcomes in these and other domains depends on how comfortable people are trusting AI for these tasks, which in turn depends on lay perceptions of AI. The present research examines how these critical lay perceptions may vary as a function of conservatism. Using five survey experiments, we find that political conservatism is associated with low comfort with and trust in AI—i.e., with AI aversion. This relationship between conservatism and AI aversion is explained by the link between conservatism and risk perception; more conservative individuals perceive AI as being riskier and are therefore more averse to its adoption. Finally, we test whether a moral reframing intervention can reduce AI aversion among conservatives.
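The mediation account (conservatism raises perceived risk, which in turn raises AI aversion) can be illustrated with a standard regression-based analysis. Below is a minimal sketch using synthetic data; the column names (conservatism, ai_risk, ai_aversion) are hypothetical placeholders, and the paper's actual measures and estimation approach may differ.

```python
# Baron-Kenny-style mediation sketch with synthetic data standing in for
# the survey; column names are hypothetical, not the paper's measures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
conservatism = rng.normal(size=n)
ai_risk = 0.5 * conservatism + rng.normal(size=n)        # a-path
ai_aversion = 0.6 * ai_risk + rng.normal(size=n)         # b-path
df = pd.DataFrame({"conservatism": conservatism,
                   "ai_risk": ai_risk,
                   "ai_aversion": ai_aversion})

total = smf.ols("ai_aversion ~ conservatism", data=df).fit()             # c
a_path = smf.ols("ai_risk ~ conservatism", data=df).fit()                # a
b_path = smf.ols("ai_aversion ~ conservatism + ai_risk", data=df).fit()  # b, c'

# If risk perception mediates, the conservatism coefficient shrinks from
# `total` (c) to `b_path` (c') while ai_risk remains a strong predictor.
for name, m in [("total", total), ("a-path", a_path), ("b+direct", b_path)]:
    print(name, m.params.round(3).to_dict())
```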

Author(s):  
Igor I. Kartashov ◽  
Ivan I. Kartashov

For millennia, mankind has dreamed of creating an artificial creature capable of thinking and acting “like human beings”. These dreams are gradually starting to come true. The trends in the development of modern society, given its increasing level of informatization, require the use of new technologies for information processing and assistance in decision-making. Expanding the boundaries of the use of artificial intelligence not only requires the establishment of ethical restrictions but also gives rise to the need to promptly resolve legal problems, including criminal and procedural ones. This is primarily due to the emergence and spread of legal expert systems that predict the decision on a particular case based on a variety of parameters. Based on a comprehensive study, we formulate a definition of artificial intelligence suitable for use in law. We propose to understand artificial intelligence as systems capable of interpreting received data and making optimal decisions on that basis using self-learning (adaptation). The main directions for using artificial intelligence in criminal proceedings are: search and generalization of judicial practice; legal advice; preparation of formalized documents or statistical reports; forecasting court decisions; and predictive jurisprudence. Despite its promise, the use of artificial intelligence faces a number of problems, including low reliability in predicting rare events, self-excitation of the system, and the opacity of the algorithms and architectures used.


Author(s):  
Martin Partington

This chapter discusses the role both of those professionally qualified to practise law—solicitors and barristers—and of other groups who provide legal/advice services but who do not have professional legal qualifications. It examines how regulation of legal services providers is changing. It notes new forms of legal practice. It also considers how use of artificial intelligence may change the ways in which legal services are delivered. It reflects on the adjudicators and other dispute resolvers who play a significant role in the working of the legal system. It reflects on the contribution to legal education made by law teachers, in universities and in private colleges, to the formation of the legal profession and to the practice of the law.


2019 ◽  
pp. 026666691989341
Author(s):  
Di Cui ◽  
Fang Wu

With support from government and business, artificial intelligence is growing quickly in China. However, little is known about how media use shapes the Chinese public’s perception of artificial intelligence. Based on a national online survey (N = 738), this pilot study explored the linkages between media use and people’s risk perception, benefit perception, and policy support of artificial intelligence. Results showed that respondents perceived artificial intelligence as more beneficial than risky. Newspaper use was negatively associated with benefit perception and policy support, whereas television and WeChat use positively predicted both. Analyses of interaction effects showed that personal relevance could partly mitigate the influence of media use.
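The reported interaction between media use and personal relevance corresponds to a standard moderated regression. A sketch follows with synthetic data and hypothetical variable names (wechat_use, relevance, benefit_perception); the study's actual coding is not given in the abstract.

```python
# Moderated-regression sketch: media use x personal relevance.
# Synthetic data; variable names are hypothetical, not the survey's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 738
wechat_use = rng.normal(size=n)
relevance = rng.normal(size=n)
benefit = 0.3 * wechat_use - 0.2 * wechat_use * relevance + rng.normal(size=n)
df = pd.DataFrame({"wechat_use": wechat_use, "relevance": relevance,
                   "benefit_perception": benefit})

# The '*' operator expands to both main effects plus the interaction term.
model = smf.ols("benefit_perception ~ wechat_use * relevance", data=df).fit()
print(model.params.round(3))
# A negative wechat_use:relevance coefficient means the positive
# media-use association weakens as personal relevance rises.
```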


2021 ◽  
pp. 255-290
Author(s):  
Martin Partington

This chapter discusses the role both of those professionally qualified to practise law—solicitors and barristers—and of other groups who provide legal/advice services but who do not have professional legal qualifications. It examines how regulation of legal services providers is changing and the objects of regulations. It notes the development of new forms of legal practice. It also considers how the use of artificial intelligence may change the ways in which legal services are delivered. The chapter reflects on the adjudicators and other dispute resolvers who play a significant role in the working of the legal system, and on the contribution to legal education made by law teachers, in universities and in private colleges, to the formation of the legal profession and to the practice of the law.


2018 ◽  
Author(s):  
Ben Einhouse

Cornell Law School J.D. Student Research Papers, No. 38. Advances in technology have surely made the practice of law more efficient, but looming advances in artificial intelligence should raise some concern about the price of this efficiency. Artificial intelligence programs already exhibit the capacity to replace the daily activities of some lawyers, which should concern the legal community, especially with regard to legal ethics. Despite these concerns, the access to knowledge that artificial intelligence programs provide is a huge asset to the legal community, so we must regulate such programs properly. To frame this discussion, the types of artificial intelligence programs that are raising concern are first identified. Then, the legal framework defining legal advice and malpractice is examined, along with how this framework might be applied to artificial intelligence programs. Finally, some general best practices for the future regulation of artificial intelligence as it pertains to legal ethics and malpractice are discussed.


2020 ◽  
Author(s):  
Nejc Plohl ◽  
Bojan Musil

The ongoing coronavirus pandemic is one of the biggest health crises of our time. In response to this global problem, various institutions around the world soon issued evidence-based prevention guidelines. However, these guidelines, which were designed to slow the spread of COVID-19 and contribute to public well-being, are deliberately disregarded or ignored by some individuals. In the present study, we aimed to develop and test a multivariate model that could help identify the individual characteristics that make a person more or less likely to comply with COVID-19 prevention guidelines. A total of 617 participants took part in the online survey and answered questions on socio-demographic variables, political conservatism, religious orthodoxy, conspiracy ideation, intellectual curiosity, trust in science, COVID-19 risk perception, and compliance with COVID-19 prevention guidelines. The results of structural equation modeling (SEM) show that COVID-19 risk perception and trust in science both independently predict compliance with COVID-19 prevention guidelines, while the remaining variables in the model (political conservatism, religious orthodoxy, conspiracy ideation, and intellectual curiosity) do so via the mediating role of trust in science. The model exhibited an acceptable fit (χ²(1611) = 2485.84, p < .001, CFI = .91, RMSEA = .032, SRMR = .055). These findings provide empirical support for the proposed multivariate model and underline the importance of trust in science in explaining different levels of compliance with COVID-19 prevention guidelines.
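The mediation structure described here maps directly onto SEM path syntax. Below is a minimal sketch using the semopy package with synthetic data; all variable names are hypothetical placeholders, and the authors' actual model includes measurement components (hence the large χ² degrees of freedom) that are omitted here.

```python
# SEM sketch of the structural part of the model, in semopy's
# lavaan-style syntax. Synthetic data, hypothetical variable names.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(2)
n = 617
df = pd.DataFrame(rng.normal(size=(n, 6)),
                  columns=["conservatism", "religiosity", "conspiracy",
                           "curiosity", "risk_perception", "trust_science"])
df["compliance"] = (0.4 * df["trust_science"]
                    + 0.3 * df["risk_perception"]
                    + rng.normal(size=n))

# Distal predictors act on compliance via trust in science;
# risk perception predicts compliance independently.
desc = """
trust_science ~ conservatism + religiosity + conspiracy + curiosity
compliance ~ trust_science + risk_perception
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())           # path estimates
print(semopy.calc_stats(model))  # fit indices (CFI, RMSEA, etc.)
```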


2020 ◽  
Vol 8 (1) ◽  
pp. 1-13
Author(s):  
Ana Laura Lira Cortes ◽  
Carlos Fuentes Silva

This work presents evidence-based research on neural networks for the development of predictive crime models. The data sets used focus on historical crime data, crime classification, types of theft at different scales of space and time, and counts of crimes and conflict points in urban areas. Among the results, neural network algorithms reach 81% precision, and predictions of crime occurrence at a given space-time point range between 75% and 90% using LSTM (Long Short-Term Memory) models. The review also observes that the justice sector has incorporated systems based on intelligent technologies to carry out activities such as legal advice, prediction and decision-making, national and international cooperation in the fight against crime, police and intelligence services, control systems with facial recognition, search and processing of legal information, predictive surveillance, and the definition of criminal models based on criminal records, histories of incidents in different regions of the city, the location of police forces, established businesses, and so on; that is, they make predictions in the urban context of public security and justice. Finally, the ethical considerations and principles related to predictive developments based on artificial intelligence are presented, which seek to guarantee aspects such as privacy, confidentiality, and the impartiality of algorithms, as well as to avoid processing data under biases or distinctions. It is therefore concluded that the development, research, and operation of predictive crime solutions with neural networks and artificial intelligence in urban contexts is viable and necessary in Mexico, representing an innovative and effective alternative for addressing insecurity, since, according to statistics from INEGI, the Global Peace Index, and the Government of Mexico, intentional homicides, organized crime, and firearm violence continue to increase.
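The LSTM-based space-time prediction mentioned above can be sketched as a sequence model over per-cell crime counts. The following PyTorch illustration uses invented dimensions and random data; the surveyed papers' architectures and feature sets vary.

```python
# Minimal LSTM sketch for space-time crime prediction: given the last
# 12 weeks of counts for a grid cell, predict next week's count.
# Dimensions and data are illustrative, not from the surveyed papers.
import torch
import torch.nn as nn

class CrimeLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # next-week count for the cell

    def forward(self, x):                  # x: (batch, weeks, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # use the last time step only

model = CrimeLSTM()
history = torch.randn(8, 12, 1)            # 8 cells, 12 weeks, 1 feature
prediction = model(history)
print(prediction.shape)                    # torch.Size([8, 1])
```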


2021 ◽  
Author(s):  
Sven Gruener

This paper analyzes susceptibility to misinformation in a survey experiment covering three hand-picked topics (climate change, Covid-19, and artificial intelligence). Subjects had to rate the reliability of several statements within these fields. We find evidence for a monological belief system (i.e., being susceptible to one statement containing misinformation is correlated with falling for other false news stories). Moreover, trust in social networks is positively associated with falling for misinformation, whereas there is some evidence that risk perception, willingness to think deliberately, actively open-minded thinking, and trust in science and the media protect against susceptibility to misinformation. Surprisingly, the level of education does not seem to matter much.
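Operationally, the monological-belief-system finding amounts to positive correlations among susceptibility scores across the three topics. A toy illustration with synthetic data and hypothetical column names:

```python
# Correlation sketch: susceptibility scores across the three topics.
# Synthetic data with a shared latent tendency; column names hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 300
latent = rng.normal(size=n)  # general proneness to misinformation
df = pd.DataFrame({
    "climate_susceptibility": latent + rng.normal(scale=0.8, size=n),
    "covid_susceptibility": latent + rng.normal(scale=0.8, size=n),
    "ai_susceptibility": latent + rng.normal(scale=0.8, size=n),
})
# Positive off-diagonal correlations = the monological pattern.
print(df.corr().round(2))
```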


2018 ◽  
Vol 77 (4) ◽  
pp. 149-157 ◽  
Author(s):  
Benno G. Wissing ◽  
Marc-André Reinhard

Abstract. This cross-sectional study (N = 325) investigated the relationship between the Dark Triad personality traits and the perception of artificial intelligence (AI) risk. Narrow AI risk perception was measured based on recently identified perceived risks in the public. Artificial general intelligence (AGI) risk perception was operationalized in terms of plausibility ratings and subjective probability estimates on deceptive AI scenarios developed by Bostrom (2014), in which AI-sided deception is described as a function of intelligence. Machiavellianism and psychopathy predicted narrow AI risk perception above the shared variance of the Dark Triad and above the Big Five. In individuals with self-reported knowledge of machine learning, the Dark Triad traits were associated with AGI risk perception. This study provides evidence for the existence of substantial individual differences in the risk perception of AI.
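Predicting risk perception "above the shared variance of the Dark Triad and above the Big Five" implies a hierarchical regression with an incremental-variance test. A sketch with synthetic data and assumed column names:

```python
# Hierarchical regression sketch: do Dark Triad traits add predictive
# power beyond the Big Five? Synthetic data, hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 325
cols = ["openness", "conscientiousness", "extraversion", "agreeableness",
        "neuroticism", "machiavellianism", "narcissism", "psychopathy"]
df = pd.DataFrame(rng.normal(size=(n, len(cols))), columns=cols)
df["ai_risk"] = (0.3 * df["machiavellianism"] + 0.3 * df["psychopathy"]
                 + rng.normal(size=n))

big5 = "openness + conscientiousness + extraversion + agreeableness + neuroticism"
step1 = smf.ols(f"ai_risk ~ {big5}", data=df).fit()
step2 = smf.ols(f"ai_risk ~ {big5} + machiavellianism + narcissism + psychopathy",
                data=df).fit()

print("R² step 1:", round(step1.rsquared, 3))
print("R² step 2:", round(step2.rsquared, 3))
print(step2.compare_f_test(step1))  # (F, p, df_diff) for the R² increment
```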

