The Conflict Between People’s Urge to Punish AI and Legal Systems

2021 ◽  
Vol 8 ◽  
Author(s):  
Gabriel Lima ◽  
Meeyoung Cha ◽  
Chihyung Jeon ◽  
Kyung Sin Park

Regulating artificial intelligence (AI) has become necessary in light of its deployment in high-risk scenarios. This paper explores the proposal to extend legal personhood to AI and robots, which has not yet been examined from the perspective of the general public. We present two studies (N = 3,559) to obtain people’s views of electronic legal personhood vis-à-vis existing liability models. Our study reveals people’s desire to punish automated agents even though these entities are not attributed any mental state. Furthermore, people did not believe that punishing automated agents would fulfill either deterrence or retribution, and they were unwilling to grant automated agents the legal preconditions of punishment, namely physical independence and assets. Collectively, these findings suggest a conflict between the desire to punish automated agents and the perceived impracticability of doing so. We conclude by discussing how future design and legal decisions may influence how the public reacts to automated agents’ wrongdoings.

2020 ◽  
Author(s):  
Melissa D McCradden ◽  
Tasmie Sarker ◽  
P Alison Paprica

ABSTRACT
Objectives: Given widespread interest in applying artificial intelligence (AI) to health data to improve patient care and health system efficiency, there is a need to understand the perspectives of the general public regarding the use of health data in AI research.
Design: A qualitative study involving six focus groups with members of the public. Participants discussed their views about AI in general, then were asked to share their thoughts about three realistic health AI scenarios. Data were analysed using qualitative description thematic analysis.
Settings: Two cities in Ontario, Canada: Sudbury (400 km north of Toronto) and Mississauga (part of the Greater Toronto Area).
Participants: Forty-one purposively sampled members of the public (21M:20F, 25-65 years, median age 40).
Results: Participants had low levels of prior knowledge of AI and mixed, mostly negative, perceptions of AI in general. Most endorsed AI as a tool for the analysis of health data when there is strong potential for public benefit, provided that concerns about privacy, consent, and commercial motives were addressed. Inductive thematic analysis identified AI-specific hopes (e.g., potential for faster and more accurate analyses, ability to use more data), fears (e.g., loss of human touch, skill depreciation from over-reliance on machines) and conditions (e.g., human verification of computer-aided decisions, transparency). There were mixed views about whether consent is required for health data research, with most participants wanting to know if, how and by whom their data were used. Though it was not an objective of the study, realistic health AI scenarios were found to have an educational effect.
Conclusions: Notwithstanding concerns and limited knowledge about AI in general, most members of the general public in six focus groups in Ontario, Canada perceived benefits from health AI and conditionally supported the use of health data for AI research.
Strengths and limitations of this study:
- A strength of this study is the analysis of how diverse members of the general public perceive three realistic scenarios in which health data are used for AI research.
- The detailed health AI scenarios incorporate points that previous qualitative research has indicated are likely to elicit discussion (e.g., use of health data without express consent, involvement of commercial organisations in health research, inability to guarantee anonymity of genetic data) and may also be useful in future qualitative research studies and for educational purposes.
- The findings are likely to be relevant to organisations that are considering making health data available for AI research and development.
- Notwithstanding the diverse ethnic and educational backgrounds of participants, the sample overall represents the general (mainstream) population of Ontario; results cannot be interpreted as presenting the views of specific subpopulations and may not be generalisable across Ontario or to other settings.
- Given the low level of knowledge about AI in general, it is possible that the views of participants would change substantially if they learned and understood more about AI.
Transparency statement: P. Alison Paprica affirms that the manuscript is an honest, accurate and transparent account of the study being reported; that no important aspects of the study have been omitted; and that there were no discrepancies from the study as originally approved by the University of Toronto Research Ethics Board.


2020 ◽  
pp. 009539972097089
Author(s):  
Mathias Sabbe ◽  
Nathalie Schiffino ◽  
Stéphane Moyson

Probation officers (POs) operate in a high-risk environment. They are vulnerable to media and political backlash and are confronted with managerial innovations that can conflict with their values. A thematic analysis of 29 interviews with Belgian POs reveals that classical coping mechanisms driven by time shortages, such as rationing and prioritization, are amplified by managerialism. POs also break rules they perceive as having limited meaningfulness and routinize offender control to alleviate pressure from accountabilities to both managers and the general public. The study demonstrates that managerialism and accountabilities to managers, the public, and politicians shape coping mechanisms in high-risk environments.


AI & Society ◽  
2021 ◽  
Author(s):  
Yishu Mao ◽  
Kristin Shi-Kupfer

Abstract
The societal and ethical implications of artificial intelligence (AI) have sparked discussions among academics, policymakers and the public around the world. What has gone unnoticed so far are the likewise vibrant discussions in China. We analyzed a large sample of discussions about AI ethics on two Chinese social media platforms. Findings suggest that participants were diverse, and included scholars, IT industry actors, journalists, and members of the general public. They addressed a broad range of concerns associated with the application of AI in various fields. Some even gave recommendations on how to tackle these issues. We argue that these discussions are a valuable source for understanding the future trajectory of AI development in China as well as implications for global dialogue on AI governance.


BMJ Open ◽  
2020 ◽  
Vol 10 (10) ◽  
pp. e039798
Author(s):  
Melissa D McCradden ◽  
Tasmie Sarker ◽  
P Alison Paprica

Objectives: Given widespread interest in applying artificial intelligence (AI) to health data to improve patient care and health system efficiency, there is a need to understand the perspectives of the general public regarding the use of health data in AI research.
Design: A qualitative study involving six focus groups with members of the public. Participants discussed their views about AI in general, then were asked to share their thoughts about three realistic health AI research scenarios. Data were analysed using qualitative description thematic analysis.
Settings: Two cities in Ontario, Canada: Sudbury (400 km north of Toronto) and Mississauga (part of the Greater Toronto Area).
Participants: Forty-one purposively sampled members of the public (21M:20F, 25–65 years, median age 40).
Results: Participants had low levels of prior knowledge of AI and mixed, mostly negative, perceptions of AI in general. Most endorsed using data for health AI research when there is strong potential for public benefit, provided that concerns about privacy, commercial motives and other risks were addressed. Inductive thematic analysis identified AI-specific hopes (eg, potential for faster and more accurate analyses, ability to use more data), fears (eg, loss of human touch, skill depreciation from over-reliance on machines) and conditions (eg, human verification of computer-aided decisions, transparency). There were mixed views about whether data subject consent is required for health AI research, with most participants wanting to know if, how and by whom their data were used. Though it was not an objective of the study, realistic health AI scenarios were found to have an educational effect.
Conclusions: Notwithstanding concerns and limited knowledge about AI in general, most members of the general public in six focus groups in Ontario, Canada perceived benefits from health AI and conditionally supported the use of health data for AI research.


2019 ◽  
Author(s):  
Shuqing Gao ◽  
Lingnan He ◽  
Yue Chen ◽  
Dan Li ◽  
Kaisheng Lai

BACKGROUND
High-quality medical resources are in high demand worldwide, and the application of artificial intelligence (AI) in medical care may help alleviate the crisis related to this shortage. The development of the medical AI industry depends to a certain extent on whether industry experts have a comprehensive understanding of the public’s views on medical AI. Currently, the opinions of the general public on this matter remain unclear.
OBJECTIVE
The purpose of this study is to explore the public perception of AI in medical care through a content analysis of social media data, including specific topics that the public is concerned about; public attitudes toward AI in medical care and the reasons for them; and public opinion on whether AI can replace human doctors.
METHODS
Through an application programming interface, we collected a data set from the Sina Weibo platform comprising more than 16 million users throughout China by crawling all public posts from January to December 2017. Based on this data set, we identified 2315 posts related to AI in medical care and classified them through content analysis.
RESULTS
Among the 2315 identified posts, we found three types of AI topics discussed on the platform: (1) technology and application (n=987, 42.63%), (2) industry development (n=706, 30.50%), and (3) impact on society (n=622, 26.87%). Of the 956 posts in which public attitudes were expressed, 59.4% (n=568), 34.4% (n=329), and 6.2% (n=59) expressed positive, neutral, and negative attitudes, respectively. The immaturity of AI technology (27/59, 46%) and distrust of related companies (15/59, 25%) were the two main reasons for the negative attitudes. Of the 200 posts that mentioned attitudes toward replacing human doctors with AI, 47.5% (n=95) and 32.5% (n=65) expressed that AI would completely or partially replace human doctors, respectively, while 20.0% (n=40) expressed that AI would not replace human doctors.
CONCLUSIONS
Our findings indicate that people are most concerned about AI technology and applications. Generally, the majority of people held positive attitudes and believed that AI doctors would completely or partially replace human ones. Compared with previous studies on medical doctors, the general public holds a more positive attitude toward medical AI. Lack of trust in AI and the absence of humanistic care are essential reasons why some people still hold a negative attitude toward medical AI. We suggest that practitioners may need to pay more attention to promoting the credibility of technology companies and meeting patients’ emotional needs, instead of focusing merely on technical issues.


10.2196/16649 ◽  
2020 ◽  
Vol 22 (7) ◽  
pp. e16649 ◽  
Author(s):  
Shuqing Gao ◽  
Lingnan He ◽  
Yue Chen ◽  
Dan Li ◽  
Kaisheng Lai

Background
High-quality medical resources are in high demand worldwide, and the application of artificial intelligence (AI) in medical care may help alleviate the crisis related to this shortage. The development of the medical AI industry depends to a certain extent on whether industry experts have a comprehensive understanding of the public’s views on medical AI. Currently, the opinions of the general public on this matter remain unclear.
Objective
The purpose of this study is to explore the public perception of AI in medical care through a content analysis of social media data, including specific topics that the public is concerned about; public attitudes toward AI in medical care and the reasons for them; and public opinion on whether AI can replace human doctors.
Methods
Through an application programming interface, we collected a data set from the Sina Weibo platform comprising more than 16 million users throughout China by crawling all public posts from January to December 2017. Based on this data set, we identified 2315 posts related to AI in medical care and classified them through content analysis.
Results
Among the 2315 identified posts, we found three types of AI topics discussed on the platform: (1) technology and application (n=987, 42.63%), (2) industry development (n=706, 30.50%), and (3) impact on society (n=622, 26.87%). Of the 956 posts in which public attitudes were expressed, 59.4% (n=568), 34.4% (n=329), and 6.2% (n=59) expressed positive, neutral, and negative attitudes, respectively. The immaturity of AI technology (27/59, 46%) and distrust of related companies (15/59, 25%) were the two main reasons for the negative attitudes. Of the 200 posts that mentioned attitudes toward replacing human doctors with AI, 47.5% (n=95) and 32.5% (n=65) expressed that AI would completely or partially replace human doctors, respectively, while 20.0% (n=40) expressed that AI would not replace human doctors.
Conclusions
Our findings indicate that people are most concerned about AI technology and applications. Generally, the majority of people held positive attitudes and believed that AI doctors would completely or partially replace human ones. Compared with previous studies on medical doctors, the general public holds a more positive attitude toward medical AI. Lack of trust in AI and the absence of humanistic care are essential reasons why some people still hold a negative attitude toward medical AI. We suggest that practitioners may need to pay more attention to promoting the credibility of technology companies and meeting patients’ emotional needs, instead of focusing merely on technical issues.
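As a rough illustration (not the study's actual analysis pipeline), the attitude breakdown reported in the Results can be reproduced by tallying the coded labels of the 956 attitude-expressing posts:

```python
# Minimal sketch: tally hand-coded attitude labels for the 956
# attitude-expressing posts and reproduce the reported percentages.
from collections import Counter

# Label counts as reported in the abstract
labels = ["positive"] * 568 + ["neutral"] * 329 + ["negative"] * 59
counts = Counter(labels)
total = sum(counts.values())  # 956

for attitude in ("positive", "neutral", "negative"):
    share = 100 * counts[attitude] / total
    print(f"{attitude}: n={counts[attitude]} ({share:.1f}%)")
# positive: n=568 (59.4%)
# neutral: n=329 (34.4%)
# negative: n=59 (6.2%)
```

In practice the labels would come from the manual content-analysis coding of each post; the list above simply encodes the final counts.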


ALQALAM ◽  
2016 ◽  
Vol 33 (1) ◽  
pp. 46
Author(s):  
Aswadi Lubis

The purpose of this article is to describe the agency problems that arise in the application of mudharabah financing in Islamic banking. The author draws on theories of financing, information asymmetry, and agency problems within financing. The article concludes that mudharabah financing gives rise to asymmetric-information problems, both adverse selection and moral hazard. The high risk that prospective managers (mudharib) pose through moral hazard, together with the lack of readiness of human resources in Islamic banking, is among the factors shaping the composition of funds distributed to the public in the form of financing. Measures that can be taken to optimize this financing include supervision (monitoring) by the owners of capital and restrictions that the customers place on their own actions (bonding).


AI Magazine ◽  
2019 ◽  
Vol 40 (4) ◽  
pp. 3-5
Author(s):  
Ching-Hua Chen ◽  
James Hendler ◽  
Sabbir Rashid ◽  
Oshani Seneviratne ◽  
Daby Sow ◽  
...  

This editorial introduces the special topic articles on reflections on successful research in artificial intelligence. Consisting of a combination of interviews and full-length articles, the special topic articles examine the meaning of success and metrics of success from a variety of perspectives. Our editorial team is especially excited about this topic, because we are in an era when several of the aspirations of early artificial intelligence researchers and futurists seem to be within reach of the general public. This has spurred us to reflect on, and re-examine, our social and scientific motivations for promoting the use of artificial intelligence in governments, enterprises, and in our lives.


Author(s):  
Eddy Suwito

As technology continues to develop, the public finds it increasingly easy to socialize through it. Free and uncontrolled opinion on social media, however, can cause harm to others, and the law has changed in response to this phenomenon, notably through the Information and Electronic Transactions Law (ITE Law). Nevertheless, the ITE Law cannot protect the entire general public, because an article in the ITE Law contradicts an article in the 1945 Constitution of the Republic of Indonesia.


2020 ◽  
Author(s):  
Mayda Alrige ◽  
Hind Bitar ◽  
Maram Meccawi ◽  
Balakrishnan Mullachery

BACKGROUND
Designing a health promotion campaign is never an easy task, especially during a pandemic of a highly infectious disease such as COVID-19. In Saudi Arabia, many attempts have been made to raise public awareness about COVID-19 infection levels and the precautionary health measures that must be taken. Although this is useful, most of the health information delivered through the national dashboard and the awareness campaigns is generic and does not necessarily have the impact we would like to see on individuals’ behavior.
OBJECTIVE
The objective of this study is to build and validate a customized awareness campaign to promote precautionary health behavior during the COVID-19 pandemic. The customization is achieved using a geospatial artificial intelligence technique called the Space-Time Cube (STC).
METHODS
This research was conducted in two sequential phases. In the first phase, an initial library of thirty-two messages promoting precautionary behavior during the COVID-19 pandemic was developed and validated. This phase was guided by the Fogg Behavior Model (FBM) for behavior change. In phase 2, we applied STC as a geospatial artificial intelligence technique to create a local map of one city representing three different profiles for the city’s districts. The model was built using COVID-19 clinical data.
RESULTS
Thirty-two messages were developed based on resources from the World Health Organization and the Ministry of Health in Saudi Arabia. The content validity of the messages was established using the Content Validity Index (CVI); the messages were found to have acceptable content validity (I-CVI=.87). The geospatial intelligence technique showed three profiles for the districts of Jeddah: high infection, moderate infection, and low infection. Combining the results from the first and second phases, a customized awareness campaign was created. This awareness campaign would be used to educate the public about the precautionary health behaviors that should be taken, and hence help reduce the number of positive cases in the city of Jeddah.
CONCLUSIONS
This research delineates the two main phases of developing a health awareness messaging campaign. The messaging campaign, grounded in the FBM, was customized by using geospatial artificial intelligence to create a local map with three district profiles: high infection, moderate infection, and low infection. Residents of each district will be targeted by the campaign based on the level of infection in their district as well as other shared characteristics. Customizing health messages is a prominent theme in health communication research, and this research provides a legitimate approach to customizing health messages during the COVID-19 pandemic.
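The Content Validity Index used to validate the messages has a standard definition: the item-level CVI (I-CVI) is the proportion of expert raters who judge an item relevant, conventionally a rating of 3 or 4 on a 4-point relevance scale. A minimal sketch, with invented ratings (the study's actual expert panel and scores are not reported here):

```python
# Item-level Content Validity Index (I-CVI): the proportion of expert
# raters who score an item as relevant (3 or 4 on a 4-point relevance
# scale). The ratings below are invented for illustration only.

def i_cvi(ratings):
    """Return (# raters giving 3 or 4) / (total raters)."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# Hypothetical: 8 experts rate one campaign message
ratings = [4, 3, 4, 4, 3, 4, 2, 4]
print(round(i_cvi(ratings), 2))  # 7 of 8 raters judged it relevant -> 0.88
```

Averaging the I-CVI over all thirty-two messages would give a scale-level index comparable to the reported value of .87.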

