Ethics and Artificial Intelligence in Public Health Social Work

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Kathleen Murphy ◽  
Erica Di Ruggiero ◽  
Ross Upshur ◽  
Donald J. Willison ◽  
Neha Malhotra ◽  
...  

Abstract
Background: Artificial intelligence (AI) has been described as the “fourth industrial revolution,” with transformative and global implications, including in healthcare, public health, and global health. AI approaches hold promise for improving health systems worldwide, as well as individual and population health outcomes. While AI may have potential for advancing health equity within and between countries, we must consider the ethical implications of its deployment in order to mitigate its potential harms, particularly for the most vulnerable. This scoping review addresses the following question: What ethical issues have been identified in relation to AI in the field of health, including from a global health perspective?
Methods: Eight electronic databases were searched for peer-reviewed and grey literature published before April 2018 using the concepts of health, ethics, and AI, and their related terms. Records were independently screened by two reviewers and were included if they reported on AI in relation to health and ethics and were written in English. Data were charted on a piloted data-charting form, and a descriptive and thematic analysis was performed.
Results: Of the 12,722 articles reviewed, 103 met the predetermined inclusion criteria. The literature focused primarily on the ethics of AI in health care, particularly on carer robots, diagnostics, and precision medicine, but was largely silent on the ethics of AI in public and population health. It highlighted a number of common ethical concerns related to privacy, trust, accountability and responsibility, and bias. Largely missing from the literature was the ethics of AI in global health, particularly in the context of low- and middle-income countries (LMICs).
Conclusions: The ethical issues surrounding AI in the field of health are both vast and complex. While AI holds the potential to improve health and health systems, our analysis suggests that its introduction should be approached with cautious optimism. The dearth of literature on the ethics of AI within LMICs, as well as in public health, points to a critical need for further research into the ethical implications of AI within both global and public health, to ensure that its development and implementation is ethical for everyone, everywhere.
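
As a hedged illustration only (the review's actual search strings and screening software are not given in the abstract), the sketch below shows how the three concept blocks and the dual independent screening described in the Methods might look in practice; all term lists and record identifiers are hypothetical.

# Minimal sketch of a concept-block boolean search and dual-reviewer
# screening check (illustrative only; not the review's actual strategy).

# Hypothetical term lists for each concept block.
HEALTH_TERMS = ["health", "healthcare", "public health", "global health"]
ETHICS_TERMS = ["ethics", "ethical", "bioethics"]
AI_TERMS = ["artificial intelligence", "machine learning", "AI"]


def build_query(*term_blocks):
    """Join the terms in each block with OR, then AND the blocks together."""
    blocks = ["(" + " OR ".join(f'"{t}"' for t in block) + ")" for block in term_blocks]
    return " AND ".join(blocks)


def screening_conflicts(reviewer_a, reviewer_b):
    """Return record IDs where two independent reviewers disagree."""
    return [rid for rid in reviewer_a if reviewer_a[rid] != reviewer_b.get(rid)]


if __name__ == "__main__":
    print(build_query(HEALTH_TERMS, ETHICS_TERMS, AI_TERMS))
    a = {"rec1": "include", "rec2": "exclude"}
    b = {"rec1": "include", "rec2": "include"}
    print(screening_conflicts(a, b))  # -> ['rec2'] flagged for consensus discussion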


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Ania Syrowatka ◽  
Masha Kuznetsova ◽  
Ava Alsubai ◽  
Adam L. Beckman ◽  
Paul A. Bain ◽  
...  

Abstract
Artificial intelligence (AI) represents a valuable tool that could be widely used to inform clinical and public health decision-making to effectively manage the impacts of a pandemic. The objective of this scoping review was to identify the key use cases for AI in pandemic preparedness and response from the peer-reviewed, preprint, and grey literature. The data synthesis had two parts: an in-depth review of studies that leveraged machine learning (ML) techniques and a limited review of studies that applied traditional modeling approaches. ML applications from the in-depth review were categorized into use cases related to public health and clinical practice and narratively synthesized. One hundred eighty-three articles met the inclusion criteria for the in-depth review. Six key use cases were identified: forecasting infectious disease dynamics and the effects of interventions; surveillance and outbreak detection; real-time monitoring of adherence to public health recommendations; real-time detection of influenza-like illness; triage and timely diagnosis of infections; and prognosis of illness and response to treatment. The data sources and types of ML that proved useful varied by use case. The search also identified 1167 articles that reported on traditional modeling approaches, which highlighted additional areas where ML could be leveraged to improve the accuracy of estimations or projections. Important ML-based solutions have been developed in response to pandemics, particularly COVID-19, but few were optimized for practical application early in the pandemic. These findings can support policymakers, clinicians, and other stakeholders in prioritizing research and development to support the operationalization of AI for future pandemics.
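
To make the first use case concrete, the following sketch shows one generic way to forecast case counts from lagged observations; it is illustrative only, uses synthetic data, and does not reproduce any model from the reviewed studies.

# Minimal sketch of the "forecasting infectious disease dynamics" use case
# (illustrative only): a simple lag-feature regression on synthetic daily
# case counts, with a one-week hold-out for a rough accuracy check.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic daily case counts with an upward trend plus noise.
cases = np.cumsum(rng.poisson(5, size=60)) + rng.normal(0, 3, size=60)

LAGS = 7  # use the previous 7 days as features

# Build a supervised dataset: X[i] = cases[i:i+LAGS], y[i] = cases[i+LAGS]
X = np.array([cases[i : i + LAGS] for i in range(len(cases) - LAGS)])
y = cases[LAGS:]

model = LinearRegression().fit(X[:-7], y[:-7])      # hold out the last week
print("held-out MAE:", np.abs(model.predict(X[-7:]) - y[-7:]).mean())
print("next-day forecast:", model.predict(cases[-LAGS:].reshape(1, -1))[0])

In practice, the reviewed studies drew on far richer data sources (mobility, syndromic surveillance, genomic data) and more flexible model classes; the point here is only the structure of turning a time series into a supervised forecasting problem.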


Author(s):  
Anil Babu Payedimarri ◽  
Diego Concina ◽  
Luigi Portinale ◽  
Massimo Canonico ◽  
Deborah Seys ◽  
...  

Artificial intelligence (AI) and machine learning (ML) have expanded their utilization across different fields of medicine. During the SARS-CoV-2 outbreak, AI and ML were also applied to the evaluation and/or implementation of public health interventions aimed at flattening the epidemiological curve. This systematic review aims to evaluate the effectiveness of AI and ML when applied to public health interventions to contain the spread of SARS-CoV-2. Our findings showed that quarantine appears to be the most effective strategy for containing COVID-19. Nationwide lockdown also showed a positive impact, whereas social distancing should be considered effective only in combination with other interventions, including the closure of schools and commercial activities and the limitation of public transportation. Our findings also showed that all interventions should be initiated early in the pandemic and continued for a sustained period. Despite the study limitations, we conclude that AI and ML could help policymakers define strategies for containing the COVID-19 pandemic.
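
The timing finding can be illustrated with a toy compartmental model: the sketch below compares an early versus a late transmission-reducing intervention in a simple SIR simulation. It is not one of the reviewed AI/ML models, and all parameter values are assumptions chosen purely for illustration.

# Toy SIR simulation (illustrative only): the same transmission reduction
# applied early versus late yields very different epidemic peaks.

def sir_peak_infected(start_day, reduction=0.6, beta=0.3, gamma=0.1,
                      n=1_000_000, i0=10, days=300):
    """Return the peak infected count when transmission is reduced from start_day on."""
    s, i, r = n - i0, i0, 0
    peak = i
    for day in range(days):
        b = beta * (1 - reduction) if day >= start_day else beta
        new_inf = b * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

print("early intervention (day 20) peak:", round(sir_peak_infected(20)))
print("late intervention (day 80) peak: ", round(sir_peak_infected(80)))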


Author(s):  
Bruce Mellado ◽  
Jianhong Wu ◽  
Jude Dzevela Kong ◽  
Nicola Luigi Bragazzi ◽  
Ali Asgary ◽  
...  

COVID-19 is imposing massive health, social, and economic costs. While many developed countries have started vaccinating, most African nations are waiting for vaccine stocks to be allocated and are using clinical public health (CPH) strategies to control the pandemic. The emergence of variants of concern (VOC), unequal access to the vaccine supply, and locally specific logistical and vaccine delivery parameters add complexity to national CPH strategies and amplify the urgent need for effective CPH policies. Big data and artificial intelligence/machine learning techniques, together with collaborations, can be instrumental in accurate, timely, and locally nuanced analysis of multiple data sources to inform CPH decision-making, vaccination strategies, and their staged roll-out. The Africa-Canada Artificial Intelligence and Data Innovation Consortium (ACADIC) has been established to develop and employ machine learning techniques to design CPH strategies in Africa, which requires ongoing collaboration, testing, and development to maximize the equity and effectiveness of COVID-19-related CPH interventions.


2021 ◽  
Vol 5 (9) ◽  
pp. RV1-RV5
Author(s):  
Sahrish Tariq ◽  
Nidhi Gupta ◽  
Preety Gupta ◽  
Aditi Sharma

“The educational needs must drive the development of the appropriate technology”. These technologies should not be viewed as toys for enthusiasts, and the human element must never be dismissed. Scientific research will continue to offer exciting technologies and effective treatments. For the profession and the patients it serves to benefit fully from modern science, new knowledge and technologies must be incorporated into the mainstream of dental education. The technologies of modern science have astonished us and intrigued our imagination. Correct diagnosis is the key to a successful clinical practice. In this regard, adequately trained neural networks can be a boon to diagnosticians, especially in conditions having a multifactorial etiology.
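
As a hedged sketch of the kind of diagnostic aid described above, the example below trains a small feed-forward neural network on synthetic, hypothetical risk-factor data for a condition with a multifactorial etiology; it is illustrative only and not a validated clinical tool.

# Minimal sketch (synthetic data, hypothetical features; not a clinical model):
# a small feed-forward network classifying a condition from several risk factors.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical standardized features, e.g. plaque index, sugar intake,
# fluoride exposure, age.
X = rng.normal(size=(500, 4))
# Synthetic label driven by a weighted combination of the factors plus noise.
y = (X @ np.array([1.5, 1.0, -1.2, 0.5]) + rng.normal(0, 1, 500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("held-out accuracy:", round(net.score(X_test, y_test), 2))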


2021 ◽  
Author(s):  
Wai-Kit Ming ◽  
Taoran Liu ◽  
Winghei Tsang ◽  
Yifei Xie ◽  
Kang Tian ◽  
...  

BACKGROUND: The COVID-19 pandemic poses a great threat to public health systems globally and has strained medical and physician resources. Artificial intelligence (AI) has potential uses in virus detection and in relieving the public health pressure caused by the pandemic. Given the shortage of medical resources caused by the pandemic, it is worth exploring whether people's preferences for AI doctors versus traditional clinicians have changed.
OBJECTIVE: We aim to quantify and compare people's preferences for AI medicine and traditional clinicians before and after the COVID-19 pandemic, to check whether people's preferences are affected by the pressure of the pandemic.
METHODS: The propensity score matching (PSM) method was applied to match two groups of respondents recruited in 2017 and 2020 with similar demographic characteristics. A total of 2048 respondents (1520 from 2017 and 528 from 2020) completed the questionnaire and were included in the analysis. A multinomial logit model (MNL) and a latent class model (LCM) were used to explore people's preferences for different diagnosis methods.
RESULTS: Among these respondents, 84.7% in 2017 and 91.3% in 2020 were confident that AI diagnosis would outperform human clinician diagnosis in the future. The matched 2017 and 2020 respondents both attached the most importance to the attribute 'accuracy', followed by 'diagnosis expense', and they preferred the combined diagnosis of AI and human clinicians (2017: odds ratio [OR] 1.645, 95% CI 1.535, 1.763, p < 0.001; 2020: OR 1.513, 95% CI 1.413, 1.621, p < 0.001; reference level: clinician). The LCM identified three classes with different attribute priorities. In Class 1, the preference for combined diagnosis and accuracy remained constant between 2017 and 2020, and higher accuracy (e.g., 2017 OR for 100%: 1.357, 95% CI 1.164, 1.581) was preferred; respondents in both 2017 and 2020 preferred 0 minutes of outpatient waiting time and 0 RMB diagnosis expense. In Class 2, the matched 2017 data were also very similar to Class 2 in 2020: AI combined with human clinicians (2017: OR 1.204, 95% CI 1.039, 1.394, p = 0.011; 2020: OR 2.009, 95% CI 1.826, 2.211, p < 0.001; reference level: clinician) and 20 minutes of outpatient waiting time (2017: OR 1.349, 95% CI 1.065, 1.708, p < 0.001; 2020: OR 1.488, 95% CI 1.287, 1.721, p < 0.001; reference level: 0 minutes) were consistently preferred. In Class 3, respondents in 2017 and 2020 had different preferences for diagnosis method: respondents in Class 3 of 2017 preferred clinicians, whereas respondents in Class 3 of 2020 preferred AI diagnosis. As in the other classes in 2017 and 2020, the odds ratios for accuracy continued to rise as accuracy increased. When latent classes were segmented by sex, all male and female respondent classes from 2017 and 2020 ranked accuracy as the most important attribute.
CONCLUSIONS: Individual preferences for clinical diagnosis by AI versus human clinicians were very similar and mostly unaffected by the burden placed on the public health system by the pandemic. Diagnosis accuracy and diagnosis expense were the most important attributes in the choice of diagnosis method. These findings can provide guidance for policymaking relevant to the development of AI-based healthcare.
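
For readers unfamiliar with the matching step in the Methods, the sketch below illustrates propensity score matching of two survey waves on demographic covariates; the variable names and data are hypothetical, and this is not the authors' code.

# Minimal sketch of propensity score matching on demographics (hypothetical
# data): scores are estimated with logistic regression, and each 2020
# respondent is greedily matched to the nearest-scoring 2017 respondent.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Hypothetical standardized demographics: age, income band, education level.
demo_2017 = rng.normal(size=(1520, 3))
demo_2020 = rng.normal(loc=0.2, size=(528, 3))

X = np.vstack([demo_2017, demo_2020])
year = np.array([0] * len(demo_2017) + [1] * len(demo_2020))  # 1 = 2020 cohort

# Propensity score: estimated probability of belonging to the 2020 cohort.
ps = LogisticRegression(max_iter=1000).fit(X, year).predict_proba(X)[:, 1]
ps_2017, ps_2020 = ps[year == 0], ps[year == 1]

# Greedy 1:1 nearest-neighbour matching (with replacement) on the score.
matches = [int(np.argmin(np.abs(ps_2017 - p))) for p in ps_2020]
print("first five matched 2017 indices:", matches[:5])

The choice models themselves (MNL and LCM) would then be fit to the matched respondents' stated choices; specialized choice-modeling software is typically used for that step.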


Author(s):  
Robert SPARROW ◽  
Joshua HATHERLEY

LANGUAGE NOTE | Document text in English; abstract also in Chinese.
What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It is also highly likely to affect the organisational and business practices of healthcare systems in ways that are perhaps under-appreciated. Enthusiasts for AI have held out the prospect that it will free physicians up to spend more time attending to what really matters to them and their patients. We will argue that this claim depends upon implausible assumptions about the institutional and economic imperatives operating in contemporary healthcare settings. We will also highlight important concerns about privacy, surveillance, and bias in big data, as well as the risks of over-trust in machines, the challenges of transparency, the deskilling of healthcare practitioners, the way AI reframes healthcare, and the implications of AI for the distribution of power in healthcare institutions. We will suggest that two questions, in particular, are deserving of further attention from philosophers and bioethicists: What does care look like when one is dealing with data as much as people? And what weight should we give to the advice of machines in our own deliberations about medical decisions?


PEDIATRICS ◽  
1957 ◽  
Vol 20 (2) ◽  
pp. 358-361
Author(s):  
Helen M. Wallace ◽  
Amelia Igel ◽  
Margaret A. Losty

The need for a foster home placement program for handicapped children in an urban area was demonstrated by sending a questionnaire to hospitals and convalescent homes, and by careful review of certain children whose inpatient care was being paid for by the official Crippled Children Program. The outstanding finding was that a significant number of handicapped children were being retained in institutions for social, and not medical, reasons. Agreement was reached among social agencies that a co-ordinated community program for foster home placement of handicapped children was necessary, but a definitive method was not developed, nor were adequate funds secured to finance costs.

