Patient perceptions on data sharing and applying artificial intelligence to healthcare data: a cross sectional survey (Preprint)

Author(s):  
Ravi Aggarwal ◽  
Soma Farag ◽  
Guy Martin ◽  
Hutan Ashrafian ◽  
Ara Darzi
2020

BACKGROUND Considerable research is being conducted into how artificial intelligence (AI) can be effectively applied to health care. However, successful implementation of AI requires large amounts of health data for training and testing algorithms. As such, there is a need to understand patients' perspectives on the use of their health data in AI research. OBJECTIVE We surveyed a large sample of patients to identify current awareness of health data research and to obtain their views on sharing health data for AI research purposes and on the use of AI technology on health care data. METHODS A cross-sectional survey of patients was conducted at a large multisite teaching hospital in the United Kingdom. Data were collected on patient and public views about sharing health data for research and the use of AI on health data. RESULTS A total of 408 participants completed the survey. Respondents had generally low levels of prior knowledge about AI. Most were comfortable with sharing health data with the National Health Service (NHS) (318/408, 77.9%) or universities (268/408, 65.7%), but far fewer with commercial organizations such as technology companies (108/408, 26.4%). The majority endorsed AI research on health care data (357/408, 87.4%) and health care imaging (353/408, 86.4%) in a university setting, provided that concerns about privacy, reidentification of anonymized health care data, and consent processes were addressed. CONCLUSIONS There were significant variations in patient perceptions, levels of support, and understanding of health data research and AI. Greater public engagement and debate are necessary to ensure the acceptability of AI research and its successful integration into clinical practice in the future.


Author(s):  
Hernan Chinsk ◽  
Ricardo Lerch ◽  
Damián Tournour ◽  
Luis Chinski ◽  
Diego Caruso

During rhinoplasty consultations, surgeons typically create a computer simulation of the expected result. An artificial intelligence model (AIM) can learn a surgeon's style and criteria and generate the simulation automatically. The objective of this study is to determine if an AIM is capable of imitating a surgeon's criteria to generate simulated images of an aesthetic rhinoplasty surgery. This is a cross-sectional survey study of resident and specialist doctors in otolaryngology conducted in November 2019 during a rhinoplasty conference. Sequential images of rhinoplasty simulations created by a surgeon and by an AIM were shown at random. Participants used a seven-point Likert scale to evaluate their level of agreement with the simulation images they were shown, with 1 indicating total disagreement and 7 total agreement. Ninety-seven of 122 doctors agreed to participate in the survey. The median level of agreement between the participant and the surgeon was 6 (interquartile range or IQR 5–7); between the participant and the AIM it was 5 (IQR 4–6), p-value < 0.0001. The evaluators were in total or partial agreement with the results of the AIM's simulation 68.4% of the time (95% confidence interval or CI 64.9–71.7). They were in total or partial agreement with the surgeon's simulation 77.3% of the time (95% CI 74.2–80.3). An AIM can emulate a surgeon's aesthetic criteria to generate a computer-simulated image of rhinoplasty. This can allow patients to have a realistic approximation of the possible results of a rhinoplasty ahead of an in-person consultation. The level of evidence of the study is 4.


2021 ◽  
Vol 15 (12) ◽  
pp. 3555-3558
Author(s):  
Isma Sajjad ◽  
Yawar Ali Abidi ◽  
Nabeel Baig ◽  
Humera Akhlak ◽  
Maham Muneeb Lone ◽  
...  

Background: Artificial intelligence has been embraced enthusiastically across many fields, and the discipline of dental science is no exception. Aims: To evaluate the awareness and perception of artificial intelligence among dentists working in Karachi. Methods: This online cross-sectional survey was conducted in Karachi during July 2021. The survey included house officers, postgraduate trainees, general dental practitioners, and specialist consultant dental surgeons of either gender. A questionnaire was adopted from an existing similar study, with modifications made according to our settings. The survey link was created using Google Docs and disseminated through various open social media groups of dental practitioners in Karachi. Results: A total of 118 complete responses were received, with almost equal numbers from males (n=56, 47.5%) and females (n=62, 52.5%). The mean age of study participants was 30.3±5.9 years. Eighty-three participants (70.3%) were aware of artificial intelligence-driven tools in dentistry. 75.9%, 77.1%, 10.8%, 28.9%, 39.8%, 2.4%, and 10.8% reported the use of digital intraoral radiographs, CAD-CAM, CBCT, digital dental records, clinical decision support systems, and none of these tools in their practice, respectively. All participants were of the opinion that AI applications should be part of dental training. Conclusion: The present survey showed that the majority were aware of AI applications in dentistry and had a positive perception of its future role, but the utilization rate of AI tools in their practice was low. Attending AI training is therefore recommended to bring about and adapt to AI-related changes in local settings. Keywords: Artificial intelligence, dentistry, online survey, perception, awareness, Karachi


Author(s):  
Caroline A. Nelson ◽  
Swapna Pachauri ◽  
Rosie Balk ◽  
Jeffrey Miller ◽  
Rushan Theunis ◽  
...  

BMJ Open ◽  
2020 ◽  
Vol 10 (5) ◽  
pp. e038887
Author(s):  
Maximilian Siebert ◽  
Jeanne Fabiola Gaba ◽  
Laura Caquelin ◽  
Henri Gouraud ◽  
Alain Dupuy ◽  
...  

Objective: To explore the implementation of the International Committee of Medical Journal Editors (ICMJE) data-sharing policy, which came into force on 1 July 2018, by ICMJE-member journals and by ICMJE-affiliated journals declaring that they follow the ICMJE recommendations. Design: A cross-sectional survey of data-sharing policies in 2018 on journal websites and in data-sharing statements in randomised controlled trials (RCTs). Setting: ICMJE website; PubMed/Medline. Eligibility criteria: ICMJE-member journals and 489 ICMJE-affiliated journals that published an RCT in 2018, had an accessible online website, and were not considered predatory journals according to Beall's list. One hundred RCTs for member journals and 100 RCTs for affiliated journals with a data-sharing policy, submitted after 1 July 2018. Main outcome measures: The primary outcome for the policies was the existence of a data-sharing policy (explicit data-sharing policy, no data-sharing policy, or a policy merely referring to the ICMJE recommendations) as reported on the journal website, especially in the instructions for authors. For RCTs, the primary outcome was the intention to share individual participant data set out in the data-sharing statement. Results: Eight (out of 14; 57%) member journals had an explicit data-sharing policy on their website (three were more stringent than the ICMJE requirements, one was less demanding, and four were compliant), five (35%) additional journals stated that they followed the ICMJE requirements, and one (8%) had no policy online. In RCTs published in these journals, there were data-sharing statements in 98 out of 100, with expressed intention to share individual patient data reaching 77 out of 100 (77%; 95% CI 67% to 85%). One hundred and forty-five (out of 489) ICMJE-affiliated journals (30%; 26% to 34%) had an explicit data-sharing policy on their website (11 were more stringent than the ICMJE requirements, 85 were less demanding, and 49 were compliant) and 276 (56%; 52% to 61%) merely referred to the ICMJE requirements. In RCTs published in affiliated journals with an explicit data-sharing policy, data-sharing statements were rare (25%), and expressed intentions to share data were found in 22% (15% to 32%). Conclusion: The implementation of the ICMJE data-sharing requirements in online journal policies was suboptimal for ICMJE-member journals and poor for ICMJE-affiliated journals. The implementation of the policy in the data-sharing statements of published RCTs was good in member journals but of concern for affiliated journals. We suggest continuous audits of medical journal data-sharing policies in the future. Registration: The protocol was registered before the start of the research on the Open Science Framework (https://osf.io/n6whd/).
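The binomial confidence intervals quoted above (eg, 77 out of 100; 95% CI 67% to 85%) can be reproduced approximately from the raw counts. A minimal sketch, assuming a Wilson score interval; the authors may have used an exact (Clopper-Pearson) interval, so the bounds can differ by a percentage point or so:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 77 of 100 member-journal RCTs expressed intent to share data
lo, hi = wilson_ci(77, 100)
print(f"{lo:.1%} to {hi:.1%}")  # roughly 68% to 84%, close to the reported 67% to 85%
```

The same function applied to the affiliated-journal counts (eg, intent to share in roughly 22 of 100 sampled RCTs) gives bounds near the reported 15% to 32%.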


2010 ◽  
Vol 47 (10) ◽  
pp. 1237-1244 ◽  
Author(s):  
Helene R. Voogdt-Pruis ◽  
Anton P.M. Gorgels ◽  
Jan W. van Ree ◽  
Elisabeth F.M. van Hoef ◽  
George H.M.I. Beusmans

2021 ◽  
Author(s):  
Claire Woodcock ◽  
Brent Mittelstadt ◽  
Dan Busbridge ◽  
Grant Blank

BACKGROUND Artificial intelligence (AI)–driven symptom checkers are available to millions of users globally and are advocated as a tool to deliver health care more efficiently. To achieve the promoted benefits of a symptom checker, laypeople must trust and subsequently follow its instructions. In AI, explanations are seen as a tool to communicate the rationale behind black-box decisions to encourage trust and adoption. However, the effectiveness of the types of explanations used in AI-driven symptom checkers has not yet been studied. Explanations can take many forms, including why-explanations and how-explanations. Social theories suggest that why-explanations are better at communicating knowledge and cultivating trust among laypeople. OBJECTIVE The aim of this study is to ascertain whether explanations provided by a symptom checker affect explanatory trust among laypeople and whether this trust is impacted by their existing knowledge of disease. METHODS A cross-sectional survey of 750 healthy participants was conducted. The participants were shown a video of a chatbot simulation that resulted in the diagnosis of either a migraine or temporal arteritis, chosen for their differing levels of epidemiological prevalence. These diagnoses were accompanied by one of four types of explanations. Each explanation type was selected either because of its current use in symptom checkers or because it was informed by theories of contrastive explanation. Exploratory factor analysis of participants' responses, followed by comparison-of-means tests, was used to evaluate group differences in trust. RESULTS Depending on the treatment group, two or three variables were generated, reflecting the prior knowledge and subsequent mental model that the participants held. When varying explanation type by disease, the effect was nonsignificant for migraine (P=.65) and marginally significant for temporal arteritis (P=.09). Varying disease by explanation type resulted in statistical significance for input influence (P=.001), social proof (P=.049), and no explanation (P=.006), while counterfactual explanation was marginally significant (P=.053). The results suggest that trust in explanations is significantly affected by the disease being explained. When laypeople have existing knowledge of a disease, explanations have little impact on trust. Where the need for information is greater, different explanation types engender significantly different levels of trust. These results indicate that, to be successful, symptom checkers need to tailor explanations to each user's specific question and discount the diseases that they may also be aware of. CONCLUSIONS System builders developing explanations for symptom-checking apps should consider the recipient's knowledge of a disease and tailor explanations to each user's specific need. Effort should be placed on generating explanations that are personalized to each user of a symptom checker, to fully discount the diseases that they may be aware of and to close their information gap.


BMJ Open ◽  
2017 ◽  
Vol 7 (4) ◽  
pp. e014603 ◽  
Author(s):  
Ruth Costello ◽  
Rikesh Patel ◽  
Jennifer Humphreys ◽  
John McBeth ◽  
William G Dixon
