Artificial intelligence in thoracic surgery: past, present, perspective and limits

2020
Vol 29 (157)
pp. 200010
Author(s):  
Harry Etienne
Sarah Hamdi
Marielle Le Roux
Juliette Camuset
Theresa Khalife-Hocquemiller
...  

Artificial intelligence (AI) technology is becoming prevalent in many areas of everyday life. The healthcare industry is affected by it, even though its widespread use is still limited. Thoracic surgeons should be aware of the new opportunities that could affect their daily practice, whether through direct use of AI technology or indirect use via related medical fields (radiology, pathology and respiratory medicine). The objective of this article is to review applications of AI related to thoracic surgery and to discuss the limits of its application in the European Union. Key aspects of AI are presented along the clinical pathway, beginning with diagnostics for lung cancer, followed by prognosis-aided programmes for decision making and robotic surgery, and finishing with the limitations of AI and the legal and ethical issues relevant to medicine. It is important for physicians and surgeons to have a basic knowledge of AI in order to understand how it impacts healthcare and to consider ways in which they may interact with this technology. Indeed, synergy across related medical specialties and synergistic relationships between machines and surgeons will likely accelerate the capabilities of AI in augmenting surgical care.

BMJ Leader
2018
Vol 2 (2)
pp. 59-63
Author(s):  
Erwin Loh

Artificial intelligence (AI) has the potential to significantly transform the role of the doctor and revolutionise the practice of medicine. This qualitative review paper summarises the past 12 months of health research in AI, across different medical specialties, and discusses the current strengths as well as the challenges relating to this emerging technology. Doctors, especially those in leadership roles, need to be aware of how quickly AI is advancing in health, so that they are ready to lead the change required for its adoption by the health system. Key points: ‘AI has now been shown to be as effective as humans in the diagnosis of various medical conditions, and in some cases, more effective.’ When it comes to predicting suicide attempts, recent research suggests AI is better than human beings. ‘AI’s current strength is in its ability to learn from a large dataset and recognise patterns that can be used to diagnose conditions, putting it in direct competition with medical specialties that are involved in diagnostic tests that involve pattern recognition, such as pathology and radiology’. The current challenges in AI include legal liability and attribution of negligence when errors occur, and the ethical issues relating to patient choices. ‘AI systems can also be developed with, or learn, biases, that will need to be identified and mitigated’. As doctors and health leaders, we need to start preparing the profession to be supported by, partnered with, and, in future, potentially replaced by, AI and advanced robotics systems.


2020
pp. 97-105
Author(s):  
Aleksandra Kusztykiewicz-Fedurek

Political security is very often considered through the prism of individual states. In the scholarly literature, in-depth analyses of this kind of security are rarely encountered in the context of the international entities into which these countries are integrated. The purpose of this article is to draw attention to key aspects of political security in the European Union (EU) Member States. The EU, as a supranational organisation gathering Member States, first ensures the stability of the EU as a whole and, secondly, ensures that Member States respect common values and principles. Additionally, the EU institutions focus on ensuring the proper functioning of the Eurozone (officially called the “euro area” in EU regulations). Actions that may have a negative impact on the level of the EU’s political security include the boycotting of new institutions conducive to the peaceful coexistence and development of states. These threats seem to have a significant impact on the situation in the EU in the face of the proposed Eurozone reforms (not accepted by Member States that do not belong to the Eurogroup) concerning, inter alia, the appointment of a Minister of Economy and Finance and the creation of a new institution, the European Monetary Fund.


This book explores the intertwining domains of artificial intelligence (AI) and ethics—two highly divergent fields which at first seem to have nothing to do with one another. AI is a collection of computational methods for studying human knowledge, learning, and behavior, including by building agents able to know, learn, and behave. Ethics is a body of human knowledge—far from completely understood—that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. Despite these differences, however, the rapid development of AI technology today has led to a growing number of ethical issues in a multitude of fields, ranging from disciplines as far-reaching as international human rights law to issues as intimate as personal identity and sexuality. In fact, the number and variety of topics in this volume illustrate the breadth, diversity of content, and at times exasperating vagueness of the boundaries of “AI Ethics” as a domain of inquiry. Within this discourse, the book points to the capacity of sociotechnical systems that utilize data-driven algorithms to classify, to make decisions, and to control complex systems. Given the wide-reaching and often intimate impact these AI systems have on daily human lives, this volume attempts to address the increasingly complicated relations between humanity and artificial intelligence. It considers not only how humanity must conduct itself toward AI but also how AI must behave toward humanity.


Author(s):  
Bryant Walker Smith

This chapter highlights key ethical issues in the use of artificial intelligence in transport by using automated driving as an example. These issues include the tension between technological solutions and policy solutions; the consequences of safety expectations; the complex choice between human authority and computer authority; and power dynamics among individuals, governments, and companies. In 2017 and 2018, the U.S. Congress considered automated driving legislation that was generally supported by many of the larger automated-driving developers. However, this automated-driving legislation failed to pass because of a lack of trust in technologies and institutions. Trustworthiness is much more of an ethical question. Automated vehicles will not be driven by individuals or even by computers; they will be driven by companies acting through their human and machine agents. An essential issue for this field—and for artificial intelligence generally—is how the companies that develop and deploy these technologies should earn people’s trust.


2021
Vol 22 (1)
Author(s):  
Kathleen Murphy
Erica Di Ruggiero
Ross Upshur
Donald J. Willison
Neha Malhotra
...  

Abstract Background Artificial intelligence (AI) has been described as the “fourth industrial revolution” with transformative and global implications, including in healthcare, public health, and global health. AI approaches hold promise for improving health systems worldwide, as well as individual and population health outcomes. While AI may have potential for advancing health equity within and between countries, we must consider the ethical implications of its deployment in order to mitigate its potential harms, particularly for the most vulnerable. This scoping review addresses the following question: What ethical issues have been identified in relation to AI in the field of health, including from a global health perspective? Methods Eight electronic databases were searched for peer-reviewed and grey literature published before April 2018 using the concepts of health, ethics, and AI, and their related terms. Records were independently screened by two reviewers and were included if they reported on AI in relation to health and ethics and were written in English. Data were charted on a piloted data charting form, and a descriptive and thematic analysis was performed. Results Of the 12,722 articles reviewed, 103 met the predetermined inclusion criteria. The literature was primarily focused on the ethics of AI in health care, particularly on carer robots, diagnostics, and precision medicine, but was largely silent on the ethics of AI in public and population health. The literature highlighted a number of common ethical concerns related to privacy, trust, accountability and responsibility, and bias. Largely missing from the literature was the ethics of AI in global health, particularly in the context of low- and middle-income countries (LMICs). Conclusions The ethical issues surrounding AI in the field of health are both vast and complex. While AI holds the potential to improve health and health systems, our analysis suggests that its introduction should be approached with cautious optimism. The dearth of literature on the ethics of AI within LMICs, as well as in public health, also points to a critical need for further research into the ethical implications of AI within both global and public health, to ensure that its development and implementation are ethical for everyone, everywhere.


2021
pp. 002203452110138
Author(s):  
C.M. Mörch
S. Atsu
W. Cai
X. Li
S.A. Madathil
...  

Dentistry increasingly integrates artificial intelligence (AI) to help improve the current state of clinical dental practice. However, this revolutionary technological field raises various complex ethical challenges. The objective of this systematic scoping review is to document the current uses of AI in dentistry and the ethical concerns or challenges they raise. Three health care databases (MEDLINE [PubMed], SciVerse Scopus, and Cochrane Library) and 2 computer science databases (ArXiv, IEEE Xplore) were searched. After identifying 1,553 records, the documents were filtered, and a full-text screening was performed. In total, 178 studies were retained and analyzed by 8 researchers specialized in dentistry, AI, and ethics. The team used Covidence for data extraction and Dedoose for the identification of ethics-related information. PRISMA guidelines were followed. Among the included studies, 130 (73.0%) were published after 2016, and 93 (52.2%) were published in journals specialized in computer sciences. The technologies used were neural learning techniques in 75 studies (42.1%), traditional learning techniques in 76 (42.7%), or a combination of several technologies in 20 (11.2%). Overall, 7 countries contributed 109 (61.2%) of the studies. A total of 53 different applications of AI in dentistry were identified, involving most dental specialties. The use of initial data sets for internal validation was reported in 152 (85.4%) studies. Forty-five ethical issues related to the use of AI in dentistry were reported in 22 (12.4%) studies, organised around 6 principles: prudence (10 times), equity (8), privacy (8), responsibility (6), democratic participation (4), and solidarity (4). The proportion of studies mentioning AI-related ethical issues has remained similar over recent years, indicating that interest in this topic within dentistry is not increasing. This study confirms the growing presence of AI in dentistry and highlights a current lack of information on the ethical challenges surrounding its use. In addition, the scarcity of studies sharing their code could hinder future replication. The authors formulate recommendations to contribute to a more responsible use of AI technologies in dentistry.


Author(s):  
Jessica Morley
Anat Elhalal
Francesca Garcia
Libby Kinsey
Jakob Mökander
...  

Abstract As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provide insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the ‘what’ and the ‘how’ of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective, as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can potentially be overcome by providing a theoretical grounding for a concept that has been termed ‘Ethics as a Service’.


Author(s):  
AJung Moon
Shalaleh Rismani
H. F. Machiel Van der Loos

Abstract Purpose of Review To summarize the set of roboethics issues that uniquely arise due to the corporeality and physical interaction modalities afforded by robots, irrespective of the degree of artificial intelligence present in the system. Recent Findings One recent trend in the discussion of the ethics of emerging technologies has been the treatment of roboethics issues as those of “embodied AI,” a subset of AI ethics. In contrast to AI, however, robots leverage humans’ natural tendency to be influenced by their physical environment. Recent work in human-robot interaction highlights the impact that a robot’s presence, capacity to touch, and ability to move in our physical environment have on people, helping to articulate the ethical issues particular to the design of interactive robotic systems. Summary The corporeality of interactive robots poses a unique set of ethical challenges. These issues should be considered in the design of such systems, irrespective of, and in addition to, the ethics of the artificial intelligence implemented in them.


2021
pp. 147775092110366
Author(s):  
Harika Avula
Mariana Dittborn
Joe Brierley

The field of Paediatric Bioethics, or ethical issues applied to children's healthcare, is relatively new but has recently gained an increased professional and public profile. Clinical ethics support for health professionals and patients who face ethical challenges in clinical practice varies between and within institutions. Literature regarding the services available to paediatricians is sparse in specialist tertiary centres and almost absent in general paediatrics. We performed a mixed-methods study using online surveys and focus groups to explore the experiences of ethical and legal dilemmas and the support structures available to (i) paediatric intensive care teams, as a proxy for specialist children's centres, and (ii) paediatricians working in the general setting in the UK. Our main findings illustrate the broad range of ethical and legal challenges experienced by both groups in daily practice. Ethics training and ethics support varied in structure, process, funding and availability; for example, 70% of paediatric intensive care consultants reported access to formal ethics advice versus 20% of general paediatricians. Overall, our findings suggest a need for ethics support and training in both settings. Where ethics support existed, the experience reported was broadly good, though improvements were suggested. Many clinicians were concerned about their relationship with children and families experiencing a challenging ethical situation, partly as a result of recent high-profile legal cases in the media. Further research in this area would help collect a broader range of views to inform the development of clinical ethics support that better serves paediatric teams, children and their families.


2020
Vol 10 (18)
pp. 6553
Author(s):  
Sabrina Azzi
Stéphane Gagnon
Alex Ramirez
Gregory Richards

Healthcare has been considered one of the most promising application areas for artificial intelligence and analytics (AIA) ever since these technologies emerged. AI combined with analytics technologies is changing medical practice and healthcare in impressive ways, using efficient algorithms from various branches of information technology (IT). Indeed, numerous works are published every year by universities and innovation centers worldwide, but there are concerns about how effectively this progress translates into practice. There are growing examples of AIA being implemented in healthcare with promising results. This review paper summarizes the past 5 years of healthcare applications of AIA, across different techniques and medical specialties, and discusses the current issues and challenges related to this revolutionary technology. A total of 24,782 articles were identified. The aim of this paper is to provide the research community with the necessary background to push this field even further and to propose a framework that will help integrate diverse AIA technologies around patient needs in various healthcare contexts, especially for chronic care patients, who present the most complex comorbidities and care needs.

