Using artificial intelligence in healthcare: Allocating liability and risks

2021 ◽  
Vol 2 (4) ◽  
pp. 51-60
Author(s):  
E. P. Tretyakova

In this article, the author examines issues arising from the application of artificial intelligence (AI) in medical practice. These include the complex question of a doctor's personal liability when diagnostic and treatment decisions are based on an algorithm's proposal (a clinical decision-support system), as well as possible forms of liability for the developer of the algorithm (AI). The analysis reviews the existing system for holding medical professionals accountable and assesses how responsibility might be distributed as AI is widely introduced into doctors' work and, potentially, into standard medical care. For AI registered as a medical device, the author considers whether stricter requirements should be established for collecting information on the side effects of such devices. Using legal analysis and the comparative legal method, the author examines current global trends in allocating liability for harm caused by errors or inaccuracies in medical decision-making and, on that basis, outlines possible distributions of roles between the healthcare professional and AI in the near future.

2021 ◽  
Vol 14 (02) ◽  
pp. 761-769
Author(s):  
Nataliia Maika ◽  
Natalia Kalyniuk ◽  
Valentyna Sloma ◽  
Liudmyla Sheremeta ◽  
Leonid Kravchuk ◽  
...  

The article explores the feasibility of training future medical professionals on the basis of interdisciplinary integration. Drug reimbursement, a process through which the health care system affects the availability of medicines and medical services to the public, is analyzed through the lens of integrating medical knowledge into legal knowledge. The peculiarities of drug reimbursement in Ukraine are investigated using the comparative legal method.


2021 ◽  
pp. 174702182110520
Author(s):  
Sumitava Mukherjee ◽  
Divya Reji

Outcomes of clinical trials need to be communicated effectively to make decisions that save lives. We investigated whether framing can bias these decisions and whether risk preferences shift depending on the number of patients. Hypothetical information about two medicines used in clinical trials, one with a sure and one with a risky outcome, was presented in either a gain frame (people would be saved) or a loss frame (people would die). The number of patients who signed up for the clinical trials was manipulated in both frames in all the experiments. Using an unnamed disease, lay participants (experiment 1) and would-be medical professionals (experiment 2) were asked to choose which medicine they would have administered. For COVID-19, lay participants were asked which medicine medical professionals (experiment 3), artificially intelligent software (experiment 4), and they themselves (experiment 5) should favor for administration. Broadly consistent with prospect theory, people were more risk-seeking in the loss frames than in the gain frames. Risk-aversion in gain frames was sensitive to the number of lives, with risk-neutrality at low magnitudes and risk-aversion at high magnitudes. In the loss frame, participants were mostly risk-seeking. This pattern was consistent across laypersons and medical professionals and further extended to preferences for the choices that medical professionals and artificial intelligence programs should make in the context of COVID-19. These results underscore how medical decisions can be affected by the number of lives at stake and reveal inconsistent risk preferences for clinical trials during a real pandemic.
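To make the framing manipulation concrete, the sketch below shows the gain/loss structure such experiments typically use. The patient count and probabilities are hypothetical, patterned on the classic framing paradigm rather than taken from these studies.

```python
# Hypothetical gain/loss framing of the same clinical-trial outcome.
# Numbers are illustrative, not figures from the experiments described above.
n_patients = 600  # assumed number of patients who signed up for the trial

# Gain frame: outcomes described as lives saved.
gain_sure  = "Medicine A: 200 people will be saved."
gain_risky = "Medicine B: 1/3 chance that all 600 are saved, 2/3 chance that no one is saved."

# Loss frame: the same outcomes described as deaths.
loss_sure  = "Medicine A: 400 people will die."
loss_risky = "Medicine B: 1/3 chance that no one dies, 2/3 chance that all 600 die."

# Both medicines have identical expected outcomes in either frame,
# yet prospect theory predicts risk-aversion under the gain frame
# and risk-seeking under the loss frame.
expected_saved_sure  = 200
expected_saved_risky = (1 / 3) * n_patients + (2 / 3) * 0
assert expected_saved_sure == expected_saved_risky == 200
```

Because the two options are numerically equivalent, any shift in preference between frames reflects the description of the outcome rather than its expected value.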


2021 ◽  
Vol 7 (2) ◽  
pp. 50-52
Author(s):  
I.A. Shaderkin ◽  

Introduction. Recently, a large number of intelligent systems have appeared that are used to support medical decision-making, commonly referred to as «artificial intelligence in medicine». Material and methods. The author, a practicing doctor who also works on medical decision-making, has encountered in the course of his work a number of important issues that he considers necessary to share with the professional community. Results. In some cases, the software demonstrates its declared characteristics (sensitivity, specificity) only in the «reliable hands» of the developers and on the data on which the software was built. When its performance is tested in real clinical situations, the claimed characteristics are often not achieved, so the clinical community that is expected to use such AI-based solutions does not always form a favorable opinion of them. The author considers various types of errors that can be fatal in medical clinical decision-making: distortion of primary medical knowledge, lack of or inaccurate knowledge about the subject area, and social distortions. Conclusions. When developing AI-based solutions, it is important for both developers and users to keep the above points in mind.
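To make the claimed characteristics concrete, here is a minimal sketch of how sensitivity and specificity are computed from confusion-matrix counts. The numbers are hypothetical and only illustrate the kind of drop the author describes when a model moves from the developers' data to external clinical settings.

```python
# Minimal sketch (not from the article): sensitivity and specificity from
# confusion-matrix counts, with hypothetical figures showing how performance
# claimed on the developers' data can degrade on external clinical data.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) for binary classification counts."""
    sensitivity = tp / (tp + fn)  # share of true cases the model detects
    specificity = tn / (tn + fp)  # share of healthy cases the model correctly clears
    return sensitivity, specificity

# Hypothetical figures reported by the developers on their own data ...
dev_sens, dev_spec = sensitivity_specificity(tp=95, fn=5, tn=90, fp=10)
# ... versus an external clinical dataset, where claimed characteristics are often not reached.
ext_sens, ext_spec = sensitivity_specificity(tp=78, fn=22, tn=70, fp=30)

print(f"developer data: sensitivity={dev_sens:.2f}, specificity={dev_spec:.2f}")
print(f"external data:  sensitivity={ext_sens:.2f}, specificity={ext_spec:.2f}")
```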


Author(s):  
Gulfia G. Kamalova ◽  

The subject of the work is the status and prospects of criminal law liability for various violations of the special legal regime of legally protected secrets. When considering criminal liability for violating the confidentiality of information, experts traditionally focus on particular legally protected secrets and a detailed analysis of the relevant corpus delicti, which does not allow the problem of protecting restricted-access information by criminal law means to be covered as a whole. The author of this article performs a comprehensive legal analysis of criminal liability for violating the legal regime of secrets protected by law and other restricted information, based on the set of unlawful acts of non-compliance with the requirements of that regime. Methodologically, the study rests on a set of modern general scientific and private legal methods, above all the comparative legal method, which allows the norms of the criminal legislation of Russia, the CIS countries, European and other states to be compared. The author draws attention to the existing differences and notes that some provisions of modern Russian criminal law do not meet the demands of the time or the challenges and threats of the global information society in the conditions of building a digital economy. Based on a generalized analysis of the regulatory legal framework of the Russian Federation and foreign states and of existing doctrinal views, the author concludes that the corpus delicti aimed at violations of the legal regime of legally protected secrets and other restricted-access information are unsystematic. The author additionally notes the need to separate economic espionage and the intentional disclosure of trade secrets into distinct corpus delicti. A further shortcoming of Russian criminal law is the absence of corpus delicti aimed at protecting the legal regimes of professional and official secrets and personal data. Taking into account changes in attitudes toward the institution of adoption and global trends in the protection of children's rights, the author argues for the possibility of decriminalizing the disclosure of adoption secrets in modern conditions. Addressing these and other problems identified in Russian criminal law with regard to the protection of legally established secrets and other restricted information is aimed at improving criminal legislation.


2021 ◽  
Author(s):  
Romain Cadario ◽  
Chiara Longoni ◽  
Carey K Morewedge

Medical artificial intelligence is cost-effective, scalable, and often outperforms human providers. One important barrier to its adoption is the perception that algorithms are a “black box”—people do not subjectively understand how algorithms make medical decisions, and we find this impairs their utilization. We argue a second barrier is that people also overestimate their objective understanding of medical decisions made by human healthcare providers. In five pre-registered experiments with convenience and nationally representative samples (N = 2,699), we find that people exhibit such an illusory understanding of human medical decision making (Study 1). This leads people to claim greater understanding of decisions made by human than algorithmic healthcare providers (Studies 2A-B), which makes people more reluctant to utilize algorithmic providers (Studies 3A-B). Fortunately, we find that asking people to explain the mechanisms underlying medical decision making reduces this illusory gap in subjective understanding (Study 1). Moreover, we test brief interventions that, by increasing subjective understanding of algorithmic decision processes, increase willingness to utilize algorithmic healthcare providers without undermining utilization of human providers (Studies 3A-B). Corroborating these results, a study testing Google ads for an algorithmic skin cancer detection app shows that interventions that increase subjective understanding of algorithmic decision processes lead to a higher ad click-through rate (Study 4). Our findings show how reluctance to utilize medical algorithms is driven both by the difficulty of understanding algorithms and by an illusory understanding of human decision making.


2008 ◽  
Vol 5 (1) ◽  
pp. 81-88
Author(s):  
Philip Berry

When life-threatening illness robs a patient of the ability to express their desires, medical personnel must work through the issues of management and prognosis with relatives. Management decisions are guided by medical judgement and the relatives’ account of the patient’s wishes, but difficulties occur when distance grows between these two factors. In these circumstances the counselling process may turn into a doctor-led justification of the medical decision. This article presents two strands of dialogue, in which a doctor, counselling for and against continuation of supportive treatment in two patients with liver failure, demonstrates selectivity and inconsistency in constructing an argument. The specific issues of loss of consciousness (with obscuration of personal identity), statistical ‘futility’ and removal of autonomy are explored and used to bolster diametrically opposed medical decisions. By examining the doctor’s ability to interpret these issues according to circumstance, the author demonstrates how it is possible to shade medical facts depending on the desired outcome.


2020 ◽  
Vol 9 ◽  
pp. 99-104
Author(s):  
E. V. Markovicheva ◽  

In the 21st century, the concept of restorative justice has become widespread in criminal proceedings. The introduction of special compromise procedures into the criminal process allows for the restoration of the rights of the victim and reduces the level of repression in the criminal justice system. The traditional system of punishment is considered ineffective, not conducive to the purpose of compensating for harm caused by the crime. Restorative justice enables the accused to compensate for the harm caused by the crime and is oriented not towards their social isolation, but towards further positive socialization. The introduction of the ideas of restorative justice into the Russian criminal process requires the introduction of special conciliation procedures. The purpose of the article is to reveal promising directions for introducing special conciliation procedures into the Russian criminal process. The use of the formal legal method provided an analysis of the norms of criminal procedure legislation and the practice of its application. Comparative legal analysis revealed common features in the development of models of restorative justice in modern states. Conclusions. The introduction of conciliation procedures into the Russian criminal process is in line with the concept of its humanization and reduction of the level of criminal repression. The consolidation of the mediator's procedural status and the mediation procedure in the criminal procedure legislation will make it possible to put into practice the elements of restorative justice.


2020 ◽  
Author(s):  
Weihua Yang ◽  
Bo Zheng ◽  
Maonian Wu ◽  
Shaojun Zhu ◽  
Hongxia Zhou ◽  
...  

BACKGROUND Artificial intelligence (AI) is widely applied in the medical field, especially in ophthalmology. As ophthalmic AI has developed, some problems worthy of attention have gradually emerged, among which issues of how ophthalmic AI is recognized and perceived are particularly prominent: there is currently a lack of research into people's familiarity with and attitudes toward ophthalmic AI. OBJECTIVE This survey aims to assess medical workers’ and other professional technicians’ familiarity with AI, as well as their attitudes toward and concerns about ophthalmic AI. METHODS An electronic questionnaire was designed with the Questionnaire Star app, an online survey software and questionnaire tool, and was sent to relevant professional workers through WeChat, China’s version of Facebook or WhatsApp. Participation was voluntary and anonymous. The questionnaire consisted of four parts: the participant’s background, basic understanding of AI, attitude toward AI, and concerns about AI. A total of 562 participants were counted, with 562 valid questionnaires returned. The questionnaire results were collated in an Excel 2003 spreadsheet. RESULTS A total of 562 professional workers completed the questionnaire, of whom 291 were medical workers and 271 were other professional technicians. About 37.9% of the participants understood AI, and 31.67% understood ophthalmic AI. The percentages of people who understood ophthalmic AI among medical workers and other professional technicians were about 42.61% and 15.6%, respectively. About 66.01% of the participants thought that ophthalmic AI would partly replace doctors, and about 59.07% still had a relatively high acceptance level of ophthalmic AI. Among those with experience of ophthalmic AI applications (30.6%), about 84.25% of medical professionals and 73.33% of other professional technicians, respectively, fully accepted ophthalmic AI. The participants expressed concerns that ophthalmic AI might bring about issues such as an unclear definition of medical responsibilities, difficulty in ensuring service quality, and medical ethics risks. Among the medical workers and other professional technicians who understood ophthalmic AI, 98.39% and 95.24%, respectively, said that there was a need for more study of medical ethics issues in the ophthalmic AI field. CONCLUSIONS Analysis of the questionnaire results shows that medical workers have a higher level of understanding of ophthalmic AI than other professional technicians, making it necessary to popularize ophthalmic AI education among the latter. Most of the participants had no experience with ophthalmic AI but generally showed a relatively high acceptance of it, believing that doctors would partly be replaced by it and that research into the field's medical ethics issues needs to be strengthened.


Author(s):  
Jessica Flanigan

Though rights of self-medication needn’t change medical decision-making for most patients, rights of self-medication have the potential to transform other aspects of healthcare as it is currently practiced. For example, if public officials respected patients’ authority to make medical decisions without authorization from a regulator or a physician, then they should also respect patients’ authority to choose to use unauthorized medical devices and medical providers. And many of the same reasons in favor of rights of self-medication and against prohibitive regulations are also reasons to support patients’ rights to access information about pharmaceuticals, including pharmaceutical advertisements. Rights of self-medication may also call for revisions to existing standards of product liability and prompt officials to rethink justifications for the public provision of healthcare.


Author(s):  
Jessica Berg ◽  
Emma Cave

This chapter discusses patient autonomy, capacity, and consent involving children. It first provides a general overview of children’s rights with respect to making medical decisions in both the United States and Europe. The chapter then discusses the best interests standard (which is usually applied in cases of minors) and how to consider capacity in the context of children. In the discussions of European approaches, the chapter covers relevant international and regional human rights law. The jurisdiction of England and Wales is used as an example. The chapter also provides a general overview of US state approaches and federal law. The chapter concludes by noting some new areas of medical decision-making which challenge the traditional models.

