Hybrid Artificial Intelligence and IoT in Health care for Cardiovascular Patient in Decision-Making System

Author(s):  
M. Safa ◽  
A. Pandian ◽  
T. Kartick ◽  
K. Chakrapani ◽  
G. Geetha ◽  
...  
Author(s):  
Abraham Rudnick

Artificial intelligence (AI) and its correlates, such as machine and deep learning, are changing health care, where complex matters such as comorbidity call for dynamic decision-making. Yet some people argue for extreme caution, referring to AI and its correlates as a black box. This brief article uses philosophy and science to address the black box argument about knowledge as a myth, concluding that this argument is misleading because it ignores a fundamental tenet of science: no empirical knowledge is certain, and scientific facts, as well as methods, often change. Instead, control of the technology of AI and its correlates has to be addressed to mitigate unexpected negative consequences.


1996 ◽  
Vol 1 (3) ◽  
pp. 175-178 ◽  
Author(s):  
Colin Gordon

Expert systems to support medical decision-making have so far achieved few successes. Current technical developments, however, may overcome some of the limitations. Although there are several theoretical currents in medical artificial intelligence, there are signs of them converging. Meanwhile, decision support systems, which set themselves more modest goals than replicating or improving on clinicians' expertise, have come into routine use in places where an adequate electronic patient record exists. They may also be finding a wider role, assisting in the implementation of clinical practice guidelines. There is, however, still much uncertainty about the kinds of decision support that doctors and other health care professionals are likely to want or accept.
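As a rough illustration of the modest, guideline-driven decision support described above, the sketch below checks a patient record against a single rule and emits a reminder. The record fields, threshold, and rule are hypothetical examples chosen only to show the structure; they are not drawn from the article or any specific guideline.

# Minimal sketch of rule-based decision support driven by an electronic
# patient record. Fields, threshold, and rule are hypothetical illustrations.
from dataclasses import dataclass
from typing import List


@dataclass
class PatientRecord:
    age: int
    systolic_bp: int            # mmHg
    on_antihypertensive: bool


def guideline_reminders(record: PatientRecord) -> List[str]:
    """Return reminders when the record appears to deviate from a guideline."""
    reminders = []
    # Hypothetical rule: flag untreated elevated blood pressure.
    if record.systolic_bp >= 140 and not record.on_antihypertensive:
        reminders.append(
            "Systolic BP >= 140 mmHg with no antihypertensive recorded: "
            "consider review against the hypertension guideline."
        )
    return reminders


if __name__ == "__main__":
    for msg in guideline_reminders(PatientRecord(age=62, systolic_bp=152,
                                                 on_antihypertensive=False)):
        print(msg)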


Author(s):  
Bazzi Mehdi ◽  
Chamlal Hasna ◽  
El Kharroubi Ahmed ◽  
Ouaderhman Tayeb

Promoting entrepreneurship among young people in Morocco has been a challenge for some years because of economic and social problems, especially after the events of the Arab Spring. Several programs have been set up by the government for young entrepreneurs. Faced with the large number of credit applications submitted by these young entrepreneurs, banks are obliged to resort to artificial intelligence techniques. To that end, this article proposes a decision-making system enabling the bank to automate its credit-granting process. The tool first allows the bank to select promising projects through a scoring approach adapted to this segment of young entrepreneurs. In a second step, it sets the maximum credit amount to be allocated to the selected project. Finally, based on the knowledge of the bank's experts, it proposes a breakdown of the amount granted into several products adapted to the needs of the entrepreneur.
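A minimal sketch of the three-step pipeline described above (project scoring, maximum credit amount, product breakdown). The feature names, weights, thresholds, and product shares are hypothetical and only illustrate the structure of such a tool, not the authors' actual model.

# Sketch of a three-step credit-granting pipeline: score the project,
# cap the loan amount, then split it into products.
# All weights, thresholds, and shares below are hypothetical illustrations.
from typing import Dict


def project_score(features: Dict[str, float]) -> float:
    """Step 1: linear scoring of a project (hypothetical weights, score in [0, 1])."""
    weights = {"business_plan_quality": 0.40,
               "sector_outlook": 0.35,
               "own_contribution_ratio": 0.25}
    return sum(weights[k] * features.get(k, 0.0) for k in weights)


def max_credit(score: float, requested: float) -> float:
    """Step 2: cap the amount granted as a score-dependent fraction of the request."""
    if score < 0.5:                       # hypothetical rejection threshold
        return 0.0
    return round(requested * min(1.0, 0.5 + score), 2)


def product_breakdown(amount: float) -> Dict[str, float]:
    """Step 3: expert-style split of the granted amount (hypothetical shares)."""
    return {"equipment_loan": round(0.6 * amount, 2),
            "working_capital": round(0.3 * amount, 2),
            "honor_loan": round(0.1 * amount, 2)}


if __name__ == "__main__":
    s = project_score({"business_plan_quality": 0.8,
                       "sector_outlook": 0.7,
                       "own_contribution_ratio": 0.6})
    granted = max_credit(s, requested=200_000)
    print(s, granted, product_breakdown(granted))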


2019 ◽  
Vol 162 (1) ◽  
pp. 38-39
Author(s):  
Alexandra M. Arambula ◽  
Andrés M. Bur

Artificial intelligence (AI) is quickly expanding within the sphere of health care, offering the potential to enhance the efficiency of care delivery, diminish costs, and reduce diagnostic and therapeutic errors. As the field of otolaryngology also explores use of AI technology in patient care, a number of ethical questions warrant attention prior to widespread implementation of AI. This commentary poses many of these ethical questions for consideration by the otolaryngologist specifically, using the 4 pillars of medical ethics—autonomy, beneficence, nonmaleficence, and justice—as a framework and advocating both for the assistive role of AI in health care and for the shared decision-making, empathic approach to patient care.


The COVID-19 pandemic has placed massive strain on many sectors around the globe, especially on the health care systems of many countries. Artificial intelligence has found its way into health care, helping the search for a cure or vaccine by screening for medicines that could be promising, and helping to contain the virus by predicting highly affected areas and limiting its spread. Many AI-based use cases have successfully monitored the spread and supported lockdowns of areas that AI algorithms predicted to be at high risk. Broadly speaking, AI involves 'the ability of machines to emulate human thinking, reasoning and decision-making'.


Author(s):  
V. G. Nikitaev ◽  
A. N. Pronichev ◽  
O. B. Tamrazova ◽  
V. Yu. Sergeev ◽  
E. A. Druzhinina ◽  
...  

2020 ◽  
Author(s):  
Thomas Ploug ◽  
Anna Sundby ◽  
Thomas B Moeslund ◽  
Søren Holm

BACKGROUND
Certain types of artificial intelligence (AI), that is, deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Different stakeholders have emphasized the importance of the transparency/explainability of AI decision making, yet transparency/explainability may come at the cost of performance. There is a need for a public policy regulating the use of AI in health care that balances the societal interests in high performance as well as in transparency/explainability, and such a policy should consider the wider public's interests in these features of AI.

OBJECTIVE
This study elicited the public's preferences for the performance and explainability of AI decision making in health care and determined whether these preferences depend on respondent characteristics, including trust in health and technology and fears and hopes regarding AI.

METHODS
We conducted a choice-based conjoint survey of public preferences for attributes of AI decision making in health care in a representative sample of the adult Danish population. Initial focus group interviews yielded 6 attributes playing a role in the respondents' views on the use of AI decision support in health care: (1) type of AI decision, (2) level of explanation, (3) performance/accuracy, (4) responsibility for the final decision, (5) possibility of discrimination, and (6) severity of the disease to which the AI is applied. In total, 100 unique choice sets were developed using fractional factorial design. In a 12-task survey, respondents were asked about their preference for AI system use in hospitals in relation to 3 different scenarios.

RESULTS
Of the 1678 potential respondents, 1027 (61.2%) participated. The respondents consider the physician having the final responsibility for treatment decisions the most important attribute, with 46.8% of the total weight of attributes, followed by explainability of the decision (27.3%) and whether the system has been tested for discrimination (14.8%). Other factors, such as gender, age, level of education, whether respondents live rurally or in towns, respondents' trust in health and technology, and respondents' fears and hopes regarding AI, do not play a significant role in the majority of cases.

CONCLUSIONS
The 3 factors that are most important to the public are, in descending order of importance, (1) that physicians are ultimately responsible for diagnostics and treatment planning, (2) that the AI decision support is explainable, and (3) that the AI system has been tested for discrimination. Public policy on AI system use in health care should give priority to AI systems with these features and ensure that patients are provided with information.
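As a rough illustration of how attribute weights like those reported above can be derived in a choice-based conjoint analysis, the sketch below computes each attribute's relative importance as the range of its part-worth utilities divided by the sum of all ranges. This is a generic conjoint convention, not the authors' exact estimation procedure, and the utility numbers are invented for illustration only.

# Sketch: relative attribute importance from conjoint part-worth utilities.
# Importance of an attribute = range of its part-worths / sum of all ranges.
# The utility values below are hypothetical, not the study's estimates.
from typing import Dict, List


def attribute_importance(part_worths: Dict[str, List[float]]) -> Dict[str, float]:
    """Return each attribute's share (in %) of the total utility range."""
    ranges = {attr: max(u) - min(u) for attr, u in part_worths.items()}
    total = sum(ranges.values())
    return {attr: round(100 * r / total, 1) for attr, r in ranges.items()}


if __name__ == "__main__":
    example = {
        "final_responsibility": [-1.2, 1.3],    # AI vs physician responsible
        "explainability":       [-0.7, 0.75],
        "discrimination_test":  [-0.4, 0.4],
        "performance":          [-0.3, 0.3],
    }
    print(attribute_importance(example))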

