Building Resilience against COVID-19 Pandemic Using Artificial Intelligence, Machine Learning, and IoT: A Survey of Recent Progress

IoT ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 506-528
Author(s):  
S. M. Abu Adnan Abir ◽  
Shama Naz Islam ◽  
Adnan Anwar ◽  
Abdun Naser Mahmood ◽  
Aman Maung Than Oo

Coronavirus disease 2019 (COVID-19) has significantly impacted the entire world and stalled regular human activities in such an unprecedented way that it will leave an unforgettable footprint on the history of mankind. Different countries have adopted numerous measures to build resilience against this life-threatening disease. However, the highly contagious nature of this pandemic has challenged traditional healthcare and treatment practices. Thus, artificial intelligence (AI) and machine learning (ML) open up new mechanisms for effective healthcare during this pandemic. AI and ML can be useful for medicine development, designing efficient diagnosis strategies, and producing predictions of the disease spread. These applications are highly dependent on real-time monitoring of patients and effective coordination of information, where the Internet of Things (IoT) plays a key role. IoT can also help with applications such as automated drug delivery, responding to patient queries, and tracking the causes of disease spread. This paper presents a comprehensive analysis of the potential of AI, ML, and IoT technologies in defending against the COVID-19 pandemic. The existing and potential applications of AI, ML, and IoT are outlined, along with a detailed analysis of the enabling tools and techniques. A critical discussion of the risks and limitations of the aforementioned technologies is also included.

Cancers ◽  
2021 ◽  
Vol 13 (19) ◽  
pp. 4740
Author(s):  
Fabiano Bini ◽  
Andrada Pica ◽  
Laura Azzimonti ◽  
Alessandro Giusti ◽  
Lorenzo Ruinelli ◽  
...  

Artificial intelligence (AI) uses mathematical algorithms to perform tasks that require human cognitive abilities. AI-based methodologies, e.g., machine learning and deep learning, as well as the recently developed research field of radiomics, have noticeable potential to transform medical diagnostics. AI-based techniques applied to medical imaging make it possible to detect biological abnormalities, to diagnose neoplasms, or to predict the response to treatment. Nonetheless, the diagnostic accuracy of these methods is still a matter of debate. In this article, we first illustrate the key concepts and workflow characteristics of machine learning, deep learning, and radiomics. We outline considerations regarding data input requirements, differences among these methodologies, and their limitations. Subsequently, a concise overview is presented regarding the application of AI methods to the evaluation of thyroid images. We develop a critical discussion of the limits and open challenges that should be addressed before the translation of AI techniques to broad clinical use. Clarifying the pitfalls of AI-based techniques is crucial to ensure the optimal application for each patient.


2021 ◽  
Vol 12 (1) ◽  
pp. 101-112
Author(s):  
Kishore Sugali ◽  
Chris Sprunger ◽  
Venkata N Inukollu

The history of artificial intelligence and machine learning dates back to the 1950s. In recent years, applications that implement AI and ML technology have grown in popularity. As with traditional development, software testing is a critical component of an efficient AI/ML application. However, the development methodology used in AI/ML varies significantly from that of traditional development. Owing to these variations, numerous software testing challenges arise. This paper aims to identify and explain some of the biggest challenges that software testers face in dealing with AI/ML applications. This study has key implications for future research: each of the challenges outlined in this paper is ideal for further investigation and has great potential to shed light on more productive software testing strategies and methodologies for AI/ML applications.


2021 ◽  
Vol 3 (4) ◽  
pp. 900-921
Author(s):  
Mi-Young Kim ◽  
Shahin Atakishiyev ◽  
Housam Khalifa Bashier Babiker ◽  
Nawshad Farruque ◽  
Randy Goebel ◽  
...  

The rapid growth of research in explainable artificial intelligence (XAI) follows two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, has created high expectations of industrial, commercial, and social value. Second, there is emerging and growing concern for creating ethical and trusted AI systems, including compliance with regulatory principles to ensure transparency and trust. These two threads have created a kind of “perfect storm” of research activity, all motivated to create and deliver any set of tools and techniques to address the XAI demand. As some surveys of current XAI work suggest, a principled framework that respects the literature on explainability in the history of science, and which provides a basis for the development of transparent XAI, has yet to appear. We identify four foundational components: the requirements for (1) explicit explanation knowledge representation, (2) delivery of alternative explanations, (3) adjusting explanations based on knowledge of the explainee, and (4) exploiting the advantage of interactive explanation. With those four components in mind, we intend to provide a strategic inventory of XAI requirements, demonstrate their connection to a basic history of XAI ideas, and then synthesize those ideas into a simple framework that can guide the design of AI systems that require XAI.
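Component (2), the delivery of alternative explanations, can be illustrated with a toy sketch: the same prediction is explained once by feature attribution and once by a counterfactual search. The data, model, and feature indices below are invented for illustration and are not drawn from any of the surveyed systems.

```python
# Toy sketch: two alternative explanation styles for one prediction.
# All data and features here are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # feature 2 is irrelevant
model = LogisticRegression().fit(X, y)

x = np.array([[0.8, -0.2, 1.5]])  # the instance to explain
base = model.predict(x)[0]

# Explanation style 1: feature attribution
# (coefficient * value is a valid attribution for a linear model).
attribution = model.coef_[0] * x[0]

# Explanation style 2: a crude counterfactual — for each feature, the
# first single-feature change found that flips the predicted class.
counterfactuals = []
for j in range(x.shape[1]):
    for delta in np.linspace(-3, 3, 121):
        x_cf = x.copy()
        x_cf[0, j] += delta
        if model.predict(x_cf)[0] != base:
            counterfactuals.append((j, round(delta, 2)))
            break

print("attribution:", attribution)
print("per-feature flips:", counterfactuals)
```

The two outputs answer different explainee questions ("which inputs drove this decision?" versus "what would have changed it?"), which is exactly why delivering alternatives matters.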


2022 ◽  
pp. 71-85
Author(s):  
Satvik Tripathi ◽  
Thomas Heinrich Musiolik

Artificial intelligence has a huge array of current and potential applications in healthcare and medicine. Ethical issues arising from algorithmic biases are among the greatest challenges to the generalizability of AI models today. The authors address safety and regulatory barriers that impede data sharing in medicine, as well as potential changes to existing techniques and frameworks that might allow ethical data sharing for machine learning. With these developments in view, they also present different algorithmic models that are being used to develop machine learning-based medical systems that will potentially evolve to be free of sample, annotator, and temporal bias. These AI-based medical imaging models could then be implemented in healthcare facilities and institutions around the world, even in the remotest areas, making diagnosis and patient care both cheaper and more freely accessible.


Author(s):  
Edmund T. Rolls

The subject of this book is how the brain works. To understand this, it is essential to know what is computed by different brain systems and how those computations are performed. The aim of this book is to elucidate what is computed in different brain systems and to describe current computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better, in health and in disease. Potential applications of this understanding include the treatment of brain disease, as well as artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. This book is pioneering in taking this approach to brain function: considering what is computed by many of our brain systems, and how it is computed. The book will be of interest to all scientists interested in brain function and how the brain works, whether they come from neuroscience, from the medical sciences including neurology and psychiatry, from computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.


2021 ◽  
Vol 40 (4) ◽  
pp. 298-301
Author(s):  
Tariq Alkhalifah ◽  
Ali Almomin ◽  
Ali Naamani

Artificial intelligence (AI), specifically machine learning (ML), has emerged as a powerful tool to address many of the challenges we face as we try to illuminate the earth and make proper predictions of its content. From fault detection to salt boundary mapping to image resolution enhancement, the quest to teach our computing devices how to perform these tasks accurately, as well as to quantify that accuracy, has become a feasible and sought-after objective. Recent advances in ML algorithms, and the availability of modules to apply such algorithms, have enabled geoscientists to focus on potential applications of these tools. As a result, we held the virtual workshop Artificially Intelligent Earth Exploration Workshop: Teaching the Machine How to Characterize the Subsurface, 23–26 November 2020.


2021 ◽  
Author(s):  
Yew Kee Wong

In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only large, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle these datasets and extract value and knowledge from them. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. Deep learning, which applies advanced neural network techniques to big data, extends this capability further. This paper aims to analyse some of the different machine learning and deep learning algorithms and methods, as well as the opportunities provided by AI applications in various decision-making domains.
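The idea of "learning from data with minimal human intervention" can be made concrete with a minimal sketch: instead of a human writing a flagging rule, a classifier infers the decision boundary from labelled examples. The transaction scenario, numbers, and labels below are entirely hypothetical.

```python
# Minimal sketch of learning a decision rule from data rather than
# hand-coding it. All records here are hypothetical illustrations.
from sklearn.tree import DecisionTreeClassifier

# Historical records: (amount, hour_of_day) -> anomalous (1) or not (0).
X = [[12, 9], [15, 14], [900, 3], [14, 11],
     [850, 2], [16, 10], [950, 4], [13, 15]]
y = [0, 0, 1, 0, 1, 0, 1, 0]

# The model infers the boundary itself; no human-written threshold.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([[870, 3], [15, 12]]))  # prints [1 0]
```

The pattern (large amounts at unusual hours) is recovered from the examples alone, which is the "minimal human intervention" the abstract refers to.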


BMC Medicine ◽  
2019 ◽  
Vol 17 (1) ◽  
Author(s):  
Christopher J. Kelly ◽  
Alan Karthikesalingam ◽  
Mustafa Suleyman ◽  
Greg Corrado ◽  
Dominic King

Background: Artificial intelligence (AI) research in healthcare is accelerating rapidly, with potential applications being demonstrated across various domains of medicine. However, there are currently limited examples of such techniques being successfully deployed into clinical practice. This article explores the main challenges and limitations of AI in healthcare, and considers the steps required to translate these potentially transformative technologies from research to clinical practice.
Main body: Key challenges for the translation of AI systems in healthcare include those intrinsic to the science of machine learning, logistical difficulties in implementation, and consideration of the barriers to adoption as well as of the necessary sociocultural or pathway changes. Robust peer-reviewed clinical evaluation as part of randomised controlled trials should be viewed as the gold standard for evidence generation, but conducting these in practice may not always be appropriate or feasible. Performance metrics should aim to capture real clinical applicability and be understandable to intended users. Regulation that balances the pace of innovation with the potential for harm, alongside thoughtful post-market surveillance, is required to ensure that patients are neither exposed to dangerous interventions nor deprived of access to beneficial innovations. Mechanisms to enable direct comparisons of AI systems must be developed, including the use of independent, local and representative test sets. Developers of AI algorithms must be vigilant to potential dangers, including dataset shift, accidental fitting of confounders, unintended discriminatory bias, the challenges of generalisation to new populations, and the unintended negative consequences of new algorithms on health outcomes.
Conclusion: The safe and timely translation of AI research into clinically validated and appropriately regulated systems that can benefit everyone is challenging. Robust clinical evaluation, using metrics that are intuitive to clinicians and ideally go beyond measures of technical accuracy to include quality of care and patient outcomes, is essential. Further work is required (1) to identify themes of algorithmic bias and unfairness while developing mitigations to address these, (2) to reduce brittleness and improve generalisability, and (3) to develop methods for improved interpretability of machine learning predictions. If these goals can be achieved, the benefits for patients are likely to be transformational.
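One of the dangers named in this abstract, dataset shift, can be screened for with a simple domain-classifier check: if a classifier can reliably distinguish training-era inputs from deployment-era inputs, the input distribution has moved. The sketch below uses synthetic data; the cohort sizes and shift magnitude are invented for illustration only.

```python
# Toy sketch of a domain-classifier check for dataset shift.
# Both cohorts are synthetic; the 0.8 mean shift is an invented example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
train_cohort = rng.normal(loc=0.0, size=(300, 4))   # training-era inputs
deploy_cohort = rng.normal(loc=0.8, size=(300, 4))  # shifted population

X = np.vstack([train_cohort, deploy_cohort])
domain = np.array([0] * 300 + [1] * 300)  # 0 = training, 1 = deployment

# Cross-validated accuracy well above 0.5 flags a distribution shift
# worth investigating before trusting the deployed model.
acc = cross_val_score(LogisticRegression(), X, domain, cv=5).mean()
print(f"domain-classifier accuracy: {acc:.2f}")
```

An accuracy near 0.5 would suggest the two cohorts are indistinguishable; a clearly higher value is a cheap early-warning signal, not a substitute for the clinical evaluation the abstract calls for.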


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e12073
Author(s):  
Indira Mikkili ◽  
Abraham Peele Karlapudi ◽  
T. C. Venkateswarulu ◽  
Vidya Prabhakar Kodali ◽  
Deepika Sri Singh Macamdas ◽  
...  

The coronavirus disease (COVID-19) pandemic has caused havoc worldwide. The tests currently used to diagnose COVID-19 are based on real-time reverse transcription polymerase chain reaction (RT-PCR), computed tomography medical imaging techniques, and immunoassays. It takes two days to obtain results from the RT-PCR test, and a shortage of test kits has created a need for alternative, rapid methods to accurately diagnose COVID-19. Applying artificial intelligence technologies such as the Internet of Things, machine learning tools, and big data analysis to COVID-19 diagnosis could yield rapid and accurate results. Neural networks and machine learning tools can also be used to develop potential drug molecules. Pharmaceutical companies face challenges linked to the costs of drug molecules, research and development efforts, reduced efficiency of drugs, safety concerns, and the conduct of clinical trials. In this review, relevant features of artificial intelligence and their potential applications in COVID-19 diagnosis and drug development are highlighted.


Author(s):  
Antonella Petrillo ◽  
Marta Travaglioni ◽  
Fabio De Felice ◽  
Raffaele Cioffi ◽  
Giuseppina Piscitelli

The history of Artificial Intelligence (AI) development dates back to the 1940s. Researchers held strong expectations until the 1970s, when they began to encounter serious difficulties and investments were greatly reduced. With the introduction of Industry 4.0, one of the techniques adopted for AI implementation is Machine Learning (ML), which focuses on a machine's ability to receive a series of data and learn on its own. Given the considerable importance of the subject, researchers have completed many studies on ML to ensure that machines are able to replace or relieve human tasks. This research aims to analyze the literature systematically across several aspects, including publication year, authors, scientific sector, country, institution, and keywords. Analyzing the existing literature on AI is a necessary step toward recommending policy on the matter. The analysis was done using the Web of Science and SCOPUS databases; furthermore, UCINET and NVivo 12 software were used to complete it. A literature review of empirical ML and AI studies published from 1999 to the present was carried out to highlight the evolution of the topic before and after the introduction of Industry 4.0. Eighty-two articles were reviewed and classified. A first interesting result is the greater number of works published by the USA and the increasing interest after the birth of Industry 4.0.

