Literature Analysis of Artificial Intelligence in Biomedicine

Author(s):  
Tim Hulsen

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines, using machine learning, deep learning and neural networks. AI enables machines to learn from experience and perform human-like tasks. The field of AI research has developed rapidly over the past five to ten years, owing to the rise of ‘big data’ and increasing computing power. In the medical domain, AI can be used to improve diagnosis, prognosis, treatment, surgery, drug discovery, and other applications; therefore, both academia and industry are investing heavily in it. This review investigates the biomedical literature (in the PubMed and Embase databases) by examining bibliographical data and observing trends over time and occurrences of keywords. Several observations emerge: AI research has grown exponentially over the past few years; it is used mostly for diagnosis; COVID-19 is already among the top five diseases studied using AI; the United States, China, the United Kingdom, South Korea and Canada publish the most articles in AI research; MIT is the world’s leading university in AI research; and convolutional neural networks are by far the most popular deep learning algorithms at the moment. These trends could be studied in more detail by analyzing more literature databases or by including patent databases. More advanced analyses could be used to predict the direction in which AI will develop over the coming years. The expectation is that AI will keep growing, in spite of stricter privacy laws, a greater need for standardization, bias in the data, and the need to build trust.
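The keyword-trend analysis described above can be reproduced in outline against PubMed's public E-utilities interface. The sketch below only builds the per-year query URLs (the search term and field tags are illustrative assumptions, not the review's exact query); fetching each URL returns XML containing the yearly hit count.

```python
from urllib.parse import urlencode

# NCBI E-utilities "esearch" endpoint; rettype=count asks only for the hit count.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.cgi"

def pubmed_count_url(keyword: str, year: int) -> str:
    """Build an esearch URL whose XML response contains the number of hits
    for `keyword` in titles/abstracts of articles dated `year`."""
    term = f'"{keyword}"[Title/Abstract] AND {year}[PDAT]'
    return EUTILS + "?" + urlencode({"db": "pubmed", "term": term, "rettype": "count"})

# One URL per year of the trend under study (illustrative range).
urls = {year: pubmed_count_url("artificial intelligence", year)
        for year in range(2015, 2021)}
print(urls[2020])
```

Fetching each URL (e.g. with `urllib.request`) and parsing the `<Count>` element would yield the per-year publication counts behind a growth curve like the one described in the review.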

Subject: Prospects for artificial intelligence applications. Significance: Artificial intelligence (AI) technologies, particularly those using 'deep learning', have in the past five years helped to automate many tasks previously outside the capabilities of computers. There are signs that the feverish pace of progress seen recently is slowing. Impacts: Western legislation will make companies responsible for preventing decisions based on biased AI. Advances in 'explainable AI' will be rapid. China will be a major research player in AI technologies, alongside the United States, Japan and Europe.


2018 ◽  
Vol 9 (1) ◽  
pp. 15-18
Author(s):  
Megha Gupta ◽  
Jitender Rai

This paper reports on deep learning, a technique of growing importance in the machine learning community, as traditional learning architectures have proven inadequate for the challenging machine learning tasks that are a strong focus of artificial intelligence (AI). The increasing and widespread availability of computing power, together with efficient training and optimization algorithms, has made it possible to implement the concept of deep learning. These developments in deep learning architectures and algorithms look to cognitive neuroscience and point to biologically inspired solutions for learning. The paper reviews Convolutional Neural Networks (CNNs), Spiking Neural Networks (SNNs) and Hierarchical Temporal Memory (HTM), along with other related but less mature techniques.
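As a minimal illustration of the core CNN operation discussed above (not code from the paper), the following sketch slides a single hand-set filter over a tiny image; in a trained CNN, many such filters are learned from data rather than fixed by hand.

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Valid' 2D cross-correlation: slide the kernel over the image
    and take the elementwise product-sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter applied to a step image: the response peaks
# exactly where the intensity jumps from 0 to 1.
img = np.array([[0., 0., 1., 1.],
                [0., 0., 1., 1.],
                [0., 0., 1., 1.]])
edge = np.array([[-1., 1.],
                 [-1., 1.]])
print(conv2d_valid(img, edge))
```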


2020 ◽  
Vol 36 (6) ◽  
pp. 428-438
Author(s):  
Thomas Wittenberg ◽  
Martin Raithel

<b><i>Background:</i></b> In the past, image-based computer-assisted diagnosis and detection systems were driven mainly by the field of radiology, and more specifically mammography. Nevertheless, with the availability of large image data collections (the “Big Data” phenomenon), in combination with developments from the domain of artificial intelligence (AI) and particularly so-called deep convolutional neural networks, computer-assisted detection of adenomas and polyps in real time during screening colonoscopy has become feasible. <b><i>Summary:</i></b> With respect to these developments, the scope of this contribution is to provide a brief overview of the evolution of AI-based detection of adenomas and polyps during colonoscopy over the past 35 years: starting with the age of “handcrafted geometrical features” combined with simple classification schemes, continuing through the development and use of “texture-based features” and machine learning approaches, and ending with current developments in the field of deep learning using convolutional neural networks. In parallel, the necessity of large-scale clinical data for developing such methods is discussed, up to the commercially available AI products for automated detection of polyps (adenomas and benign neoplastic lesions). Finally, a short outlook is given on further possibilities of AI methods within colonoscopy. <b><i>Key Messages:</i></b> Research on image-based lesion detection in colonoscopy data has a 35-year history. Milestones such as the Paris nomenclature, texture features, big data, and deep learning were essential for the development and availability of commercial AI-based systems for polyp detection.


2020 ◽  
Vol 2 ◽  
pp. 58-61 ◽  
Author(s):  
Syed Junaid ◽  
Asad Saeed ◽  
Zeili Yang ◽  
Thomas Micic ◽  
Rajesh Botchu

The advances in deep learning algorithms, exponential growth in computing power, and unprecedented availability of digital patient data have led to a wave of interest and investment in artificial intelligence in health care. No radiology conference is complete without a substantial session dedicated to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to help departments get involved with the technology, demystify key concepts, and pique interest in the field. We have broken the journey down into seven steps: problem, team, data, kit, neural network, validation, and governance.


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed in meaning since it was first coined by John McCarthy in 1956. AI, whose origins are sometimes traced to Kurt Gödel's unprovable computational statements of 1931, is now commonly discussed in terms of deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become more effective with big data capturing the present and the past, while still inevitably carrying human biases into models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories, and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum: augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


Author(s):  
Ruofan Liao ◽  
Paravee Maneejuk ◽  
Songsak Sriboonchitta

In the past, in many areas, the best prediction models were linear and nonlinear parametric models. In the last decade, in many application areas, deep learning has been shown to produce more accurate predictions than the parametric models. Deep learning-based predictions are reasonably accurate, but not perfect. How can we achieve better accuracy? To achieve this objective, we propose to combine neural networks with a parametric model: namely, to train neural networks not on the original data, but on the differences between the actual data and the predictions of the parametric model. Using the example of predicting currency exchange rates, we show that this idea indeed leads to more accurate predictions.
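The residual-combination idea can be sketched as follows. This is an illustrative toy, not the authors' setup: the data are synthetic, and a tiny one-hidden-layer NumPy network stands in for their neural network. A parametric (linear) model is fit first; a second model is then trained on its residuals, and the two predictions are added.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = 2.0 * x + 0.3 * np.sin(8 * x)          # linear trend plus a nonlinearity

# 1) Parametric stage: least-squares line fit.
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_lin = A @ coef
resid = y - y_lin                           # what the parametric model misses

# 2) Residual stage: one-hidden-layer tanh net, trained by gradient descent
#    on the residuals (not on the original data).
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
X, R, lr = x[:, None], resid[:, None], 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                # forward pass
    err = (h @ W2 + b2) - R                 # prediction error on residuals
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = err @ W2.T * (1 - h ** 2)          # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Combined prediction = parametric prediction + learned residual correction.
y_comb = y_lin + (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
mse_lin = np.mean((y - y_lin) ** 2)
mse_comb = np.mean((y - y_comb) ** 2)
print(f"linear MSE {mse_lin:.4f}  combined MSE {mse_comb:.4f}")
```

On this toy data the combined model recovers the nonlinear component the line misses, so its error falls below the purely parametric baseline.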


Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

For the past few years, deep learning (DL) robustness (i.e., the ability to maintain the same decision when inputs are subject to perturbations) has become a question of paramount importance, particularly in settings where misclassification can have dramatic consequences. To address this question, authors have proposed different approaches, such as adding regularizers or training on noisy examples. In this paper we introduce a regularizer based on the Laplacian of similarity graphs obtained from the representation of the training data at each layer of the DL architecture. This regularizer penalizes large changes (across consecutive layers of the architecture) in the distance between examples of different classes, and as such enforces smooth variations of the class boundaries. We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving robustness on classical supervised learning vision datasets under various types of perturbations. We also show that it can be combined with existing methods to increase overall robustness.
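A rough sketch of the quantity involved (the details here are illustrative assumptions, not the paper's exact formulation): build a similarity graph over a batch of layer representations, form its Laplacian L = D − W, and measure the smoothness of the class-label signal as yᵀLy; a regularizer of this family penalizes large changes in that quantity between consecutive layers.

```python
import numpy as np

def laplacian(feats: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Graph Laplacian L = D - W of a Gaussian similarity graph
    built over one layer's representations (rows of `feats`)."""
    d2 = np.sum((feats[:, None, :] - feats[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(1)) - W

def smoothness(feats: np.ndarray, y: np.ndarray) -> float:
    """y^T L y: small when similar examples share a label, i.e. when
    class boundaries vary smoothly on the graph."""
    return float(y @ laplacian(feats) @ y)

# Two tiny "layers" of a network, labels y in {+1, -1}. The second layer
# separates the classes better, so the label signal is smoother on its graph.
y = np.array([1., 1., -1., -1.])
layer1 = np.array([[0.0], [0.6], [0.4], [1.0]])   # classes interleaved
layer2 = np.array([[0.0], [0.1], [2.0], [2.1]])   # classes well separated
penalty = abs(smoothness(layer2, y) - smoothness(layer1, y))
print(smoothness(layer1, y), smoothness(layer2, y), penalty)
```

In the paper's setting these smoothness values would be computed per layer during training, with the loss penalizing abrupt changes between consecutive layers.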


2021 ◽  
Vol 20 ◽  
pp. 153303382110163
Author(s):  
Danju Huang ◽  
Han Bai ◽  
Li Wang ◽  
Yu Hou ◽  
Lan Li ◽  
...  

With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNNs), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in clinical radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce the workload, thus enabling clinicians to spend more time on advanced decision-making tasks. As DL moves closer to clinical practice, radiation oncologists will need to become more familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its applications in radiation oncology according to the different task categories of DL algorithms. This work clarifies the possibilities for further development of DL in radiation oncology.

