Artificial Intelligence-Based Polyp Detection in Colonoscopy: Where Have We Been, Where Do We Stand, and Where Are We Headed?

2020 ◽  
Vol 36 (6) ◽  
pp. 428-438
Author(s):  
Thomas Wittenberg ◽  
Martin Raithel

<b><i>Background:</i></b> In the past, image-based computer-assisted diagnosis and detection systems have been driven mainly by the field of radiology, and more specifically mammography. Nevertheless, with the availability of large image data collections (known as the “Big Data” phenomenon), in combination with developments from the domain of artificial intelligence (AI) and particularly so-called deep convolutional neural networks, computer-assisted detection of adenomas and polyps in real time during screening colonoscopy has become feasible. <b><i>Summary:</i></b> With respect to these developments, the scope of this contribution is to provide a brief overview of the evolution of AI-based detection of adenomas and polyps during colonoscopy over the past 35 years: starting with the age of “handcrafted geometrical features” together with simple classification schemes, moving through the development and use of “texture-based features” and machine learning approaches, and ending with current developments in the field of deep learning using convolutional neural networks. In parallel, the need for large-scale clinical data to develop such methods is discussed, up to commercially available AI products for automated detection of polyps (adenomas and benign neoplastic lesions). Finally, a short outlook is given on further possibilities of AI methods within colonoscopy. <b><i>Key Messages:</i></b> Research on image-based lesion detection in colonoscopy data has a 35-year history. Milestones such as the Paris nomenclature, texture features, big data, and deep learning were essential for the development and availability of commercial AI-based systems for polyp detection.

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Lara Lloret Iglesias ◽  
Pablo Sanz Bellón ◽  
Amaia Pérez del Barrio ◽  
Pablo Menéndez Fernández-Miranda ◽  
David Rodríguez González ◽  
...  

Abstract: Deep learning is nowadays at the forefront of artificial intelligence. More precisely, the use of convolutional neural networks has drastically improved the learning capabilities of computer vision applications, which can directly consider raw data without any prior feature extraction. Advanced methods in the machine learning field, such as adaptive momentum algorithms or dropout regularization, have dramatically improved the predictive ability of convolutional neural networks, outperforming that of conventional fully connected neural networks. This work summarizes, in an intentionally didactic way, the main aspects of these cutting-edge techniques from a medical imaging perspective.
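The adaptive-momentum and dropout techniques mentioned above can be sketched in plain NumPy; this is a didactic sketch of the standard Adam update and inverted dropout, not any particular library's implementation, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One parameter update of the Adam adaptive-momentum algorithm."""
    m = b1 * m + (1 - b1) * grad         # running mean of gradients (momentum)
    v = b2 * v + (1 - b2) * grad ** 2    # running mean of squared gradients
    m_hat = m / (1 - b1 ** t)            # bias corrections for the warm-up phase
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def dropout(x, p=0.5, train=True):
    """Inverted dropout: randomly zero units and rescale the survivors so the
    expected activation is unchanged; disabled at inference time."""
    if not train:
        return x
    mask = (rng.random(x.shape) >= p) / (1 - p)
    return x * mask

w = np.zeros(3)
m = v = np.zeros(3)
grad = np.array([0.1, -0.2, 0.3])
w, m, v = adam_step(w, grad, m, v, t=1)  # each weight moves against its gradient
```

The per-parameter scaling by the second-moment estimate is what distinguishes adaptive-momentum methods from plain stochastic gradient descent.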


2021 ◽  
Author(s):  
Ramy Abdallah ◽  
Clare E. Bond ◽  
Robert W.H. Butler

Machine learning is being presented as a new solution for a wide range of geoscience problems. Primarily, machine learning has been used for 3D seismic data processing, seismic facies analysis, and well log data correlation. The rapid development in technology, with open-source artificial intelligence libraries and the accessibility of affordable computer graphics processing units (GPUs), makes the application of machine learning in geosciences increasingly tractable. However, the application of artificial intelligence in structural interpretation workflows of subsurface datasets remains unclear. This study aims to use machine learning techniques to classify images of folds and fold-thrust structures. Here we show that convolutional neural networks (CNNs), as supervised deep learning techniques, provide excellent algorithms to discriminate between geological image datasets. Four different image datasets have been used to train and test the machine learning models: a seismic character dataset with five classes (faults, folds, salt, flat layers, and basement), fold types with three classes (buckle, chevron, and conjugate), fault types with three classes (normal, reverse, and thrust), and fold-thrust geometries with three classes (fault bend fold, fault propagation fold, and detachment fold). These image datasets are used to investigate three machine learning models: one feedforward linear neural network model and two convolutional neural network models (a sequential model of 2D convolutional layers, and a residual-block model (ResNet with 9, 34, and 50 layers)). Validation and testing datasets form a critical part of assessing the models' performance accuracy. The ResNet model records the highest performance accuracy score of the machine learning models tested.
Our CNN image classification model analysis provides a framework for applying machine learning to increase structural interpretation efficiency, and shows that CNN classification models can be applied effectively to geoscience problems. The study provides a starting point for applying unsupervised machine learning approaches to subsurface structural interpretation workflows.
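The residual ("ResNet") building block behind the best-performing model above can be sketched in NumPy; this is an illustrative sketch, not the authors' code, and dense weight matrices stand in for the convolutions of a real ResNet:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Identity-shortcut residual block: ReLU(x + W2 @ ReLU(W1 @ x)).
    Dense weights stand in for the 3x3 convolutions of a real ResNet."""
    return relu(x + w2 @ relu(w1 @ x))

rng = np.random.default_rng(1)
x = relu(rng.standard_normal(8))          # a non-negative activation vector
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)

# With zero weights the block collapses to the identity mapping, which is why
# stacking many residual blocks (ResNet-9/34/50) stays easy to optimise.
identity_out = residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))
```

The shortcut connection means each block only has to learn a residual correction to its input, rather than a full transformation.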


2019 ◽  
Vol 491 (2) ◽  
pp. 2280-2300 ◽  
Author(s):  
Kaushal Sharma ◽  
Ajit Kembhavi ◽  
Aniruddha Kembhavi ◽  
T Sivarani ◽  
Sheelu Abraham ◽  
...  

ABSTRACT Due to the ever-expanding volume of observed spectroscopic data from surveys such as SDSS and LAMOST, it has become important to apply artificial intelligence (AI) techniques for analysing stellar spectra to solve spectral classification and regression problems like the determination of the stellar atmospheric parameters Teff, $\rm {\log g}$, and [Fe/H]. We propose an automated approach for the classification of stellar spectra in the optical region using convolutional neural networks (CNNs). Traditional machine learning (ML) methods with ‘shallow’ architectures (usually up to two hidden layers) have been trained for these purposes in the past. However, deep learning methods with a larger number of hidden layers allow the use of finer details in the spectrum, which results in improved accuracy and better generalization. Studying finer spectral signatures also enables us to determine accurate differential stellar parameters and find rare objects. We examine various machine and deep learning algorithms, such as artificial neural networks, Random Forest, and CNNs, to classify stellar spectra using the Jacoby Atlas, ELODIE, and MILES spectral libraries as training samples. We test the performance of the trained networks on the Indo-U.S. Library of Coudé Feed Stellar Spectra (CFLIB). We show that using CNNs, we are able to lower the error to 1.23 spectral subclasses, compared with the two subclasses achieved in past studies with ML approaches. We further apply the trained model to classify stellar spectra retrieved from the SDSS database with SNR > 20.
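The kind of 1-D convolutional layer such spectral CNNs rely on can be sketched in NumPy; the kernel and the toy spectrum below are invented for illustration, not taken from the paper:

```python
import numpy as np

def conv1d(flux, kernel):
    """Valid-mode 1-D cross-correlation over a spectrum's flux array."""
    n, k = len(flux), len(kernel)
    return np.array([flux[i:i + k] @ kernel for i in range(n - k + 1)])

def max_pool(x, size=2):
    """Non-overlapping max pooling; halves the resolution of the feature map."""
    trimmed = x[: len(x) // size * size]
    return trimmed.reshape(-1, size).max(axis=1)

# Toy "spectrum": a flat continuum with a single absorption line at pixel 50.
flux = np.ones(100)
flux[50] = 0.2
line_kernel = np.array([-1.0, 2.0, -1.0])  # responds to narrow dips and peaks
feature_map = max_pool(np.maximum(conv1d(flux, line_kernel), 0.0))
```

The feature map peaks near the absorption line and is flat elsewhere: a learned bank of such kernels is how a CNN picks up the fine spectral signatures mentioned above.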


2020 ◽  
Vol 16 (1) ◽  
Author(s):  
Francesco Faita

In the last few years, artificial intelligence (AI) technology has grown dramatically, impacting several fields of human knowledge and medicine in particular. Among other approaches, deep learning, a subset of AI based on specific computational models such as deep convolutional neural networks and recurrent neural networks, has shown exceptional performance in image and signal processing. Accordingly, emergency medicine will benefit from the adoption of this technology. However, particular attention should be devoted to the review of these papers in order to separate overoptimistic results from clinically transferable ones. We present a group of studies recently published on PubMed, selected by the keywords ‘deep learning emergency medicine’ and ‘artificial intelligence emergency medicine’, with the aim of highlighting their methodological strengths and weaknesses, as well as their clinical usefulness.


With the evolution of artificial intelligence into deep learning, an age of perceptive machines has begun that can even mimic a human. A conversational software agent is one of the best-suited examples of such intuitive machines; commonly known as a chatbot, it is actuated with natural language processing. The paper lists some existing popular chatbots along with their details, technical specifications, and functionalities. Research shows that most customers have experienced poor service from them. Moreover, generating meaningful and instructive feedback remains a demanding and exigent task, as most chatbots are built on templates and hand-written rules. Current chatbot models fall short in generating the required responses and thus fail to deliver quality conversation. Involving deep learning in these models can overcome this shortcoming by filling the gap with deep neural networks. Some of the deep neural networks utilized for this so far are stacked auto-encoders, sparse auto-encoders, and predictive sparse and denoising auto-encoders. However, these DNNs are unable to handle big data involving large amounts of heterogeneous data, while the tensor auto-encoder, which overcomes this drawback, is time-consuming. This paper proposes a chatbot that handles big data in a manageable time.
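The denoising auto-encoder, one of the networks listed above, can be sketched as a single forward pass in NumPy; the training loop is omitted and the dimensions and weights below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def denoising_autoencoder_loss(x, w_enc, w_dec, noise_p=0.3):
    """Score one denoising-autoencoder pass: corrupt the input with masking
    noise, encode it through a bottleneck, decode, and measure how well the
    *clean* input is reconstructed."""
    corrupted = x * (rng.random(x.shape) >= noise_p)  # randomly zeroed entries
    code = sigmoid(w_enc @ corrupted)                 # compressed representation
    recon = sigmoid(w_dec @ code)                     # reconstruction
    return float(np.mean((recon - x) ** 2))

x = rng.random(16)                          # a toy input vector
w_enc = rng.standard_normal((4, 16)) * 0.1  # 16 -> 4 bottleneck
w_dec = rng.standard_normal((16, 4)) * 0.1
loss = denoising_autoencoder_loss(x, w_enc, w_dec)
```

Because the target is the clean input rather than the corrupted one, minimising this loss forces the bottleneck code to capture robust structure rather than memorise the input.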


2020 ◽  
Author(s):  
Saki Aoto ◽  
Mayumi Hangai ◽  
Hitomi Ueno-Yokohata ◽  
Aki Ueda ◽  
Maki Igarashi ◽  
...  

Abstract: Deep learning has rapidly been infiltrating many aspects of human lives. In particular, image recognition by convolutional neural networks has inspired numerous studies in this area. Hardware and software technologies, as well as large quantities of data, have contributed to the drastic development of the field. However, the application of deep learning is often hindered by the need for big data and the laborious manual annotation thereof. To enable hands-on experience of deep learning with data compiled by us, we collected 2429 constrained headshot images of 277 volunteers. The collection of face photographs is challenging in terms of protecting personal information; we established an online procedure in which both the informed consent and the image data could be obtained. We did not collect personal information, but issued agreement numbers to deal with withdrawal requests. Gender and smile labels were manually and subjectively annotated from appearance only, and final labels were determined by majority vote among our team members. Rotated, trimmed, resolution-reduced, decolorized, and matrix-formed data were cleared for public release. Moreover, simplified feature vectors for data science were released. To demonstrate the usefulness of our dataset, we performed gender recognition by building convolutional neural networks based on the Inception V3 model pre-trained on ImageNet data.
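The transfer-learning recipe used above (a pre-trained body with a newly trained classifier on top) can be sketched in NumPy; the hand-crafted "backbone" features and brightness-based toy classes below are invented stand-ins, not the released dataset or Inception V3 itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_backbone(image):
    """Stand-in for a pre-trained body such as Inception V3: it maps an image
    to a fixed feature vector, and its weights are never updated."""
    # trailing 1.0 acts as a bias feature for the linear head
    return np.array([image.mean(), image.std(), image.max(), image.min(), 1.0])

def train_head(features, labels, lr=0.5, epochs=1000):
    """Train only the classification head (logistic regression) on top of the
    frozen features - the part that fine-tuning actually updates."""
    w = np.zeros(features.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w)))
        w -= lr * features.T @ (p - labels) / len(labels)
    return w

# Toy two-class "headshots": class-1 images are brighter on average.
images = [rng.random((8, 8)) + 0.5 * y for y in [0, 1] * 20]
labels = np.array([0.0, 1.0] * 20)
feats = np.stack([frozen_backbone(im) for im in images])
w = train_head(feats, labels)
preds = (feats @ w > 0.0).astype(float)
```

Freezing the backbone is what lets a small dataset like 2429 images benefit from the millions of images the backbone was pre-trained on.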


Deep convolutional neural networks (CNNs) have attracted much attention from researchers in the field of artificial intelligence. Building on several well-known architectures, more researchers and designers have joined the field, applying deep learning and devising a large number of CNNs for processing datasets of interest. Equipped with modern audio, video, and touch-screen components, along with other sensors for online pattern recognition, iOS mobile devices provide developers and users with friendly testing and powerful computing environments. This chapter introduces the trend of developing pattern recognition CNN apps on iOS devices and the neural organization of convolutional neural networks. Deep learning in Matlab and the execution of CNN models on iOS devices are introduced, motivated by combining mathematical modelling and computation with neural architectures for developing pattern recognition iOS apps. This chapter also discusses the typical hidden layers in the CNN architecture.
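The typical hidden layers just mentioned (convolution, ReLU, pooling, flatten) can be traced through their shapes with a minimal NumPy sketch; the image size and kernel are illustrative:

```python
import numpy as np

def conv2d(x, kernel):
    """Single-channel, valid-mode 2-D convolution layer (no padding)."""
    kh, kw = kernel.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(x, s=2):
    """Non-overlapping s x s max pooling; drops any ragged border rows/cols."""
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

image = np.random.default_rng(2).random((28, 28))  # a toy grayscale input
kernel = np.array([[1.0, -1.0], [1.0, -1.0]])      # vertical-edge detector
fmap = np.maximum(conv2d(image, kernel), 0.0)      # conv + ReLU -> 27 x 27
pooled = max_pool2d(fmap)                          # pooling     -> 13 x 13
flat = pooled.ravel()                              # flatten for a dense layer
```

Each stage shrinks the spatial extent while (in a real CNN, with many kernels) growing the number of feature channels, until the flattened vector feeds a fully connected classifier.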


2021 ◽  
pp. 18-50
Author(s):  
Ahmed A. Elngar ◽  
...  

Computer vision is one of the fields of computer science, and one of the most powerful and persuasive types of artificial intelligence. It is similar to the human vision system, as it enables computers to recognize and process objects in pictures and videos in the same way humans do. Computer vision technology has rapidly evolved in many fields and contributed to solving many problems. It has contributed to self-driving cars, enabling them to understand their surroundings: cameras record video from different angles around the car, then a computer vision system extracts images from the video and processes them in real time to find road edges, detect other cars, read traffic lights, and recognize pedestrians and objects. Computer vision has also contributed to facial recognition, a technology that enables computers to match images of people's faces to their identities: algorithms detect facial features in images and then compare them with databases. Computer vision also plays an important role in healthcare, where algorithms can help automate tasks such as detecting breast cancer, finding abnormalities in X-ray and MRI scans, and spotting cancerous moles in skin images. Computer vision has further contributed to fields such as image classification, object detection, motion recognition, subject tracking, and medicine. The rapid development of artificial intelligence has made machine learning an increasingly important field of research: its algorithms examine the data and predict outcomes, which has become a key to unlocking the door to AI. Looking at the concept of deep learning, we find that it is a subset of machine learning whose algorithms, called artificial neural networks and inspired by the structure and function of the human brain, learn from large amounts of data. A deep learning algorithm performs a task repeatedly, each time tweaking it a little to improve the outcome.
The development of computer vision has thus been driven by deep learning. Turning now to convolutional neural networks: these are among the most powerful supervised deep learning models (abbreviated as CNN or ConvNet). The name "convolutional" is taken from a mathematical linear operation between matrices called convolution. The CNN structure can be used in a variety of real-world problems including computer vision, image recognition, natural language processing (NLP), anomaly detection, video analysis, drug discovery, recommender systems, health risk assessment, and time-series forecasting. Looking at convolutional neural networks, we see that CNNs are similar to normal neural networks; the only difference between a CNN and an ANN is that CNNs are used mainly in the field of pattern recognition within images. This allows us to encode the features of an image into the structure, making the network more suitable for image-focused tasks while reducing the parameters required to set up the model. One of the advantages of CNNs is their excellent performance on machine learning problems, so we will use a CNN as a classifier for image classification. The objective of this paper is therefore to discuss image classification in detail in the following sections.
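The "mathematical linear operation between matrices" that gives CNNs their name can be shown on a tiny example; the input and kernel values below are chosen only so the arithmetic is easy to check by hand (note that CNN layers actually compute cross-correlation, i.e. convolution without flipping the kernel):

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])   # 3x3 input "image"
k = np.array([[1.0,  0.0],
              [0.0, -1.0]])       # 2x2 kernel

out = np.empty((2, 2))            # valid-mode output: (3-2+1) x (3-2+1)
for i in range(2):
    for j in range(2):
        # slide the kernel over the input and sum the elementwise products
        out[i, j] = np.sum(x[i:i + 2, j:j + 2] * k)
# e.g. out[0, 0] = 1*1 + 2*0 + 4*0 + 5*(-1) = -4
```

This particular kernel computes the difference between each pixel and its lower-right diagonal neighbour, so on this input every output entry is -4; a CNN learns many such kernels instead of using hand-picked ones.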


Author(s):  
Tim Hulsen

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines, using machine learning, deep learning and neural networks. AI enables machines to learn from experience and perform human-like tasks. The field of AI research has been developing fast over the past five to ten years, due to the rise of ‘big data’ and increasing computing power. In the medical area, AI can be used to improve diagnosis, prognosis, treatment, surgery, drug discovery, or for other applications. Therefore, both academia and industry are investing a lot in AI. This review investigates the biomedical literature (in the PubMed and Embase databases) by looking at bibliographical data, observing trends over time and occurrences of keywords. Some observations are made: AI has been growing exponentially over the past few years; it is used mostly for diagnosis; COVID-19 is already in the top 5 of diseases studied using AI; the United States, China, the United Kingdom, South Korea and Canada are publishing the most articles in AI research; MIT is the world’s leading university in AI research; and convolutional neural networks are by far the most popular deep learning algorithms at this moment. These trends could be studied in more detail by examining more literature databases or by including patent databases. More advanced analyses could be used to predict in which direction AI will develop over the coming years. The expectation is that AI will keep on growing, in spite of stricter privacy laws, a greater need for standardization, bias in the data, and the need for building trust.

