The augmented radiologist: artificial intelligence in the practice of radiology

Author(s):  
Erich Sorantin
Michael G. Grasser
Ariane Hemmelmayr
Sebastian Tschauner
Franko Hrzic
...  

Abstract: In medicine, and particularly in radiology, there are great expectations of artificial intelligence (AI), which can “see” more than human radiologists with regard to, for example, tumor size, shape, morphology, texture and kinetics, thus enabling better care through earlier detection or more precise reports. Another point is that AI can handle large data sets in high-dimensional spaces. But it should not be forgotten that AI is only as good as the training samples available, which should ideally be numerous enough to cover all variants. The main strength of human intelligence, on the other hand, is content knowledge and the ability to find near-optimal solutions. The purpose of this paper is to review the current complexity of radiology workplaces and to describe their advantages and shortcomings. Further, we give an overview of the different types and features of AI as used so far. We also touch on the differences between AI and human intelligence in problem-solving. We present a new type of AI, labeled “explainable AI,” which should enable a balance and cooperation between AI and human intelligence, thus bringing both worlds into compliance with legal requirements. To support (pediatric) radiologists, we propose the creation of an AI assistant that augments radiologists and keeps their minds free for generic tasks.
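The abstract does not specify how “explainable AI” would surface its reasoning; as one generic illustration of the idea (hypothetical throughout, not the authors' system), the sketch below uses occlusion sensitivity to show which image regions a classifier's prediction depends on.

```python
# Illustrative only: occlusion sensitivity as one generic form of explainability.
# `model` is a placeholder for any image classifier returning the probability
# of its predicted class; it is not the system described in the paper.
import numpy as np

def occlusion_map(model, image, patch=16, baseline=0.0):
    """Blank out one patch at a time and record how much the predicted
    probability drops; large drops mark regions the model relied on."""
    h, w = image.shape[:2]
    reference = model(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = reference - model(occluded)
    return heat  # higher values = more influential regions
```

A radiologist could then check whether the highlighted regions coincide with the suspected lesion before accepting the report, in line with the human–AI cooperation the authors call for.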

2020
Vol. 24 (01)
pp. 003-011
Author(s):
Narges Razavian
Florian Knoll
Krzysztof J. Geras

Abstract: Artificial intelligence (AI) has made stunning progress in the last decade, made possible largely by advances in training deep neural networks on large data sets. Many of these solutions, initially developed for natural images, speech, or text, are now becoming successful in medical imaging. In this article we briefly summarize, in an accessible way, the current state of the field of AI. Furthermore, we highlight the most promising approaches and describe the current challenges that will need to be solved to enable broad deployment of AI in clinical practice.
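As a concrete, if simplified, illustration of the approach summarized here (deep networks adapted from natural-image pretraining to a medical imaging task), below is a minimal transfer-learning sketch. It assumes a recent PyTorch/torchvision install; the two-class task, data, and hyperparameters are placeholders, not the article's method.

```python
# Minimal transfer-learning sketch: adapt an ImageNet-pretrained CNN to a
# hypothetical two-class medical-imaging task. Not taken from the article.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # e.g. normal vs. abnormal

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of labeled images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```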


2021
Vol. 2 (2)
pp. 19-33
Author(s):
Adam Urban
David Hick
Joerg Rainer Noennig
Dietrich Kammer

Exploring the phenomenon of artificial intelligence (AI) applications in urban planning and governance, this article reviews the most current smart city developments and outlines the future potential of AI, especially in the context of participatory urban design. It concludes that the algorithmic analysis and synthesis of large data sets generated by massive user-participation projects in particular present a beneficial field of application, enabling better design decision making, project validation, and evaluation.


2020
Vol. 19 (6)
pp. 133-144
Author(s):
A.A. Ivshin
A.V. Gusev
R.E. Novitskiy
...  

Artificial intelligence (AI) has recently become an object of interest for specialists from various fields of science and technology, including healthcare professionals. Significantly increased funding for the development of AI models confirms this fact. Advances in machine learning (ML), the availability of large data sets, and the increasing processing power of computers promote the implementation of AI in many areas of human activity. As a type of AI, machine learning allows the automatic development of mathematical models from large data sets. These models can be used to address multiple problems, such as the prediction of various events in obstetrics and neonatology. Further integration of artificial intelligence into perinatology will facilitate the development of this important area in the future. This review covers the main aspects of artificial intelligence and machine learning, their possible applications in healthcare, potential limitations and problems, as well as the outlook for AI integration into perinatal medicine. Key words: artificial intelligence, cardiotocography, neonatal asphyxia, fetal congenital abnormalities, fetal hypoxia, machine learning, neural networks, prediction, prognosis, perinatal risk, prenatal diagnosis
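To make the kind of prediction model discussed here concrete, a minimal sketch follows: a logistic-regression classifier estimating an obstetric risk from cardiotocography-style features. All features, labels, and data are synthetic placeholders, not results from the review.

```python
# Hypothetical sketch of an ML prediction model for perinatology.
# The feature columns and outcome labels are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # e.g. baseline FHR, variability, decelerations (synthetic)
y = (X[:, 2] + 0.5 * rng.normal(size=500) > 0.8).astype(int)  # synthetic outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```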


2020
Vol. 58 (6)
Author(s):  
Daniel D. Rhoads

Abstract: Artificial intelligence (AI) is increasingly becoming an important component of clinical microbiology informatics. Researchers, microbiologists, laboratorians, and diagnosticians are interested in AI-based testing because these solutions have the potential to improve a test's turnaround time, quality, and cost. A study by Mathison et al. used computer vision AI (B. A. Mathison, J. L. Kohan, J. F. Walker, R. B. Smith, et al., J Clin Microbiol 58:e02053-19, 2020, https://doi.org/10.1128/JCM.02053-19), but additional opportunities for AI applications exist within the clinical microbiology laboratory. Large data sets within clinical microbiology that are amenable to the development of AI diagnostics include genomic information from isolated bacteria, metagenomic microbial findings from primary specimens, mass spectra captured from cultured bacterial isolates, and large digital images, the medium that Mathison et al. chose to use. AI in general, and computer vision in particular, are emerging tools that clinical microbiologists need to study, develop, and implement in order to improve clinical microbiology.
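As one hypothetical example of the “large data set to AI diagnostic” pattern described (here using mass spectra rather than the images of Mathison et al.), a minimal sketch with entirely synthetic data:

```python
# Hypothetical sketch: classify organisms from binned mass-spectral intensities
# with a random forest. Spectra and labels are synthetic; this illustrates the
# general workflow only, not any validated diagnostic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
spectra = rng.random((300, 200))        # 300 spectra x 200 intensity bins
labels = rng.integers(0, 3, size=300)   # three mock species

clf = RandomForestClassifier(n_estimators=200, random_state=1)
print("5-fold accuracy:", cross_val_score(clf, spectra, labels, cv=5).mean())
```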


2020
Vol. 30 (05)
pp. 2030011
Author(s):  
Euaggelos E. Zotos

We elucidate the orbital dynamics of a binary system of two magnetic dipoles by utilizing the grid classification method. Our aim is to unveil how the total energy (expressed through the Jacobi constant), as well as the ratio of the magnetic moments, affects the character of the trajectories of the test particle. By numerically integrating large sets of initial conditions of trajectories in different types of 2D maps, we reveal the basins corresponding to bounded, close-encounter, and escape motion, along with the respective time scales of these phenomena.
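A minimal sketch of the grid-classification scaffolding described (integrate a grid of initial conditions and label each resulting orbit); the force law below is a simple placeholder central attraction, not the magnetic-dipole field of the paper, and the thresholds are arbitrary.

```python
# Grid classification sketch: integrate trajectories from a grid of starting
# conditions and label each as bounded or escaping. The right-hand side is a
# placeholder central attraction, NOT the binary magnetic-dipole equations.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5 + 1e-12
    return [vx, vy, -x / r3, -y / r3]

def classify(x0, y0, t_max=100.0, r_escape=10.0):
    sol = solve_ivp(rhs, (0.0, t_max), [x0, y0, 0.0, 0.3], max_step=0.1)
    r = np.hypot(sol.y[0], sol.y[1])
    return "escape" if r.max() > r_escape else "bounded"

grid = [(x, y) for x in np.linspace(-2, 2, 10) for y in np.linspace(-2, 2, 10)]
labels = [classify(x, y) for x, y in grid]   # one label per grid node
```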


2022
Author(s):  
Kevin Muriithi Mirera

Data mining is a way to extract knowledge from generally large data sets; in other words, it is an approach to discovering hidden relationships among data by using artificial intelligence methods, which has made it an important field of research. Law is one of the most important fields for applying data mining, given the plethora of data, from law-case stenographer records to lawsuit data. Text summarization in natural language processing (NLP) is the process of condensing the information in large texts for quicker consumption, and it is an extremely useful technique. Identifying the characteristics of law cases is the first step toward developing further analysis. This paper discusses an approach based on data mining techniques to extract important entities from law cases written in plain text. The process involves different artificial intelligence techniques, including clustering and other unsupervised or supervised learning techniques.
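As an illustrative sketch of one unsupervised step mentioned above, the snippet below clusters plain-text cases by TF-IDF similarity; the four documents are tiny placeholders, and real inputs would be full case texts.

```python
# Illustrative clustering of plain-text law cases by TF-IDF similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

cases = [
    "plaintiff claims breach of contract and seeks damages",
    "defendant charged with fraud in a securities transaction",
    "contract dispute over delivery terms and payment",
    "securities fraud allegations against the broker",
]

X = TfidfVectorizer(stop_words="english").fit_transform(cases)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # cases grouped by topical similarity
```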


2021
Author(s):
Ling-Hong Hung
Evan Straw
Shishir Reddy
Zachary Colburn
Ka Yee Yeung

Biomedical image analyses can require many steps that process different types of data, and the analysis of increasingly large data sets often exceeds the capacity of local computational resources. We present an easy-to-use and modular cloud platform that allows biomedical researchers to reproducibly execute and share complex analytical workflows for processing large image datasets. The workflows and the platform are encapsulated in software containers to ensure reproducibility and to facilitate installation of even the most complicated workflows. The platform is both graphical and interactive, allowing users to use the viewer of their choice to adjust the image pre-processing and analysis steps and iteratively improve the final results. We demonstrate the utility of our platform via two use cases in focal adhesion and 3D imaging analyses. In particular, our focal adhesion workflow demonstrates the integration of Fiji with Jupyter Notebooks, and our 3D imaging use case applies Fiji/BigStitcher to big datasets on the cloud. The accessibility and modularity of the cloud platform democratize the application and development of complex image analysis workflows.
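The abstract describes container-encapsulated workflow steps; as a minimal sketch of that general idea (not the platform's actual API), one step could be executed in an isolated container with the data directory mounted, for example via Docker:

```python
# Minimal sketch of a container-encapsulated workflow step. The container
# image name and command below are hypothetical placeholders.
import subprocess

def run_step(host_data_dir: str, image: str, command: list) -> None:
    """Run one image-processing step inside a container, sharing data with the host."""
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{host_data_dir}:/data",   # mount input/output directory
         image, *command],
        check=True,
    )

# Example (hypothetical names):
# run_step("/path/to/images", "example/fiji-step:latest",
#          ["python", "/opt/scripts/preprocess.py", "/data"])
```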


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10 at% Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra, offset in energy by 1 eV, were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using the methods and software described in [1].
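A minimal NumPy sketch of the two per-pixel combinations described, assuming the 1 eV offset corresponds to 20 channels; the spectra here are random placeholders and the shift handling is deliberately simplistic.

```python
# Sketch of the two spectrum combinations: direct subtraction gives the
# artifact-corrected difference spectrum; shifting out the 1 eV offset and
# adding gives a conventional ("normal") spectrum. Placeholder data only.
import numpy as np

CHANNELS_PER_EV = 20
OFFSET = 1 * CHANNELS_PER_EV           # 1 eV expressed in channels

def difference_spectrum(a, b):
    """Subtract the offset pair so detector artifacts common to both cancel."""
    return a - b

def normal_spectrum(a, b):
    """Numerically remove the energy offset from b, then sum the acquisitions."""
    return a + np.roll(b, OFFSET)      # simplistic shift; edge channels wrap

a, b = np.random.rand(1024), np.random.rand(1024)   # placeholder 1024-channel spectra
diff, norm = difference_spectrum(a, b), normal_spectrum(a, b)
```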


Author(s):  
Thomas W. Shattuck
James R. Anderson
Neil W. Tindale
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive X-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea salt ageing, and halogen chemistry.

Aerosol particle data sets pose a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables within each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
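A minimal sketch of the multivariate workflow described (standardize, reduce with PCA, then cluster); the data matrix is synthetic, and real input would be per-particle elemental intensities from EDS.

```python
# Illustrative multivariate pipeline for particle compositions: standardize to
# cope with very different variable ranges, reduce with PCA, then cluster.
# The data are synthetic placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = rng.random((1000, 12))             # 1000 particles x 12 elemental channels

X_std = StandardScaler().fit_transform(X)
scores = PCA(n_components=3).fit_transform(X_std)
clusters = KMeans(n_clusters=5, n_init=10, random_state=2).fit_predict(scores)
```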

