Computer Vision and Artificial Intelligence Are Emerging Diagnostic Tools for the Clinical Microbiologist

2020 ◽  
Vol 58 (6) ◽  
Author(s):  
Daniel D. Rhoads

ABSTRACT Artificial intelligence (AI) is increasingly becoming an important component of clinical microbiology informatics. Researchers, microbiologists, laboratorians, and diagnosticians are interested in AI-based testing because these solutions have the potential to improve a test’s turnaround time, quality, and cost. A study by Mathison et al. used computer vision AI (B. A. Mathison, J. L. Kohan, J. F. Walker, R. B. Smith, et al., J Clin Microbiol 58:e02053-19, 2020, https://doi.org/10.1128/JCM.02053-19), but additional opportunities for AI applications exist within the clinical microbiology laboratory. Large data sets within clinical microbiology that are amenable to the development of AI diagnostics include genomic information from isolated bacteria, metagenomic microbial findings from primary specimens, mass spectra captured from cultured bacterial isolates, and large digital images, which is the medium that Mathison et al. chose to use. AI in general, and computer vision in particular, are emerging tools that clinical microbiologists need to study, develop, and implement in order to improve clinical microbiology.
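As a concrete illustration of what computer-vision-based testing can look like in code, the following is a minimal, hypothetical sketch of fine-tuning a small pretrained-style convolutional network to classify digital laboratory images; the class count, batch, and labels are assumptions, and this is not the model used by Mathison et al.

```python
# Hypothetical sketch: fine-tuning a small CNN to label digital micrographs.
# Not the method of Mathison et al.; all data below are stand-ins.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 3                                    # e.g. organism categories (assumed)
model = models.resnet18(weights=None)              # in practice, start from ImageNet weights
model.fc = nn.Linear(model.fc.in_features, num_classes)

images = torch.randn(4, 3, 224, 224)               # stand-in batch of micrographs
labels = torch.tensor([0, 1, 2, 0])                # stand-in expert labels

loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                                    # one illustrative training step
```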

2020 ◽  
Vol 24 (01) ◽  
pp. 003-011 ◽  
Author(s):  
Narges Razavian ◽  
Florian Knoll ◽  
Krzysztof J. Geras

Abstract: Artificial intelligence (AI) has made stunning progress in the last decade, made possible largely by advances in training deep neural networks on large data sets. Many of these solutions, initially developed for natural images, speech, or text, are now becoming successful in medical imaging. In this article we briefly summarize, in an accessible way, the current state of the field of AI. Furthermore, we highlight the most promising approaches and describe the current challenges that will need to be solved to enable broad deployment of AI in clinical practice.


2021 ◽  
pp. 1-12
Author(s):  
Alex K. Piel ◽  
Serge A. Wich

For decades, conservation has lagged behind the rate and scale of some of the world’s primary environmental challenges, with scientists unable to collect, monitor, and incorporate the data necessary to address global threats to wildlife and their habitats. However, with innovative technology, we are rapidly improving the way that scientists can provide data for decision-makers. We can now monitor key ecosystem components in near real-time, remotely revealing changes from the scale of individual trees up to entire forest blocks. Data collectors use smartphones to identify and report illegal human activity such as poaching and logging, relaying information to critical stakeholders. Finally, computer scientists are developing algorithms to more efficiently process incoming large data sets, minimizing the turnaround time from data collection to preventive action for species conservation. In some cases, the speed of technological solutions has outpaced the ethical guidelines that limit their use, especially when the resulting data may infringe on people’s privacy. Regardless, this progress has thrust technological solutions for biological problems to the forefront of conservation. The threats to biodiversity show little sign of abating, but technology is narrowing the gap between the tempo and scale of the problem, and our understanding of how to develop solutions.


2021 ◽  
Vol 2 (2) ◽  
pp. 19-33
Author(s):  
Adam Urban ◽  
David Hick ◽  
Joerg Rainer Noennig ◽  
Dietrich Kammer

Exploring the phenomenon of artificial intelligence (AI) applications in urban planning and governance, this article reviews the most current smart city developments and outlines the future potential of AI, especially in the context of participatory urban design. It concludes that the algorithmic analysis and synthesis of large data sets generated by massive user-participation projects is an especially beneficial field of application, enabling better design decision-making, project validation, and evaluation.


2014 ◽  
Vol 86 (20) ◽  
pp. 10231-10238 ◽  
Author(s):  
W. Gary Mallard ◽  
N. Rabe Andriamaharavo ◽  
Yuri A. Mirokhin ◽  
John M. Halket ◽  
Stephen E. Stein

2020 ◽  
Vol 19 (6) ◽  
pp. 133-144
Author(s):  
A.A. Ivshin ◽ 
A.V. Gusev ◽ 
R.E. Novitskiy ◽ 
...  

Artificial intelligence (AI) has recently become an object of interest for specialists from various fields of science and technology, including healthcare professionals. Significantly increased funding for the development of AI models confirms this fact. Advances in machine learning (ML), the availability of large data sets, and the increasing processing power of computers promote the implementation of AI in many areas of human activity. Being a type of AI, machine learning allows automatic development of mathematical models from large data sets. These models can be used to address multiple problems, such as prediction of various events in obstetrics and neonatology. Further integration of artificial intelligence in perinatology will facilitate the development of this important area in the future. This review covers the main aspects of artificial intelligence and machine learning, their possible applications in healthcare, potential limitations and problems, as well as the outlook for AI integration into perinatal medicine. Key words: artificial intelligence, cardiotocography, neonatal asphyxia, fetal congenital abnormalities, fetal hypoxia, machine learning, neural networks, prediction, prognosis, perinatal risk, prenatal diagnosis
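As a deliberately simplified sketch of the kind of prediction model described above, the following trains a logistic-regression classifier on synthetic tabular features to predict a binary perinatal outcome; the data, feature count, and outcome definition are assumptions for illustration only.

```python
# Hypothetical sketch: predicting a binary perinatal outcome from tabular features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))     # stand-ins for CTG-derived and clinical features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)  # synthetic outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```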


2022 ◽  
Author(s):  
Kevin Muriithi Mirera

Data mining is a way to extract knowledge from generally large data sets; in other words, it is an approach to discovering hidden relationships in data by using artificial intelligence methods, which has made it an important field of research. Law is one of the most important fields for applying data mining, given the plethora of data available, from law case stenographer records to lawsuit data. Text summarization in natural language processing (NLP) is the process of condensing the information in large texts for quicker consumption, and it is an extremely useful NLP technique. Identifying law case characteristics is the first step toward further analysis. This paper discusses an approach based on data mining techniques for extracting important entities from law cases written in plain text. The process involves different artificial intelligence techniques, including clustering and other unsupervised or supervised learning methods, as sketched below.
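A minimal sketch of the unsupervised clustering step mentioned above, assuming plain-text case documents are already loaded; the toy documents, the TF-IDF representation, and the cluster count are illustrative choices, not details from the paper.

```python
# Hypothetical sketch: group plain-text law cases by lexical similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

case_texts = [
    "The defendant was charged with breach of contract ...",
    "The court dismissed the appeal on grounds of negligence ...",
    "Damages were awarded for breach of a commercial contract ...",
    "The appellant alleged negligence by the treating physician ...",
]

# Represent each case as a TF-IDF vector over its vocabulary.
vectors = TfidfVectorizer(stop_words="english").fit_transform(case_texts)

# Cluster the cases; the number of clusters is a modelling choice.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # e.g. [0 1 0 1]: contract-like vs. negligence-like cases
```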


Author(s):  
Erich Sorantin ◽  
Michael G. Grasser ◽  
Ariane Hemmelmayr ◽  
Sebastian Tschauner ◽  
Franko Hrzic ◽  
...  

Abstract: In medicine, particularly in radiology, there are great expectations of artificial intelligence (AI), which can “see” more than human radiologists with regard to, for example, tumor size, shape, morphology, texture, and kinetics, thus enabling better care through earlier detection or more precise reports. Another point is that AI can handle large data sets in high-dimensional spaces. But it should not be forgotten that AI is only as good as the training samples available, which should ideally be numerous enough to cover all variants. On the other hand, the main features of human intelligence are content knowledge and the ability to find near-optimal solutions. The purpose of this paper is to review the current complexity of radiology workplaces and to describe their advantages and shortcomings. Further, we give an overview of the different types and features of AI as used so far. We also touch on the differences between AI and human intelligence in problem-solving. We present a new AI type, labeled “explainable AI,” which should enable a balance and cooperation between AI and human intelligence, thus bringing both worlds into compliance with legal requirements. To support (pediatric) radiologists, we propose the creation of an AI assistant that augments radiologists and keeps their brains free for generic tasks.
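As one simple example of the kind of explanation an “explainable AI” assistant could surface, the sketch below computes an input-gradient saliency map for a stand-in CNN; the untrained model and random image are placeholders, and this is not the authors’ proposed system.

```python
# Hypothetical sketch of an input-gradient saliency map ("which pixels mattered?").
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()     # a trained model would be used in practice
image = torch.randn(1, 3, 224, 224, requires_grad=True)

score = model(image)[0].max()                    # score of the top predicted class
score.backward()                                 # gradients of the score w.r.t. input pixels
saliency = image.grad.abs().max(dim=1).values    # (1, 224, 224) per-pixel relevance map
```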


2021 ◽  
Vol 5 (3) ◽  
pp. 5-17
Author(s):  
Yuanyuan Liu ◽  
Qianqian Liu

In recent years, self-supervised learning, which does not require large numbers of manual labels, has generated supervision signals from the data itself in order to learn representations of samples. Self-supervised learning addresses the problem of learning semantic features from unlabeled data and enables pre-training of models on large data sets. Its significant advantages have been studied extensively by scholars in recent years. Self-supervised methods usually fall into three types: generative, contrastive, and generative-contrastive. Contrastive learning models are relatively simple, and their performance on current downstream tasks is comparable to that of supervised learning methods. Therefore, we propose a conceptual analysis framework covering the data augmentation pipeline, architectures, pretext tasks, comparison methods, and semi-supervised fine-tuning. Based on this conceptual framework, we qualitatively analyze existing contrastive self-supervised learning methods for computer vision, further analyze their performance at different stages, and finally summarize the state of research on self-supervised contrastive learning methods in other fields.
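As an illustration of the contrastive (“comparison”) objective this framework analyzes, the following is a minimal sketch of an NT-Xent / InfoNCE-style loss over two augmented views of the same batch; the batch size, embedding dimension, and temperature are assumptions for illustration, not values taken from the survey.

```python
# Hypothetical sketch of a contrastive (NT-Xent / InfoNCE) loss, as used by
# SimCLR-style self-supervised methods.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    # For view i, the positive example is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent_loss(torch.randn(8, 128), torch.randn(8, 128))
```

In a full pipeline, the two inputs would come from an encoder applied to two random augmentations of each image, matching the data augmentation pipeline and architecture components of the framework described above.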


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
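A short numerical sketch of the two processing routes described above for a single pixel; the random arrays stand in for the recorded spectra, and the whole-channel shift is a simplification of the numerical offset removal (20 channels corresponds to the 1 eV offset at 20 channels/eV).

```python
# Hypothetical per-pixel sketch of the two EELS processing routes.
import numpy as np

channels_per_ev = 20
offset = 1 * channels_per_ev            # 1 eV offset expressed in channels
spec_a = np.random.rand(1024)           # spectrum at the nominal energy
spec_b = np.random.rand(1024)           # spectrum offset by +1 eV

# Route 1: subtract to form an artifact-corrected difference spectrum.
difference = spec_a - spec_b

# Route 2: numerically remove the offset, then add to recover a normal spectrum.
aligned_b = np.roll(spec_b, offset)     # crude realignment; real processing would interpolate
aligned_b[:offset] = 0.0                # discard channels wrapped around by the shift
normal = spec_a + aligned_b
```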

