Frontiers in Artificial Intelligence
Latest Publications


TOTAL DOCUMENTS: 337 (five years: 336)

H-INDEX: 5 (five years: 4)

Published by Frontiers Media SA
ISSN: 2624-8212

2022 · Vol. 4
Author(s): Naoya Tanabe, Shizuo Kaji, Hiroshi Shima, Yusuke Shiraishi, Tomoki Maetani, ...

Chest computed tomography (CT) is used to screen for lung cancer and to evaluate pulmonary and extra-pulmonary abnormalities such as emphysema and coronary artery calcification, particularly in smokers. In real-world practice, lung abnormalities are visually assessed on high-contrast thin-slice images, which are generated from raw scan data using sharp reconstruction kernels at the cost of increased image noise. In contrast, accurate CT quantification requires low-contrast, low-noise thin-slice images, which are generated using soft reconstruction kernels. However, many medical facilities archive only sharp-kernel thin-slice images because of limited data storage space. This study aimed to establish deep neural network (DNN) models that convert sharp-kernel images into soft-kernel-like images, with the final goal of reusing historical chest CT images for robust quantitative measurements, particularly in completed longitudinal studies. DNN models were trained on pairs of sharp-kernel (input) and soft-kernel (ground-truth) images from 30 patients with chronic obstructive pulmonary disease (COPD). The accuracy of kernel conversion with the established DNN models was then evaluated using CT scans from an independent group of 30 smokers with and without COPD. Differences in CT values between the images converted from sharp-kernel images with the established DNN models and the ground-truth soft-kernel images were comparable with the inter-scan variability derived from six repeated phantom scans, showing that the conversion error was at the level of the measurement error of the CT device. Moreover, the Dice coefficients quantifying the similarity between low-attenuation voxels on given images and on the ground-truth soft-kernel images were significantly higher for the DNN-converted images than for the Gaussian-filtered, median-filtered, and sharp-kernel images (p < 0.001). Quantitative measurements of emphysema, intramuscular adipose tissue, and coronary artery calcification agreed well between the converted and the ground-truth soft-kernel images. These findings demonstrate the validity of the new DNN model for kernel conversion and the clinical applicability of soft-kernel-like images converted from sharp-kernel images archived in previous clinical studies. The presented method of validating the established DNN model with repeated phantom scans could be applied to various deep-learning-based image conversions for robust quantitative evaluation.
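
As an illustration of the similarity metric reported above, here is a minimal sketch of a Dice computation between low-attenuation masks of a converted volume and a ground-truth soft-kernel volume. The -950 HU threshold (a common emphysema cutoff) and the toy arrays are assumptions for illustration; the paper's actual pipeline is not reproduced here.

```python
import numpy as np

def low_attenuation_dice(converted_hu, ground_truth_hu, threshold_hu=-950.0):
    """Dice coefficient between low-attenuation masks of two CT volumes.

    converted_hu, ground_truth_hu: arrays of CT values in Hounsfield units,
    same shape. Voxels below threshold_hu count as low attenuation
    (-950 HU is a common emphysema cutoff; assumed here).
    """
    mask_a = converted_hu < threshold_hu
    mask_b = ground_truth_hu < threshold_hu
    intersection = np.logical_and(mask_a, mask_b).sum()
    denom = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy usage with random volumes standing in for real CT data.
rng = np.random.default_rng(0)
converted = rng.normal(-800, 150, size=(64, 64, 64))
ground_truth = converted + rng.normal(0, 10, size=converted.shape)
print(f"Dice: {low_attenuation_dice(converted, ground_truth):.3f}")
```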


2022 · Vol. 4
Author(s): Ziyan Yang, Leticia Pinto-Alva, Franck Dernoncourt, Vicente Ordonez

People can describe images in thousands of languages, but all languages share a single visual world. The aim of this work is to use the intermediate visual representations learned by a deep convolutional neural network to transfer information across languages for which paired data is not available in any form. We propose backpropagation-based decoding coupled with transformer-based multilingual-multimodal language models to obtain translations between any pair of languages seen during training. We demonstrate this approach on the translation of German-Japanese and Japanese-German sentence pairs, given training data consisting of images freely associated with text in English, German, and Japanese, where no single image carries annotations in both Japanese and German. Moreover, we show that our approach is also useful for the multilingual image captioning task when sentences in a second language are available at test time. On the Multi30k dataset, our method compares favorably with recently proposed methods that likewise aim to leverage images as an intermediate source of translations.
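
A heavily simplified sketch of the backpropagation-based decoding idea: keep a trained scoring model frozen and run gradient ascent on a continuous relaxation of the output tokens so that the decoded sentence aligns with the shared visual representation. The toy scorer, vocabulary size, and alignment objective below are assumptions standing in for the transformer-based multilingual-multimodal models described above.

```python
import torch

torch.manual_seed(0)

vocab_size, emb_dim, seq_len = 100, 32, 6  # toy sizes (assumed)
embedding = torch.nn.Embedding(vocab_size, emb_dim)
image_feats = torch.randn(emb_dim)  # stand-in for CNN visual features

def alignment_score(token_logits):
    """Score how well a soft sentence matches the visual representation.

    token_logits: (seq_len, vocab_size) continuous relaxation of tokens.
    A real system would run a multilingual-multimodal transformer here.
    """
    soft_tokens = torch.softmax(token_logits, dim=-1)         # soft one-hots
    soft_embs = soft_tokens @ embedding.weight.detach()       # expected embeddings
    sentence_vec = soft_embs.mean(dim=0)
    return torch.cosine_similarity(sentence_vec, image_feats, dim=0)

# Backpropagation-based decoding: optimize the relaxed tokens directly,
# while the model itself stays frozen.
logits = torch.zeros(seq_len, vocab_size, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = -alignment_score(logits)
    loss.backward()
    opt.step()

decoded = logits.argmax(dim=-1)  # discretize to token ids at the end
print(decoded.tolist())
```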


2022 · Vol. 4
Author(s): David Orrell, Monireh Houshmand

This paper describes an approach to economics that is inspired by quantum computing, and is motivated by the need to develop a consistent quantum mathematical framework for economics. The traditional neoclassical approach assumes that rational utility-optimisers drive market prices to a stable equilibrium, subject to external perturbations or market failures. While this approach has been highly influential, it has come under increasing criticism following the financial crisis of 2007/8. The quantum approach, in contrast, is inherently probabilistic and dynamic. Decision-makers are described, not by a utility function, but by a propensity function which specifies the probability of transacting. We show how a number of cognitive phenomena such as preference reversal and the disjunction effect can be modelled by using a simple quantum circuit to generate an appropriate propensity function. Conversely, a general propensity function can be quantized, via an entropic force, to incorporate effects such as interference and entanglement that characterise human decision-making. Applications to some common problems and topics in economics and finance, including the use of quantum artificial intelligence, are discussed.
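
A minimal numerical sketch of how a simple quantum circuit can generate a propensity function: a single-qubit rotation yields a transaction probability, and composing rotations produces interference cross-terms that a classical mixture of the individual propensities would lack. The angles and their interpretation are illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np

def rotation(theta):
    """One-qubit rotation gate R_y(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def propensity(circuit):
    """Probability of measuring |1> ('transact') after applying circuit to |0>."""
    state = circuit @ np.array([1.0, 0.0])
    return float(np.abs(state[1]) ** 2)

theta_a, theta_b = 0.8, 1.1  # illustrative decision 'angles' (assumed)

# Propensity from each rotation alone, and from the composed circuit.
p_a = propensity(rotation(theta_a))
p_b = propensity(rotation(theta_b))
p_ab = propensity(rotation(theta_b) @ rotation(theta_a))

# Interference: the composed propensity is sin^2((theta_a + theta_b) / 2),
# which contains cross-terms and is not a classical mixture of p_a and p_b.
print(f"p_a={p_a:.3f}  p_b={p_b:.3f}  composed={p_ab:.3f}")
```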


2022 · Vol. 4
Author(s): Matthew D. Stocker, Yakov A. Pachepsky, Robert L. Hill

The microbial quality of irrigation water is an important issue, as the use of contaminated waters has been linked to several foodborne outbreaks. To expedite microbial water quality determinations, many researchers estimate concentrations of the microbial contamination indicator Escherichia coli (E. coli) from the concentrations of physicochemical water quality parameters. However, these relationships are often non-linear and exhibit changes above or below certain threshold values. Machine learning (ML) algorithms have been shown to make accurate predictions in datasets with complex relationships. The purpose of this work was to evaluate several ML models for the prediction of E. coli in agricultural pond waters. Two ponds in Maryland were monitored from 2016 to 2018 during the irrigation season. E. coli concentrations and 12 other water quality parameters were measured in water samples. The resulting datasets were used to predict E. coli with stochastic gradient boosting (SGB) machines, random forest (RF), support vector machine (SVM), and k-nearest neighbor (kNN) algorithms. The RF model provided the lowest RMSE for predicted E. coli concentrations in both ponds, in individual years and over consecutive years, in almost all cases. For individual years, the RMSE of the predicted E. coli concentrations (log10 CFU 100 ml−1) ranged from 0.244 to 0.346 for Pond 1 and from 0.304 to 0.418 for Pond 2. For the 3-year datasets, these values were 0.334 and 0.381, respectively. In most cases there was no significant difference (P > 0.05) between the RMSE of RF and of the other ML models when the RMSE values were treated as statistics derived from 10-fold cross-validation performed with five repeats. Important E. coli predictors were turbidity, dissolved organic matter content, specific conductance, chlorophyll concentration, and temperature. Predictive performance did not differ significantly when five predictors were used instead of eight or 12, indicating that more tedious and costly measurements provide no substantial improvement in the predictive accuracy of the evaluated algorithms.
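
A minimal sketch of the evaluation protocol described above, fitting a random forest and scoring RMSE under 10-fold cross-validation with five repeats. The synthetic data and feature relationships below are invented stand-ins for the pond measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-in for the pond data: 5 predictors named after the
# important ones reported above; values and relationships are invented.
n = 300
X = rng.normal(size=(n, 5))  # turbidity, DOM, conductance, chlorophyll, temp
y = 0.3 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.3, size=n)  # log10 E. coli

# 10-fold cross-validation with five repeats, as in the evaluation above.
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(model, X, y, cv=cv,
                         scoring="neg_root_mean_squared_error")
rmse = -scores  # one RMSE per fold-repeat, 50 values in total
print(f"RMSE: {rmse.mean():.3f} +/- {rmse.std():.3f}")
```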


2022 · Vol. 4
Author(s): Kaiqi Zhang, Cole Hawkins, Zheng Zhang

A major challenge in many machine learning tasks is that a model's expressive power depends on its size. Low-rank tensor methods are an efficient tool for handling the curse of dimensionality in many large-scale machine learning models. The major challenges in training a tensor learning model are how to process high-volume data, how to determine the tensor rank automatically, and how to estimate the uncertainty of the results. Whereas existing tensor learning methods each focus on a specific task, this paper proposes a generic Bayesian framework that can be employed to solve a broad class of tensor learning problems, such as tensor completion, tensor regression, and tensorized neural networks. We develop a low-rank tensor prior for automatic rank determination in nonlinear problems. Our method is implemented with both stochastic gradient Hamiltonian Monte Carlo (SGHMC) and Stein Variational Gradient Descent (SVGD), and we compare the automatic rank determination and uncertainty quantification of the two solvers. We demonstrate that the proposed method determines the tensor rank automatically and quantifies the uncertainty of the obtained results. We validate our framework on tensor completion tasks and tensorized neural network training tasks.
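
For the SVGD solver mentioned above, here is a minimal self-contained sketch of the particle update on a toy 2-d Gaussian target. The bandwidth heuristic, step size, and target density are illustrative assumptions, not the paper's tensor-learning setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def svgd_step(X, grad_logp, eps=0.5):
    """One Stein Variational Gradient Descent update on particles X (n, d)."""
    n = X.shape[0]
    diffs = X[:, None, :] - X[None, :, :]       # diffs[j, i] = x_j - x_i
    sq = (diffs ** 2).sum(-1)
    h = np.median(sq) / np.log(n + 1)           # median-heuristic bandwidth
    K = np.exp(-sq / h)                         # RBF kernel matrix
    # Repulsive term: sum_j grad_{x_j} k(x_j, x_i), for each particle i.
    repulsion = -(K[:, :, None] * diffs).sum(axis=0) * (2.0 / h)
    # Driving term: kernel-weighted gradients of the log target density.
    drive = K.T @ grad_logp(X)
    return X + eps * (drive + repulsion) / n

# Toy target: standard 2-d Gaussian, so grad log p(x) = -x.
particles = rng.normal(loc=5.0, size=(50, 2))   # start far from the target
for _ in range(1000):
    particles = svgd_step(particles, lambda X: -X)

# Mean approaches [0, 0]; the repulsive term keeps the particles spread out.
print("particle mean:", particles.mean(axis=0))
print("particle std: ", particles.std(axis=0))
```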


2022 · Vol. 4
Author(s): Reuth Mirsky, Ran Galun, Kobi Gal, Gal Kaminka

Plan recognition deals with reasoning about the goals and execution process of an actor, given observations of its actions. It is one of the fundamental problems of AI, applicable to many domains, from user interfaces to cyber-security. Despite the prevalence of plan recognition approaches, they lack a standard representation and have not been compared on a common testbed. This paper takes a first step toward bridging this gap by providing a standard plan library representation that can be used by hierarchical, discrete-space plan recognition algorithms, together with evaluation criteria to consider when comparing such algorithms. The representation is comprehensive enough to describe a variety of known plan recognition problems and can easily be used by existing algorithms in this class. We use this common representation to thoroughly compare two known approaches, represented by two algorithms, SBR and the Probabilistic Hostile Agent Task Tracker (PHATT). We provide meaningful insights about the differences and abilities of these algorithms and evaluate these insights both theoretically and empirically. We show a tradeoff between expressiveness and efficiency: SBR is usually superior to PHATT in terms of computation time and space, but at the expense of functionality and representational compactness. We also show how different properties of the plan library affect the complexity of the recognition process, regardless of the concrete algorithm used. Lastly, we show how these insights can be used to form a new algorithm that outperforms existing approaches in terms of both expressiveness and efficiency.
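
As a toy illustration of the kind of hierarchical, discrete-space plan library the paper standardizes, the sketch below defines goals that decompose into ordered subtask sequences and performs naive recognition by exhaustive expansion. All names and the exact representation are assumptions, not the paper's formal definition, and real recognizers such as SBR and PHATT avoid the exponential enumeration used here.

```python
from dataclasses import dataclass, field

@dataclass
class PlanLibrary:
    """Toy hierarchical plan library: tasks decompose into ordered subtask
    sequences; tasks with no decomposition are observable primitive actions."""
    decompositions: dict[str, list[list[str]]]  # task -> alternative subtask sequences
    top_level: set[str] = field(default_factory=set)

    def is_primitive(self, task: str) -> bool:
        return task not in self.decompositions

    def expansions(self, task):
        """Enumerate the fully primitive action sequences a task can produce."""
        if self.is_primitive(task):
            yield [task]
            return
        for alternative in self.decompositions[task]:
            seqs = [[]]
            for sub in alternative:
                seqs = [s + e for s in seqs for e in self.expansions(sub)]
            yield from seqs

def recognize(library, observations):
    """Naive recognition: top-level goals with an expansion that starts with
    the observed action sequence (exhaustive, exponential in general)."""
    return {g for g in library.top_level
            if any(e[:len(observations)] == observations
                   for e in library.expansions(g))}

lib = PlanLibrary(
    decompositions={
        "make-tea":    [["boil-water", "steep", "pour"]],
        "make-coffee": [["boil-water", "grind", "brew"], ["buy-coffee"]],
    },
    top_level={"make-tea", "make-coffee"},
)
print(recognize(lib, ["boil-water", "grind"]))  # {'make-coffee'}
```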


2022 · Vol. 4
Author(s): Susanne Gerber, Lukas Pospisil, Stanislav Sys, Charlotte Hewel, Ali Torkamani, ...

Mislabeling of cases as well as controls in case-control studies is a frequent source of strong bias in prognostic and diagnostic tests and algorithms. The common data-processing methods available to researchers in the biomedical community do not allow consistent and robust treatment of labeled data when both the case and the control groups contain a non-negligible proportion of mislabeled instances. This issue is especially prominent in studies of late-onset conditions, where individuals who may later convert to cases can populate the control group, and in screening studies, which often have high false-positive/-negative rates. To address this problem, we propose a method for simultaneous robust inference of Lasso-reduced discriminative models and of latent group-specific mislabeling risks that does not require any exactly labeled data. We apply it to a standard breast cancer imaging dataset and infer the mislabeling probabilities (the rates of false-negative and false-positive core-needle biopsies) together with a small set of simple diagnostic rules, outperforming state-of-the-art BI-RADS diagnostics on these data. The inferred mislabeling rates for breast cancer biopsies agree with published, purely empirical studies. Applying the method to human genomic data from a healthy-ageing cohort reveals a previously unreported compact combination of single-nucleotide polymorphisms strongly associated with a healthy-ageing phenotype in Caucasians, and determines that 7.5% of Caucasians in the 1000 Genomes dataset (selected as a control group) carry this pattern characteristic of healthy ageing.
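
A crude EM-style sketch of the general idea of jointly fitting an L1-regularized (Lasso-type) discriminative model and latent group-specific mislabeling rates: alternately fit a weighted model, compute the posterior that each observed label is correct, and re-estimate the flip rates. The update rules and synthetic data below are simplifying assumptions, not the authors' algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic case-control data with mislabeling in both groups (rates assumed).
n, d, flip_pos, flip_neg = 1000, 20, 0.15, 0.10
X = rng.normal(size=(n, d))
true_y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
flips = np.where(true_y == 1, rng.random(n) < flip_pos, rng.random(n) < flip_neg)
y = np.where(flips, 1 - true_y, true_y)

# EM-style loop: fit an L1 model on weighted data, then update the latent
# group-specific mislabeling risks from label/model disagreement.
w = np.ones(n)
rho_pos = rho_neg = 0.05  # initial guesses for the mislabeling rates
for _ in range(10):
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    model.fit(X, y, sample_weight=w)
    p1 = model.predict_proba(X)[:, 1]
    # Posterior that each observed label is correct under current flip rates.
    w_pos = (1 - rho_pos) * p1 / ((1 - rho_pos) * p1 + rho_pos * (1 - p1))
    w_neg = (1 - rho_neg) * (1 - p1) / ((1 - rho_neg) * (1 - p1) + rho_neg * p1)
    w = np.where(y == 1, w_pos, w_neg)
    rho_pos = np.mean(1 - w[y == 1])  # labeled cases the model doubts
    rho_neg = np.mean(1 - w[y == 0])  # labeled controls the model doubts

print(f"estimated mislabeling rates: cases {rho_pos:.2f}, controls {rho_neg:.2f}")
print(f"nonzero coefficients: {np.sum(model.coef_ != 0)}")
```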


2022 · Vol. 4
Author(s): Gauri Jagatap, Ameya Joshi, Animesh Basak Chowdhury, Siddharth Garg, Chinmay Hegde

In this paper, we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks. We formulate a new loss function equipped with an additional entropic regularization term. This loss function accounts for adversarial samples drawn from a specially designed distribution over the data space that assigns high probability to points with high loss that lie in the immediate neighborhood of training samples. Our proposed algorithms optimize this loss to seek out adversarially robust valleys of the loss landscape. Our approach achieves competitive (or better) robust classification accuracy compared with several state-of-the-art robust learning approaches on benchmark datasets such as MNIST and CIFAR-10.
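
A rough sketch of the general recipe described above: draw perturbations near training points with noisy (Langevin-style) gradient ascent on the loss, then train on those samples. The step sizes, noise scale, projection ball, and toy model are assumptions, not the ATENT specification.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def sample_adversarial(x, y, steps=5, lr=0.1, noise=0.01, eps=0.3):
    """Draw perturbed samples from a loss-tilted distribution near x via
    noisy gradient (Langevin-style) ascent, clipped to an eps-ball."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad + noise * torch.randn_like(delta)  # ascend + noise
            delta.clamp_(-eps, eps)  # stay in the neighborhood of x
    return (x + delta).detach()

# Toy training loop on synthetic two-class data.
x = torch.randn(256, 2)
y = (x[:, 0] > 0).long()
for _ in range(100):
    x_adv = sample_adversarial(x, y)
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    opt.step()
print(f"clean accuracy: {(model(x).argmax(1) == y).float().mean():.2f}")
```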


2022 · Vol. 4
Author(s): Neil Cohn, Joost Schilperoord

Language is typically embedded in multimodal communication, yet models of linguistic competence rarely incorporate this complexity. Meanwhile, speech, gesture, and/or pictures are each treated as indivisible components of multimodal messages. Here, we argue that multimodality should be characterized not by whole interacting behaviors, but by interactions of similar substructures that permeate across expressive behaviors. These structures comprise a unified architecture and align within Jackendoff's Parallel Architecture: modality, meaning, and grammar. Because this tripartite architecture persists across modalities, interactions can manifest within each of these substructures. Interactions between modalities alone create correspondences of the sensory signals in time (e.g., speech with gesture) or space (e.g., writing with pictures), while multimodal meaning-making balances how the modalities carry "semantic weight" for the gist of the whole expression. We focus primarily on interactions between grammars, which contrast along two variables: symmetry, related to the complexity of the grammars, and allocation, related to the relative independence of the interacting grammars. While independent allocations keep grammars separate, substitutive allocation inserts expressions from one grammar into those of another. We show that substitution operates in interactions between all three natural modalities (vocal, bodily, graphic), and also in unimodal contexts within and between languages, as in code-switching. Altogether, we argue that unimodal and multimodal expressions arise as emergent interactive states from a unified cognitive architecture, heralding a reconsideration of the "language faculty" itself.


2022 · Vol. 4
Author(s): Lasitha Vidyaratne, Adam Carpenter, Tom Powers, Chris Tennant, Khan M. Iftekharuddin, ...

This work investigates the efficacy of deep learning (DL) for classifying C100 superconducting radio-frequency (SRF) cavity faults in the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. CEBAF is a large, high-power, continuous-wave recirculating linac that uses 418 SRF cavities to accelerate electrons up to 12 GeV. Recent upgrades to CEBAF include the installation of 11 new cryomodules (88 cavities) equipped with a low-level RF system that records RF time-series data from each cavity at the onset of an RF failure. Typically, subject matter experts (SMEs) analyze these data to determine the fault type and identify the cavity of origin; this information is subsequently used to identify failure trends and to implement corrective measures on the offending cavity. Manual inspection of the large-scale time-series data generated by frequent system failures is tedious and time consuming, which motivates the use of machine learning (ML) to automate the task. This study extends work on a previously developed system based on traditional ML methods (Tennant et al., Phys. Rev. Accel. Beams, 2020, 23, 114601) and investigates the effectiveness of deep learning approaches. The transition to a DL model is driven by the goal of developing a system with inference fast enough to predict a fault event and provide actionable information before its onset (on the order of a few hundred milliseconds). Because features are learned rather than explicitly computed, DL offers a potential advantage over traditional ML. Specifically, two seminal DL architecture types are explored: deep recurrent neural networks (RNNs) and deep convolutional neural networks (CNNs). We provide a detailed analysis of the performance of individual models using an RF waveform dataset built from past operational runs of CEBAF. In particular, the performance of RNN models incorporating long short-term memory (LSTM) is analyzed alongside that of the CNN models. Furthermore, comparing these DL models with a state-of-the-art traditional ML model shows that the DL architectures obtain similar performance for cavity identification and do not perform quite as well for fault classification, but provide an advantage in inference speed.
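
A minimal sketch of what an LSTM-based model with separate cavity-identification and fault-classification heads might look like for multichannel RF waveforms. Signal counts, hidden sizes, and class counts are illustrative assumptions, not CEBAF's actual configuration.

```python
import torch
import torch.nn as nn

class CavityFaultClassifier(nn.Module):
    """Sketch of an RNN-based classifier for multichannel RF waveforms with
    two heads: cavity of origin and fault type. Channel counts, hidden sizes,
    and class counts below are illustrative assumptions."""
    def __init__(self, n_signals=17, hidden=64, n_cavities=8, n_faults=6):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_signals, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.cavity_head = nn.Linear(hidden, n_cavities)
        self.fault_head = nn.Linear(hidden, n_faults)

    def forward(self, waveforms):            # (batch, time, n_signals)
        _, (h_n, _) = self.lstm(waveforms)
        last = h_n[-1]                       # final hidden state of top layer
        return self.cavity_head(last), self.fault_head(last)

model = CavityFaultClassifier()
batch = torch.randn(4, 500, 17)              # 4 fault events, 500 time steps
cavity_logits, fault_logits = model(batch)
print(cavity_logits.shape, fault_logits.shape)  # (4, 8) and (4, 6)
```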

