Spatially-Aware Clustering of Ion Images in Mass Spectrometry Imaging Data Using Deep Learning

2020 ◽  
Author(s):  
Wanqiu Zhang ◽  
Marc Claesen ◽  
Thomas Moerman ◽  
M. Reid Groseclose ◽  
Etienne Waelkens ◽  
...  

Abstract: Computational analysis is crucial to capitalize on the wealth of spatio-molecular information generated by mass spectrometry imaging (MSI) experiments. Currently, the spatial information available in MSI data is often under-utilized, due to the challenges of in-depth spatial pattern extraction. The advent of deep learning has greatly facilitated such complex spatial analysis. In this work, we use a pre-trained neural network to extract high-level features from ion images in MSI data, and test whether this improves downstream data analysis. The resulting neural network interpretation of ion images, coined neural ion images, is used to cluster ion images based on spatial expressions. We evaluate the impact of neural ion images on two ion image clustering pipelines, namely DBSCAN clustering combined with UMAP-based dimensionality reduction, and k-means clustering. In both pipelines, we compare regular and neural ion images from two different MSI datasets. All tested pipelines could extract underlying spatial patterns, but the neural network-based pipelines provided better assignment of ion images, with more fine-grained clusters and greater consistency in the spatial structures assigned to individual clusters. Additionally, we introduce the relative isotope ratio metric to quantitatively evaluate clustering quality. The resulting scores show that isotopic m/z values are more often clustered together in the neural network-based pipeline, indicating improved clustering outcomes. The usefulness of neural ion images extends beyond clustering towards a generic framework for incorporating spatial information into any MSI-focused machine learning pipeline, both supervised and unsupervised.
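The k-means half of the pipeline can be sketched generically. The toy below clusters flattened ion-image pixel vectors; in the paper's neural pipeline, feature vectors from the pre-trained network (the neural ion images) would replace the raw pixels. The data and the deterministic farthest-point initialisation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Toy k-means over rows of X with deterministic farthest-point init."""
    centers = [X[0]]
    for _ in range(k - 1):
        # distance of every point to its nearest existing centre
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        # assign each image to the nearest centroid
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids, keeping the old centre if a cluster empties
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# toy "ion images": two distinct 4x4 spatial patterns, two copies each
top = np.zeros((4, 4)); top[:2, :] = 1.0    # signal in the top half
left = np.zeros((4, 4)); left[:, :2] = 1.0  # signal in the left half
X = np.stack([top.ravel(), top.ravel(), left.ravel(), left.ravel()])
labels = kmeans(X, k=2)  # -> array([0, 0, 1, 1])
```

Images sharing a spatial pattern land in the same cluster; the paper's point is that clustering learned features instead of raw pixels makes such assignments more consistent.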

2021 ◽  
Vol 413 (10) ◽  
pp. 2803-2819


2020 ◽  
Author(s):  
Raphaël La Rocca ◽  
Christopher Kune ◽  
Mathieu Tiquet ◽  
Lachlan Stuart ◽  
Theodore Alexandrov ◽  
...  

<p>Mass spectrometry imaging (MSI) is a powerful and convenient method to reveal the spatial chemical composition of different biological samples. Molecular annotation of the detected signals is only possible when high mass accuracy is maintained across the entire image and the <i>m/z</i> range. However, the heterogeneous molecular composition of biological samples can cause fluctuations in the detected <i>m/z</i> values, called mass shifts. Mass shifts impair the interpretability of the detected signals by decreasing the number of annotations and by affecting the spatial consistency and accuracy of ion images. Internal calibration is known to offer the best solution to avoid, or at least reduce, mass shifts. However, selecting internal calibrating signals for a global MSI acquisition is not trivial and is prone to false-positive detection of calibrating signals, and therefore to poor recalibration. To fill this gap, this work describes an algorithm that recalibrates each spectrum individually by estimating its mass shift with the help of a list of internal calibrating ions generated automatically in a data-adaptive manner. The method exploits the RANSAC (<i>Random Sample Consensus</i>) algorithm to robustly select the experimental signals corresponding to internal calibrating ions, filtering out calibration points with infrequent mass errors and using the remaining points to estimate a linear model of the mass shift. We applied the method to a zebrafish whole-body section acquired at high mass resolution to demonstrate the impact of mass shift on data analysis and the capacity of our algorithm to recalibrate MSI data. We further illustrate the broad applicability of the method by recalibrating 31 public MSI datasets from METASPACE, covering various samples and types of MSI, and show that our recalibration significantly increases the number of METASPACE annotations, especially high-confidence annotations at a low false discovery rate.</p>
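The RANSAC step described above can be illustrated with a toy example: sample pairs of calibration points, fit a line through each pair, keep the largest consensus set, and refit on its inliers. The mz values, ppm shift model, and thresholds below are invented for illustration and are not the paper's actual parameters.

```python
import numpy as np

def ransac_line(x, y, n_trials=200, tol=0.5, seed=0):
    """Robust fit of y ~ a*x + b: sample point pairs, count inliers
    within tol, then refit by least squares on the best consensus set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(x), dtype=bool)
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < tol
        if inliers.sum() > best.sum():
            best = inliers
    a, b = np.polyfit(x[best], y[best], 1)  # refit on inliers only
    return a, b, best

# synthetic calibrant list: a linear ppm mass shift, plus two outliers
# standing in for false-positive calibration points
mz = np.array([150.0, 300.0, 450.0, 600.0, 750.0, 900.0])
shift_ppm = 2.0 + 0.01 * mz
shift_ppm[2] += 40.0   # false positives with large, infrequent mass errors
shift_ppm[4] -= 35.0
a, b, inliers = ransac_line(mz, shift_ppm)
```

The two corrupted points fall outside every large consensus set, so the recovered line matches the true shift model and can then be subtracted from the spectrum's m/z axis.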


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used to recognize some of the constituent parts of an automobile. A dataset of car engine images was created, and eight car parts were annotated in the images. The neural network was then trained to detect each part. The results show that YOLOv5s can detect the parts in real-time video streams with high accuracy, making it useful for training professionals who are learning to work with new equipment through augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4953
Author(s):  
Sara Al-Emadi ◽  
Abdulla Al-Ali ◽  
Abdulaziz Al-Ali

Drones are becoming increasingly popular, not only for recreational purposes but also in day-to-day applications in engineering, medicine, logistics, security and other fields. Alongside their useful applications, alarming concerns regarding physical infrastructure security, safety and privacy have arisen due to their potential use in malicious activities. To address this problem, we propose a novel solution that automates drone detection and identification using a drone's acoustic features with different deep learning algorithms. However, the lack of acoustic drone datasets hinders the ability to implement an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and artificially generated drone audio samples produced with a state-of-the-art deep learning technique, the Generative Adversarial Network. Furthermore, we examine the effectiveness of drone audio with different deep learning algorithms, namely the Convolutional Neural Network, the Recurrent Neural Network and the Convolutional Recurrent Neural Network, for drone detection and identification. Moreover, we investigate the impact of our proposed hybrid dataset on drone detection. Our findings demonstrate the advantage of deep learning techniques for drone detection and identification, and confirm our hypothesis that Generative Adversarial Networks can generate realistic drone audio clips that enhance the detection of new and unfamiliar drones.
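A common front end for such acoustic classifiers is a log-magnitude spectrogram, which turns an audio clip into the 2-D feature map a CNN or CRNN consumes. The sketch below is a generic STFT spectrogram under assumed parameters (8 kHz sample rate, 256-sample Hann frames, a pure tone standing in for rotor noise); the paper's actual feature extraction may differ.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Log-magnitude STFT spectrogram: slice the signal into
    Hann-windowed frames and take the magnitude of each frame's rFFT."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

# synthetic 1 s clip: a 400 Hz tone standing in for drone rotor noise
sr = 8000
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 400.0 * t)
S = spectrogram(clip)  # shape: (61 time frames, 129 frequency bins)
```

Each row is one time frame and each column one frequency bin (31.25 Hz wide here), so the tone shows up as a bright vertical stripe near bin 13; a stack of such images is what the CNN/CRNN classifiers would be trained on.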


2018 ◽  
Vol 29 (12) ◽  
pp. 2467-2470 ◽  
Author(s):  
Måns Ekelöf ◽  
Kenneth P. Garrard ◽  
Rika Judd ◽  
Elias P. Rosen ◽  
De-Yu Xie ◽  
...  

Metabolomics ◽  
2017 ◽  
Vol 13 (11) ◽  
Author(s):  
Nicholas J. Bond ◽  
Albert Koulman ◽  
Julian L. Griffin ◽  
Zoe Hall

Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1672
Author(s):  
Luya Lian ◽  
Tianer Zhu ◽  
Fudong Zhu ◽  
Haihua Zhu

Objectives: Deep learning methods have achieved impressive diagnostic performance in the field of radiology. The current study aimed to use deep learning methods to detect caries lesions on panoramic films, classify their radiographic extension, and compare the classification results with those of expert dentists. Methods: A total of 1160 dental panoramic films were evaluated by three expert dentists. All caries lesions in the films were marked with circles, whose combination was defined as the reference dataset. A training and validation dataset (1071 films) and a test dataset (89 films) were then established from the reference dataset. A convolutional neural network, nnU-Net, was applied to detect caries lesions, and DenseNet121 was applied to classify the lesions by depth (lesions in the outer, middle, or inner third of dentin: D1/D2/D3). The performance of the trained nnU-Net and DenseNet121 models on the test dataset was compared with the results of six expert dentists in terms of the intersection over union (IoU), Dice coefficient, accuracy, precision, recall, negative predictive value (NPV), and F1-score metrics. Results: nnU-Net yielded caries lesion segmentation IoU and Dice coefficient values of 0.785 and 0.663, respectively, and its accuracy and recall were 0.986 and 0.821, respectively. The results of the expert dentists and the neural network showed no difference in accuracy, precision, recall, NPV, or F1-score. For caries depth classification, DenseNet121 showed an overall accuracy of 0.957 for D1 lesions, 0.832 for D2 lesions, and 0.863 for D3 lesions. The recall for D1/D2/D3 lesions was 0.765, 0.652, and 0.918, respectively. All metric values, including accuracy, precision, recall, NPV, and F1-score, were proven to be no different from those of the experienced dentists.
Conclusion: In detecting and classifying caries lesions on dental panoramic radiographs, the performance of deep learning methods was similar to that of expert dentists. The impact of applying these well-trained neural networks to disease diagnosis and treatment decision making should be explored.
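The IoU and Dice scores used to grade the segmentation above are standard overlap measures between a predicted mask and a reference mask; a minimal sketch with invented toy masks (not the study's data):

```python
import numpy as np

def iou_dice(pred, target):
    """Overlap metrics for binary segmentation masks:
    IoU = |A ∩ B| / |A ∪ B|, Dice = 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0    # empty vs empty counts as perfect
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

# toy 4x4 masks: a 3-pixel prediction overlapping a 3-pixel reference in 2 pixels
pred = np.zeros((4, 4), dtype=int); pred[0, 0:3] = 1
ref = np.zeros((4, 4), dtype=int);  ref[0, 1:4] = 1
iou, dice = iou_dice(pred, ref)  # IoU = 2/4 = 0.5, Dice = 4/6 ≈ 0.667
```

Dice weights the intersection more heavily than IoU (Dice = 2·IoU/(1+IoU) for the same pair of masks), which is why the two scores are reported side by side.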

