Single-cell classification of foodborne pathogens using hyperspectral microscope imaging coupled with deep learning frameworks

2020 ◽  
Vol 309 ◽  
pp. 127789 ◽  
Author(s):  
Rui Kang ◽  
Bosoon Park ◽  
Matthew Eady ◽  
Qin Ouyang ◽  
Kunjie Chen


2020 ◽  
Author(s):  
Quentin Juppet ◽  
Fabio De Martino ◽  
Martin Weigert ◽  
Olivier Burri ◽  
Michaël Unser ◽  
...  

Abstract
Patient-Derived Xenografts (PDXs) are the preclinical models that best recapitulate the inter- and intra-patient complexity of human breast malignancies, and they are also emerging as useful tools for studying the normal breast epithelium. However, analyses of data generated with such models are often confounded by the presence of host cells, which can lead to data misinterpretation. For instance, it is important to discriminate between xenografted and host cells in histological sections prior to performing immunostainings. We developed Single Cell Classifier (SCC), a data-driven, deep learning-based computational tool that provides an innovative approach to automated cell-species discrimination through a multi-step process entailing nuclei segmentation and single-cell classification. We show that the contextual features of human and murine cells, more than cell-intrinsic ones, can be exploited to discriminate between cell species in both normal and malignant tissues, yielding up to 96% classification accuracy. SCC will facilitate the interpretation of H&E-stained histological sections of xenografted human-in-mouse tissues, and it is open to new in-house built models for further applications. SCC is released as an open-source plugin for ImageJ/Fiji, available at https://github.com/Biomedical-Imaging-Group/SingleCellClassifier.

Author summary
Breast cancer is the most commonly diagnosed tumor in women worldwide, and its incidence is increasing over time. Because our understanding of this disease has been hampered by the lack of adequate human preclinical models, efforts have been made to develop better approaches to model human complexity. Recent advances in this regard were achieved with Patient-Derived Xenografts (PDXs), which entail the implantation of human-derived specimens into recipient immunosuppressed mice and are, thus far, the preclinical system that best recapitulates the heterogeneity of both normal and malignant human tissues. However, histological analyses of the resulting tissues are usually confounded by the presence of cells of different species. To circumvent this hurdle and to facilitate the discrimination of human and murine cells in xenografted samples, we developed Single Cell Classifier (SCC), a deep learning-based open-source software, available as a plugin for ImageJ/Fiji, that performs automated species classification of individual cells in H&E-stained sections. We show that SCC can reach up to 96% classification accuracy, mainly by leveraging the contextual features of cells in both normal and tumor PDXs. SCC will improve and automate histological analyses of human-in-mouse xenografts and is open to new in-house built models for further classification tasks and applications in image analysis.
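
For readers who want a concrete picture of the two-step idea (segment nuclei, then classify each cell from a patch that includes its surrounding tissue context), the sketch below is a minimal, hypothetical Python/PyTorch illustration; the segmentation heuristic, patch size, and network are assumptions for illustration only and are not the released SCC/Fiji implementation.

```python
# Illustrative two-step pipeline (hypothetical, not the released SCC code):
# 1) segment nuclei in an H&E image, 2) classify each nucleus from a patch
# that includes its surrounding tissue context (human vs. mouse).
import numpy as np
import torch
import torch.nn as nn
from skimage import filters, measure, color

def segment_nuclei(rgb_image):
    """Crude nuclei segmentation: Otsu threshold on the darker (hematoxylin-like) signal."""
    gray = color.rgb2gray(rgb_image)
    mask = gray < filters.threshold_otsu(gray)      # nuclei are darker than background
    labels = measure.label(mask)
    return [r.centroid for r in measure.regionprops(labels) if r.area > 50]

class ContextClassifier(nn.Module):
    """Small CNN over a context patch centered on one nucleus (2 classes)."""
    def __init__(self, patch=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * (patch // 4) ** 2, 2))

    def forward(self, x):
        return self.head(self.features(x))

def classify_cells(rgb_image, model, patch=64):
    h = patch // 2
    padded = np.pad(rgb_image, ((h, h), (h, h), (0, 0)), mode="reflect")
    preds = []
    for (y, x) in segment_nuclei(rgb_image):
        y, x = int(y), int(x)
        crop = padded[y:y + patch, x:x + patch]       # context window around the nucleus
        t = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        preds.append(model(t).argmax(1).item())       # 0 = human, 1 = mouse (arbitrary labels)
    return preds
```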


2019 ◽  
Vol 48 (1) ◽  
pp. 113-122 ◽  
Author(s):  
Muhammad Shahid Iqbal ◽  
Saeed El-Ashram ◽  
Sajid Hussain ◽  
Tamoor Khan ◽  
Shujian Huang ◽  
...  

2011 ◽  
Vol 65 (10) ◽  
pp. 1116-1125 ◽  
Author(s):  
Angela Walter ◽  
Wilm Schumacher ◽  
Thomas Bocklitz ◽  
Martin Reinicke ◽  
Petra Rösch ◽  
...  

Author(s):  
M. Papadomanolaki ◽  
M. Vakalopoulou ◽  
S. Zagoruyko ◽  
K. Karantzalos

In this paper we evaluated deep-learning frameworks based on Convolutional Neural Networks for the accurate classification of multispectral remote sensing data. Several state-of-the-art models were tested on the publicly available SAT-4 and SAT-6 high-resolution satellite multispectral datasets. In particular, the benchmark included the AlexNet, AlexNet-small and VGG models, which were trained and applied to both datasets exploiting all the available spectral information. Deep Belief Networks, Autoencoders and other semi-supervised frameworks were also compared. The high-level features computed by the tested models classified the different land-cover classes with very high accuracy, i.e., above 99.9%. The experimental results demonstrate the great potential of advanced deep-learning frameworks for the supervised classification of high-resolution multispectral remote sensing data.
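
As a rough illustration of how such a network consumes all four spectral bands (R, G, B, NIR) of a SAT-4/SAT-6 patch, the following PyTorch sketch defines a small AlexNet-style CNN; the layer widths and hyperparameters are illustrative assumptions and not those of the benchmarked models.

```python
# Minimal sketch (not the exact benchmarked architectures): a small CNN that
# takes all four spectral bands of a 28x28 SAT patch and predicts the class.
import torch
import torch.nn as nn

class SmallSatCNN(nn.Module):
    def __init__(self, num_classes=6, bands=4):        # SAT-6 has 6 classes, SAT-4 has 4
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # 28 -> 14
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # 14 -> 7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                               # x: (N, 4, 28, 28)
        return self.classifier(self.features(x))

model = SmallSatCNN()
logits = model(torch.randn(8, 4, 28, 28))               # dummy batch of 8 patches
print(logits.shape)                                      # torch.Size([8, 6])
```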


BME Frontiers ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
DongHun Ryu ◽  
Jinho Kim ◽  
Daejin Lim ◽  
Hyun-Seok Min ◽  
In Young Yoo ◽  
...  

Objective and Impact Statement. We propose a rapid and accurate blood cell identification method exploiting deep learning and label-free refractive index (RI) tomography. Our computational approach, which fully utilizes the tomographic information of bone marrow (BM) white blood cells (WBCs), enables us not only to classify the blood cells with deep learning but also to quantitatively study their morphological and biochemical properties for hematology research. Introduction. Conventional methods for examining blood cells, such as blood smear analysis by medical professionals and fluorescence-activated cell sorting, require significant time, cost, and domain knowledge that could affect test results. While label-free imaging techniques that use a specimen's intrinsic contrast (e.g., multiphoton and Raman microscopy) have been used to characterize blood cells, their imaging procedures and instrumentation are relatively time-consuming and complex. Methods. The RI tomograms of the BM WBCs are acquired via a Mach-Zehnder interferometer-based tomographic microscope and classified by a 3D convolutional neural network. We test our deep learning classifier on four types of bone marrow WBCs collected from healthy donors (n=10): monocytes, myelocytes, B lymphocytes, and T lymphocytes. Quantitative parameters of the WBCs are obtained directly from the tomograms. Results. Our results show >99% accuracy for the binary classification of myeloids and lymphoids and >96% accuracy for the four-type classification of B and T lymphocytes, monocytes, and myelocytes. The feature learning capability of our approach is visualized via an unsupervised dimension reduction technique. Conclusion. We envision that the proposed cell classification framework can be easily integrated into existing blood cell investigation workflows, providing cost-effective and rapid diagnosis for hematologic malignancy.
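
To make the classification step more concrete, the sketch below shows a minimal 3D convolutional network mapping a volumetric RI tomogram to one of the four WBC types; the input volume size, layer widths, and normalization are assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch (assumed input size and layer widths): a 3D CNN that
# maps a volumetric refractive-index tomogram to one of four WBC types
# (monocyte, myelocyte, B lymphocyte, T lymphocyte).
import torch
import torch.nn as nn

class WBC3DCNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),                             # 64^3 -> 32^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),                             # 32^3 -> 16^3
            nn.AdaptiveAvgPool3d(1),                     # global pooling -> (N, 32, 1, 1, 1)
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x):                                # x: (N, 1, D, H, W) RI volume
        return self.head(self.features(x))

model = WBC3DCNN()
tomogram = torch.randn(2, 1, 64, 64, 64)                 # dummy batch of two tomograms
print(model(tomogram).shape)                              # torch.Size([2, 4])
```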


2019 ◽  
Author(s):  
Eric Prince ◽  
Todd C. Hankinson

ABSTRACT
High-throughput data are commonplace in biomedical research, as seen with technologies such as single-cell RNA sequencing (scRNA-seq) and other Next Generation Sequencing technologies. As these techniques continue to be increasingly utilized, it is critical to have analysis tools that can identify meaningful, complex relationships between variables (i.e., in the case of scRNA-seq, genes) in a way that is free of human bias. Moreover, it is equally paramount that both linear and non-linear (i.e., one-to-many) variable relationships be considered when contrasting datasets. HD Spot is a deep learning-based framework that generates an optimal, interpretable classifier for a given high-throughput dataset using a simple genetic algorithm as well as an autoencoder-to-classifier transfer learning approach. Using four unique publicly available scRNA-seq datasets with published ground truth, we demonstrate the robustness of HD Spot and its ability to identify ontologically accurate gene lists for a given data subset. HD Spot serves as a bioinformatic tool that allows novice and advanced analysts to gain complex insight into their respective datasets, enabling the development of novel hypotheses.
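
A minimal sketch of the autoencoder-to-classifier transfer-learning idea is given below: pretrain an autoencoder on expression profiles, then reuse its encoder under a classification head. The layer sizes and dimensions are illustrative assumptions, and the genetic-algorithm search used by HD Spot is omitted.

```python
# Minimal sketch of autoencoder-to-classifier transfer learning on a
# gene-expression matrix (cells x genes). Layer sizes are illustrative;
# the genetic-algorithm hyperparameter search used by HD Spot is omitted.
import torch
import torch.nn as nn

n_genes, n_classes = 2000, 5

encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(), nn.Linear(256, 64), nn.ReLU())
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, n_genes))

# Step 1: unsupervised pretraining -- reconstruct the input profiles.
autoencoder = nn.Sequential(encoder, decoder)
x = torch.randn(128, n_genes)                        # dummy batch of expression profiles
recon_loss = nn.functional.mse_loss(autoencoder(x), x)

# Step 2: transfer -- reuse the pretrained encoder under a classification head.
classifier = nn.Sequential(encoder, nn.Linear(64, n_classes))
labels = torch.randint(0, n_classes, (128,))
clf_loss = nn.functional.cross_entropy(classifier(x), labels)
print(recon_loss.item(), clf_loss.item())
```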


2021 ◽  
Author(s):  
Lam Pham ◽  
Alexander Schindler ◽  
Mina Schutz ◽  
Jasmin Lampert ◽  
Sven Schlarb ◽  
...  

In this paper, we present deep learning frameworks for audio-visual scene classification (SC) and indicate how individual visual and audio features, as well as their combination, affect SC performance. Our extensive experiments, conducted on the DCASE (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) Task 1B development dataset, achieve best classification accuracies of 82.2%, 91.1%, and 93.9% with audio input only, visual input only, and combined audio-visual input, respectively. The highest classification accuracy of 93.9%, obtained from an ensemble of audio-based and visual-based frameworks, represents an improvement of 16.5% over the DCASE baseline.
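
A minimal sketch of the late-fusion step behind such an ensemble is shown below: class probabilities from the audio-only and visual-only branches are averaged before taking the argmax. The equal fusion weights and the number of scene classes are assumptions, not the paper's exact configuration.

```python
# Minimal sketch of late fusion: average the class probabilities produced by
# an audio-only model and a visual-only model, then take the argmax.
# The two probability arrays below stand in for real model outputs.
import numpy as np

num_scenes = 10                                        # assumed number of scene classes
audio_probs = np.random.dirichlet(np.ones(num_scenes), size=4)   # 4 clips, audio branch
visual_probs = np.random.dirichlet(np.ones(num_scenes), size=4)  # 4 clips, visual branch

fused = 0.5 * audio_probs + 0.5 * visual_probs         # equal-weight ensemble (assumed)
predictions = fused.argmax(axis=1)
print(predictions)
```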


2021 ◽  
Vol 7 (8) ◽  
pp. 149
Author(s):  
Mridul Ghosh ◽  
Sk Md Obaidullah ◽  
Francesco Gherardini ◽  
Maria Zdimalova

The paper addresses an image-processing problem in the field of fine arts. In particular, a deep learning-based technique to classify the geometric forms of artworks, such as paintings and mosaics, is presented. We propose and test a convolutional neural network (CNN)-based framework that autonomously computes feature maps and classifies them. Convolution, pooling and dense layers are the three categories of layers that generate attributes from the dataset images by applying specified filters. As a case study, a Roman mosaic is considered, which was digitally reconstructed by close-range photogrammetry based on standard photos. During the digital transformation from a 2D perspective view of the mosaic into an orthophoto, each photo is rectified (i.e., it becomes an orthogonal projection of the real photo onto the plane of the mosaic). Image samples of the geometric forms, e.g., triangles, squares, circles, octagons and leaves, even when partially deformed, were extracted from both the original and the rectified photos and formed the dataset used to test the CNN-based approach. The proposed method proved robust enough to analyze the mosaic's geometric forms, with an accuracy higher than 97%. Furthermore, the performance of the proposed method was compared with standard deep learning frameworks. Given the promising results, this method can be applied to many other pattern-identification problems related to artworks.
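
The sketch below illustrates the convolution/pooling/dense stack described above on the geometric-form task; the input size, filter counts, and the five-class label set are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative convolution / pooling / dense stack classifying 64x64 crops
# into five assumed geometric-form classes (triangle, square, circle,
# octagon, leaf). Sizes and filter counts are not taken from the paper.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # convolution layer
    nn.MaxPool2d(2),                                          # pooling layer: 64 -> 32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                          # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),                   # dense layers
    nn.Linear(64, 5),
)
print(model(torch.randn(1, 3, 64, 64)).shape)                 # torch.Size([1, 5])
```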

