MiSiC, a general deep learning-based method for the high-throughput cell segmentation of complex bacterial communities

2020 ◽  
Author(s):  
Swapnesh Panigrahi ◽  
Dorothée Murat ◽  
Antoine Le Gall ◽  
Eugénie Martineau ◽  
Kelly Goldlust ◽  
...  

Abstract
Studies of microbial communities by live imaging require new tools for the robust identification of bacterial cells in dense and often inter-species populations, sometimes over very large scales. Here, we developed MiSiC, a general deep-learning-based segmentation method that automatically segments a wide range of spatially structured bacterial communities with very little parameter adjustment, independent of the imaging modality. Using a bacterial predator-prey interaction model, we demonstrate that MiSiC enables the analysis of interspecies interactions, resolving processes at subcellular scales and discriminating between species in millimeter-size datasets. The simple implementation of MiSiC and its relatively low computing-power requirements make it broadly accessible to fields interested in bacterial interactions and cell biology.

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Swapnesh Panigrahi ◽  
Dorothée Murat ◽  
Antoine Le Gall ◽  
Eugénie Martineau ◽  
Kelly Goldlust ◽  
...  

Studies of bacterial communities, biofilms and microbiomes are multiplying due to their impact on health and ecology. Live imaging of microbial communities requires new tools for the robust identification of bacterial cells in dense and often inter-species populations, sometimes over very large scales. Here, we developed MiSiC, a general deep-learning-based 2D segmentation method that automatically segments single bacteria in complex images of interacting bacterial communities with very little parameter adjustment, independent of the microscopy settings and imaging modality. Using a bacterial predator-prey interaction model, we demonstrate that MiSiC enables the analysis of interspecies interactions, resolving processes at subcellular scales and discriminating between species in millimeter-size datasets. The simple implementation of MiSiC and its relatively low computing-power requirements make it broadly accessible to fields interested in bacterial interactions and cell biology.


2021 ◽  
Author(s):  
Hieu H. Pham ◽  
Dung V. Do ◽  
Ha Q. Nguyen

Abstract
X-ray imaging in Digital Imaging and Communications in Medicine (DICOM) format is the most commonly used imaging modality in clinical practice, resulting in vast, non-normalized databases. This is an obstacle to deploying artificial intelligence (AI) solutions for analyzing medical images, which often requires identifying the right body part before feeding the image into a specified AI model. This challenge raises the need for an automated and efficient approach to classifying body parts from X-ray scans. Unfortunately, to the best of our knowledge, there is no open tool or framework for this task to date. To fill this gap, we introduce a DICOM Imaging Router that deploys deep convolutional neural networks (CNNs) to categorize unknown DICOM X-ray images into five anatomical groups: abdominal, adult chest, pediatric chest, spine, and others. To this end, a large-scale X-ray dataset consisting of 16,093 images was collected and manually classified. We then trained a set of state-of-the-art deep CNNs using a training set of 11,263 images. These networks were evaluated on an independent test set of 2,419 images and showed superior performance in classifying the body parts. Specifically, our best-performing model (MobileNet-V1) achieved a recall of 0.982 (95% CI, 0.977–0.988), a precision of 0.985 (95% CI, 0.975–0.989), and an F1-score of 0.981 (95% CI, 0.976–0.987), while requiring less computation for inference (0.0295 seconds per image). External validation on 1,000 X-ray images shows the robustness of the proposed approach across hospitals. These results indicate that deep CNNs can accurately and effectively differentiate human body parts from X-ray scans, providing potential benefits for a wide range of applications in clinical settings. The dataset, code, and trained deep learning models from this study will be made publicly available on our project website at https://vindr.ai/datasets/bodypartxr.
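The routing step the abstract describes can be sketched in a few lines: a trained CNN emits per-class probabilities for each DICOM X-ray, and the router maps the highest-scoring class to one of the five anatomical groups. This is an illustrative sketch only; the group names match the abstract, but the confidence threshold and fallback behavior are assumptions, not details from the paper.

```python
# Hypothetical routing layer on top of a body-part classifier.
# The threshold and the fallback to "others" are illustrative assumptions.

ANATOMICAL_GROUPS = ["abdominal", "adult_chest", "pediatric_chest", "spine", "others"]

def route_image(probabilities, threshold=0.5):
    """Return the anatomical group for one image given class probabilities.

    probabilities: list of 5 floats (softmax output, sums to ~1.0).
    Low-confidence predictions fall back to the catch-all 'others' group.
    """
    if len(probabilities) != len(ANATOMICAL_GROUPS):
        raise ValueError("expected one probability per anatomical group")
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    if probabilities[best] < threshold:
        return "others"
    return ANATOMICAL_GROUPS[best]

# Route a small batch of (made-up) model outputs to downstream AI models.
batch = [
    [0.05, 0.90, 0.02, 0.02, 0.01],  # confidently adult chest
    [0.30, 0.25, 0.20, 0.15, 0.10],  # ambiguous -> others
]
routes = [route_image(p) for p in batch]
print(routes)  # ['adult_chest', 'others']
```

In a deployment like the one described, each route would select which specialized AI model receives the image next.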


2021 ◽  
Author(s):  
Sébastien Herbert ◽  
Léo Valon ◽  
Laure Mancini ◽  
Nicolas Dray ◽  
Paolo Caldarelli ◽  
...  

Background
Quantitative imaging of epithelial tissues calls for bioimage analysis tools that are widely applicable and accurate. When imaging 3D tissues, a common post-processing step is to project the acquired 3D volume onto a 2D plane mapping the tissue surface. Indeed, while segmenting tissue cells is tractable on 2D projections, it remains very difficult and cumbersome in 3D. However, for many specimens and models used in developmental and cell biology, the complex content of the image volume surrounding the epithelium often reduces the visibility of the biological object in the projection, compromising its subsequent analysis. In addition, the projection distorts the geometry of the tissue and can lead to strong artifacts in morphology measurements.
Results
Here we introduce DProj, a user-friendly toolbox built to robustly project epithelia onto their 2D surface from 3D volumes and to produce accurate morphology measurements corrected for the projection distortion, even for very curved tissues. DProj is built upon two components. LocalZProjector is a user-friendly, configurable Fiji plugin that generates 2D projections and height-maps from potentially large 3D stacks (larger than 40 GB per time-point) by incorporating only the signal of interest, despite possibly complex image content. DeProj is a MATLAB tool that generates correct morphology measurements by combining the height-map output (such as that produced by LocalZProjector) with the results of cell segmentation on the 2D projection. We demonstrate the effectiveness of DProj over a wide range of biological samples, then compare its performance and accuracy against similar existing tools.
Conclusions
We find that LocalZProjector performs well even when the volume to project contains spurious structures, and that it can process large images without a pre-processing step. We also study the impact of the geometrical distortions that the projection induces on morphological measurements: the very large distortions we measured are corrected by DeProj, providing accurate outputs.
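The core surface-projection idea can be illustrated compactly: for every (x, y) position, find the z-slice with the strongest signal, record it in a height-map, and build the 2D projection from those voxels. The real plugin uses configurable local filters; the simple argmax criterion below is an assumption made for illustration, not LocalZProjector's actual algorithm.

```python
# Minimal sketch of surface projection from a 3D stack (illustrative only).

def project_surface(stack):
    """stack: 3D intensity volume indexed as stack[z][y][x].

    Returns (projection, height_map), both 2D arrays indexed [y][x].
    """
    depth = len(stack)
    height = len(stack[0])
    width = len(stack[0][0])
    projection = [[0] * width for _ in range(height)]
    height_map = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Pick the slice where this pixel's signal is brightest.
            best_z = max(range(depth), key=lambda z: stack[z][y][x])
            height_map[y][x] = best_z
            projection[y][x] = stack[best_z][y][x]
    return projection, height_map

# Toy 3-slice volume: the bright "tissue surface" sits at different depths.
stack = [
    [[9, 1], [1, 1]],   # z = 0
    [[1, 8], [1, 1]],   # z = 1
    [[1, 1], [7, 6]],   # z = 2
]
projection, height_map = project_surface(stack)
print(projection)   # [[9, 8], [7, 6]]
print(height_map)   # [[0, 1], [2, 2]]
```

The height-map is exactly the extra piece of information a tool like DeProj needs to correct the geometric distortion that a flat projection introduces on a curved surface.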


2021 ◽  
Author(s):  
Dejin Xun ◽  
Deheng Chen ◽  
Yitian Zhou ◽  
Volker M. Lauschke ◽  
Rui Wang ◽  
...  

Deep learning-based cell segmentation is increasingly used in cell biology and molecular pathology, owing to the massive accumulation of diverse large-scale datasets and its excellent performance in cell representation. However, the development of specialized algorithms has long been hampered by a paucity of annotated training data, whereas the performance of generalist algorithms is limited without experiment-specific calibration. Here, we present a deep learning-based tool called Scellseg, consisting of a novel pre-trained network architecture and a contrastive fine-tuning strategy. Compared with four commonly used algorithms, Scellseg achieved higher average precision on three diverse datasets with no need for dataset-specific configuration. Interestingly, a data-scale experiment showed that eight images are sufficient for model fine-tuning to achieve satisfactory performance. We also developed a graphical user interface integrating annotation, fine-tuning, and inference, which allows biologists to easily specialize their own segmentation model and analyze data at the single-cell level.
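The abstract does not spell out the contrastive fine-tuning strategy, so the sketch below shows only the generic ingredient such schemes share: an InfoNCE-style loss that pulls an embedding toward its positive pair and pushes it away from other samples. The vectors and the temperature value are illustrative assumptions, not details of Scellseg.

```python
import math

# Generic InfoNCE-style contrastive loss (illustrative; not Scellseg's code).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Contrastive loss for one anchor embedding.

    Lower when the anchor is similar to its positive pair and
    dissimilar to the negative samples.
    """
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # stabilise the softmax numerically
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# A well-aligned positive pair costs less than a misaligned one.
aligned = info_nce([1.0, 0.0], [0.9, 0.1], [[0.0, 1.0]])
misaligned = info_nce([1.0, 0.0], [0.0, 1.0], [[0.9, 0.1]])
print(aligned < misaligned)  # True
```

During fine-tuning, a loss of this shape lets a pre-trained encoder adapt to a new experiment from only a handful of annotated images, consistent with the eight-image result reported above.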


2019 ◽  
Author(s):  
Renske van Raaphorst ◽  
Morten Kjos ◽  
Jan-Willem Veening

Abstract
High-throughput analysis of single-cell microscopy data is a critical tool within the field of bacterial cell biology. Several programs have been developed to specifically segment bacterial cells from phase-contrast images. Together with spot and object detection algorithms, these programs offer powerful approaches to quantify observations from microscopy data, ranging from cell-to-cell genealogy to the localization and movement of proteins. Most segmentation programs contain specific post-processing and plotting options, but these options vary between programs, and the possibilities to optimize or alter the outputs are often limited. Therefore, we developed BactMAP (Bacterial toolbox for Microscopy Analysis & Plotting), a software package that automatically transforms cell segmentation and spot detection data generated by different programs into various plots. Furthermore, BactMAP makes it possible to perform custom analyses and change the layout of the output. Because BactMAP works independently of segmentation and detection programs, inputs from different sources can be compared within the same analysis pipeline. BactMAP runs in R, which enables the use of advanced statistical analysis tools as well as easily adjustable plot graphics in every operating system. Using BactMAP, we visualize key cell cycle parameters in Bacillus subtilis and Staphylococcus aureus, and demonstrate that the DNA replication forks in Streptococcus pneumoniae dissociate and associate before splitting of the cell, after the Z-ring is formed at the new quarter positions. BactMAP is available from https://veeninglab.com/bactmap.


2020 ◽  
Author(s):  
William D. Cameron ◽  
Alex M. Bennett ◽  
Cindy V. Bui ◽  
Huntley H. Chang ◽  
Jonathan V. Rocheleau

Abstract
Deep learning provides an opportunity to automatically segment and extract cellular features from high-throughput microscopy images. Many labeling strategies have been developed for this purpose, ranging from the use of fluorescent markers to label-free approaches. However, differences in the channels available to each training dataset make it difficult to directly compare the effectiveness of these strategies across studies. Here we explore training models using subimage stacks composed of channels sampled from larger, 'hyper-labeled' image stacks. This allows us to directly compare a variety of labeling strategies and training approaches on identical cells. This approach revealed that fluorescence-based strategies generally provided higher segmentation accuracies, but were less accurate than label-free models when labeling was inconsistent. The relative strengths of labeled and label-free techniques could be combined by merging fluorescence channels and out-of-focus brightfield images. Beyond comparing labeling strategies, using subimage stacks for training also provides a way to simulate a wide range of labeling conditions, increasing the ability of the final model to accommodate a greater range of experimental setups.
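The subimage-stack idea above amounts to enumerating channel subsets of one 'hyper-labeled' stack, so that every labeling strategy is trained and evaluated on identical cells. A minimal sketch, with channel names that are illustrative assumptions rather than the study's actual channels:

```python
from itertools import combinations

# Hypothetical channels of one hyper-labeled stack (names are assumptions).
HYPER_CHANNELS = ["nuclear_marker", "membrane_marker", "brightfield", "brightfield_defocus"]

def subimage_channel_sets(channels, size):
    """All channel combinations of the requested size, as ordered tuples.

    Each tuple defines one training configuration drawn from the same cells.
    """
    return list(combinations(channels, size))

pairs = subimage_channel_sets(HYPER_CHANNELS, 2)
print(len(pairs))  # 6 two-channel training configurations from 4 channels
print(("membrane_marker", "brightfield_defocus") in pairs)  # True
```

Because every subset comes from the same acquisition, differences in segmentation accuracy between the resulting models can be attributed to the labeling strategy rather than to the cells imaged.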


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1031
Author(s):  
Joseba Gorospe ◽  
Rubén Mulero ◽  
Olatz Arbelaitz ◽  
Javier Muguerza ◽  
Miguel Ángel Antón

Deep learning techniques are being increasingly used in the scientific community as a consequence of the high computational capacity of current systems and the increase in the amount of data available, a result of the digitalisation of society in general and the industrial world in particular. In addition, the emergence of edge computing, which focuses on integrating artificial intelligence as close as possible to the client, makes it possible to implement systems that act in real time without the need to transfer all of the data to centralised servers. The combination of these two concepts can lead to systems with the capacity to make correct decisions and act on them immediately and in situ. Despite this, the low capacity of embedded systems greatly hinders this integration, so the ability to integrate them into a wide range of micro-controllers can be a great advantage. This paper contributes an environment based on Mbed OS and TensorFlow Lite that can be embedded in any general-purpose embedded system, allowing the introduction of deep learning architectures. The experiments herein prove that the proposed system is competitive when compared to other commercial systems.


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Malte Seemann ◽  
Lennart Bargsten ◽  
Alexander Schlaefer

Abstract
Deep learning methods produce promising results when applied to a wide range of medical imaging tasks, including segmentation of the artery lumen in computed tomography angiography (CTA) data. However, to perform sufficiently well, neural networks have to be trained on large amounts of high-quality annotated data. In the realm of medical imaging, annotations are not only scarce but also often not entirely reliable. To tackle both challenges, we developed a two-step approach for generating realistic synthetic CTA data for the purpose of data augmentation. In the first step, moderately realistic images are generated in a purely numerical fashion. In the second step, these images are improved by applying neural domain adaptation. We evaluated the impact of synthetic data on lumen segmentation via convolutional neural networks (CNNs) by comparing the resulting performances. Improvements of up to 5% in terms of Dice coefficient and 20% for Hausdorff distance represent a proof of concept that the proposed augmentation procedure can be used to enhance deep learning-based segmentation of the artery lumen in CTA images.
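The two metrics the abstract reports can be computed directly on binary masks. In this sketch, masks are represented as sets of (row, col) foreground pixels, which is an illustrative choice rather than the study's implementation:

```python
import math

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|); 1.0 for identical masks."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two non-empty point sets:
    the largest distance from a point in one set to its nearest
    neighbour in the other set."""
    def directed(p, q):
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

# Toy prediction vs. ground truth, differing in one boundary pixel.
pred  = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 0), (0, 1), (1, 1)}
print(round(dice(pred, truth), 3))   # 0.667
print(hausdorff(pred, truth))        # 1.0
```

Dice captures volumetric overlap, while Hausdorff distance penalizes the worst boundary outlier, which is why the two improvements quoted above measure complementary aspects of lumen segmentation quality.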


Computers ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 82
Author(s):  
Ahmad O. Aseeri

Deep learning-based methods have emerged as among the most effective and practical solutions for a wide range of medical problems, including the diagnosis of cardiac arrhythmias. A critical step toward early diagnosis in many heart diseases is the accurate detection and classification of cardiac arrhythmias, which can be achieved via electrocardiograms (ECGs). Motivated by the desire to enhance conventional clinical methods for diagnosing cardiac arrhythmias, we introduce an uncertainty-aware deep learning-based predictive model for accurate large-scale classification of cardiac arrhythmias, trained and evaluated on three benchmark medical datasets. In addition, because the quantification of uncertainty estimates is vital for clinical decision-making, our method incorporates a probabilistic approach to capture the model's uncertainty, using a Bayesian approximation method that introduces no additional parameters or significant changes to the network's architecture. Although many arrhythmia classification solutions with various ECG feature engineering techniques have been reported in the literature, the AI-based probabilistic method introduced in this paper outperforms existing methods, with multiclass F1 scores of 98.62% and 96.73% on the MIT-BIH dataset (20 annotations), 99.23% and 96.94% on the INCART dataset (eight annotations), and 97.25% and 96.73% on the BIDMC dataset (six annotations), for the deep ensemble and probabilistic modes, respectively. We also demonstrate the method's high performance and statistical reliability in numerical experiments on language modeling using the gating mechanism of recurrent neural networks.
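The uncertainty-quantification idea described above can be sketched generically: average the class probabilities produced by several models (a deep ensemble, or repeated stochastic forward passes from a Bayesian approximation such as MC dropout), then use the entropy of the averaged prediction as the uncertainty estimate. The toy probability vectors below are illustrative assumptions, not outputs of the paper's model:

```python
import math

def mean_prediction(member_probs):
    """Average per-class probabilities across ensemble members."""
    n = len(member_probs)
    return [sum(p[i] for p in member_probs) / n
            for i in range(len(member_probs[0]))]

def predictive_entropy(probs):
    """Entropy in nats; higher means a less certain prediction."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Members that agree vs. members that conflict on a 2-class problem.
agree    = [[0.90, 0.10], [0.85, 0.15], [0.95, 0.05]]
disagree = [[0.90, 0.10], [0.20, 0.80], [0.50, 0.50]]

h_agree = predictive_entropy(mean_prediction(agree))
h_disagree = predictive_entropy(mean_prediction(disagree))
print(h_agree < h_disagree)  # True: disagreement shows up as higher uncertainty
```

In a clinical pipeline, high-entropy predictions of this kind are the ones that would be flagged for review by a cardiologist rather than acted on automatically.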


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Kwang-Hyun Uhm ◽  
Seung-Won Jung ◽  
Moon Hyung Choi ◽  
Hong-Kyu Shin ◽  
Jae-Ik Yoo ◽  
...  

Abstract
In 2020, an estimated 73,750 kidney cancer cases were diagnosed, and 14,830 people died from the disease in the United States. Preoperative multi-phase abdominal computed tomography (CT) is often used to detect lesions and classify the histologic subtypes of renal tumors to avoid unnecessary biopsy or surgery. However, there is inter-observer variability due to subtle differences in the imaging features of tumor subtypes, which makes treatment decisions challenging. While deep learning has recently been applied to the automated diagnosis of renal tumors, classification across a wide range of subtype classes has not yet been sufficiently studied. In this paper, we propose an end-to-end deep learning model for the differential diagnosis of five major histologic subtypes of renal tumors, including both benign and malignant tumors, on multi-phase CT. Our model is a unified framework that simultaneously identifies lesions and classifies subtypes for diagnosis without manual intervention. We trained and tested the model using CT data from 308 patients who underwent nephrectomy for renal tumors. The model achieved an area under the curve (AUC) of 0.889 and outperformed radiologists for most subtypes. We further validated the model on an independent dataset of 184 patients from The Cancer Imaging Archive (TCIA); the AUC for this dataset was 0.855, and the model performed comparably to the radiologists. These results indicate that our model can achieve diagnostic performance similar to or better than radiologists in differentiating a wide range of renal tumors on multi-phase CT.
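For a binary distinction, the AUC figures quoted above have a direct probabilistic reading: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A minimal sketch of that pairwise computation, with made-up scores and labels rather than data from the study:

```python
def auc(scores, labels):
    """Pairwise AUC: labels are 1 (positive) and 0 (negative).

    Counts the fraction of positive-negative pairs the scores rank
    correctly, with ties counted as half a win.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative scores for 5 cases: 5 of the 6 pos-neg pairs rank correctly.
scores = [0.95, 0.80, 0.60, 0.40, 0.20]
labels = [1,    1,    0,    1,    0]
print(round(auc(scores, labels), 3))  # 0.833
```

An AUC of 0.889, as reported for the internal test set, therefore means the model ranks a positive case above a negative one about 89% of the time.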

