A novel deep learning-based 3D cell segmentation framework for future image-based disease detection

2022, Vol 12 (1)
Author(s): Andong Wang, Qi Zhang, Yang Han, Sean Megason, Sahand Hormoz, ...

Abstract: Cell segmentation plays a crucial role in understanding, diagnosing, and treating diseases. Despite the recent success of deep learning-based cell segmentation methods, it remains challenging to accurately segment densely packed cells in 3D cell membrane images. Existing approaches also require fine-tuning multiple manually selected hyperparameters on new datasets. We develop a deep learning-based 3D cell segmentation pipeline, 3DCellSeg, to address these challenges. Compared to existing methods, our approach carries the following novelties: (1) a robust two-stage pipeline requiring only one hyperparameter; (2) a light-weight deep convolutional neural network (3DCellSegNet) to efficiently output voxel-wise masks; (3) a custom loss function (3DCellSeg Loss) to tackle the clumped cell problem; and (4) an efficient touching area-based clustering algorithm (TASCAN) to separate 3D cells from the foreground masks. Cell segmentation experiments conducted on four different cell datasets show that 3DCellSeg outperforms the baseline models on the ATAS (plant), HMS (animal), and LRP (plant) datasets with an overall accuracy of 95.6%, 76.4%, and 74.7%, respectively, while achieving an accuracy comparable to the baselines on the Ovules (plant) dataset with an overall accuracy of 82.2%. Ablation studies show that the individual improvements in accuracy are attributable to 3DCellSegNet, 3DCellSeg Loss, and TASCAN, with 3DCellSeg demonstrating robustness across different datasets and cell shapes. Our results suggest that 3DCellSeg can serve as a powerful biomedical and clinical tool for tasks such as histopathological image analysis for cancer diagnosis and grading.
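
The abstract does not spell out TASCAN's merge criterion, so the following is only a minimal sketch of a touching area-based clustering step under assumed details: the foreground is over-segmented by erosion and connected-component labelling, then adjacent fragments are merged when their contact surface is large relative to the smaller fragment. The erosion depth and the single `min_touch` threshold are illustrative stand-ins for the pipeline's one hyperparameter.

```python
# Hedged sketch only: the erosion depth, surface normalisation, and
# threshold are assumptions, not the published TASCAN algorithm.
import numpy as np
from scipy import ndimage

def touching_area_merge(foreground: np.ndarray, min_touch: float = 0.2):
    """foreground: binary 3D mask. Returns an int label volume of cells."""
    # Over-segment: erode to break thin bridges between clumped cells.
    seeds, n = ndimage.label(ndimage.binary_erosion(foreground))
    # Grow seed labels back over the full foreground (nearest-seed fill).
    idx = ndimage.distance_transform_edt(
        seeds == 0, return_distances=False, return_indices=True)
    labels = seeds[tuple(idx)] * (foreground > 0)
    # Count face-adjacent voxel pairs with different labels (touching area).
    pairs = []
    for axis in range(3):
        lo, hi = [slice(None)] * 3, [slice(None)] * 3
        lo[axis], hi[axis] = slice(None, -1), slice(1, None)
        a, b = labels[tuple(lo)].ravel(), labels[tuple(hi)].ravel()
        m = (a > 0) & (b > 0) & (a != b)
        pairs.append(np.sort(np.stack([a[m], b[m]]), axis=0))
    uniq, area = np.unique(np.concatenate(pairs, axis=1),
                           axis=1, return_counts=True)
    # Merge a pair when its contact area, relative to the smaller
    # fragment's surface scale (volume ** 2/3), exceeds the threshold.
    vol = ndimage.sum(foreground, labels, index=np.arange(1, n + 1))
    for (i, j), t in zip(uniq.T, area):
        if t / min(vol[i - 1], vol[j - 1]) ** (2 / 3) > min_touch:
            labels[labels == j] = i
    return labels
```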

2020, Vol 12 (18), pp. 3015
Author(s): Mélissande Machefer, François Lemarchand, Virginie Bonnefond, Alasdair Hitchins, Panagiotis Sidiropoulos

This work introduces a method that combines remote sensing and deep learning into a framework tailored for accurate, reliable and efficient counting and sizing of plants in aerial images. The investigated task focuses on two low-density crops, potato and lettuce. This double objective of counting and sizing is achieved through the detection and segmentation of individual plants by fine-tuning an existing deep learning architecture called Mask R-CNN. This paper includes a thorough discussion of the optimal parametrisation to adapt the Mask R-CNN architecture to this novel task. As we examine the correlation of Mask R-CNN performance with the annotation volume and granularity (coarse or refined) of remotely sensed images of plants, we conclude that transfer learning can be effectively used to reduce the required amount of labelled data. Indeed, a Mask R-CNN previously trained on one low-density crop improves performance after training on a new crop. Once trained for a given crop, the Mask R-CNN solution is shown to outperform a manually-tuned computer vision algorithm. Model performance is assessed using intuitive metrics such as Mean Average Precision (mAP), computed from the Intersection over Union (IoU) of the masks, for individual plant segmentation, and Multiple Object Tracking Accuracy (MOTA) for detection. The presented model reaches an mAP of 0.418 for potato plants and 0.660 for lettuces on the individual plant segmentation task. In detection, we obtain a MOTA of 0.781 for potato plants and 0.918 for lettuces.
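
Fine-tuning Mask R-CNN for a new instance-segmentation class typically follows the standard torchvision pattern of replacing the box and mask heads; the sketch below shows that pattern for a single-crop setup. The class count, score threshold, and `gsd` (ground sampling distance) variable are assumptions for illustration, not values from the paper.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_plant_maskrcnn(num_classes: int = 2):  # background + one crop
    # Start from COCO-pretrained weights and replace both prediction heads.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
    return model

# At inference, counting is the number of detections above a score
# threshold, and sizing is each mask's pixel area scaled by the (assumed)
# ground sampling distance `gsd` in metres per pixel:
#   model.eval(); out = model([image])[0]
#   keep = out["scores"] > 0.5
#   count = int(keep.sum())
#   areas_m2 = (out["masks"][keep, 0] > 0.5).sum(dim=(1, 2)) * gsd ** 2
```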


Electronics, 2021, Vol 10 (8), pp. 954
Author(s): Loay Hassan, Mohamed Abdel-Nasser, Adel Saleh, Osama A. Omer, Domenec Puig

Existing nuclei segmentation methods have obtained limited results with multi-center and multi-organ whole-slide images (WSIs) due to the use of different stains and scanners, and to overlapping, clumped nuclei with ambiguous boundaries between adjacent cells. To address these problems, we propose an efficient stain-aware nuclei segmentation method based on deep learning for multi-center WSIs. Unlike related works, which exploit a single stain template from the dataset to normalize WSIs, we propose an efficient algorithm to select a set of stain templates based on stain clustering. Individual deep learning models are trained on each stain template, and an aggregation function based on the Choquet integral is then employed to combine the segmentation masks of the individual models. On a challenging multi-center multi-organ WSI dataset, the experimental results demonstrate that the proposed method outperforms state-of-the-art nuclei segmentation methods with an aggregated Jaccard index (AJI) of 73.23% and an F1-score of 89.32%, while using fewer parameters.
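
The abstract does not specify the fuzzy measure behind the Choquet aggregation, so the sketch below assumes a simple cardinality-based measure μ(A) = (|A|/n)^q, under which the per-pixel discrete Choquet integral reduces to an ordered weighted average of the individual models' probability maps.

```python
# Hedged sketch: the measure mu(A) = (|A|/n)**q is an assumption; the
# discrete Choquet integral itself follows the standard definition
# C = sum_i (x_(i) - x_(i-1)) * mu({(i), ..., (n)}) with x ascending.
import numpy as np

def choquet_fuse(prob_maps: np.ndarray, q: float = 0.5) -> np.ndarray:
    """prob_maps: (n_models, H, W) per-pixel foreground probabilities."""
    n = prob_maps.shape[0]
    x = np.sort(prob_maps, axis=0)                # ascending per pixel
    # mu(A_i) for the shrinking coalition A_i = {(i),...,(n)}, |A_i| = n-i+1.
    mu = ((n - np.arange(n)) / n) ** q
    diffs = np.diff(x, axis=0, prepend=0)         # x_(i) - x_(i-1), x_(0)=0
    return np.tensordot(mu, diffs, axes=(0, 0))   # fused map, shape (H, W)

# Example: fused = choquet_fuse(np.stack(per_template_probs)); mask = fused > 0.5
```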


2019, Vol 8 (2), pp. 3401-3404

Cervical cancer is a largely symptomless disease and a leading cause of cancer-related death among women. In most cervical cancer diagnosis workflows, microscopic images are taken as samples from which cervical cells must be segmented. In this paper, the fuzzy c-means clustering algorithm is used so that colour information is preserved and data loss during segmentation is minimal. It accurately segments individual cytoplasm and nuclei from clusters of overlapping cervical cells, a complete segmentation that recent methods cannot achieve owing to the challenges of delineating cells under overlap and poor contrast. The improved method for detecting overlapping cervical cells yields better detection results. Cervical cancer can be prevented through both early detection and appropriate treatment based on the severity of the disease.
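
For reference, fuzzy c-means assigns each pixel a graded membership to every cluster rather than a hard label, which is what lets overlapping cytoplasm and nuclei be teased apart. Below is a minimal implementation over pixel colour features; the cluster count and fuzzifier m are illustrative choices, not the paper's values.

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """X: (n_samples, n_features). Returns memberships U (n, c), centroids V."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))       # rows sum to 1
    for _ in range(n_iter):
        W = U ** m                                   # fuzzified memberships
        V = (W.T @ X) / W.sum(axis=0)[:, None]       # weighted centroids
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                # standard FCM update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V

# Example use on an RGB smear image `img` of shape (H, W, 3) in [0, 1]:
#   U, V = fuzzy_cmeans(img.reshape(-1, 3), c=3)  # e.g. background/cytoplasm/nuclei
#   labels = U.argmax(axis=1).reshape(img.shape[:2])
```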


2021
Author(s): Dejin Xun, Deheng Chen, Yitian Zhou, Volker M. Lauschke, Rui Wang, ...

Deep learning-based cell segmentation is increasingly utilized in cell biology and molecular pathology, due to the massive accumulation of diverse large-scale datasets and its excellent performance in cell representation. However, the development of specialized algorithms has long been hampered by a paucity of annotated training data, whereas the performance of generalist algorithms is limited without experiment-specific calibration. Here, we present a deep learning-based tool called Scellseg, consisting of a novel pre-trained network architecture and a contrastive fine-tuning strategy. Compared with four commonly used algorithms, Scellseg achieved higher average precision on three diverse datasets with no need for dataset-specific configuration. Interestingly, in a data-scale experiment we found that eight images are sufficient for model tuning to achieve satisfactory performance. We also developed a graphical user interface that integrates annotation, fine-tuning, and inference, allowing biologists to easily specialize their own segmentation model and analyze data at the single-cell level.
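
Scellseg's architecture and contrastive loss are not reproduced here; the sketch below only illustrates the general few-shot specialization loop the abstract describes, assuming a generic model with an `encoder` attribute to freeze and a user-supplied `loss_fn`.

```python
import torch

def fine_tune(model, images, masks, loss_fn, epochs=100, lr=1e-4):
    # Assumed encoder/decoder split: freeze the pre-trained encoder and
    # tune only the remaining layers on a handful of annotated images.
    for p in model.encoder.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in zip(images, masks):   # e.g. ~8 image/mask pairs
            opt.zero_grad()
            loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
            loss.backward()
            opt.step()
    return model
```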


2021, Vol 27 (1)
Author(s): Paulo Drews-Jr, Isadora de Souza, Igor P. Maurell, Eglen V. Protas, Silvia S. C. Botelho

Abstract: Image segmentation is an important step in many computer vision and image processing algorithms. It is often adopted in tasks such as object detection, classification, and tracking. The segmentation of underwater images is a challenging problem because the water and the particles it carries scatter and absorb light rays. These effects make the application of traditional segmentation methods cumbersome. Moreover, applying state-of-the-art segmentation methods, which are based on deep learning, to this problem requires an underwater image segmentation dataset. In this paper, we therefore develop a dataset of real underwater images, along with other combinations using simulated data, to allow the training of two of the best deep learning segmentation architectures, aiming at the segmentation of underwater images in the wild. In addition to models trained on these datasets, fine-tuning and image restoration strategies are also explored. For a more meaningful evaluation, all models are compared on the test set of real underwater images. We show that the methods obtain impressive results against manually segmented ground truth, mainly when trained with our real dataset, even using a relatively small number of labeled underwater training images.
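
The two architectures are not named in this abstract, so as an assumed stand-in the sketch below fine-tunes torchvision's DeepLabV3 for an underwater label set by re-initialising its classification head while keeping the pre-trained backbone.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

def build_underwater_model(num_classes: int):
    # Assumed stand-in architecture; the paper's choices may differ.
    model = deeplabv3_resnet50(weights="DEFAULT")   # pre-trained weights
    # Re-initialise the final classifier conv for the underwater label set;
    # all other weights are kept and fine-tuned on the new dataset.
    model.classifier[4] = torch.nn.Conv2d(256, num_classes, kernel_size=1)
    if model.aux_classifier is not None:
        model.aux_classifier[4] = torch.nn.Conv2d(256, num_classes, kernel_size=1)
    return model
```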


Author(s): P. Salgado, T.-P. Azevedo Perdicoúlis

Medical imaging techniques are used to examine and determine the well-being of the foetus during pregnancy. Digital image processing (DIP) is essential to extract valuable information embedded in most biomedical signals. Afterwards, intelligent segmentation methods based on classifier algorithms must be applied to identify structures and relevant features in these data. The success of both is essential for helping doctors to identify adverse health conditions from the medical images. To obtain easy and reliable DIP methods for foetus images in real time, at different gestational ages, careful pre-processing needs to be applied to the images. From these, data features are extracted to be used as input to the segmentation algorithms presented in this work. Due to the high dimensionality of the problems in question, aggregation of the data is also desirable. The segmentation of the images is done by revisiting the K-nn algorithm, a conventional nonparametric classifier which, despite its simplicity, has demonstrated its power to achieve high classification results in medical applications. In this work two versions of this algorithm are presented: (i) an enhancement of the standard version that aggregates the data a priori and (ii) an iterative version of the same method in which the training set (TS) is not static. The procedure is demonstrated in two experiments, using two images from different modalities: a magnetic resonance image and an ultrasound image. The results were assessed by comparison with the K-means clustering algorithm, a well-known and robust method for this type of task. Both versions showed results close to 100% agreement with those obtained by the validation method, although the iterative version displays much higher reliability in the classification.
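
The exact update rule of the iterative version is not detailed here, so this sketch grows the training set with high-confidence predictions (a self-training assumption) on top of scikit-learn's standard K-nn classifier; pixels are represented by precomputed feature vectors.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_segment_iterative(X_train, y_train, X_pixels, k=5, rounds=3, conf=0.9):
    """Classify every pixel feature vector in X_pixels; the TS is rebuilt
    each round from the seed labels plus confidently classified pixels."""
    X_tr, y_tr = np.asarray(X_train), np.asarray(y_train)
    for _ in range(rounds):
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
        proba = clf.predict_proba(X_pixels)
        pred = clf.classes_[proba.argmax(axis=1)]
        sure = proba.max(axis=1) >= conf            # confident pixels only
        X_tr = np.vstack([X_train, X_pixels[sure]])     # non-static TS
        y_tr = np.concatenate([np.asarray(y_train), pred[sure]])
    return pred    # final per-pixel class labels
```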


Diagnostics, 2021, Vol 11 (6), pp. 1052
Author(s): Leang Sim Nguon, Kangwon Seo, Jung-Hyun Lim, Tae-Jun Song, Sung-Hyun Cho, ...

Mucinous cystic neoplasms (MCN) and serous cystic neoplasms (SCN) account for a large portion of solitary pancreatic cystic neoplasms (PCN). In this study we implemented a convolutional neural network (CNN) model using ResNet50 to differentiate between MCN and SCN. The training data were collected retrospectively from 59 MCN and 49 SCN patients from two different hospitals. Data augmentation was used to enhance the size and quality of the training datasets. A fine-tuning training approach was utilized by adopting the pre-trained model from transfer learning while training selected layers. Testing of the network was conducted by varying the endoscopic ultrasonography (EUS) image sizes and positions to evaluate the network performance for differentiation. The proposed network model achieved up to 82.75% accuracy and an area under the curve (AUC) of 0.88 (95% CI: 0.817–0.930). The performance of the implemented deep learning networks in decision-making using only EUS images is comparable to that of traditional manual decision-making using EUS images along with supporting clinical information. Gradient-weighted class activation mapping (Grad-CAM) confirmed that the network model learned the features from the cyst region accurately. This study demonstrates the feasibility of diagnosing MCN and SCN using a deep learning network model. Further improvement using more datasets is needed.
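
A representative fine-tuning setup for this kind of study uses torchvision's pre-trained ResNet50 with a fresh two-class head (MCN vs. SCN); which layers the paper actually unfroze is not stated, so tuning only the last residual block is an assumption in the sketch below.

```python
import torch
from torchvision.models import resnet50

def build_cyst_classifier():
    model = resnet50(weights="IMAGENET1K_V2")   # ImageNet transfer learning
    for p in model.parameters():                # freeze the backbone...
        p.requires_grad = False
    for p in model.layer4.parameters():         # ...then unfreeze the last
        p.requires_grad = True                  # block (assumed choice)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)  # MCN vs. SCN head
    return model
```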


2021, Vol 11 (1)
Author(s): Andrew P. Creagh, Florian Lipsmeier, Michael Lindemann, Maarten De Vos

Abstract: The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out-of-clinic. Deep Convolutional Neural Networks (DCNN) may capture a richer representation of healthy and MS-related ambulatory characteristics from raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations associated with remotely generated health data, such as low subject numbers, sparsity, and heterogeneous data, a transfer learning (TL) model from similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. It was demonstrated that fine-tuning TL DCNN HAR models towards MS disease recognition tasks outperformed previous Support Vector Machine (SVM) feature-based methods, as well as DCNN models trained end-to-end, by upwards of 8–15%. The lack of transparency of "black-box" deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Subsequent work therefore aimed to visualise DCNN decisions as relevance heatmaps using Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that are reflective of those who are healthy versus people with MS (PwMS) could begin to be established and understood. Interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics that distinguished MS disability from healthy participants. Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, inform better disease management techniques, and enable the development of better therapeutic interventions.
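
Layer sizes below are illustrative, not taken from the paper; the sketch only shows the transfer-learning mechanics described: a 1D CNN pre-trained on HAR windows of inertial data whose classifier head is swapped for the two-class MS recognition task before fine-tuning.

```python
import torch.nn as nn

class InertialDCNN(nn.Module):
    """1D CNN over raw inertial windows of shape (batch, channels, time)."""
    def __init__(self, n_channels=6, n_classes=6):   # e.g. 6 HAR activities
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

def transfer_to_ms(har_model: InertialDCNN) -> InertialDCNN:
    # Keep the HAR-learned feature extractor; swap in a fresh two-class
    # head (healthy vs. PwMS) and fine-tune end-to-end or partially.
    har_model.classifier = nn.Linear(64, 2)
    return har_model
```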


2021, Vol 22 (1)
Author(s): Changyong Li, Yongxian Fan, Xiaodong Cai

Abstract: Background: With the development of deep learning (DL), more and more DL-based methods are being proposed and achieve state-of-the-art performance in biomedical image segmentation. However, these methods are usually complex and require the support of powerful computing resources, which are rarely available in clinical settings. Thus, it is important to develop accurate DL-based biomedical image segmentation methods that run under resource-constrained computing. Results: A lightweight, multiscale network called PyConvU-Net is proposed to work with low-resource computing. In strictly controlled experiments, PyConvU-Net performs well on three biomedical image segmentation tasks while using the fewest parameters. Conclusions: Our experimental results preliminarily demonstrate the potential of the proposed PyConvU-Net for biomedical image segmentation with resource-constrained computing.
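
PyConvU-Net's exact configuration is not given in this abstract; the block below sketches the general pyramidal convolution idea it builds on, i.e. parallel grouped convolutions at several kernel sizes whose outputs are concatenated, which keeps the parameter count low. Kernel sizes and group counts here are assumptions.

```python
import torch
import torch.nn as nn

class PyConvBlock(nn.Module):
    """Pyramidal convolution: parallel grouped convs at multiple kernel
    sizes, concatenated. in_ch and the per-branch width must be divisible
    by each group count (e.g. in_ch = out_ch = 64 works below)."""
    def __init__(self, in_ch, out_ch, kernels=(3, 5, 7, 9),
                 groups=(1, 4, 8, 16)):
        super().__init__()
        branch_ch = out_ch // len(kernels)
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2, groups=g)
            for k, g in zip(kernels, groups))

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

# Example: PyConvBlock(64, 64) can stand in for a plain 3x3 convolution
# at a 64-channel stage of a U-Net-style encoder or decoder.
```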

