Physics-based machine learning for subcellular segmentation in living cells

Author(s):  
Arif Ahmed Sekh ◽  
Ida S. Opstad ◽  
Gustav Godtliebsen ◽  
Åsa Birna Birgisdottir ◽  
Balpreet Singh Ahluwalia ◽  
...  

Abstract
Segmenting subcellular structures in living cells from fluorescence microscope images is a ground truth (GT)-deficient problem. The microscopes’ three-dimensional blurring function, finite optical resolution due to light diffraction, finite pixel resolution and the complex morphological manifestations of the structures all contribute to GT-hardness. Unsupervised segmentation approaches are quite inaccurate. Therefore, manual segmentation relying on heuristics and experience remains the preferred approach. However, this process is tedious, given the countless structures present inside a single cell, and generating analytics across a large population of cells or performing advanced artificial intelligence tasks such as tracking are greatly limited. Here we bring modelling and deep learning to a nexus for solving this GT-hard problem, improving both the accuracy and speed of subcellular segmentation. We introduce a simulation-supervision approach empowered by physics-based GT, which presents two advantages. First, the physics-based GT resolves the GT-hardness. Second, computational modelling of all the relevant physical aspects assists the deep learning models in learning to compensate, to a great extent, for the limitations of physics and the instrument. We show extensive results on the segmentation of small vesicles and mitochondria in diverse and independent living- and fixed-cell datasets. We demonstrate the adaptability of the approach across diverse microscopes through transfer learning, and illustrate biologically relevant applications of automated analytics and motion analysis.
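The simulation-supervision idea can be sketched in a few lines: simulate structures whose binary mask is the physics-based ground truth, then degrade the mask with a model of the imaging physics to produce the paired training image. The separable Gaussian blur, structure sizes, and noise level below are illustrative assumptions, not the paper's actual 3D PSF model.

```python
import numpy as np

def make_training_pair(size=64, n_vesicles=5, psf_sigma=2.0, noise_sd=0.05, seed=0):
    """Simulate one (noisy image, physics-based ground truth) pair.

    The binary mask of simulated vesicles serves as ground truth; the
    image is the mask degraded by a Gaussian stand-in for the microscope
    PSF plus additive sensor noise.
    """
    rng = np.random.default_rng(seed)
    gt = np.zeros((size, size))
    yy, xx = np.mgrid[:size, :size]
    for _ in range(n_vesicles):
        cy, cx = rng.integers(8, size - 8, 2)
        r = rng.uniform(2, 4)
        gt[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 1.0

    # Separable Gaussian blur approximating the optical blurring function.
    ax = np.arange(-8, 9)
    k = np.exp(-ax ** 2 / (2 * psf_sigma ** 2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, gt)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, blurred)

    image = blurred + rng.normal(0, noise_sd, blurred.shape)
    return image, gt
```

A deep segmentation network trained on such pairs learns to invert the simulated physics, which is what lets it partially compensate for the instrument's limitations on real data.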

2021 ◽  
Vol 15 ◽  
Author(s):  
Saba Momeni ◽  
Amir Fazlollahi ◽  
Leo Lebrat ◽  
Paul Yates ◽  
Christopher Rowe ◽  
...  

Cerebral microbleeds (CMB) are increasingly present with aging and can reveal vascular pathologies associated with neurodegeneration. Deep learning-based classifiers can detect and quantify CMB from MRI, such as susceptibility imaging, but are challenging to train because of the limited availability of ground truth and many confounding imaging features, such as vessels or infarcts. In this study, we present a novel generative adversarial network (GAN) that has been trained to generate three-dimensional lesions, conditioned by volume and location. This allows one to investigate CMB characteristics and create large training datasets for deep learning-based detectors. We demonstrate the benefit of this approach by achieving state-of-the-art detection of real CMB using a convolutional neural network classifier trained on synthetic CMB. Moreover, we show that our proposed 3D lesion GAN model can be applied to unseen datasets, with different MRI parameters and diseases, to generate synthetic lesions with high diversity and without needing laboriously marked ground truth.
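The conditioning interface described here (generate a 3D lesion given a target volume and location) can be illustrated with a geometric stand-in. The abstract's GAN learns realistic lesion appearance; the spherical primitive, intensity shift, and parameter names below are purely illustrative assumptions.

```python
import numpy as np

def insert_synthetic_lesion(volume, center, target_voxels, intensity=-0.5):
    """Paint a spherical hypointense 'lesion' of a requested voxel volume
    at a requested location -- a geometric stand-in for the conditional
    3D lesion GAN, which shares the same (volume, location) conditioning
    but learns realistic appearance from data.
    """
    # Radius of a sphere containing target_voxels voxels: V = 4/3 * pi * r^3
    r = (3 * target_voxels / (4 * np.pi)) ** (1 / 3)
    zz, yy, xx = np.ogrid[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    cz, cy, cx = center
    mask = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    out = volume.copy()
    out[mask] += intensity
    return out, mask
```

Inserting many such lesions at random locations into healthy scans yields an arbitrarily large labelled training set, which is the augmentation benefit the study demonstrates with learned lesions.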


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kh Tohidul Islam ◽  
Sudanthi Wijewickrema ◽  
Stephen O’Leary

Abstract
Image registration is a fundamental task in image analysis in which the transform that moves the coordinate system of one image to another is calculated. Registration of multi-modal medical images has important implications for clinical diagnosis, treatment planning, and image-guided surgery as it provides the means of bringing together complementary information obtained from different image modalities. However, since different image modalities have different properties due to their different acquisition methods, it remains a challenging task to find a fast and accurate match between multi-modal images. Furthermore, due to reasons such as ethical issues and the need for human expert intervention, it is difficult to collect a large database of labelled multi-modal medical images. In addition, manual input is required to determine the fixed and moving images as input to registration algorithms. In this paper, we address these issues and introduce a registration framework that (1) creates synthetic data to augment existing datasets, (2) generates ground truth data to be used in the training and testing of algorithms, (3) registers (using a combination of deep learning and conventional machine learning methods) multi-modal images in an accurate and fast manner, and (4) automatically classifies the image modality so that the process of registration can be fully automated. We validate the performance of the proposed framework on CT and MRI images of the head obtained from a publicly available registration database.
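A core difficulty named here is that CT and MR intensities differ even at anatomically identical points, so direct intensity comparison fails. Mutual information is the standard similarity measure that conventional components of such hybrid pipelines optimise; the sketch below is a generic implementation, not the paper's specific method, and the bin count is an illustrative choice.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images: a similarity measure for
    multi-modal registration that rewards statistical dependence between
    intensities rather than equality, so a CT/MR pair scores highest
    when anatomically aligned even though their intensities differ.
    """
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Note that an inverted copy of an image scores as highly as the image itself, which is exactly the invariance to modality-dependent intensity mappings that multi-modal registration needs.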


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Kriti Mahajan ◽  
Urvashi Garg ◽  
Mohammad Shabaz

Existing work on unsupervised segmentation frequently provides no statistical basis for estimation and comparison, settling instead for qualitative evaluation. Furthermore, although substantial research is devoted to developing novel segmentation approaches and improving deep learning techniques, little research has assessed established conventional segmentation methods for hyperspectral images (HSI). In this paper, to partially fill this gap, we propose a direct method that mitigates, to some extent, the issues deep learning methods face in the HSI domain, and we evaluate the proposed technique: a clustering-based iterative deep learning model for HSI segmentation, termed CPIDM. The proposed model is an unsupervised HSI clustering technique based on the density of pixels in spectral space and the distances between pixels. CPIDM is a fully convolutional neural network. In general, fully convolutional networks are spatially invariant, which prevents them from modeling position-dependent patterns; the proposed network addresses this by incorporating a novel position-aware convolutional layer. The proposed deep unsupervised segmentation architecture resolves the problems of oversegmentation and data nonlinearity caused by noise and outliers. Spectral information is learned and inferred from joint feedback through a deep hierarchy of pooling and convolutional layers; as a consequence, the model establishes a relationship between class distribution and spectra, together with spatial features. Moreover, the proposed deep learning model shows that it is possible to significantly accelerate the segmentation process without substantial quality loss in the presence of noise and outliers. The proposed CPIDM approach outperforms many state-of-the-art segmentation approaches, including the watershed transform and a neuro-fuzzy approach, as validated by the experimental results.
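The clustering criterion described here (density of pixels in spectral space combined with inter-pixel distance) is, in its classical non-deep form, the density-peaks rule: cluster centres are points with high local density that lie far from any denser point. The sketch below shows that scoring rule on raw spectra; CPIDM itself learns the representation with a convolutional network, and the cutoff distance here is an illustrative assumption.

```python
import numpy as np

def density_peak_scores(spectra, dc=0.5):
    """For each pixel spectrum, compute (rho, delta): local density and
    distance to the nearest point of higher density. Points scoring high
    on both are cluster centres -- the density/distance criterion the
    abstract describes, in its classical (non-deep) density-peaks form.
    """
    d = np.linalg.norm(spectra[:, None, :] - spectra[None, :, :], axis=-1)
    rho = (d < dc).sum(axis=1) - 1          # neighbours within cutoff dc
    delta = np.empty(len(spectra))
    for i in range(len(spectra)):
        higher = rho > rho[i]
        delta[i] = d[i, higher].min() if higher.any() else d[i].max()
    return rho, delta
```

The densest point of each well-separated spectral cluster gets a large delta, so thresholding rho * delta picks one centre per cluster and helps avoid the oversegmentation the abstract mentions.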


2021 ◽  
Author(s):  
Tristan Meynier Georges ◽  
Maria Anna Rapsomaniki

Recent studies have revealed the importance of three-dimensional (3D) chromatin structure in the regulation of vital biological processes. Contrary to protein folding, no experimental procedure that can directly determine ground-truth 3D chromatin coordinates exists. Instead, chromatin conformation is studied implicitly using high-throughput chromosome conformation capture (Hi-C) methods that quantify the frequency of all pairwise chromatin contacts. Computational methods that infer the 3D chromatin structure from Hi-C data are thus unsupervised, and limited by the assumption that contact frequency determines Euclidean distance. Inspired by recent developments in deep learning, in this work we explore the idea of transfer learning to address the crucial lack of ground-truth data for 3D chromatin structure inference. We present a novel method, Transfer learning Encoder for CHromatin 3D structure prediction (TECH-3D) that combines transfer learning with creative data generation procedures to reconstruct chromatin structure. Our work outperforms previous deep learning attempts for chromatin structure inference and exhibits similar results as state-of-the-art algorithms on many tests, without making any assumptions on the relationship between contact frequencies and Euclidean distances. Above all, TECH-3D presents a highly creative and novel approach, paving the way for future deep learning models.
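The assumption TECH-3D avoids can be stated concretely: conventional optimisation-based methods first convert Hi-C contact frequencies into "wish distances" via a power law and then fit 3D coordinates to them. The sketch below shows that conversion; the exponent value is an illustrative default, not taken from the paper.

```python
import numpy as np

def wish_distances(contacts, alpha=0.5, eps=1e-9):
    """The conventional assumption TECH-3D drops: convert a Hi-C contact
    frequency matrix into target Euclidean distances via a power law,
    d_ij = f_ij ** (-alpha), so frequent contacts imply proximity.
    Optimisation-based methods then fit 3D coordinates to these
    'wish distances'.
    """
    f = np.asarray(contacts, dtype=float)
    d = (f + eps) ** (-alpha)               # eps guards zero-contact pairs
    np.fill_diagonal(d, 0.0)
    return d
```

Because the true frequency-to-distance relationship is unknown and likely varies across the genome, a transfer-learned encoder that never commits to a fixed alpha sidesteps a major source of reconstruction bias.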


2021 ◽  
Vol 3 (Supplement_3) ◽  
pp. iii20-iii20
Author(s):  
Jen-Yeu Wang ◽  
Navjot Sandhu ◽  
Maria Mendoza ◽  
Jhih-Yuan Lin ◽  
Yueh-Hung Cheng ◽  
...  

Abstract
Introduction: Artificial intelligence-based tools can significantly impact detection and segmentation of brain metastases for stereotactic radiosurgery (SRS). VBrain is a recently FDA-cleared deep learning algorithm that assists in brain tumor contouring. In this study, we aimed to further validate this tool in patients treated with SRS for brain metastases at Stanford Cancer Center.
Methods: We included randomly selected patients with brain metastases treated with SRS from 2008 to 2020. Computed tomography (CT) and axial T1-weighted post-contrast magnetic resonance (MR) image data were extracted for each patient and uploaded to VBrain. Subsequent analyses compared the output contours from VBrain with the physician-defined contours used for SRS. A brain metastasis was considered “detected” when the VBrain “predicted” contours overlapped with the corresponding physician contours (“ground-truth” contours). We evaluated performance against ground-truth contours using the following metrics: lesion-wise Dice similarity coefficient (DSC), lesion-wise average Hausdorff distance (AVD), false positive count (FP), and lesion-wise sensitivity (%).
Results: We analyzed 60 patients with 321 intact brain metastases treated over 70 SRS courses. Resection cavities were excluded from the analysis. The median (range) tumor size was 132 mm3 (7 to 24,765). Input CT scan slice thickness was 1.250 mm, and median (range) pixel resolution was 0.547 mm (0.457 to 0.977). Input MR scan median (range) slice thickness was 1.000 mm (0.940 to 2.000), and median (range) pixel resolution was 0.469 mm (0.469 to 1.094). In assessing VBrain performance, we found mean lesion-wise DSC to be 0.70, mean lesion-wise AVD to be 9.40% of lesion size (0.805 mm), mean FP to be 0.657 tumors per case, and lesion-wise sensitivity to be 84.5%.
Conclusion: Retrospective analysis of our brain metastases cohort using a deep learning algorithm yielded promising results. Integration of VBrain into the clinical workflow can provide further clinical and research insights.
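Two of the evaluation metrics used above have compact standard definitions and can be sketched directly; this is a generic implementation of the published formulas, not code from the study, and the function names are our own.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 |P ∩ G| / (|P| + |G|), ranging from 0 (disjoint) to 1 (identical)."""
    pred, gt = np.asarray(pred).astype(bool), np.asarray(gt).astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def lesion_wise_sensitivity(detected_flags):
    """Percentage of ground-truth lesions counted as detected, i.e. whose
    predicted contour overlapped the corresponding physician contour
    (the detection criterion stated in the abstract)."""
    return 100.0 * float(np.mean(detected_flags))
```

Lesion-wise (rather than voxel-wise) averaging is what keeps the many small metastases from being drowned out by a few large ones in the reported 0.70 mean DSC.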


2019 ◽  
Vol 46 (7) ◽  
pp. 3180-3193 ◽  
Author(s):  
Ran Zhou ◽  
Aaron Fenster ◽  
Yujiao Xia ◽  
J. David Spence ◽  
Mingyue Ding

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christian Crouzet ◽  
Gwangjin Jeong ◽  
Rachel H. Chae ◽  
Krystal T. LoPresti ◽  
Cody E. Dunn ◽  
...  

Abstract
Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5–40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, which is prone to inter-individual variability and can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and RGB color camera. To determine the ground truth, four users independently annotated Prussian blue-labeled CMHs. The deep learning and ratiometric approaches performed better than the phasor analysis approach compared to the ground truth. The deep learning approach had the highest precision of the three methods. The ratiometric approach had the most versatility and maintained accuracy, albeit with less precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase the processing speed while maintaining precision and accuracy.
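The ratiometric approach lends itself to a very short sketch: flag pixels where the blue channel dominates the RGB sum, then convert the flagged area to physical units. The threshold and pixel size below are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def prussian_blue_mask(rgb, ratio_thresh=0.45):
    """Ratiometric detection of Prussian blue staining: a pixel is
    flagged when the blue channel's share of the total RGB intensity
    exceeds a threshold (the 0.45 cutoff is illustrative)."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1) + 1e-9         # avoid division by zero
    blue_ratio = rgb[..., 2] / total
    return blue_ratio > ratio_thresh

def quantify(mask, um_per_px=0.5):
    """Stained area in square micrometres for an assumed pixel size."""
    return float(mask.sum()) * um_per_px ** 2
```

Using the channel ratio rather than the raw blue intensity makes the rule robust to overall illumination changes, which is a plausible source of the versatility the abstract attributes to this approach.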


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1952
Author(s):  
May Phu Paing ◽  
Supan Tungjitkusolmun ◽  
Toan Huy Bui ◽  
Sarinporn Visitsattapongse ◽  
Chuchart Pintavirooj

Automated segmentation methods are critical for early detection, prompt action, and immediate treatment in reducing the disability and death risks of brain infarction. This paper aims to develop a fully automated method to segment infarct lesions from T1-weighted brain scans. As a key novelty, the proposed method combines variational mode decomposition and deep learning-based segmentation to take advantage of both methods and provide better results. There are three main technical contributions in this paper. First, variational mode decomposition is applied as a pre-processing step to discriminate the infarct lesions from unwanted non-infarct tissues. Second, an overlapping patch strategy is proposed to reduce the workload of the deep-learning-based segmentation task. Finally, a three-dimensional U-Net model is developed to perform patch-wise segmentation of infarct lesions. A total of 239 brain scans from a public dataset are utilized to develop and evaluate the proposed method. Empirical results reveal that the proposed automated segmentation can provide promising performance, with an average dice similarity coefficient (DSC) of 0.6684, intersection over union (IoU) of 0.5022, and average symmetric surface distance (ASSD) of 0.3932.
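The overlapping-patch step can be sketched generically: slide a 3D window with a stride smaller than the patch size, keeping each patch's corner coordinates so the U-Net's patch-wise predictions can be stitched back into a full volume. The patch and stride sizes below are illustrative, not the paper's settings.

```python
import numpy as np

def extract_patches_3d(volume, patch=32, stride=16):
    """Slide an overlapping 3D window (stride < patch) over a volume,
    returning the patches and their corner coordinates for patch-wise
    segmentation and later re-assembly. Sizes are illustrative."""
    patches, corners = [], []
    zmax, ymax, xmax = (s - patch for s in volume.shape)
    for z in range(0, zmax + 1, stride):
        for y in range(0, ymax + 1, stride):
            for x in range(0, xmax + 1, stride):
                patches.append(volume[z:z + patch, y:y + patch, x:x + patch])
                corners.append((z, y, x))
    return np.stack(patches), corners
```

Overlap means every voxel is predicted several times from different contexts; averaging those predictions during stitching suppresses the edge artifacts a non-overlapping tiling would leave.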


Nanoscale ◽  
2021 ◽  
Author(s):  
Hui Hu ◽  
Fu Zhou ◽  
Baojuan Wang ◽  
Xin Chang ◽  
Tianyue Dai ◽  
...  

Three-dimensional (3D) DNA walkers have great potential as candidates for signal transduction and amplification in bioassays. However, the autonomous operation of 3D DNA walkers inside cells has so far been implemented only to a limited extent...

