DeLTA: Automated cell segmentation, tracking, and lineage reconstruction using deep learning

2019 ◽  
Author(s):  
Jean-Baptiste Lugagne ◽  
Haonan Lin ◽  
Mary J. Dunlop

Microscopy image analysis is a major bottleneck in the quantification of single-cell microscopy data, typically requiring human supervision and curation, which limit both accuracy and throughput. To address this, we developed a deep learning-based image analysis pipeline that performs segmentation, tracking, and lineage reconstruction. Our analysis focuses on time-lapse movies of Escherichia coli cells trapped in a “mother machine” microfluidic device, a scalable platform for long-term single-cell analysis that is widely used in the field. While deep learning has been applied to cell segmentation problems before, our approach is fundamentally innovative in that it also uses machine learning to perform cell tracking and lineage reconstruction. With this framework we are able to obtain high-fidelity results (1% error rate) without human supervision. Further, the algorithm is fast, with complete analysis of a typical frame containing ∼150 cells taking <700 ms. The framework is not constrained to a particular experimental setup and has the potential to generalize to time-lapse images of other organisms or different experimental configurations. These advances open the door to a myriad of applications, including real-time tracking of gene expression and high-throughput analysis of strain libraries at single-cell resolution.

Author Summary: Automated microscopy experiments can generate massive data sets, allowing for detailed analysis of cell physiology and properties such as gene expression. In particular, dynamic measurements of gene expression with time-lapse microscopy have proved invaluable for understanding how gene regulatory networks operate. However, image analysis remains a key bottleneck in the analysis pipeline, typically requiring human supervision and a posteriori processing. Recently, machine learning-based approaches have ushered in a new era of rapid, unsupervised image analysis. In this work, we use and repurpose the U-Net deep learning algorithm to develop an image processing pipeline that can not only accurately identify the location of cells in an image, but also track them over time as they grow and divide. As an application, we focus on multi-hour time-lapse movies of bacteria growing in a microfluidic device. Our algorithm is accurate and fast, with error rates near 1% and requiring less than a second to analyze a typical movie frame. This increase in speed and fidelity has the potential to open new experimental avenues, e.g., where images are analyzed on the fly so that experimental conditions can be updated in real time.
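As a concrete illustration of the segmentation stage, the sketch below builds a miniature U-Net-style network in Keras. The layer widths, the 256x32 single-channel input (roughly one mother-machine chamber), and all names are illustrative assumptions rather than the published DeLTA architecture.

```python
# Minimal U-Net-style segmentation sketch (illustrative; not the exact DeLTA model).
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_unet(input_shape=(256, 32, 1)):
    inputs = layers.Input(shape=input_shape)
    # Contracting path: learn coarse features at reduced resolution.
    c1 = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = layers.Conv2D(32, 3, activation="relu", padding="same")(p1)
    # Expanding path: recover resolution, with a skip connection from c1.
    u1 = layers.UpSampling2D(2)(c2)
    u1 = layers.Concatenate()([u1, c1])
    c3 = layers.Conv2D(16, 3, activation="relu", padding="same")(u1)
    # Per-pixel probability that the pixel belongs to a cell.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c3)
    return Model(inputs, outputs)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```

The skip connection from the contracting to the expanding path is the design choice that lets U-Net-style networks recover sharp cell boundaries at full resolution.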


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4736
Author(s):  
Sk. Tanzir Mehedi ◽  
Adnan Anwar ◽  
Ziaur Rahman ◽  
Kawsar Ahmed

The Controller Area Network (CAN) bus is an important protocol in real-time In-Vehicle Network (IVN) systems because of its simple and robust architecture. IVN devices nonetheless remain insecure and vulnerable: their complex, data-intensive architectures greatly increase exposure to unauthorized networks and the possibility of various types of cyberattacks. The detection of cyberattacks in IVN devices has therefore become a topic of growing interest. With the rapid development of IVNs and evolving threat types, traditional machine learning-based intrusion detection systems (IDS) must be updated to cope with the security requirements of the current environment. The progress of deep learning and deep transfer learning, and their impact in several areas, points to an effective solution for network intrusion detection. This manuscript proposes a deep transfer learning-based IDS model for IVNs with improved performance over several existing models. The unique contributions include effective attribute selection, best suited to identifying malicious CAN messages and accurately distinguishing normal from abnormal activity; the design of a deep transfer learning-based LeNet model; and evaluation on real-world data. To this end, an extensive experimental performance evaluation has been conducted. The architecture and empirical analyses show that the proposed IDS greatly improves detection accuracy over mainstream machine learning, deep learning, and benchmark deep transfer learning models and demonstrates better performance for real-time IVN security.
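A minimal sketch of the transfer-learning recipe the abstract describes, assuming the common setup of a LeNet-style convolutional network pre-trained on one body of CAN traffic and fine-tuned on another; the 29x29 input encoding, layer sizes, and freezing strategy are assumptions, not the paper's exact model.

```python
# Sketch: LeNet-style classifier for CAN messages with a transfer-learning step.
import tensorflow as tf
from tensorflow.keras import layers, models

def lenet(input_shape=(29, 29, 1), n_classes=2):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(6, 5, activation="tanh", padding="same"),
        layers.AveragePooling2D(2),
        layers.Conv2D(16, 5, activation="tanh"),
        layers.AveragePooling2D(2),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(n_classes, activation="softmax"),
    ])

source = lenet()  # pre-train on a large source dataset of CAN traffic
# source.fit(...)
# Transfer: reuse the convolutional feature extractor, retrain the dense head.
target = lenet()
target.set_weights(source.get_weights())
for layer in target.layers[:4]:  # freeze the convolutional/pooling layers
    layer.trainable = False
target.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
```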



Electronics ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 689
Author(s):  
Tom Springer ◽  
Elia Eiroa-Lledo ◽  
Elizabeth Stevens ◽  
Erik Linstead

As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for resource-constrained target architectures at the “edge”. The realization of machine learning, and deep learning, is being driven by the availability of specialized hardware, such as system-on-chip solutions, which provide some alleviation of constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove critical as these technologies move to the edge. To address some of these challenges, we present a resource management framework designed to provide a dynamic, on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. These mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified the most time-critical applications, such as the control tasks, which maintained low-latency, deterministic behavior even during off-nominal conditions. The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate the integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
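For a flavor of what schedulability analysis over randomly generated task sets looks like, here is a sketch using the classical Liu and Layland utilization bound for rate-monotonic scheduling; the paper's actual analysis is richer, and the task parameters below are arbitrary.

```python
# Sketch: rate-monotonic schedulability test on random periodic task sets.
import random

def rm_utilization_bound(n):
    # Liu & Layland bound: n periodic tasks are schedulable under
    # rate-monotonic scheduling if total utilization <= n * (2^(1/n) - 1).
    return n * (2 ** (1.0 / n) - 1)

def is_schedulable(tasks):
    # tasks: list of (worst-case execution time, period) pairs
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rm_utilization_bound(len(tasks))

random.seed(0)
task_sets = [
    [(random.uniform(1, 10), random.uniform(20, 100)) for _ in range(5)]
    for _ in range(1000)
]
passed = sum(is_schedulable(ts) for ts in task_sets)
print(f"{passed}/1000 random task sets pass the utilization-bound test")
```

The utilization bound is sufficient but not necessary, so a production framework would typically follow it with an exact response-time analysis for the tasks that fail this quick check.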



2020 ◽  
Author(s):  
Nadia M. V. Sampaio ◽  
Caroline M. Blassick ◽  
Jean-Baptiste Lugagne ◽  
Mary J. Dunlop

Cell-to-cell heterogeneity in gene expression and growth can have critical functional consequences, such as determining whether individual bacteria survive or die following stress. Although phenotypic variability is well documented, the dynamics that underlie it are often unknown. This information is critical because dramatically different outcomes can arise from gradual versus rapid changes in expression and growth. Using single-cell time-lapse microscopy, we measured the temporal expression of a suite of stress response reporters in Escherichia coli, while simultaneously monitoring growth rate. In conditions without stress, we found widespread examples of pulsatile expression. Single-cell growth rates were often anti-correlated with gene expression, with changes in growth preceding changes in expression. These pulsatile dynamics have functional consequences, which we demonstrate by measuring survival after challenging cells with the antibiotic ciprofloxacin. Our results suggest that pulsatile expression and growth dynamics are common in stress response networks and can have direct consequences for survival.
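One simple way to quantify the reported lead-lag relationship, sketched below on synthetic data, is a lagged cross-correlation between single-cell growth rate and reporter expression; this is an illustrative analysis, not the authors' exact pipeline, and a negative lag here means growth leads expression.

```python
# Sketch: lagged cross-correlation between growth rate and expression traces.
import numpy as np

def lagged_correlation(growth, expression, max_lag=20):
    """Pearson correlation at each lag (in frames); negative lag = growth leads."""
    results = []
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            g, e = growth[:lag], expression[-lag:]
        elif lag > 0:
            g, e = growth[lag:], expression[:-lag]
        else:
            g, e = growth, expression
        results.append((lag, np.corrcoef(g, e)[0, 1]))
    return results

# Synthetic example: expression pulses follow growth dips with a 5-frame delay.
rng = np.random.default_rng(1)
growth = rng.normal(1.0, 0.1, 500)
expression = 1.0 - np.roll(growth, 5) + rng.normal(0, 0.05, 500)
best_lag, best_r = min(lagged_correlation(growth, expression), key=lambda p: p[1])
print(f"strongest anti-correlation r={best_r:.2f} at lag {best_lag} frames")
```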



Author(s):  
Kenneth H. Hu ◽  
John P. Eichorst ◽  
Chris S. McGinnis ◽  
David M. Patterson ◽  
Eric D. Chow ◽  
...  

Spatial transcriptomics seeks to integrate single-cell transcriptomic data within the three-dimensional space of multicellular biology. Current methods use glass substrates pre-seeded with matrices of barcodes or fluorescence hybridization of a limited number of probes. We developed an alternative approach, called ‘ZipSeq’, that uses patterned illumination and photocaged oligonucleotides to serially print barcodes (Zipcodes) onto live cells within intact tissues, in real time and with on-the-fly selection of patterns. Using ZipSeq, we mapped gene expression in three settings: in vitro wound healing, live lymph node sections, and a live tumor microenvironment (TME). In all cases, we discovered new gene expression patterns associated with histological structures. In the TME, this demonstrated a trajectory of myeloid and T cell differentiation from the periphery inward. A variation of ZipSeq efficiently scales to the level of single cells, providing a pathway for complete mapping of live tissues following real-time imaging or perturbation.



2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Primož Godec ◽  
Matjaž Pančur ◽  
Nejc Ilenič ◽  
Andrej Čopar ◽  
Martin Stražar ◽  
...  

Analysis of biomedical images requires computational expertise that is uncommon among biomedical scientists. Deep learning approaches for image analysis provide an opportunity to develop user-friendly tools for exploratory data analysis. Here, we use the visual programming toolbox Orange (http://orange.biolab.si) to simplify image analysis by integrating deep-learning embedding, machine learning procedures, and data visualization. Orange supports the construction of data analysis workflows by assembling components for data preprocessing, visualization, and modeling. We equipped Orange with components that use pre-trained deep convolutional networks to profile images with vectors of features. These vectors are used in image clustering and classification in a framework that enables mining of image sets for both novice and experienced users. We demonstrate the utility of the tool in image analysis of progenitor cells in mouse bone healing, identification of developmental competence in mouse oocytes, subcellular protein localization in yeast, and developmental morphology of social amoebae.
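The embed-then-cluster idea behind these components can be sketched in a few lines of Python: profile each image with a pre-trained convolutional network, then cluster the resulting feature vectors. Orange exposes this through visual widgets rather than code, and the choice of MobileNetV2 and KMeans below is an assumption made for illustration.

```python
# Sketch: profile images with a pre-trained CNN, then cluster the embeddings.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

# Pre-trained network with the classification head removed: outputs a feature vector.
embedder = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", input_shape=(224, 224, 3), weights="imagenet")

def embed(images):
    # images: array of shape (n, 224, 224, 3), pixel values in [0, 255]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(images.astype("float32"))
    return embedder.predict(x, verbose=0)

# Cluster the embeddings of a (placeholder) image set into candidate groups.
images = np.random.randint(0, 256, size=(32, 224, 224, 3))  # placeholder images
features = embed(images)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(features)
print(labels)
```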





Energies ◽  
2020 ◽  
Vol 13 (15) ◽  
pp. 3930 ◽  
Author(s):  
Ayaz Hussain ◽  
Umar Draz ◽  
Tariq Ali ◽  
Saman Tariq ◽  
Muhammad Irfan ◽  
...  

Increasing waste generation has become a significant issue across the globe due to the rapid pace of urbanization and industrialization. Many issues that have a direct impact on the increase of waste and the improper disposal of waste have been investigated in the literature. Most of the existing work has focused on providing cost-efficient solutions for monitoring garbage collection systems using the Internet of Things (IoT). Though an IoT-based solution provides real-time monitoring of a garbage collection system, it is of limited use in controlling overspill and the blowout of foul-smelling gases. Poor and inadequate disposal of waste releases toxic gases and radiation into the environment, with adverse effects on human health, the greenhouse system, and global warming. Given the importance of air pollutants, it is imperative to monitor and forecast their concentration in addition to managing the waste itself. In this paper, we present an IoT-based smart bin that uses machine learning and deep learning models to manage the disposal of garbage and to forecast the air pollutants present in the bin's surrounding environment. The smart bin is connected to an IoT-based server, a Google Cloud Platform (GCP) server, which performs the computation necessary for predicting the status of the bin and for forecasting air quality based on real-time data. We experimented with traditional models (the k-nearest neighbors algorithm (k-NN) and logistic regression) and a non-traditional algorithm (a long short-term memory (LSTM) network-based deep learning model) for creating alert messages regarding bin status and forecasting the amount of the air pollutant carbon monoxide (CO) present in the air at a specific instant. The recalls of the logistic regression and k-NN algorithms are 79% and 83%, respectively, in a real-time testing environment for predicting the status of the bin. The accuracies of the modified LSTM and simple LSTM models are 90% and 88%, respectively, in predicting the future concentration of gases present in the air. The system incurred a delay of 4 s in the creation and transmission of an alert message to a sanitary worker. The system provided real-time monitoring of garbage levels along with notifications from the alert mechanism. By utilizing machine learning, the proposed work provides improved accuracy compared to existing solutions based on simpler approaches.
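A minimal sketch of the LSTM forecasting component, assuming a sliding window of past CO readings is used to predict the next value; the window length, architecture, and synthetic sensor trace below are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: LSTM forecaster for the next CO reading from a window of past readings.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 24  # number of past readings used to predict the next CO value

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.LSTM(32),
    layers.Dense(1),  # next CO concentration
])
model.compile(optimizer="adam", loss="mse")

def windows(series, w=WINDOW):
    # Turn a 1-D series into (window, next-value) training pairs.
    X = np.stack([series[i:i + w] for i in range(len(series) - w)])
    return X[..., None], series[w:]

co = np.sin(np.linspace(0, 20, 500)) + np.random.normal(0, 0.05, 500)  # synthetic trace
X, y = windows(co)
model.fit(X, y, epochs=2, verbose=0)
next_co = model.predict(co[-WINDOW:][None, :, None], verbose=0)[0, 0]
print("next CO estimate:", float(next_co))
```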



Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2556
Author(s):  
Liyang Wang ◽  
Yao Mu ◽  
Jing Zhao ◽  
Xiaoya Wang ◽  
Huilian Che

The clinical symptoms of prediabetes are mild and easy to overlook, but prediabetes may develop into diabetes if early intervention is not performed. In this study, a deep learning model, referred to as IGRNet, is developed to effectively detect and diagnose prediabetes in a non-invasive, real-time manner using a 12-lead electrocardiogram (ECG) lasting 5 s. After searching for an appropriate activation function, we compared two mainstream deep neural networks (AlexNet and GoogLeNet) and three traditional machine learning algorithms to verify the superiority of our method. The diagnostic accuracy of IGRNet is 0.781, and the area under the receiver operating characteristic curve (AUC) is 0.777 after testing on an independent test set that includes a mixed group. Furthermore, the accuracy and AUC are 0.856 and 0.825, respectively, on the normal-weight-range test set. The experimental results indicate that IGRNet diagnoses prediabetes with high accuracy using ECGs, outperforming other existing machine learning methods; this suggests its potential for application in clinical practice as a non-invasive prediabetes diagnosis technology.
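To make the input and output shapes concrete, here is a sketch of a small 1-D convolutional classifier over a 5 s, 12-lead ECG; the sampling rate and layer choices are assumptions, not IGRNet's published architecture.

```python
# Sketch: binary classifier over a 5-second, 12-lead ECG segment.
import tensorflow as tf
from tensorflow.keras import layers, models

FS = 500          # assumed sampling rate (Hz)
SAMPLES = 5 * FS  # 5 seconds of signal per example
LEADS = 12

model = models.Sequential([
    layers.Input(shape=(SAMPLES, LEADS)),
    layers.Conv1D(16, 7, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 7, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),  # probability of prediabetes
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()
```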



2016 ◽  
Vol 21 (9) ◽  
pp. 998-1003 ◽  
Author(s):  
Oliver Dürr ◽  
Beate Sick

Deep learning methods currently outperform traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods for high-content screening-based phenotype classification. We trained a deep learning classifier, in the form of convolutional neural networks, with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and was carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%.
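A sketch of the end-to-end approach, assuming small multichannel single-cell crops and four phenotype classes; the crop size and architecture are illustrative, and the placeholder test data below only demonstrates how the misclassification rate is computed on a held-out set.

```python
# Sketch: CNN phenotype classifier on multichannel single-cell crops,
# evaluated by misclassification rate on an untouched test set.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_CLASSES, CHANNELS, SIZE = 4, 3, 72  # four phenotype classes; assumed crop size

model = models.Sequential([
    layers.Input(shape=(SIZE, SIZE, CHANNELS)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, ...)  # train on labeled single-cell crops

# Misclassification rate on a held-out test set (placeholders shown here).
x_test = np.random.rand(100, SIZE, SIZE, CHANNELS).astype("float32")
y_test = np.random.randint(0, N_CLASSES, 100)
preds = model.predict(x_test, verbose=0).argmax(axis=1)
print("misclassification rate:", float((preds != y_test).mean()))
```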


