Classifying T cell activity in autofluorescence intensity images with convolutional neural networks

2019 ◽  
Vol 13 (3) ◽  
Author(s):  
Zijie J. Wang ◽  
Alex J. Walsh ◽  
Melissa C. Skala ◽  
Anthony Gitter

ABSTRACT: The importance of T cells in immunotherapy has motivated the development of technologies to better characterize T cells and improve therapeutic efficacy. One specific objective is assessing antigen-induced T cell activation, because only functionally active T cells are capable of killing the desired targets. Autofluorescence imaging can distinguish the activity states of individual T cells in a non-destructive manner by detecting endogenous changes in metabolic co-enzymes such as NAD(P)H. However, recognizing robust patterns of T cell activity is computationally challenging in the absence of exogenous labels or information-rich autofluorescence lifetime measurements. We demonstrate that advanced machine learning can accurately classify T cell activity from NAD(P)H intensity images and that these image-based signatures transfer across human donors. Using a dataset of 8,260 cropped single-cell images from six donors, we systematically evaluate multiple machine learning models. These range from traditional models, which represent images using summary statistics or features extracted with CellProfiler, to deep convolutional neural networks (CNNs) pre-trained on general non-biological images. Adapting pre-trained CNNs to the T cell activity classification task provides substantially better performance than traditional models or a simple CNN trained on the autofluorescence images alone. Visualizing the images with dimension reduction provides intuition into why the CNNs achieve higher accuracy than other approaches. However, we observe that fine-tuning all layers of the pre-trained CNN does not provide a classification performance boost commensurate with the additional computational cost. Software implementing our image processing and model training pipeline is available as Jupyter notebooks at https://github.com/gitter-lab/t-cell-classification.


2021 ◽  
Vol 156 ◽  
pp. S17
Author(s):  
Christina J Walker ◽  
Marion Jost ◽  
Sven-Ole Harder ◽  
Jan Schladetzky ◽  
Manuel Struder ◽  
...  

2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research in image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks, composed of an image restoration network and a classification network, to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used together for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for cases where the degradation parameters of the input images are unknown. Experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images only.
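The two-input idea can be sketched as a network that concatenates a scalar degradation parameter with the image features before the classification head. This is a toy PyTorch illustration; the layer sizes and the name `DegradationAwareClassifier` are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn


class DegradationAwareClassifier(nn.Module):
    """Toy two-input classifier: a degraded image plus a scalar
    degradation parameter (e.g. a noise level or compression quality),
    concatenated before the classification head."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Small convolutional feature extractor for the degraded image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Head takes image features plus the 1-D degradation parameter.
        self.head = nn.Sequential(
            nn.Linear(16 + 1, 32),
            nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, image: torch.Tensor, degradation: torch.Tensor) -> torch.Tensor:
        feats = self.features(image)
        # Concatenate the degradation parameter with the image features.
        return self.head(torch.cat([feats, degradation], dim=1))


model = DegradationAwareClassifier()
imgs = torch.randn(2, 3, 32, 32)        # batch of 2 degraded images
sigma = torch.tensor([[0.1], [0.5]])    # e.g. per-image noise levels
out = model(imgs, sigma)                # shape: (2, 10)
```

When the degradation parameter is unknown at test time, the paper's separate estimation network would predict it from the degraded image, and its output would be fed in place of `sigma`.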


Author(s):  
Edgar Medina ◽  
Roberto Campos ◽  
Jose Gabriel R. C. Gomes ◽  
Mariane R. Petraglia ◽  
Antonio Petraglia
